
How do you run an estimation package so it's also seen in Atlas?

I’m looking at executing an estimation study package that I downloaded from the public Atlas, and I have a question about how the R code works that I couldn’t find in The Book of OHDSI (or maybe I missed it).

From what I understand, there is a CohortsToCreate.csv file that is used to create the cohorts based on the cohort SQL files. Also, it seems like a separate cohort table is created from the CreateCohortTable.sql file, and the cohort definition ids that populate the cohort table come from CohortsToCreate.csv. If I’m just running the R package, that makes sense to me, but how would I view the cohorts that are created from the package within Atlas? Is this possible, or does WebAPI/Atlas need to be used to create the cohorts?
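
For context, here’s roughly my mental model of what the package does with those files (just a sketch on my part; the actual function names, file paths, and SQL parameter names in the skeleton may differ):

```r
library(DatabaseConnector)
library(SqlRender)

# Sketch only -- connection info, schema names, and file paths are placeholders.
connectionDetails <- createConnectionDetails(dbms = "postgresql",
                                             server = "myserver/ohdsi",
                                             user = "user",
                                             password = "secret")
conn <- connect(connectionDetails)

cdmDatabaseSchema <- "cdm"
cohortDatabaseSchema <- "results"        # where the study-specific cohort table goes
cohortTable <- "estimation_study_cohorts"

# 1. Create the empty cohort table (CreateCohortTable.sql).
sql <- readSql("inst/sql/sql_server/CreateCohortTable.sql")
sql <- render(sql,
              cohort_database_schema = cohortDatabaseSchema,
              cohort_table = cohortTable)
sql <- translate(sql, targetDialect = connectionDetails$dbms)
executeSql(conn, sql)

# 2. Populate it: one cohort SQL file per row of CohortsToCreate.csv.
cohortsToCreate <- read.csv("inst/settings/CohortsToCreate.csv")
for (i in seq_len(nrow(cohortsToCreate))) {
  sql <- readSql(file.path("inst/sql/sql_server",
                           paste0(cohortsToCreate$name[i], ".sql")))
  sql <- render(sql,
                cdm_database_schema = cdmDatabaseSchema,
                vocabulary_database_schema = cdmDatabaseSchema,
                target_database_schema = cohortDatabaseSchema,
                target_cohort_table = cohortTable,
                target_cohort_id = cohortsToCreate$cohortId[i])
  sql <- translate(sql, targetDialect = connectionDetails$dbms)
  executeSql(conn, sql)
}

disconnect(conn)
```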

@krfeeney @schuemie I know you both are very knowledgeable about study packages.

Thanks!

When you run a CohortMethod analysis, it either reads from the COHORT table in the RESULTS schema (which can be populated by WebAPI/ATLAS), or you can have it read from a temporary, cohort-structured table of your own specification. If the cohorts don’t already exist, you can create them for the purposes of executing the estimation study, but they will not be exposed in ATLAS, because doing so will not populate the COHORT_DEFINITION table in the admin schema that ATLAS relies on for managing cohorts.
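
For illustration, something along these lines (connection details, schema names, and cohort definition ids are placeholders) points CohortMethod at a cohort table that ATLAS has already populated:

```r
library(CohortMethod)

# Placeholders throughout -- substitute your own connection info, schemas, and the
# cohort definition ids from ATLAS.
connectionDetails <- DatabaseConnector::createConnectionDetails(dbms = "postgresql",
                                                                server = "myserver/ohdsi",
                                                                user = "user",
                                                                password = "secret")

cohortMethodData <- getDbCohortMethodData(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "cdm",
  targetId = 1001,                    # target cohort_definition_id
  comparatorId = 1002,                # comparator cohort_definition_id
  outcomeIds = 2001,                  # outcome cohort_definition_id(s)
  exposureDatabaseSchema = "results", # schema holding the COHORT table
  exposureTable = "cohort",           # table populated by WebAPI/ATLAS
  outcomeDatabaseSchema = "results",
  outcomeTable = "cohort",
  covariateSettings = FeatureExtraction::createDefaultCovariateSettings()
)
```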

So, short answer: you can use ATLAS to create cohorts that CohortMethod can consume as input, but you shouldn’t run CohortMethod to create cohorts as output to serve as input to ATLAS.

Thanks, @Patrick_Ryan!

That makes sense. I find the visualizations in Atlas to be useful, so ideally we’d want Atlas to be the platform for running packages in the future…

I hear you. :smiley: Though the EvidenceExplorer piece in R Shiny is nifty… if you manage to install all the dependencies to get this package functional.

I agree with Patrick’s recommendation here… but I can’t help but add some wisdom from my “network studies” hat.

Let’s assume you’re getting a study package from the public ATLAS and it was designed by someone else. You really need to do some upfront cohort characterization to understand whether this study is appropriate for your data. You’ll have to import the JSON via the Utilities tab in your environment and save it in your Atlas. An assumption of the estimation package is that you’ve validated that your data set can support the cohorts you’re being asked to run. It won’t tell you if you’re about to load undersized cohorts into your model… so if you’re lazy (like me) and think, “I’ll run it and see what comes out…”, you may have it run for a really long time before you get a mysterious error about data frame size (see: CohortMethod: Error in data frame size when Creating cohortMethodData objects).

You should really do some characterization before you jump right into running the package. Using Atlas, I’d suggest loading the file into the Estimation tab and then using the Cohort Definitions tab to make sure you have adequate sample sizes.
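
If you want to sanity-check the counts from R instead, a quick query against the cohort table (connection details and schema/table names here are only examples) will tell you whether you’re about to feed undersized cohorts into the model:

```r
library(DatabaseConnector)

# Example connection and schema/table names -- adjust to wherever your cohorts live.
connectionDetails <- createConnectionDetails(dbms = "postgresql",
                                             server = "myserver/ohdsi",
                                             user = "user",
                                             password = "secret")
conn <- connect(connectionDetails)

sql <- "SELECT cohort_definition_id, COUNT(DISTINCT subject_id) AS persons
        FROM results.cohort
        GROUP BY cohort_definition_id;"
print(querySql(conn, sql))

disconnect(conn)
```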

I’m curious, @cukarthik, what kinds of visualizations are you looking for?

Thanks, @krfeeney. I was mainly referring to the cohort visualizations. Your walkthrough of how the study should be run is useful, and the characterizations should indeed be done beforehand. It’s good to know about that error :slight_smile:

Thanks!
