
IUD Study Updates

Hi Everyone,

Thank you very much to those of you who have expressed interest in the IUD study. We have decided to perform two sets of estimation analyses. The first set compares copper intrauterine device (Cu-IUD) to levonorgestrel intrauterine system (LNG-IUS) users with cohort definitions that have been designed for an EHR database. The PLE package for that analysis can be found here: http://atlas-demo.ohdsi.org/#/estimation/cca/262

For those interested in running cohort counts for feasibility, the cohort definitions can be found on public ATLAS (http://atlas-demo.ohdsi.org/#/cohortdefinitions).

Cohort #1771647 (“Cu-IUD”) is the copper IUD cohort: http://atlas-demo.ohdsi.org/#/cohortdefinition/1771647

Cohort #1771648 (“LNG-IUS”) is the levonorgestrel intrauterine system cohort: http://atlas-demo.ohdsi.org/#/cohortdefinition/1771648

Cohort #1771054 (“Alt High Grade Cervical neoplasm”) is the outcome cohort:
http://atlas-demo.ohdsi.org/#/cohortdefinition/1771054

Cohorts #1772370-75 and #1772392-95 are additional cohorts for sensitivity analyses.

The second analysis compares Cu-IUD to LNG-IUS users with definitions that have been designed for a claims database. The PLE package for that analysis can be found here: http://atlas-demo.ohdsi.org/#/estimation/cca/230.

Cohort #1772699 (“New users of copper IUD”) is the copper IUD cohort: http://atlas-demo.ohdsi.org/#/cohortdefinition/1772699

Cohort #1772698 (“New users of LNG IUD”) is the levonorgestrel intrauterine system cohort:
http://atlas-demo.ohdsi.org/#/cohortdefinition/1772698

The outcome cohort is the same. Cohorts #1772814-23 are additional cohorts for sensitivity analyses.

We intend to distribute an R package soon that consists of the PLE packages and automated non-standard characterizations. Thank you very much for your help and please let me know if I can provide you with any additional information.

Best,
Matt

@mattspotnitz Would you mind if I uploaded this package to the OHDSI Study Git repo?
This is an OHDSI study (i.e., a multi-institutional study), and I want OHDSI study leaders to follow our best practices (making the code reproducible and traceable).

3 Likes

Hi @SCYou, You are right. It does belong there. I wanted to get it ready before moving it over, and I’m adding a few other analyses to the package. Hopefully, we’ll be ready in a day or two. Afterwards, we’ll definitely move it over to the OHDSI study repo. In the interim, you can find the repo here (I still need to update some of the new cohort definitions).

Thank you @cukarthik for understanding this :slight_smile:

I’m egregiously late to this party :partying_face:

Guys… should we really be customizing definitions to the type of RWD we’re running the study against? Doesn’t that fly in the face of what OHDSI network studies are supposed to do?

We are recommending that all sites interested in participating try to run both sets of T/Cs. The original definitions were appropriate for the Columbia data, but on testing in a large US claims database they were found not to classify exposure correctly, so the alternative T/Cs were created. Those, when tested back at Columbia, did not perform consistently with the original T/C definitions. So, we have prepared two packages as a means of performing a sensitivity analysis around the exposure definition, to evaluate the robustness of the findings.

1 Like

You got it! Will be curious to see what we find across the network.

I’ve run the T/Cs of the IUD studies in Korean DBs and made some recommendations to @mattspotnitz and @cukarthik (as far as I remember).

By the way, I think we need a standardized way to run T/Cs across the data network for every OHDSI study.

2 Likes

What do you have in mind, my friend? Santa Claus :santa: is listening. I heard you’ve been very good this year! :smile:

1 Like

Isn’t this just the JSON definition from ATLAS? Of course, we would need to make it a generic standard, not tied to ATLAS.

Thank you @krfeeney

Actually, the current ATLAS-generated PLE and PLP packages generate a table with the counts of the cohorts in the study. So we can share this file first, before running the rest of the package.

100% agree.

Now, thinking out loud: we could set up a study package run with createCohorts = TRUE and everything else turned to FALSE. I’ve never thought to do this, but it would be a very simple feasibility test. :face_with_monocle:
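The "cohorts only" run above might look something like the following minimal R sketch. It assumes the package follows the usual OHDSI study-skeleton `execute()` interface; the package name (`IUDStudy`), schema names, and exact flag names here are placeholders, so check the actual package’s README before running.

```r
# Sketch of a cohort-counts-only feasibility run. Assumes the study package
# exposes the standard OHDSI study-skeleton execute() flags; names vary by
# package, so verify against the package documentation.
library(DatabaseConnector)

connectionDetails <- createConnectionDetails(
  dbms     = "postgresql",        # your DBMS
  server   = "localhost/ohdsi",   # your server
  user     = "user",
  password = "secret"
)

IUDStudy::execute(                         # hypothetical package name
  connectionDetails    = connectionDetails,
  cdmDatabaseSchema    = "cdm",            # schema holding the CDM data
  cohortDatabaseSchema = "scratch",        # schema you can write to
  cohortTable          = "iud_cohorts",
  createCohorts        = TRUE,             # build cohorts + count table only
  runAnalyses          = FALSE,            # skip the estimation analyses
  packageResults       = FALSE             # skip results packaging
)
```

With only `createCohorts = TRUE`, the run should stop after writing the cohort count table, which is exactly the artifact a site would share back for a feasibility check before committing to the full analysis.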

I do agree that it would be useful to have some easier way to do feasibility. When you receive just a study ZIP (preferably in a GitHub repo :wink:), there is a bit of a scavenger hunt to find the cohorts and throw them into ATLAS for a quick and dirty feasibility test. It gets more manageable as you become more familiar with the study package format.

You’re so right. A lot of challenges arise from small samples. Heck, I didn’t even know you could somehow screw up packaging your study cohorts until yesterday. It may also have been a hazard of that investigator designing in a very active public spot that anyone can muck with. :dizzy_face:

Other challenges arise from highly correlated covariates, often the result of not going up the ancestor-descendant relationships to find the most comprehensive ancestor to put into the list. But you never really find this out until you’re far into running the package. I wonder if there’s a way to do a correlation test before a full study is run?

Ok. I’ll stop for now. :smile: I wholeheartedly support any way to make evaluating T/C/O feasibility more straightforward.

@mattspotnitz Well done! @zhuk and I have some experience with a hysteroscopic contraceptive device study, so let me add my 2 cents here :smiley:

In the ‘Cu-IUD’ cohort for EHR data, you’re using the ‘IUD Placement Procedure’ concept set, defined as 4275113 “Insertion of intrauterine contraceptive device” including all of its descendants. The problem is that some of the descendants look questionable to me:

43020952 SNOMED Insertion of hormone releasing intrauterine contraceptive device
2110194 CPT4 Insertion of intrauterine device (IUD)

It looks like the first one should be excluded here, or even added to the comparator cohort.
I also have some concerns about the second: isn’t it used for the hormone-releasing method from time to time?

Even if these concept sets have been validated on particular datasets, or you’re excluding the hormone-releasing methods with inclusion criteria, I think they should be transparent for a network study. What do you think?
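For anyone who wants to check this on their own vocabulary, a quick way to surface stray descendants like 43020952 is to query the OMOP `concept_ancestor` table directly. This is a sketch, assuming a vocabulary schema named `vocab` and a PostgreSQL connection; adjust both for your environment.

```r
# Sketch: list every descendant that "include all descendants" on 4275113
# pulls in, so hormone-releasing insertion concepts can be spotted and
# excluded from the Cu-IUD concept set. Assumes schema "vocab" holds the
# OMOP vocabulary tables; connection details are placeholders.
library(DatabaseConnector)

connectionDetails <- createConnectionDetails(
  dbms = "postgresql", server = "localhost/ohdsi",
  user = "user", password = "secret"
)
conn <- connect(connectionDetails)

descendants <- querySql(conn, "
  SELECT c.concept_id, c.vocabulary_id, c.concept_name
  FROM   vocab.concept_ancestor ca
  JOIN   vocab.concept c
    ON   c.concept_id = ca.descendant_concept_id
  WHERE  ca.ancestor_concept_id = 4275113;")

disconnect(conn)
print(descendants)
```

Scanning the `CONCEPT_NAME` column of the result for words like “hormone” or “levonorgestrel” makes it easy to see whether the exposure concept set is accidentally capturing the comparator’s procedures.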

2 Likes

Congratulations on the publication of this valuable finding in a high-impact journal! @mattspotnitz

2 Likes

Thanks Chan!

1 Like

We have code ready for the network study and just created a PR to the ohdsi-studies repo, so it should be there soon. I really liked the cohort diagnostics that @schuemie and @Patrick_Ryan developed during the Barcelona study. We didn’t add it to the package since it’s under development, but we will leverage it to debug potential issues when sites run the study.

If anyone wants to try to run the study package before it gets to the ohdsi-studies repo, they can go to this repo to test things out. Thanks!

1 Like

PR accepted. :slight_smile:

1 Like

Thanks! :slight_smile: If you don’t mind doing another test run of this updated study package, it would be great.
