
OHDSI/Oxford Study-a-thon: Any data partners want to participate this week in two studies on knee replacement?


(Patrick Ryan) #1

Team:

This week @Daniel_Prieto is hosting 40 friends at the University of Oxford for an OHDSI study-a-thon. As we did at the OHDSI F2F in NYC earlier this year, the group proposed research ideas, voted on a favorite, and yesterday, we got started on taking a clinically important question and turning it into reliable evidence that can improve patients’ lives by informing medical decision-making.

The topic area selected is knee arthroplasty. We are conducting two sets of analyses:

The patient level prediction question is: Amongst patients with total or unicompartmental knee arthroplasty, which patients will go on to have adverse post-surgical outcomes (including post-operative infection, venous thromboembolism, surgical revision, hospital re-admission, or mortality)? We will be using multiple time-at-risk windows, including 30d, 90d, 1yr, and 5yr.

The population-level effect estimation question seeks to compare the risk of adverse post-surgical outcomes (including post-operative infection, venous thromboembolism, surgical revision, hospital re-admission, or mortality) between patients with total knee replacement and patients with unicompartmental knee replacement.

We will be conducting this study on as many databases as feasible. We know we’ll be using, at a minimum, UK EHRs, US claims, and US EHR data, but we welcome the rest of the OHDSI community to participate virtually alongside us. Our target is to have completed draft manuscripts prepared by the end of the week, so we can obtain appropriate approvals from our respective organizations and then submit to The Lancet.

Each day, we’ll post our progress, including cohort definitions, prediction design specifications, and estimation design specifications, on the OHDSI ATLAS instance, and we’ll put the R study package up on GitHub for anyone who wants to run it.

For now, here are the links to our draft exposure cohorts, which will be revised throughout the day:

Target: Patients with total knee replacement: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769719

Comparator: Patients with unicompartmental knee replacement:
http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769720

If you are interested in participating, would you mind running these cohorts to evaluate the feasibility?

Thanks in advance for your collaboration.


(Thomas Falconer) #2

@Patrick_Ryan, this sounds great! Just ran the exposure and comparator cohorts for feasibility here at Columbia and we have 2,014 patients in the target and 325 in the comparator. We’re looking forward to participating in the study!


(Patrick Ryan) #3

Thanks @thomasfalconer for running the initial feasibility cohort! We’re delighted to have you and Columbia on board with the study!

And as I discussed on the community call today, we’d be grateful for any other data partners who’d like to join the journey with us.


(Patrick Ryan) #4

An update from Day 2 at the OHDSI/Oxford Study-a-thon:

Today, @Daniel_Prieto and team finalized our exposure cohorts and all our outcome cohorts.

We’ve landed on two primary exposure cohorts (revised from the earlier posted cohort definitions based on clinical review and exploration of characterization results across multiple databases):

  1. Patients with total knee replacement: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769729

  2. Patients with unicompartmental knee replacement:
    http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769730

(as time permits, we may also run a sensitivity analysis that removes the inclusion criterion restricting to ‘no hip/feet/spine pathology’, because we noted this had a large impact in our US data)

We finalized definitions for 5 primary outcomes of interest:

  1. Venous thromboembolism events: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769735

  2. Post-operative infection events: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769733

  3. Revision of knee arthroplasty: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769732

  4. Re-admission after discharge from knee replacement: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769734

  5. All-cause mortality: http://www.ohdsi.org/web/atlas/#/cohortdefinition/1769731

(We also intend to explore an outcome around opioid use and length of stay)

We have designed our patient-level prediction study, defining the machine learning algorithms, covariates, and population settings. We used ATLAS 2.6 to generate the PLP R package, and have now kicked off the package at Janssen and Iqvia. Tomorrow, we’ll be reviewing the prediction results, and once we reach consensus on the models we want to move forward with, @jennareps will be posting a package that we’d like to ask the community to run to externally validate the trained models (much like we did at the 2018 OHDSI Symposium!).

After we complete the PLP study, we’ll be turning our attention to population-level effect estimation. It’s been a great treat to collaborate with so many amazing people here, from the UK, Netherlands, Belgium, Switzerland, Hungary, Spain and US, all focused on a common goal: to generate reliable evidence! We’re making great progress and I can’t wait to see what the rest of the week holds in store…


(Seng Chan You) #5

Sorry @Patrick_Ryan we’ve got no patients from EHR data of ours. I think we don’t have the procedure concept ids for Unicompartmental knee arthroplasty or total knee replacement…


(Patrick Ryan) #6

Thanks @SCYou for looking in your data! I appreciate your collaboration!


(Patrick Ryan) #7

Day 3 update from the OHDSI/Oxford study-a-thon:

Today our focus was patient-level prediction. Last night and this morning, we trained various models using the THIN database from Iqvia (UK EHR), the Iqvia US Ambulatory EHR database, and US claims data from Iqvia, MarketScan, and Optum. As a group, we then broke up into teams, each focused on a particular health outcome of interest, to evaluate model performance and explore the baseline characteristics to determine whether further model refinement was warranted.

The prediction problems of interest were:

  1. Amongst patients with total knee replacement, which patients experience a venous thromboembolism event within 90 days after surgery?

  2. Amongst patients with total knee replacement, which patients experience a post-operative infection event within 90 days after surgery?

  3. Amongst patients with total knee replacement, which patients experience a revision knee arthroplasty within the 5 years after surgery?

  4. Amongst patients with total knee replacement, which patients experience a hospital re-admission within the 90 days after surgery? Within 5 years after surgery?

  5. Amongst patients with total knee replacement, which patients will die within the 90 days after surgery?

From across this large array of models, the team selected a few that showed good performance and baseline characteristics with biologically plausible univariate associations with the outcome; these models are being packaged up for external validation across the OHDSI network.

Specifically, we seek to perform external validation for 3 clinical outcomes: Revision, Readmission, and Mortality. So here, @Daniel_Prieto and the rest of our Oxford team need your help!

@jennareps is creating the R packages for each outcome that can run against any OMOP CDM v5 instance. The first package, for Readmission, is already available on Github: https://github.com/OHDSI/StudyProtocolSandbox/tree/master/readmissionMdcdValidation.
We’ll be posting the two additional study packages shortly.

Each package will have the same functionality: the PLP external validation package will create the necessary target and outcome cohorts in a temporary table, apply the model that was trained elsewhere to your data, and then compute its performance. The resulting output is the external validation model performance, which includes the area under the ROC curve and calibration, as well as a descriptive summary of the baseline characteristics. There will be an ‘export’ folder that contains these outputs; if you’d like to participate in the study, you can share it back with @jennareps. As you’ll see, the export folder does not contain any patient-level data, so it’s safe to share; the results will then be included as part of the final resultset that will be made publicly available on the OHDSI Shiny Server (and included as part of the publication).
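For intuition about the headline discrimination metric the packages report: the area under the ROC curve equals the probability that a randomly chosen patient who experienced the outcome was assigned a higher predicted risk than a randomly chosen patient who did not. The actual packages compute this in R via PatientLevelPrediction; the stdlib-Python function and toy data below are purely illustrative:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of case/non-case pairs in which the case received the
    higher predicted risk, counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case and one non-case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted 90-day readmission risks and observed outcomes
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))
```

An AUC of 0.5 means the model discriminates no better than chance; 1.0 means every case outranks every non-case.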

Based on our preliminary results, it looks quite possible that we’ll have something from this prediction work that is meaningful to share with the clinical community, which could potentially improve medical decision-making amongst patients undergoing knee replacement surgery. So we appreciate anyone who is interested in joining this work with us by being an external validation data partner.

Tomorrow, a group of us will continue writing up the publication for our patient-level prediction study, while others will turn their attention to the design and implementation of population-level effect estimation studies that will examine the comparative effectiveness of total knee replacement vs. unicompartmental knee replacement. I’ll provide another update at the end of tomorrow with another network study opportunity, for anyone who is interested.

Patients are waiting…let’s generate some reliable evidence!


(Evan Minty) #8

Hi -

Thanks for organizing this! Have run these on Stanford’s Stride v8. Numbers are on the same order of magnitude as Columbia for target and comparator cohorts. Will look to execute the R packages as they become available.

Are the prediction studies being configured in public ATLAS? I’d be happy to contribute any design considerations (although we’re likely past that now).


(Patrick Ryan) #9

Day 4 update from the OHDSI-Oxford study-a-thon:

The patient-level prediction external validation packages are now available for the community to run if anyone is interested in participating (thanks already to @thomasfalconer and @Evan_Minty for agreeing to run this, and to @SCYou for attempting the initial cohorts) :

Revision: https://github.com/OHDSI/StudyProtocolSandbox/tree/master/TKRrevisionValidation

Readmission: https://github.com/OHDSI/StudyProtocolSandbox/tree/master/readmissionMdcdValidation

Mortality: https://github.com/OHDSI/StudyProtocolSandbox/tree/master/mortalityValidation

We’ve run these external validations against various databases within Janssen and Iqvia. The mortality results are particularly exciting, with very encouraging performance in both the US and the UK, so we’ve focused on these in our publication preparation.

Additionally, we designed our population-level effect estimation studies and began work on executing the analysis through the diagnostics. We pre-specified a collection of 39 negative control outcomes, and used them together with the 6 outcomes of interest to estimate effects across 3 primary times-at-risk: 60d, 1yr, and 5yr. The process has already been quite informative: our first pass failed diagnostics in the first US claims database, as we saw our propensity score model did not adequately balance covariates, and some of those unbalanced factors were known confounders for the outcomes of interest. That diagnostic helped us identify an additional inclusion criterion to add to our target and comparator cohorts to make our populations more comparable. We’ve kicked off diagnostics across a host of databases now, and will be reviewing the diagnostics as a team together tomorrow morning. If the propensity score distribution, covariate balance, and calibration plots all check out, then we’ll be unblinding the outcomes of interest and learning about the real-world effects of total vs. unicompartmental knee replacements! And we’ll be posting the study packages up on OHDSI’s Github repo for anyone who would like to replicate the study and contribute to the overall resultset.
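For anyone unfamiliar with the covariate-balance diagnostic mentioned above: balance is conventionally summarized by the standardized mean difference (SMD) of each covariate between the target and comparator cohorts, with |SMD| > 0.1 a common rule of thumb for meaningful imbalance. OHDSI’s CohortMethod computes this in R across thousands of covariates before and after propensity-score adjustment; the stdlib-Python sketch and toy data below only illustrate the formula for a single covariate:

```python
import math

def smd(x_target, x_comparator):
    """Standardized mean difference for one covariate: the difference in
    cohort means divided by the pooled standard deviation
    sqrt((var_target + var_comparator) / 2)."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):  # sample variance
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    pooled_sd = math.sqrt((var(x_target) + var(x_comparator)) / 2)
    return (mean(x_target) - mean(x_comparator)) / pooled_sd

# Toy binary covariate (e.g. a prior-condition flag) coded 0/1 per patient
target = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]        # 30% prevalence
comparator = [0, 0, 1, 0, 0, 0, 0, 0, 0, 1]    # 20% prevalence
print(round(smd(target, comparator), 3))       # above the 0.1 threshold
```

In a real balance check this is computed for every covariate; any covariate exceeding the threshold after adjustment signals that the propensity score model needs revisiting.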

Tomorrow is the last day of the OHDSI-Oxford Study-a-thon, and our goal is to complete drafts of our two manuscripts. @Daniel_Prieto has assembled a terrific team, so I believe this very ambitious target will be met. I’m extremely excited to see how our prospective prediction of the ongoing TOPKAT trial will play out, but we’ll only get credit for having made a real-world guess at the clinical efficacy if we get this paper published before the trial reports out in a couple of months. So the team will have to remain focused to make this a reality.


(Daniel Prieto-Alhambra) #10

Day 5 of the Study-a-thon!
We have consistent results for our population-level-estimation analysis, replicated in different data sources, and with similar and clinically meaningful results! Watch this space for a link to a Shiny interactive ‘report’!!

Thanks so much @Patrick_Ryan , @anthonysena , @jweave17 and @jennareps for an awesome week with amazing science and a lot of fun! I think we’ve converted 40 more scientists to OHDSI (see below for a pic of the ‘survivors’ left on Friday afternoon!)

More fun to come in the new year!!

I look forward to the OHDSI Europe symposium next year!!


The process for proposing and defining a network study
(Vojtech Huser) #11

The sponsor of the TOPKAT trial is the University of Oxford, so maybe you can talk them into waiting for you (allowing you to beat them, or even better, publish simultaneously in the same issue). https://clinicaltrials.gov/ct2/show/NCT01352247


(Daniel Prieto-Alhambra) #12

Not so easy @Vojtech_Huser . They have their own commitments, deadlines and deliverables. The good news, though, is that we’ve already got a good draft of the paper. We’ll submit before they unblind their results.


(Kristin Kostka, MPH) #13

Congratulations @Daniel_Prieto and team (@Patrick_Ryan @anthonysena @jweave17 @jennareps) for doing what we have been unable to do all year – generate a high quality manuscript on the immediate heels of a study-a-thon exercise! Thank you for showing that this can be done. Your work is an impressive step in moving the paradigm forward. I’m hopeful all future study-a-thons can generate high quality, meaningful publications!


(Daniel Prieto-Alhambra) #14

Thanks @krfeeney
So these are my reflections on things that might have helped to make this happen this week… and that maybe were different in previous Study-a-thons (although @Patrick_Ryan might want to correct me!):
1.- the activity took a whole working week, from Monday morning to Friday lunchtime. This is longer than usual, and gave the group plenty of time to work on writing
2.- we allocated sections (of the paper) to people, to give them ownership and accountability. This was done publicly in the room and in a shared Google Doc
3.- we had some people cross-fertilising between groups. This gave the writing task more consistency
And 4.- (this does not depend entirely on the organiser, but one can try…) we had keen students to lead on the final tweaks and submission

Still, I will not rest until the papers are sent off… and the deadline is Jan 10th!

