
[OHDSI COVID-19] Community update 23March2020: Reflections on OHDSI study-a-thons

Team:

We are only a couple of days away from the kickoff of the OHDSI COVID-19 study-a-thon.

At this point, at least some of you are wondering ‘what is a study-a-thon anyway?’. Well, back in a prior world when, if I said ‘corona’, you’d free associate ‘and lime’ instead of ‘virus’, I would have directed you to some past OHDSI events, where we’ve been trying to innovate on how to do open collaborative research by bringing together a bunch of talented scientists for 3 to 5 days, identifying one or more research questions around a common topic, and then dividing-and-conquering and sharing in the steps necessary to design and implement an analysis package, execute the study across a network of databases, and compile results for clinical review and interpretation. Pretty much the exact opposite of ‘social distancing’: a group of 30-50 people spending >12 hours a day in a confined space talking to each other, leaning over each other’s shoulders to touch laptop screens, and getting so intimate that we were literally finishing each other’s sentences…in Google Docs:).

The first study-a-thon we tried was held at Columbia back in May2018. Link to forum discussion: OHDSI Face-to-Face at Columbia May2-3: Community study-a-thon. We tried to come together over 2 days to design and execute a population-level effect estimation study to generate real-world evidence that would predict the results of an ongoing randomized trial comparing tofacitinib vs. adalimumab in rheumatoid arthritis. @BridgetWang was our guinea pig for trying out this study-a-thon idea, and brought a valuable problem that remains important to the rheumatology community.

What went well? We were able to split the estimation study design process into 4 independent components: 1) defining exposure cohorts (target and comparators), 2) defining outcome phenotypes, 3) selecting negative controls, and 4) designing the analysis specification; these components could then be successfully integrated into one ‘study package’ and executed across databases in the OHDSI network.
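For those curious what one of those integrated ‘study packages’ roughly looks like, here is a minimal R sketch of how the four design components can be bundled into a single function that every data partner runs locally against their own OMOP CDM database. The helper functions (loadCohortDefinitions, executeAnalyses) and file paths are hypothetical placeholders for illustration, not the actual API of the package we built; only the DatabaseConnector calls (connect, disconnect) are real.

    # Hypothetical sketch of a network study package: the four design components
    # (exposure cohorts, outcome phenotypes, negative controls, analysis spec)
    # bundled into one function that each data partner executes locally.
    library(DatabaseConnector)

    runStudy <- function(connectionDetails, cdmDatabaseSchema, outputFolder) {
      exposureCohorts  <- loadCohortDefinitions("inst/cohorts/exposures")   # hypothetical helper
      outcomeCohorts   <- loadCohortDefinitions("inst/cohorts/outcomes")    # hypothetical helper
      negativeControls <- read.csv("inst/settings/NegativeControls.csv")
      analysisSpec     <- readRDS("inst/settings/analysisSpecification.rds")

      connection <- connect(connectionDetails)
      on.exit(disconnect(connection))

      # Each site runs the same analyses against its own CDM and shares only
      # aggregate results back with the study coordinator.
      executeAnalyses(connection, cdmDatabaseSchema,                        # hypothetical helper
                      exposureCohorts, outcomeCohorts,
                      negativeControls, analysisSpec,
                      outputFolder = outputFolder)
    }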

What did we learn that we could improve? 1) Phenotype development is NOT a conceptual activity that can be done by writing out a design in a Word document…you need clinical and data domain expertise combined with data access to be able to iterate through the logic and concept sets to produce cohorts that correctly represent the populations of interest (OHDSI innovations that came out of this insight: PheValuator and CohortDiagnostics); 2) Data partners are not all at the same level of maturity with their data standardization, and more guidance is needed to get everyone following consistent rules (OHDSI innovations that came out of this insight: THEMIS and DataQualityDashboard); 3) Running distributed analytics is difficult, particularly with varying levels of skill across the wide range of technologies required in the OHDSI community: R/SQL/RDBMS/web server, etc. (OHDSI innovation that came out of this insight: integrate ‘study package’ generation into the ATLAS design component for population-level estimation and patient-level prediction, and provide better documentation for how to run studies with the Book of OHDSI); 4) No matter how talented the people on a team are, 2 days is just not enough time to go end-to-end from idea->study design->study execution across a network->results collated->publication written, and once the study-a-thon ends, it’s hard to maintain the focused efforts of the group on the shared problem (OHDSI innovation that came out of this insight: extend future OHDSI study-a-thons to more days so that future Bridgets aren’t left in study purgatory:))
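A quick aside on point 3: what makes those generated study packages portable across so many different database platforms is the OHDSI convention of writing SQL once against the OMOP CDM and letting SqlRender translate it for each site, with DatabaseConnector handling the connection itself. Here is a minimal sketch using those two real packages; the schema name and connection details are placeholders that each site would fill in with their own values.

    library(SqlRender)
    library(DatabaseConnector)

    # One parameterized query, written once against the OMOP CDM
    sql <- "SELECT COUNT(*) AS person_count FROM @cdm_schema.person;"

    # Each data partner fills in their schema and translates to their own SQL dialect
    rendered   <- render(sql, cdm_schema = "my_cdm_schema")
    translated <- translate(rendered, targetDialect = "postgresql")  # or "sql server", "redshift", ...

    # Placeholder connection details; every site supplies their own
    connectionDetails <- createConnectionDetails(dbms = "postgresql",
                                                 server = "localhost/ohdsi",
                                                 user = "me",
                                                 password = "secret")
    connection <- connect(connectionDetails)
    querySql(connection, translated)
    disconnect(connection)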

The second study-a-thon was a kickoff event for EHDEN, held at Oxford in Dec2018. Link to initial forum discussion: OHDSI/Oxford Study-a-thon: Any data partners want to participate this week in two studies on knee replacement?. For this event, we focused our attention on knee arthroplasty. In 4 days, we set our sights on completing two OHDSI network studies: 1) a population-level effect estimation study that compared the safety of total vs. unicompartmental knee replacement, and 2) the development and evaluation of a patient-level prediction model to determine which patients electing knee arthroplasty were most at risk for short-term adverse events. @edburn and @jweave17 led the estimation study, while @RossW and @jreps led the prediction efforts. @Rijnbeek and @Daniel_Prieto provided a nice overview of the event at the 2019 OHDSI Symposium: https://www.ohdsi.org/ohdsi-news-updates/ehden-lancet-study/; the results have been presented at scientific conferences, and the estimation study was published in Lancet Rheumatology late last year.

What went well? 1) Having multiple clinical experts in the topic, who understood the importance of the problem and the potential application of the evidence to clinical practice, proved invaluable (OHDSI innovation from this insight: get more clinical domain experts to participate, but also provide lead time for others to do literature review ahead of time to familiarize themselves with the problem); 2) Focusing on phenotyping first allowed us to develop and characterize populations and run feasibility across multiple data partners, which then enabled us to more efficiently design the estimation and prediction studies by re-using these cohort definitions; 3) Collaborative writing throughout the study lifecycle (before analysis with literature review, during analysis with methods for the protocol, after analysis for results reporting) is a more productive use of group effort than waiting until the end to start writing alone.

What did we learn that we could improve? 1) Trying to do ‘on-the-fly’ literature reviews at the same time as analysis design makes it difficult to incorporate lessons from past work into future activities (OHDSI innovation: start some lit review activities as homework preceding the study-a-thon event); 2) The last-mile problem continued: we got 80% of the way through our estimation study during the 4 days, but that last 20% took a heroic effort by Weaves and Ed over several weeks (OHDSI innovation: substantial improvements in the R Shiny apps within the estimation and prediction study packages to allow for collaborative exploration of results); 3) Developing large-scale prediction models is actually easier than crafting small-scale risk calculators (OHDSI innovation from this insight: the PatientLevelPrediction package has new diagnostics and cohort-based custom features to allow for more efficient model development and evaluation).

And just this January2020, we held another EHDEN study-a-thon in Barcelona. Here was the initial forum thread: Barcelona study-a-thon: call for collaborators for studies on treatment for rheumatoid arthritis. The disease area focus was rheumatoid arthritis, and our goal was to fill some of the evidence gaps that were highlighted in the latest RA guidelines from ACR and EULAR. In 5 days, we sought to 1) characterize real-world drug utilization patterns in RA to see if they were concordant with guideline recommendations, 2) estimate the comparative safety of first-line csDMARDs, and 3) predict which patients were at highest risk for known adverse events of RA treatments. Here’s a summary of the event: https://www.ohdsi.org/ohdsi-news-updates/2020-barcelona-studyathon/. And here’s a nice Q&A with Dani that provides greater context: https://www.ohdsi.org/ohdsi-news-updates/prieto-alhambra-qa/

What went well? 1) Upfront work to draft a preliminary study prospectus document gave greater clarity on target analyses and allowed us to secure participation from multiple databases that required pre-approval; 2) We had a good balance of clinical experts, data domain experts, and analysis/design experts, who appreciated each other’s strengths, accepted their knowledge gaps, and could complement each other through the study process. Having esteemed international rheumatology experts in the room who didn’t laugh at me each time I mispronounced hydroxychloroquine was nice:) 3) Having more data partners able to execute study packages added exponential value to our understanding - seeing patterns across the US, UK, Netherlands, Spain, Estonia, Germany, Japan, Belgium, and France dramatically increased our confidence in our findings; 4) OHDSI tools, both the web front-end design in ATLAS and the back-end study packages posted at https://github.com/OHDSI-studies, worked remarkably well with only a few minor hiccups, despite the range of different technical environments and data types; 5) We were phenomenally productive, finalizing 3 fully-specified protocols before executing analyses and completing 7 abstract submissions to EULAR/ISPE within 2 weeks of the study-a-thon completion!

What did we learn that we could improve? 1) Better coordination of shared writing activities to minimize reinventing the wheel (OHDSI innovation: assign a lead to each activity who can then assign tasks to team members, rather than everyone writing the same thing on different pages of a Google doc); 2) Focusing individuals on specific tasks (either aligned to a given study or to a given competency) might be more effective than everybody trying to be involved in everything; 3) Characterization analyses are not as seamless to execute across a distributed network as estimation and prediction (OHDSI innovation: we are actively developing an R study package for characterization to streamline the design and compilation processes); 4) The last mile continues: authoring a full manuscript on results requires more dedicated time and attention than any other step along the study lifecycle. More improvements in collaborative writing of background/discussion sections, and also better publication-ready graphics from network result sets, could assist this process.

So, why this little stroll down OHDSI’s memory lane?

Because this week, we’ll be embarking on our community’s fourth study-a-thon, and the stakes have never been higher. We’ve got to take everything we’ve learned from our past study-a-thon experiences, and everything that everyone has cumulatively learned from all of their activities in the OHDSI community - data standardization, open-source analytics, scientific best practices - together with everything we know about COVID-19, to generate as much real-world evidence as we possibly can to support and inform the public health efforts around the world to stem the tide of this pandemic. The OHDSI COVID-19 virtual study-a-thon is not just another activity to kill time while on lockdown; it’s the opportunity we all have as scientists to come together and apply our talents to make a real difference for humanity. It may sound trite to say ‘knowledge is power’, but right now, it seems that reliable evidence is going to be one of our most valuable weapons against this virus, and WE are a community in a position to provide some of this evidence. It’s also important to reinforce that the OHDSI COVID-19 study-a-thon is not an endpoint; it’s just the kick-off of a sustained community effort…unfortunately, on Mar29 when the study-a-thon wraps up, the virus will still be with us and there’ll remain many unanswered questions. But I hope this study-a-thon will serve as a catalyst for our community to innovate on how we work together collaboratively, to reinforce the critical importance of applying community data standards to timely population data across our international network, to advance our analytic capabilities and organizational competencies to design and execute network studies, and to persist through the last mile of evidence dissemination until all policy makers, medical product manufacturers, healthcare providers, and patients have the information they need to promote better health decisions and better care.

We have just closed registration for the OHDSI COVID-19 study-a-thon, and I’m extremely excited to see that >300 people from ~30 countries are joining this journey with us. Tomorrow, I’ll be providing more details about how exactly this study-a-thon will work logistically, and how exactly you can help. Until then, I encourage you to get some rest…it’s going to be a busy week!
