OHDSI MEETINGS THIS WEEK
Oncology WG - Outreach/Research Subgroup Meeting - Tuesday at 10am ET
Gold Standard Phenotype Library WG Meeting - Tuesday at 11am ET
OHDSI Community Call - No meeting this week; the next call will take place on Tuesday, January 14th at 12pm ET
CDM & Vocabulary WG Meeting - Tuesday at 1pm ET
PLE + PLP WG Meeting (Eastern Hemisphere) - Wednesday at 3pm in Hong Kong
Oncology WG - Development Subgroup Meeting - Wednesday at 10am ET
NLP WG Meeting - Wednesday at 2pm ET
Psychiatry WG Meeting - Thursday at 8am ET (Meeting number/access code: 962 271 701; meeting password: OHDSI)
PLE + PLP WG Meeting (Western Hemisphere) - Thursday at 9am PT
OMOP CDM Oncology WG - CDM/Vocabulary Subgroup Meeting - Thursday at 10am ET
EHR WG Meeting - Friday at 10am ET
China WG Meeting - Friday at 10am ET
OMOP CDM Oncology WG - Genomic Subgroup Meeting - Friday at 10am ET
You can find a full list of upcoming OHDSI meetings here:
Favorite 2019 OHDSI Papers - We want to create a thread of the community’s favorite or most inspiring OHDSI papers from the past 12 months. Was there a paper using OHDSI tools and standards that you found particularly inspiring? Please share it here: Favorite OHDSI Papers Of 2019
Two OHDSI studies published in The Lancet! Another OHDSI study has been published in The Lancet. The EHDEN team’s rheumatology paper is available here: https://www.thelancet.com/journals/lanrhe/article/PIIS2665-9913(19)30075-X/fulltext
If you haven’t yet read the LEGEND hypertension study in The Lancet, you can find it here:
For more info on the study, check out our press release:
2019 OHDSI Symposium Tutorial Videos - Videos from the 2019 OHDSI tutorials are officially online! You can access tutorial videos and materials here:
You cannot do a kindness too soon, for you never know how soon it will be too late.
- Ralph Waldo Emerson

COMMUNITY PUBLICATIONS
Understanding the nature and scope of clinical research commentaries in PubMed.
JR Rogers, H Mills, LV Grossman, A Goldstein and C Weng,
Journal of the American Medical Informatics Association (JAMIA), Dec 30 2019
Scientific commentaries are expected to play an important role in evidence appraisal, but it is unknown whether this expectation has been fulfilled. This study aims to better understand the role of scientific commentary in evidence appraisal. We queried PubMed for all clinical research articles with accompanying comments and extracted corresponding metadata. Five percent of clinical research studies (N = 130 629) received post-publication comments (N = 171 556), resulting in 178 882 comment-article pairings, with 90% published in the same journal. We obtained 5197 full-text comments for topic modeling and exploratory sentiment analysis. Topics were generally disease specific, with only a few topics relevant to the appraisal of studies, which were highly prevalent in letters. Of a random sample of 518 full-text comments, 67% had a supportive tone. Based on our results, published commentaries, with the exception of letters, most often highlight or endorse previous publications rather than serving as a prominent mechanism for critical appraisal.
Merging heterogeneous clinical data to enable knowledge discovery.
MG Seneviratne, MG Kahn and T Hernandez-Boussard,
Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 2019
The vision of precision medicine relies on the integration of large-scale clinical, molecular and environmental datasets. Data integration may be thought of along two axes: data fusion across institutions, and data fusion across modalities. Cross-institutional data sharing that maintains semantic integrity hinges on the adoption of data standards and a push toward ontology-driven integration. The goal should be the creation of query-able data repositories spanning primary and tertiary care providers, disease registries, research organizations etc. to produce rich longitudinal datasets. Cross-modality sharing involves the integration of multiple data streams, from structured EHR data (diagnosis codes, laboratory tests) to genomics, imaging, monitors and patient-generated data including wearable devices. This integration presents unique technical, semantic, and ethical challenges; however recent work suggests that multi-modal clinical data can significantly improve the performance of phenotyping and prediction algorithms, powering knowledge discovery at the patient- and population-level.
BEAGLE 3: Improved Performance, Scaling, and Usability for a High-Performance Computing Library for Statistical Phylogenetics
Adapting electronic health records-derived phenotypes to claims data: Lessons learned in using limited clinical data for phenotyping.
A Ostropolets, C Reich, P Ryan, N Shang, G Hripcsak and C Weng,
Journal of Biomedical Informatics, Dec 2019
Algorithms for identifying patients of interest from observational data must address missing and inaccurate data and should ideally achieve comparable performance on both administrative claims and electronic health record data. However, administrative claims data do not contain the necessary information to develop accurate algorithms for disorders that require laboratory results, and this omission can result in insensitive diagnostic code-based algorithms. In this paper, we tested our assertion that the performance of a diagnosis code-based algorithm for chronic kidney disorder (CKD) can be improved by adding other codes indirectly related to CKD (e.g., codes for dialysis, kidney transplant, or suspicious kidney disorders). Following best practices from Observational Health Data Sciences and Informatics (OHDSI), we adapted an electronic health record-based gold standard algorithm for CKD and then created algorithms that can be executed on administrative claims data and account for related data quality issues. We externally validated our algorithms on four electronic health record datasets in the OHDSI network. Compared to the algorithm that uses CKD diagnostic codes only, the positive predictive value of the algorithms that use additional codes was slightly higher (47.4% vs. 47.9-48.5%, respectively). The algorithms adapted from the gold standard algorithm can be used to infer chronic kidney disorder from administrative claims data. We succeeded in improving the generalizability and consistency of the CKD phenotypes by using data and vocabularies standardized across the OHDSI network, although performance variability across datasets remains. We showed that identifying and addressing coding and data heterogeneity can improve the performance of the algorithms.
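The core idea of the Ostropolets et al. abstract above — broadening a diagnosis-code-only phenotype with indirectly related codes such as dialysis or transplant procedures — can be sketched in a few lines. This is a hypothetical illustration only: the code labels below are illustrative placeholders, not real OMOP concept IDs, and the actual study algorithms (cohort definitions executed across the OHDSI network) are far more involved.

```python
# Hypothetical sketch of broadening a claims-based phenotype, as described
# in the abstract. Code labels are placeholders, not real OMOP concept IDs.

CKD_DIAGNOSIS_CODES = {"ckd_stage_3", "ckd_stage_4", "ckd_stage_5"}
RELATED_CODES = {"dialysis", "kidney_transplant", "suspicious_kidney_disorder"}

def ckd_phenotype(patient_codes, use_related=True):
    """Return True if a patient's claim codes satisfy the CKD phenotype.

    With use_related=False this is the diagnosis-code-only algorithm;
    with use_related=True, indirectly related codes also qualify.
    """
    codes = set(patient_codes)
    if codes & CKD_DIAGNOSIS_CODES:
        return True
    return use_related and bool(codes & RELATED_CODES)

# A patient whose claims show only a dialysis procedure is captured by the
# broadened algorithm but missed by the diagnosis-only version.
print(ckd_phenotype(["dialysis"], use_related=True))   # True
print(ckd_phenotype(["dialysis"], use_related=False))  # False
```

The broadened variant trades a wider net for the data-quality issues the paper discusses, which is why the authors validated it externally across four OHDSI network datasets.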