
OHDSI Informatics Study: DQD Lab Thresholds

I am seeking sites willing to run a new OHDSI network study (a methods study, subtype informatics study).

This study supports the DataQualityDashboard. The goal is data quality assessment (DQA) of laboratory values.

See study protocol here: https://github.com/vojtechhuser/DataQuality/tree/master/extras/protocol
Study github is here: https://github.com/vojtechhuser/DataQuality

The study was piloted on the DQD forum thread, but I created this new thread for updates to keep things better organized.

March 10 updates

Six sites (and their datasets) have been analyzed in the study so far.

Central Processing code is here: https://github.com/vojtechhuser/DataQuality/blob/master/extras/CentralProcessingDQDThresholds.R

The preliminary results are being summarized in an emerging AMIA 2020 abstract. (The AMIA deadline extension is helping.)

The study analyzes 5,943 distinct lab-unit pairs. A total of 1,350 lab-unit pairs have data from 2+ sites, allowing production of benchmark data.

The study is in great need of non-US datasets (e.g., from Europe, Asia, or elsewhere).

Please join the study! The package no longer depends on Achilles, and you only need the MEASUREMENT table (both CDM v5 and v6 should work).

Hi @Vojtech_Huser, if you need/want additional US data, Columbia would be happy to participate. Let me know!


The AMIA abstract was submitted. COVID-19 showed the additional importance of similar data quality checking (as for values) for the value_as_concept_id field. Yesterday, the ‘Thresholds and Values Study’ was updated to include this additional data quality focus. A new step, DataQuality::dashboardLabValueAsConceptID(), was added in addition to DataQuality::dashboardLabThresholds(). An updated protocol was published. I am looking for sites willing to execute v5.0 of the study. Existing sites should update the package and check that they are using v5.0.

The extracted data for dashboardLabValueAsConceptID are reported as percentages, so a site does not reveal the actual test counts, only the ratio of result values. For example, for the blood type group, you do not reveal the number of tests, just the fact that x% of results indicate the coded value AB negative. This should facilitate greater site participation in the study.
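For illustration only, a percentage-only extract of this kind could be derived roughly as in the sketch below. This is not the package's actual code, and the aggregation granularity (per measurement_concept_id) is an assumption.

# Sketch: collapse per-test counts of value_as_concept_id into percentages,
# so absolute test counts are never shared. Illustrative only.
library(dplyr)

value_as_concept_percentages <- function(measurement) {
  measurement %>%
    filter(!is.na(value_as_concept_id)) %>%
    count(measurement_concept_id, value_as_concept_id, name = "n") %>%
    group_by(measurement_concept_id) %>%
    mutate(pct = round(100 * n / sum(n), 1)) %>%
    ungroup() %>%
    select(measurement_concept_id, value_as_concept_id, pct)  # raw counts are dropped before export
}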

The instructions to run the study remain the same and can be found at https://github.com/vojtechhuser/DataQuality/#support-development-of-data-quality-dashboard-dqd
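For orientation, a site run might look roughly like the sketch below. The argument names (connectionDetails, cdmDatabaseSchema) are assumptions modeled on other OHDSI packages; the README linked above has the authoritative instructions.

# remotes::install_github("vojtechhuser/DataQuality")   # see the study README for exact setup
library(DataQuality)

connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "postgresql",
  server = "myserver/ohdsi",
  user = "ohdsi_user",
  password = Sys.getenv("OHDSI_PW"))

# Argument names below are assumptions; consult the README for the real signatures.
dashboardLabThresholds(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm")
dashboardLabValueAsConceptID(connectionDetails = connectionDetails, cdmDatabaseSchema = "cdm")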

The study interim results are described in the text below and at this link https://www.researchgate.net/publication/341734975_Data_Quality_Assessment_of_Laboratory_Data

Introduction
In the last decade, tools and knowledge bases for Data Quality Assessment (DQA) have emerged that identify implausible data rows in healthcare observational databases. The Data Quality Dashboard (DQD), a tool developed by the Observational Health Data Sciences and Informatics (OHDSI) community, contains an expert-driven knowledge base (KB) with maximum and minimum thresholds for checking the plausibility of values for laboratory tests coded in LOINC (Logical Observation Identifiers Names and Codes).
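As a rough illustration of how such min/max thresholds are applied (this is not the DQD implementation; the KB columns and example values are hypothetical):

library(dplyr)

# Hypothetical KB row: serum glucose (LOINC 2345-7) in mg/dL, made-up limits
threshold_kb <- data.frame(loinc = "2345-7", unit = "mg/dL",
                           min_plausible = 10, max_plausible = 2000)

# measurements is assumed to have columns loinc, unit, value
flag_implausible <- function(measurements, kb) {
  measurements %>%
    inner_join(kb, by = c("loinc", "unit")) %>%
    mutate(implausible = value < min_plausible | value > max_plausible)
}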

Methods
We first evaluated the existing DQD expert-driven thresholds KB in terms of coverage, development effort, and consistency. Next, we designed a network study to extract aggregated data (per lab test-unit pair) from multiple sites and used the data to inform the development of possible DQA methods. We designed and evaluated several methods for creating a data-driven KB for laboratory data DQA. Method 1 consisted of producing a KB with percentile values and other benchmark data and checking whether an evaluated dataset has significantly different percentile values. Method 2 consisted of splitting each threshold into two thresholds: an extreme threshold and a plausible threshold (for both min and max). The extreme threshold was motivated by the finding that some sites had extreme values present in the data (e.g., 9999; referred to as special-meaning-numerical-values or semantic values), and an extreme threshold was a possible method to identify such semantic values across many datasets and lab-unit pairs. It was easier to achieve consensus among DQD developers on the extreme-value methodology (based on a formula of 1st/99th percentile ± standard deviation) than on plausible thresholds. Finally, since we evaluated the expert-driven KB for consistency across convertible units, we also created and tested an additional knowledge base that facilitates unit conversions.
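As a concrete sketch of that extreme-threshold formula (illustrative only, not the study's reference code):

# Extreme thresholds from the formula discussed above:
# extreme_low  = 1st percentile  - standard deviation
# extreme_high = 99th percentile + standard deviation
extreme_thresholds <- function(values) {
  p <- quantile(values, probs = c(0.01, 0.99), na.rm = TRUE)
  s <- sd(values, na.rm = TRUE)
  c(extreme_low = unname(p[1]) - s, extreme_high = unname(p[2]) + s)
}

# Example: a simulated glucose distribution with one semantic value (9999) injected
set.seed(1)
glucose <- c(rnorm(1000, mean = 100, sd = 20), 9999)
extreme_thresholds(glucose)  # 9999 lies far above extreme_high and would be flagged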

Results and Discussion
The evaluation of the expert-driven KB showed the following: (1) OMOP datasets contain lab results not covered by the expert-driven KB (330 distinct lab tests present in the expert-driven KB; 5,943 lab tests observed in the network study); (2) development requires significant resources (several hours to review hundreds of thresholds, the need for expertise from multiple specialties, and expert consensus); (3) for some tests, threshold values do not agree when defined for multiple units (convertible into each other). In the OHDSI network study (study repository at github.com/vojtechhuser/DataQuality) we collected data from six OHDSI sites. The benchmark KB has data on 1,350 lab-unit pairs (data from 2+ sites; the full KB has 5,943 lab-unit pairs). The study repository contains the benchmark KB together with additional results (see extras/DqdResults https://github.com/vojtechhuser/DataQuality/tree/master/extras/DqdResults), such as a ranked list of common lab tests. The full data-driven KB contains data on an additional 3,993 distinct lab tests not covered by the existing expert-driven KB (12.1 times more) and can facilitate a more comprehensive DQA. The optimal DQA approach may involve a data-driven approach followed by (or combined with) expert review. Besides planned integration into the OHDSI DQD tool, our KB is self-standing and can be used with other data models (we have developed an R script for the PCORnet CDM). Limitations: Our results are limited by the fact that only six sites contributed data. Acknowledgement: This research was supported by the Intramural Research Program of the National Institutes of Health (NIH)/National Library of Medicine (NLM)/Lister Hill National Center for Biomedical Communications (LHNCBC).
