A common occurrence in my walk of life is the need to evaluate an observational database to determine whether it is ‘fit for purpose’ to support a particular analytical use case. More broadly, my team and I have been tasked with evaluating a new database to determine its anticipated value across multiple analytical use cases (e.g. clinical characterization, population-level estimation, patient-level prediction), across an array of exposures of interest (e.g. drugs within various therapeutic areas) and outcomes of interest (e.g. safety and/or effectiveness measures). We may use this ‘value assessment’ as a means of determining which organizations we want to engage in strategic partnerships or which databases we want to invest in through enterprise licensing agreements.
Currently, to my knowledge, there is no consensus approach for performing such a database evaluation, nor is it clear what requirements we can reasonably impose on a data holder to facilitate an evaluation, or what metrics would be sufficient to enable the valuation. Over the course of the last few years, I’ve participated in various evaluations, which have run the gamut from one extreme, where our evaluation was largely based on PowerPoint slides of self-reported accolades from a data vendor, to the other extreme, where a data holder offered us an evaluation period with an instance of their de-identified source data along with source documentation; during that time we ETLed the data into the OMOP CDM, executed ACHILLES to assess data quality, and examined the prevalence of conditions/drugs/procedures of interest.
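For concreteness, the hands-on end of that spectrum looked roughly like the sketch below. This is only an illustration: the connection details, credentials, and schema names are placeholders I’ve made up for this post, not the actual vendor environment.

```r
# Sketch only: assumes a PostgreSQL instance holding the ETLed OMOP CDM data.
# Server, user, password, and schema names are placeholders for illustration.
library(DatabaseConnector)
library(Achilles)

connectionDetails <- createConnectionDetails(
  dbms     = "postgresql",
  server   = "evaluation-server/cdm",
  user     = "reviewer",
  password = Sys.getenv("CDM_PASSWORD")
)

# Run the ACHILLES characterization and data quality analyses against the CDM
achilles(
  connectionDetails,
  cdmDatabaseSchema     = "cdm",
  resultsDatabaseSchema = "results",
  cdmVersion            = "5"
)
```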
From my perspective, there are a few different dimensions to think about when considering how to conduct a database evaluation for ‘fitness for use’:
- What are the analytic use case(s)? Is the data to be used for one drug/one outcome, or across a portfolio of exposures/diseases/outcomes? Is the data to be used for cross-sectional descriptive assessments or longitudinal evaluations? Is the primary motivator clinical characterization (observation), patient-level prediction (inference), or population-level effect estimation (causal inference)? How concerned am I with the validity of the evidence I’m looking for (is a rough ballpark OK, or am I looking for properly calibrated and highly precise estimates)?
- What data are required to define the entities of interest (populations/exposures/outcomes/covariates) in my analytic use cases?
  a. Types of data: conditions, drugs, procedures, measurements, observations, visits
  b. Extent of longitudinality required for a patient’s medical history to observe prior conditions/drugs/procedures
  c. Types of patients: all vs. young vs. old; healthy population vs. diseased; general population vs. specialty disease
- How much time do I have to conduct the evaluation? Do I have a day, a week, a month, a year?
- How much certainty must I obtain before I can make a recommendation?
- How much transparency will we have into the database under evaluation? Will I be given the full dataset? A random subset? The WhiteRabbit ScanReport? Schema/user documentation?
Independent of these dimensions, there may be objective measures or subjective assessments that I’d like to provide as part of my evaluation (a rough sketch of how a few of these might be computed against a CDM instance follows the list):
- Number of patients
- Duration of follow-up for each person (distribution of length of observation period)
- Density of data within each domain
- Period of time covered by database
- Prevalence of selected drugs/conditions/procedures/measurements of interest
- Meta-data about originating population and data capture process
- Source schema and source value frequency to estimate feasibility and resource burden for ETLing to OMOP CDM
- Plausibility assessments: do patients with a disease get appropriate treatments? Do patients with treatments have indications? Do age/gender-specific observations occur in implausible age/gender strata (e.g. prostate cancer screening in young women)?
- Concordance with external references
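To make a couple of those concrete, here is a rough sketch of how the person count, the observation-period length distribution, and one gender-based plausibility check might be pulled from a CDM instance. It assumes OMOP CDM v5 table names, a schema I’ve called “cdm” for illustration, and the placeholder connection details from the earlier sketch; the procedure concept id is likewise a stand-in.

```r
# Sketch only: assumes OMOP CDM v5 table/column names in a schema named "cdm",
# reusing the placeholder connectionDetails defined in the earlier sketch.
library(DatabaseConnector)
conn <- connect(connectionDetails)

# Number of patients
querySql(conn, "SELECT COUNT(*) AS person_count FROM cdm.person")

# Distribution of observation period length, in days
querySql(conn, "
  SELECT MIN(observation_period_end_date - observation_period_start_date) AS min_days,
         AVG(observation_period_end_date - observation_period_start_date) AS avg_days,
         MAX(observation_period_end_date - observation_period_start_date) AS max_days
  FROM cdm.observation_period")

# Plausibility check: a gender-specific procedure recorded for female patients
# (procedure_concept_id below is a placeholder; 8532 is the standard concept for female)
querySql(conn, "
  SELECT COUNT(*) AS implausible_records
  FROM cdm.procedure_occurrence po
  JOIN cdm.person p ON p.person_id = po.person_id
  WHERE po.procedure_concept_id = 4163261
    AND p.gender_concept_id = 8532")

disconnect(conn)
```

Obviously this only scratches the surface of what ACHILLES already computes, but it shows how little is needed once data are in the CDM.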
Some aspects of a database evaluation could logically follow the nice frameworks that @mgkahn et al. have set up for more global approaches to data quality assessment (read https://www.ncbi.nlm.nih.gov/pubmed/27713905 and https://www.ncbi.nlm.nih.gov/pubmed/25992385), but these frameworks largely take the perspective of a data holder or an associated affiliate with full and unconstrained access to the source data. In my circumstance, it’s generally the case that both time and level of access may be constrained, and I’m forward-looking into potential uses of the data rather than present-thinking about an immediate need.
So, all of that is a long ramble to ask the community: What do you all think makes for an appropriate database evaluation? What process do you follow? What information are you looking for? What tools do you use to perform your evaluation? What could the OHDSI community do to improve the quality, efficiency, and transparency of the data evaluation process?
Please reply to the thread here, and if you are interested in the discussion, join our OHDSI community call next week (see here: OHDSI Community Call 1Nov2016).