
ONC presentation

David McCallie is co-chair of the ONC Interoperability Standards Priorities Task Force (ONC is the US Office of the National Coordinator for Health Information Technology). He has asked me to present on OHDSI and interoperability this Friday, April 16. Obviously our HL7-OHDSI partnership is relevant, but this is also an opportunity to deliver a message to ONC. I was planning to talk about our approach to standards, our progress, and thoughts about ONC and observational research. Please let me know if there are points you think we should make. Thanks.


Hi George, please send the link to your presentation to David McCallie.

George,

The main question I would ask of David McCallie is: how can we fund more high-quality, Apache-licensed works such as OHDSI? In health care we face significant market failures, and just as with kidney donation or physician residency placement, we need to do something about the failures we see in the market for high-quality medical software. How could grants, including SBIR grants, be better used to support open works like OHDSI, where there are cost-sharing relationships among market participants?

To me, the most critical aspect of OHDSI is the user-driven open source culture, based upon an Apache license. It brings with it a humble yet productive de facto standard that truly serves its user community. Being free of charge is, of course, lovely. But most critically, it is the unfettered access to source code, and the legal right to make and distribute modifications, that makes the OHDSI community so productive and focused on the needs of its users. This is an essential freedom, and the “right to fork”, even when not exercised, is what drives a real sense of interoperability that transcends de jure standardization among competing proprietary implementations. Even on its worst days, OHDSI is far more transparent and interoperable, out of the gate, than approaches built on closed-source software that does not permit the construction and distribution of derivative works. I’m especially impressed by the commercial firms in the OHDSI ecosystem that value this freedom. This approach, to me, is something to be cherished and a bright spot to be developed by the health care community: OHDSI is a stand-out success story.

HL7 has very different incentives and a very different commercial culture than OHDSI. I know there are some great efforts and lovely people in the HL7 community, but I question their presumed market design. HL7 rests on the premise that you can separate a de jure specification from the implementation, and that a competitive market between implementations is the key to lowering the costs of software. History has not been kind to this view of the world. We had the same idea with XML & Java (write once, run anywhere), and before that with any number of industry standardization efforts such as CORBA. From IBM in the 70s/80s, to Microsoft in the 90s/00s, and now on to Amazon & Google, what we see is the opposite: de facto standardization is king. To foster “interoperability” between proprietary walled gardens, de jure standards must be configurable and complex, yet these same forces push directly against interoperability. In Microsoft’s legal case two decades ago, the internal memos literally spoke of “embrace, extend, and extinguish” with respect to web standards. While these standards often start out with encouraging open source implementations, over time they decay as those implementations are replaced with proprietary alternatives. This isn’t about the intent of market participants; it’s about the entire market design: as implementations compete in the marketplace, they must differentiate from each other, and to do that, they must extend or deviate from the specification to remain nimble and responsive to user needs. In established markets, de jure standardization efforts are actually anti-competitive moats: they slow market responsiveness and hinder market entry by requiring large, up-front investments. Perhaps unintentionally, this creates a significant barrier to entry for smaller commercial firms, reducing the very competition the market design intends to foster. It’s for these reasons that I’m skeptical that FHIR will produce the deep interoperability that it promises… unless the market design is addressed.

This brings me to the challenge of OHDSI and HL7 collaboration. Which culture will win? Will HL7 focus more on building viable, heavily adopted open source de facto implementations (where the “standard” is a trailing documentation artifact), or will it pull OHDSI into the de jure standardization market design? This, to me, is the risk to the OHDSI community: that by being brought into the HL7 fold, it becomes another de jure standard, disconnected from its collaborative community and its de facto open source implementation. That path has the potential to slow OHDSI’s development and community acceptance, all while raising development costs so that only larger players can afford to participate.

Whether it is aware of it or not, the NIH puts its thumb on the market-design scale. For example, the NIH could ask that 10% of all grant money for researchers be distributed to open source (OSI-licensed) projects that the researcher uses and in which the grantee has no financial interest. This would be a game changer for sustaining viable ecosystems. Something like this could be used to fund developers, in arrears, who have a track record of making targeted improvements to the OHDSI (and HL7) ecosystems used by the researchers. In this market design, there is still economic competition: people’s reputations are funded based on how well they advance the open source works researchers depend upon. I would note that in this design, developers aren’t starting from scratch; they are drawn into a dialog with other developers who serve medical research. Rather than building completely new works, they have a strong incentive to gradually improve existing works (and to provide a clean upgrade path). If the NIH wants interoperability, I believe this is a low-cost, high-value path forward.

Conversely, I believe the NIH currently puts its thumb on the scale in ways that shun the open source model of the OHDSI community. When you apply for an NIH SBIR grant, you often have to explain your business model: how you’re going to build a “defensible moat”, obtain patent licensing, and/or secure some sort of “renewing revenue stream”. This leads to completely different incentives, ones which foster the creation of proprietary walled gardens. Under this more traditional market design, the commercial victors are the ones who don’t deliver interoperability, because if you do, you can easily lose your user community to competitors who use your openness against you. Currently, suggesting open source licensing is a red flag for SBIR reviewers. The NIH could instead take a chance here, and let new businesses pursue the sort of successful open source commercial collaborations and cost-sharing methods used by the OHDSI community. Perhaps the worst outcome of a failed open source business… is better Apache-licensed community software. Dollar for dollar, while the upside may be less likely to create the next blockbuster, open source SBIR ventures advance the cause that the NIH really seeks: real, grass-roots interoperability through de facto implementations. To guide reviewers, the NIH SBIR grant criteria might ask them to consider what value is provided to the community even if the commercial market does not materialize as expected.

Sorry for the long missive. I hope it helps. I’m exceptionally hopeful about OHDSI’s proven approach to building a successful community. I’m also delighted to help when I am financially able to do so.

Clark


George-

These are all probably already on your radar, but the HL7-OHDSI partnership could be very helpful for quality, safety, value-based-payment, and similar metrics and benchmarking.

  • NCQA (Ben Hamlin) has already shown that ambulatory quality measures specified as eCQMs can be implemented more robustly in OMOP than via direct FHIR-eCQM conversions (a small illustrative sketch of such numerator/denominator logic follows this list).
  • Stanford (Nigam Shah) has also shown that OMOP can implement HEDIS measures.
  • Several groups have shown that OMOP can implement the grouper logic used for risk adjustment, such as the CMS/Medicare and HHS/Commercial Hierarchical Condition Category (HCC) logic.
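
To make the “phenotypes as numerators and denominators” idea concrete, here is a minimal, hypothetical sketch of a HEDIS-style measure computed directly against OMOP CDM tables (condition_occurrence and measurement). The concept IDs, the sqlite3 connection, and the measure logic itself are illustrative assumptions rather than a vetted NCQA or OHDSI definition; a real implementation would use curated concept sets and cohort definitions (e.g. built in ATLAS) instead of hard-coded IDs.

```python
# Illustrative only: a hypothetical "HbA1c testing among diabetics" measure
# expressed as denominator/numerator counts over OMOP CDM v5 tables.
# Concept IDs and the sqlite3 CDM extract are placeholder assumptions.
import sqlite3  # stand-in driver; production CDMs usually live in Postgres, SQL Server, etc.

DIABETES_CONCEPT_IDS = (201826,)   # placeholder concept set: type 2 diabetes mellitus
HBA1C_CONCEPT_IDS = (3004410,)     # placeholder concept set: hemoglobin A1c measurement

SQL = """
WITH denominator AS (
    SELECT DISTINCT person_id
    FROM condition_occurrence
    WHERE condition_concept_id IN ({diabetes})
      AND condition_start_date <= :period_end
),
numerator AS (
    SELECT DISTINCT d.person_id
    FROM denominator d
    JOIN measurement m ON m.person_id = d.person_id
    WHERE m.measurement_concept_id IN ({hba1c})
      AND m.measurement_date BETWEEN :period_start AND :period_end
)
SELECT
    (SELECT COUNT(*) FROM denominator) AS denominator_count,
    (SELECT COUNT(*) FROM numerator)   AS numerator_count
""".format(
    diabetes=",".join(str(c) for c in DIABETES_CONCEPT_IDS),
    hba1c=",".join(str(c) for c in HBA1C_CONCEPT_IDS),
)


def measure_counts(conn, period_start, period_end):
    """Return (denominator, numerator) for the illustrative measure over one measurement year."""
    row = conn.execute(SQL, {"period_start": period_start, "period_end": period_end}).fetchone()
    return row[0], row[1]


if __name__ == "__main__":
    conn = sqlite3.connect("cdm.sqlite")  # hypothetical CDM extract
    denom, numer = measure_counts(conn, "2023-01-01", "2023-12-31")
    print(f"denominator={denom}, numerator={numer}, rate={numer / max(denom, 1):.1%}")
```

The point of the sketch is that once the denominator and numerator phenotypes are agreed upon, the same query runs unchanged on any CDM-conformant database, which is what makes cross-site measurement and benchmarking cheap.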

So, if ONC, NCQA, AHRQ, CMS, and HHS could endorse (or ideally certify) OHDSI definitions for those HEDIS, HCC, quality-based-payment, and similar measures, there could be additional benefits, including:

  • Data aggregators (e.g. HIEs, state HIEs, hospital systems, or large payers) could process those measures to identify and help remediate gaps in clinical quality.
  • Given the interoperability with FHIR, it would be easier for patients to fill data gaps by authorizing sharing of their clinical data (e.g. via BlueButton, CommonWell, Seqster, etc.).
  • It would be easier to share those measures back with patients in a standard way, supporting information-blocking compliance, and to let them know when they are in the numerators and denominators of such measures. This can let them correct data errors (e.g. if they should be considered permanent exclusions for certain HEDIS measures).

But perhaps even more importantly, if there is large uptake of OMOP (especially among HIEs and all-payer databases), you could get:

  • More robust benchmarking (which is used for risk adjustment in quality scores and payment models). Currently, hospitals and plans purchase such benchmarking from vendors like Vizient and Optum (or get it from CMS), but there are known limitations to the risk modeling that OHDSI tools could help address.
  • Improved ability to assess the impact of quality improvement and payment model design changes on quality, safety, and financial outcomes (e.g. for NCQA, AHRQ, CMS, CMMI), using the full HADES suite.
  • The ability to do what-if analyses of proposed changes to risk/payment models and benchmarking.

All of these could take significant administrative cost out of total national healthcare spending. Significant money goes into purchases from certified vendors for HEDIS, VBP, and similar programs, plus the hospital-, plan-, and other entity-specific IT costs to support them, and the CMS/HHS costs to audit all of that. If the Feds were to endorse the use of OHDSI for those metrics (e.g. phenotypes for numerators and denominators), many of those costs could be reduced. There could be standard dashboards to track how entities are doing compared to benchmark and where they should focus for maximum value. There could be standard predictive models for the same, plus more robust means to assess the impact of initiatives to drive quality improvement. Quality- and value-improvement efforts proven in one region could be rapidly and cost-effectively replicated in other areas. All of this would align well with efforts to drive improvements in the Quintuple Aim (simultaneously improving quality, cost effectiveness, patient satisfaction, provider satisfaction, and health equity).

Lastly, OMOP/OHDSI could be a strong framework for addressing the data availability and standardization issues that are pain points for certain quality and safety improvement efforts. As one example, AHRQ has funded an effort to improve the Operational Measurement of Diagnostic Safety (e.g. understanding and mitigating delays and errors in diagnosis). There are data elements that would benefit from collection in a standard way, but which are typically collected via ad-hoc surveys, in commercial safety monitoring tools, or in REDCap databases. If there were workgroups to identify the shared needs and work them through HL7/OHDSI, there would be a standard way to further drive such quality and safety improvement efforts, while also supporting large-scale benchmarking (as utilization of such new content grows).

-Tom


Hello @hripcsa,

Not sure what the remit of the task force is exactly, but the previous responses have already added a lot of detail on the role of OMOP/OHDSI in the wider healthcare interoperability ecosystem for research, quality, and outcomes monitoring purposes. So here are just some pointers to materials that may be helpful for reference.

In EHDEN, we’ve written a report that highlights the relationship between OMOP and other data models such as FHIR, CDISC, i2b2, and openEHR, as well as further opportunities to improve FAIRness in OHDSI.

Also, an angle that I like to use to present OHDSI (especially network studies and studyathons) is as a key enabler of open science in medicine, as detailed, for example, in your LEGEND paper.

Greetings,

Kees
