Since we are using OMOP as a research repository and not only for a dedicated study, we are looking for a way to update vocabularies regularly and automatically. Are there any plans to provide another interface besides the CSV download, e.g. via REST, to get the vocabularies directly into our OMOP environment? Thx. Cheers Ines
I second this motion! My organization and I would also be very interested in a REST interface to Athena, because we also would love regular (i.e., weekly) and automated updates to the vocabulary tables.
Hi Ines and Tim, I absolutely understand why you are requesting this.
At this time, Athena does not provide that kind of service and as far as I know a development request for this has not been put on the development roadmap.
How about we go through an exercise to collect the requirements from you (what do you think is necessary to perform your automated vocabulary update? what infrastructure would such a process have to be triggered from?).
With this more specific information we could put together a feature request that would have to be brought to the attention of the OHDSI board for decision.
I am very open to meeting for a requirements engineering session!
Cheers ~ Mik
Tagging @mgkahn @anoop.yamsani
@mik – Thanks so much for your willingness to entertain proposals. This is on my to-do list.
Some initial half-baked thoughts:
The API should support bi-temporal versioning. That is, the ability to GET concepts from the API based on their valid start/end dates or based on their repository create/update dates. We should also be able to query based on the ATHENA version number, once such a versioning system has been established.
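A bi-temporal query could look something like the following sketch. The base URL, endpoint path, and parameter names are all hypothetical (no such API exists yet); the point is only to show the two independent time axes plus a release identifier:

```python
from urllib.parse import urlencode

# Hypothetical base URL -- nothing here is a real Athena endpoint; it only
# illustrates the shape of a bi-temporal query.
BASE = "https://athena.ohdsi.org/api/v1/concepts"

def concept_query(valid_at=None, recorded_at=None, athena_version=None):
    """Build a GET URL filtering concepts along both time axes:
    - valid_at:       the real-world validity date (valid_start/end dates)
    - recorded_at:    the repository create/update date
    - athena_version: a release identifier, once such a system exists
    """
    params = {}
    if valid_at:
        params["valid_at"] = valid_at
    if recorded_at:
        params["recorded_at"] = recorded_at
    if athena_version:
        params["release"] = athena_version
    return f"{BASE}?{urlencode(params)}"

url = concept_query(valid_at="2020-01-01", recorded_at="2020-06-30")
print(url)
```

The two dates are deliberately separate parameters: "what was valid on date X" and "what did the repository believe on date Y" are different questions, and an automated updater needs both.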
I think the JSON object for a given concept should include its associated records in the other OMOP vocabulary tables.
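Such a concept payload might look like the sketch below. The field names follow the CDM vocabulary tables, but the nesting of associated records inside the concept object is my own assumption about how the API could bundle them:

```python
import json

# Hypothetical payload shape: one concept plus its associated records from
# other vocabulary tables, embedded as nested arrays. Field names follow
# the OMOP CDM vocabulary tables; the nesting itself is an assumption.
concept = {
    "concept_id": 1125315,          # RxNorm Acetaminophen, as an example
    "concept_name": "Acetaminophen",
    "domain_id": "Drug",
    "vocabulary_id": "RxNorm",
    "concept_class_id": "Ingredient",
    "standard_concept": "S",
    "concept_code": "161",
    "valid_start_date": "1970-01-01",
    "valid_end_date": "2099-12-31",
    "concept_relationship": [
        {"relationship_id": "Maps to", "concept_id_2": 1125315},
    ],
    "concept_synonym": [
        {"concept_synonym_name": "Paracetamol", "language_concept_id": 4180186},
    ],
}
print(json.dumps(concept, indent=2))
```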
I think there is an opportunity to replace the `concept_class` (and similar) physical tables with database views. Why? Because every record in these tables also has a one-for-one corresponding record in the OMOP `concept` table. (Yes, these records are stored twice!) Reducing the number of physical vocabulary tables would greatly simplify an ATHENA RESTful API, with no loss of functionality.
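As a minimal sketch of the view idea, using SQLite and a stripped-down `concept` table. The join condition (matching on `vocabulary_id = 'Concept Class'`) and the column mapping are my assumptions about how `concept_class` rows are mirrored in `concept`; the real CDM DDL should be checked before relying on them:

```python
import sqlite3

# Sketch: rebuild concept_class as a view over concept, since every
# concept_class row is (by assumption) mirrored by a concept row with
# vocabulary_id = 'Concept Class'. Verify against the real CDM DDL.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE concept (
        concept_id    INTEGER PRIMARY KEY,
        concept_name  TEXT,
        vocabulary_id TEXT
    )
""")
# Sample row with an arbitrary id, standing in for a concept-class concept.
con.execute("INSERT INTO concept VALUES (101, 'Ingredient', 'Concept Class')")

# The view replaces the physical concept_class table. Mapping the id and
# name both from concept_name is an assumption for this sketch.
con.execute("""
    CREATE VIEW concept_class AS
    SELECT concept_name AS concept_class_id,
           concept_name AS concept_class_name,
           concept_id   AS concept_class_concept_id
    FROM concept
    WHERE vocabulary_id = 'Concept Class'
""")
rows = con.execute("SELECT * FROM concept_class").fetchall()
print(rows)
```

Consumers querying `concept_class` see the same columns as before, but there is only one physical copy of each record.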
At one point, I also questioned whether the `concept_ancestor` table could be merged with `concept_relationship` (as long as the `relationship` records were defined appropriately). I must confess I haven’t had the chance to parse @Dymshyts’s response in detail to understand the nuances (Missing RxNorm mappings for cholesterol medications).
More to come…
One could think about ATHENA releases in a more declarative manner, where each release specifies a set of constraints on an OHDSI database, typically involving (but perhaps not limited to) concept tables.
A separate program then examines an existing OHDSI database, and provides several bits of functionality: a) produce a report of deviations from the constraints, especially those deviations which have corresponding patient data that references them, b) add/update/remove concept definitions based upon the differences detected, c) for any conflicting configuration, produce a report of dependent patient data which would need to be examined to resolve the conflicts.
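A toy version of the program described above, covering steps (a) through (c) against an in-memory stand-in for an OHDSI database. The constraint format, ids, and data layout are invented purely for illustration:

```python
# Toy "declarative release" check: a release declares the set of concepts
# that should exist; we report deviations in an existing database.
# The constraint format, ids, and layout are invented for illustration.
declared_release = {
    1001: "Concept A",
    1002: "Concept B",
}

existing_db = {
    1001: "Concept A",
    1003: "Retired concept",  # present locally, absent from the release
}

# Concepts referenced by patient data; conflicts here need manual review.
referenced_by_patient_data = {1003}

def report_deviations(declared, existing, referenced):
    missing = {cid for cid in declared if cid not in existing}   # (b) to add
    extra = {cid for cid in existing if cid not in declared}     # (a) deviations
    needs_review = extra & referenced                            # (c) conflicts
    return {"missing": missing, "extra": extra, "needs_review": needs_review}

report = report_deviations(declared_release, existing_db,
                           referenced_by_patient_data)
print(report)
```

Running the report before applying changes is what makes the approach "declarative": the release says what should be true, and the tool shows the delta first.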
The advantage of this approach is that it would permit local customization as needed by a given community, and it could provide a report on constraint variations before they are applied. This general declarative (“make it so”) approach is popularized by Ansible, for those doing system configuration.
I think the main impediment is resources. Somebody needs to build this. And in an Open Source environment - if you need something and it isn’t there - it’s your job.
What’s that, @quinnt?
Correct. The reason for that is to support an information model where everything is organized through concepts. For an old-fashioned RDBMS model that’s awkward. But right now these constitute 100% of all CDM instances. So, probably a bit too early to drop the reference tables.
True. You could have “Ancestor of” and “Descendant of” relationships. But where do you want to put the `min_levels_of_separation` and `max_levels_of_separation`?
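One hypothetical answer to that question: carry the separation levels as optional attributes on the relationship record itself, populated only for ancestry rows. This sketch shows the shape of such a merged record, not a proposal for the actual CDM DDL:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical merged record: a concept_relationship-style row that also
# carries concept_ancestor's level columns. Attribute names mimic the CDM;
# the merged structure itself is speculative.
@dataclass
class RelationshipRecord:
    concept_id_1: int
    concept_id_2: int
    relationship_id: str
    # Only meaningful for "Ancestor of" / "Descendant of" rows; None otherwise.
    min_levels_of_separation: Optional[int] = None
    max_levels_of_separation: Optional[int] = None

# Arbitrary sample ids, not real concepts.
rec = RelationshipRecord(
    concept_id_1=1001,
    concept_id_2=1002,
    relationship_id="Ancestor of",
    min_levels_of_separation=1,
    max_levels_of_separation=3,
)
print(rec)
```

The cost is two nullable columns on every non-ancestry relationship row, which is part of why the tables are separate today.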
True. That would be a “database updater”. The question is what is more effective: This updater, or a simple refresh of the ETL. The latter has the advantage that you don’t need to encode equivalent functionality twice, avoiding conflicts and contradictions.
But again, even if we want an updater, somebody needs to code it up.