The requirements could say more about how to assess the quality of a given computable phenotype.
The scenario "Accessing Phenotypes with Known Performance for Predictive Modeling" could be improved a bit.
For example, the concept of sensitivity for a given computable definition (and all other related terms).
The document could try to define those better. If my IsSmoker phenotype is declared to be a Gold Standard Phenotype Algorithm but has a sensitivity of only x%, I may still not use it, since I require at least y% sensitivity.
@SCYou, you’re right. For a time, we were using * to denote required fields, but it looks like those didn’t make it into this draft. I’ll be sure to add that back in on the next one.
For validation, the idea is that you would only enter the four true/false positive/negative values because these are the fundamental building blocks for all other metrics. From these, we can derive everything else like the total number of cases, sensitivity, specificity, PPV, F1 score, accuracy, etc. In the case of a chart review, you might not always have values for the true/false negative cells (i.e. looking at charts for those whom the algorithm did not choose). That’s ok. The values for those cells would just be 0 for that particular validation set.
We’ve proposed having a form where you’d have cells or boxes to put those values in prior to submitting the form.
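To make the derivation concrete, here is a minimal sketch (in Python purely for illustration; the actual library tooling would live elsewhere, and the function name and return structure are hypothetical) of how the other metrics fall out of the four raw counts:

```python
def derive_metrics(tp, fp, tn, fn):
    """Derive common performance metrics from the four raw
    confusion-matrix counts submitted with a validation set."""
    total = tp + fp + tn + fn
    sensitivity = tp / (tp + fn) if (tp + fn) else None  # a.k.a. recall
    specificity = tn / (tn + fp) if (tn + fp) else None
    ppv = tp / (tp + fp) if (tp + fp) else None          # a.k.a. precision
    npv = tn / (tn + fn) if (tn + fn) else None
    accuracy = (tp + tn) / total if total else None
    f1 = (2 * ppv * sensitivity / (ppv + sensitivity)
          if ppv and sensitivity else None)
    return {"total": total, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv,
            "accuracy": accuracy, "f1": f1}
```

For a chart review where the true/false negative cells were not reviewed, you would pass 0 for `tn` and `fn`, and metrics that depend on them (such as NPV) simply come back undefined.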
@Vojtech_Huser, I’m not sure I understand this point here. Sensitivity is a measure of how well any classification algorithm performs, regardless of whether it is a computable or rule-based phenotype, or even another algorithm entirely.
If you’re referring to how the sensitivity for a computable phenotype is obtained, I’ll defer to @Juan_Banda regarding APHRODITE’s internal validation procedures and @jswerdel for his work on PheValuator to get at sensitivity for any phenotype algorithm.
Indeed! For exactly that reason, the “Gold Standard” portion of the library doesn’t prevent cohort definitions from entering due to their metric values. A sensitivity of x% may be good enough for Person A but not for Person B, and there will never be a “one size fits all” solution for every phenotype. That’s why one of the principles of the library is to make searchability a priority; once you choose a Book (phenotype), you can see all of the Chapters (cohort definitions) that are available to you. A user can sort them by any metric, filter them, and otherwise organize them to find what works best for his/her particular use case.
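To illustrate that sort/filter idea with entirely hypothetical data (Python used only as a sketch of the behavior, not of the viewer's implementation):

```python
# Hypothetical chapters (cohort definitions) for one book (phenotype).
chapters = [
    {"name": "IsSmoker v1", "sensitivity": 0.62, "ppv": 0.91},
    {"name": "IsSmoker v2", "sensitivity": 0.85, "ppv": 0.78},
    {"name": "IsSmoker v3", "sensitivity": 0.74, "ppv": 0.88},
]

# One user requires at least 70% sensitivity and ranks the survivors
# by PPV; another could just as easily filter on specificity or sort
# by F1 score instead. No definition is excluded from the library
# itself -- the user decides what "good enough" means.
usable = sorted(
    (c for c in chapters if c["sensitivity"] >= 0.70),
    key=lambda c: c["ppv"],
    reverse=True,
)
```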
Thank you for your questions @SCYou & @Vojtech_Huser, and I hope that helps to clarify some things. To you and everyone else, feel free to reach out with any other questions you may have!
Great work in writing this up! I like the detail in the document, and in particular appreciate the explicit evaluation of risk. This may have been discussed earlier regarding citation tracking, but would we want to consider additional identifiers for publications (e.g., DOI, PubMed ID)? To keep with the requirement of making citation submission easy, these could be optional and possibly something we ask a librarian to curate.
@lrasmussen, thank you. Yes, I think that makes sense and is a good idea. For the citation entry form, we could toggle between DOI, PubMed ID, and something more manually entered like a BibTeX entry. If you or others can think of other relevant IDs used for citations like DOI or PubMed, please let me know.
All, I won’t be able to meet at 10am today, but I am looking forward to discussing the library at noon on the community call!
@SCYou, that’s interesting. Nothing stops us from adding an “Inconclusive” category to the mix to represent records that were inspected but indeterminate. Out of curiosity, what were some of the aspects that made classifying the records impossible? I’m curious if this was a feature of the data, the cohort definition, neither, or both and if it’s something that we can possibly get at with the right question(s).
Usually it happens because of missing data (only non-informative text in the discharge note).
But specifically in the case of GI bleeding, we often cannot find definite evidence of the bleeding even when GI bleeding is highly suspected and we performed endoscopy. So, even though the patient said that he/she had hematochezia or hematemesis, we don’t know what exactly happened when he/she arrived at the hospital. If there are many inconclusive cases, then the overall accuracy of the cohort can be questionable…
Thank you kindly for the opportunity to present at the community call this past Tuesday. I’ll be re-reviewing your questions, concerns, and suggestions from the recording.
In advance of the software demonstration at the symposium, I wanted to release this updated architecture diagram, which I believe more clearly articulates the cycle being proposed. After the symposium, I’ll be writing up the technical specifications in more detail, but the diagram below outlines the process. I expect to have a working proof of concept for this particular configuration at the demo.
Users have two Shiny applications which they can use to interface with the library in different ways:
The viewer application for read-only activities (i.e. examining, searching/filtering, and/or downloading cohort definitions). This is a space where we can display whatever kinds of charts, diagrams, tables, etc. people find useful when they are trying to locate a cohort definition.
The submission application to propose adding new data to the library, which requires authentication with a Google account. Currently, this is being carried out with Auth0 using the auth0 R package. The idea here is that the librarians would have access to the full Auth0 dashboard, which comes with a lot of options (details left to a future tech specs document).
I’ve been calling the types of data being submitted “modes of submission”. You might want to submit a cohort definition of your own, or submit validation data for someone else’s definition. You might want to submit a citation for a library definition that’s been used in a publication, or you might want to submit a cohort characterization so others can anticipate what their cohorts may look like when they execute the cohort definition at their site.
Whatever the case, when data are submitted using the corresponding form within the Library Submission Application, the data are processed and then uploaded to a Google Drive service account dedicated to the library.
Librarians can then clone the gold standard phenotype library repository and sync to the Drive folder. Every librarian who does this becomes one with the staging area, so to speak. Any changes made online on Drive or locally on the librarian’s computer will show up for all other librarians as the changes sync. This is also true as the Shiny application uploads its submission data to Drive; the newly uploaded data will sync for all librarians.
The staging area is not the official record though, as proposed submissions may take varying degrees of consideration (peer review) before they are ready to be elevated to OHDSI’s official library. When a staged change is ready to become official, the librarians can add it to the repository via standard git commands, thereby taking ownership of that particular update.
Before the content can reach back to the viewer/submission applications, an index file is necessary to compile the data into a single object. This essentially allows the applications to run and load far faster than they would if they had to query the repository and perform all of the calculations at runtime. A simple example of this is with metrics. Recall that we ask for raw true/false positive/negative values; we therefore calculate the sensitivity, specificity, PPV, NPV, F1 score, and accuracy “behind the scenes” so that the applications can display the numbers without first having to calculate them. Another example is provenance, where we calculate connected components so that we can display a network graph for a given cohort definition to show all of its relationships to other definitions. The index file is also an opportunity to insert other advanced calculations and pre-loading mechanisms that have not yet been foreseen.
Finally, a cron job can automatically rebuild the index file on the server by periodically checking the index file timestamp against the timestamp of the latest commit to the official record. If something has been pushed to the official record after the index file was last made, then the index is due for an update and will be rebuilt with the updated data from the official record. This benefits the librarians, because once they have pushed an update to the official record, they don’t need to worry about anything else, as the index will automatically update on its own. We could also consider a git hook to tie the automation more precisely to a commit event, but that would place more burden on the librarians to ensure the hook is configured and executing properly.
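A minimal sketch of that staleness check (Python for illustration; the job invoked from the crontab could equally be a shell script, and the function names are hypothetical):

```python
import subprocess

def latest_commit_time(repo_path):
    """Unix timestamp of the most recent commit to the official record."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ct"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def needs_rebuild(index_timestamp, commit_timestamp):
    """The index is due for an update when something was pushed to the
    official record after the index file was last built."""
    return commit_timestamp > index_timestamp
```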
This completes the cycle! When a user checks back into the library applications, the updates that the librarians have brought to the official record will appear.
As I mentioned in the call, the requirements of the library are conceptually separate from the technical specifications, but I think both can advance simultaneously to some extent. A good example of this was when @Patrick_Ryan raised the idea that having cohort characterizations would be a good mode of submission. From a requirements perspective, this currently needs its own set of data elements to be specified; specifically, what do we expect users to submit when they add their own cohort characterization? However, from a technical perspective, it’s relatively easy to extend the Library Submission Application to incorporate one more type of submission.
I’ll be on the line during our regularly scheduled WG meeting (Tuesdays @ 10am ET) for those who wish to discuss this further. The week after, we’ll cancel the meeting since most of us will be at the symposium, which I’m very much looking forward to!