
Steering committee: Proposal to Standardize Network Studies

Hmmm, why are there two study repos? Are these two independent studies?

Thanks @Rijnbeek, I agree we should also use the GitHub API to get more information. My idea is to start by making a Shiny app that will replace https://www.ohdsi.org/network-research-studies/ by scraping the README files and using the GitHub API.
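For concreteness, here is a minimal sketch of what that could look like, assuming the httr and jsonlite packages and the public GitHub REST API (this is not the actual app code; in practice an access token would be needed to avoid rate limits, and the default branch is assumed to be 'master'):

```r
library(httr)
library(jsonlite)

# List the repositories in the ohdsi-studies organization (first 100):
response <- GET("https://api.github.com/orgs/ohdsi-studies/repos?per_page=100")
repos <- fromJSON(content(response, as = "text", encoding = "UTF-8"))

# Name, description, and last-push date come straight from the API:
overview <- data.frame(name = repos$name,
                       description = repos$description,
                       lastPush = repos$pushed_at,
                       stringsAsFactors = FALSE)

# The README text can be fetched from the raw content URL:
readmeUrl <- sprintf("https://raw.githubusercontent.com/ohdsi-studies/%s/master/README.md",
                     repos$name[1])
readmeText <- content(GET(readmeUrl), as = "text", encoding = "UTF-8")
```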

I agree that would be nice, but I have a hard time imagining people will keep those files up-to-date. Let me think about it some more. If someone has ideas to make this more feasible, let me know.

As far as I know, the cohort definitions were different.

I’m sorry, but I struggle to see how this differs from methods studies. Could you give a few examples of informatics studies that are definitely not methods studies?

Hi @schuemie!

Both repos are part of the same study. One repo has target and comparator cohorts that were designed on an EHR database, and the other has target and comparator cohorts that were optimized for a claims database. We are recommending that participating sites run both repos. The “IUD Study Updates” thread has additional information on this topic.

Thanks!

Ok, I’m making some executive decisions:

  1. @krfeeney: Based on the lengthy discussion last Friday, I removed the ‘Feasibility Achieved’ status again, because I believe it to be undefined.

  2. @Vojtech_Huser: I’m not yet willing to add an ‘Informatics Study’ category until I see clear examples of studies that do not belong to the two existing categories (Clinical Application and Methods Research).

  3. @mattspotnitz: I’ve created a single repo for the IUD study. I also created a new rule: a study can have only one repo. Within the repo you are free to do what you want, so if you insist on having two separate study packages you can add them as subfolders. However, I recommend reconciling the two packages into one.

The consensus was that the semantic labeling was inappropriate (so no “Feasibility”), not that adding a step should be dismissed entirely. We talked extensively about a “Drafted” phase to cover the first run of a study on a local database, one that may still have to work through design considerations before scaling beyond that data.

I feel strongly that the lack of a label between “Started” and “Design Finalized” would fail to capture the interim work product: a provisional study design prior to the necessary finalization. “Drafted”, as @Patrick_Ryan defined it, would include the first run of the package on a local database, but it would not yet be a fully prespecified design. The completion of a full prespecification would be the “Design Finalized” phase.

I don’t recall there was consensus :wink: I recall there was a much bigger discussion going on than just what meta-data tags to have, and I do think there was consensus that we should continue that discussion in the coming months, while we move forward with the meta-data tags we have right now.

Specifically, the notion of a ‘Drafted’ tag still seems horribly vague to me. If you ran your study on your local data and have unblinded yourself to those results, that to me implies you have finalized the design, because I wouldn’t want you to go back and change the design after that.

Hi all!

I created a first version of a Shiny app that might replace the Network Research Studies page. The Shiny app scrapes our ohdsi-studies GitHub organization, including the README files. Interestingly, but probably not surprisingly, I think nobody completely adhered to the template :wink: I went in and modified the READMEs myself, so sorry about that.
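To give a feel for the parsing involved, here is a rough sketch of pulling tags out of a README, assuming they sit on a single line of the form `Tags: EHDEN, drug safety` (the real template and the app's parser may well differ):

```r
# Hypothetical tag extraction from a README; the assumed format is a single
# line such as "Tags: EHDEN, drug safety".
extractTags <- function(readmeText) {
  lines <- strsplit(readmeText, "\n", fixed = TRUE)[[1]]
  tagLine <- grep("^\\s*\\**Tags\\**\\s*:", lines, value = TRUE)
  if (length(tagLine) == 0) {
    return(character(0))
  }
  tags <- strsplit(sub("^[^:]*:", "", tagLine[1]), ",", fixed = TRUE)[[1]]
  trimws(gsub("*", "", tags, fixed = TRUE))
}

extractTags("My study\n\nTags: EHDEN, drug safety\n")
# returns c("EHDEN", "drug safety")
```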

I think overall the idea seems to be working, but it is clear we need some decision regarding tags, exposures, and outcomes.

Here’s a list of the tags used so far:

  • EHDEN (3x)
  • Rheumatoid arthritis
  • rheumatoid arthritis
  • Drug Utilization
  • OHDSI-Korea
  • FEEDER-NET
  • F2F
  • drug safety
  • tofacitinib
  • etanercept
  • adalimumab
  • Xeljanz
  • Enbrel
  • Humira
  • RWD
  • RWE

I highlighted in bold the ones I would have used. I understand the desire to tag the exposures and the outcomes, but

  1. I would put those in a separate place to avoid clutter. It is not uncommon for an OHDSI study to include dozens of exposures and outcomes. @Rijnbeek proposed a CSV file. However, given the lack of adherence we see for the README template, I have little hope we can pull that off without continuous policing.
  2. We’d need a standardized terminology, as everyone on these forums will understand :wink: Luckily, we have one: our own Vocabularies. But combined with (1), I wonder if this is much more trouble than it’s worth.

Let me know what you think of the app, and what your thoughts are on tags, exposures, and outcomes!

Let the forces of Brownian movement have more time. :smile:

I like this Shiny app. I showed it to a group of new investigators this morning. They liked the idea of having a place to know if their questions are duplicative or overlapping with other investigators.

If we accept that as one of the use cases for this page, we benefit from getting a larger sample of what naturally floats to the top and which tags feel unnecessary.

This is a subset of studies. I suspect there is a phenotype of who runs these kinds of studies :wink: Make the CSVs, see what happens. Why not use the vocabularies? We have to define our Ts, Cs and Os using OMOP standard concepts.

The labels today are a start. These are people’s blind attempts at adding “tacit knowledge” that might help someone find their study, either to collaborate or to use it for their own future study. I think this is a very easy change management cycle: give people better suggestions of what you want and we’ll work on improving.

BTW, why not include tags for OHDSI studies that are the output of a dedicated community-led effort (e.g., Face-to-Face, Study-a-thons)? Don’t we have some questions outstanding about how these projects finish?

I can see that several authors posted a study. Can you please clarify the process of requesting rights to create a repo, @schuemie?

@schuemie
First of all, thank you for this great job, again!

Still, I cannot see the ‘ticagrelor vs clopidogrel’ study. Is there anything wrong with the README file for this study?


Ok, here’s a proposal for a CSV file format:

  • conceptId
  • conceptName (I prefer my CSV files to also be human-readable)

We could also add a column ‘type’ that distinguishes between ‘exposure’ and ‘outcome’, although I dread to think how the freedom of a text field will be abused :wink:

We could call it ‘relevantConcepts.csv’?

Anyone want to create the first CSV file?
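To get the ball rolling, a file in that format might look like the following (the concept IDs and concepts are purely illustrative, and the optional ‘type’ column is the one discussed above):

```
conceptId,conceptName,type
1118084,celecoxib,exposure
1124300,diclofenac,exposure
192671,Gastrointestinal hemorrhage,outcome
```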

There are currently 5 ‘owners’: @krfeeney, @schuemie, @jreps, @msuchard, and @Patrick_Ryan. Please ask one of them to create the repo for you. Provide that person a name for the repo and a one-line description that will be shown on the main GitHub page.

GitHub, like most web servers, is case sensitive. Your readme file was called ‘readme.md’, but the template specifies ‘README.md’. I changed the case for you. It may take up to 24 hours to show in the Shiny app.
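For anyone hitting the same issue locally: a case-only rename can be done with Git itself. The exact behavior depends on your Git version and file system; on case-insensitive file systems the two-step form is the safe fallback (a generic sketch, not specific to this repo):

```
git mv readme.md README.md
# If the direct rename fails on a case-insensitive file system:
git mv readme.md README-temp.md
git mv README-temp.md README.md
git commit -m "Rename readme.md to README.md"
```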
