
Phenotype Phebruary 2023 - Week 1 Discussion - Phenotype Peer Review

Agreed. But given a clinical idea/description, can we now phenotype blinded to the research question?

I think this is one big, fat, lingering open debate question. Other people have brought it up: @agolozar, @Daniel_Prieto, etc. If something is a key input, how can you “blind” the process to it?

Or do you mean “abstract it from a concrete question”? “Make it generic to typical types of questions”? Introduce “flavors” in @Daniel_Prieto’s words?

If so, we have to say what these flavors or types of research questions are, and state them in the description.

Many times you are all saying the same things, but somehow focusing on different points of the same argument.

Summary of the Week 1 discussion on peer review
(all paraphrased)

  1. In the presence of objective diagnostics with prespecified decision thresholds, we don’t need peer review - only a checklist confirming that a process was followed. However, today we have neither objective diagnostics nor such a checklist. @Patrick_Ryan
  2. Peer review is not pitting one person’s subjective opinion against another person’s work with little-to-no empirical basis to reconcile differences. @Patrick_Ryan
  3. Peer reviewer role - options for the peer reviewer could be a) accept, b) reject, c) revise and resubmit.
  4. Use of peer review may not offer any advantage over superficial confidence @Patrick_Ryan
  5. Future researcher “to determine if the existing phenotype is fit-for-purpose for the new research question or different database.” @Kevin_Haynes
  6. In the absence of objective diagnostics, "By soliciting another peer scientist to provide their independent perspective - we hope to discover measurement errors that we would not have otherwise seen " @Gowtham_Rao
  7. Checks whether a) the clinical idea is described and unambiguous, b) the cohort definition logic and clinical description are concordant, c) the cohort definitions are tested. @Gowtham_Rao
  8. We apply the OHDSI principles of a systematic approach. If we do undefined “reviews”, we add more pain to the current situation. @Christian_Reich
  9. The job of the peer is to ensure the process pressure-tests the criteria to correct misclassification - index criterion, sensitivity problems, specificity problems.
  10. We should develop phenotypes with evaluation in mind - i.e. with each step reducing a source of error. Proposes a diagnostics-informed approach for the process @Patrick_Ryan
  11. Peer reviewers repeating the steps to see if they come to the same conclusion - probably unnecessary. Uses the analogy of the publication peer reviewer @Patrick_Ryan
  12. In phenotyping we don’t need to question the intent or relevance of phenotyping a clinical target @Patrick_Ryan
  13. It is the clinical description that helps justify design choices in cohort definition @Gowtham_Rao
  14. Peer reviewer should not pass judgment on whether the described clinical target (as in the clinical description) is right or wrong, e.g. Crohn’s disease of the appendix @Patrick_Ryan
  15. Estimating differential error may be a post-phenotyping process @hripcsa
  16. Until objective diagnostics with decision thresholds are in place, peer review may be one possible, imperfect way to stress-test some of the subjectivity in the process and have it questioned by an independent person @Azza_Shoaibi
  17. Phenotype peer review does not need to redo the phenotype development, but should be able to come to a recommendation - a) accept, b) reject - based on the submitted material. @Azza_Shoaibi
  18. If we parse all ambiguities and add them to the clinical description, we would multiply each library entry @Christian_Reich
  19. The clinical description is 1000 times better than what we have now - the wishy-washy descriptions in the papers. But they still fall short of what is needed @Christian_Reich
  20. The question is: what is the question? Atrial fibrillation as an adverse event may require a different phenotype than atrial fibrillation requiring DOAC therapy, thus necessitating additional clinical descriptions, additional reviews, and additional diagnostic metrics. @Kevin_Haynes

Coming in as someone who is not a phenotyper, but rather a person often relying on their work: if the goal of OHDSI is generating evidence, then phenotypes should be seen as a means to this end.

My impression, though, is that the general discussion around phenotypes often treats them as an end in themselves. I would have thought that, alongside decisions around choice of study design and appropriate statistics, decisions around phenotypes must surely also be context-dependent. Without knowing the context in which a phenotype will be used, won’t the response of the reviewer often simply be “it depends”?

Going back to the original question of what criteria are important for reviewing a phenotype, I would think that two important elements need to be considered when reviewing a new phenotype or repurposing a previously used one:

  1. the research question for which the phenotype is going to be used, and
  2. the databases that will be included in the study to address that research question.

(Maybe the latter is worth an entirely separate discussion, especially now that the success of OHDSI has brought with it an increasing variety of data source types!)

Absolutely correct! Let’s talk about this topic in week 3 of Phenotype Phebruary 2023!