Hello Everyone,
I recently started trying out the OHDSI tools and came across the PheValuator R package. I have read the paper and watched a few videos. Great work on developing this package!
Some context before we get to the questions; the numbers below are an example.
My dataset has 10,000 records, of which 9,900 patients have lung cancer and the remaining 100 do not.
I have a rule-based phenotype algorithm to identify patients with lung cancer. We implemented this algorithm in Atlas as a cohort definition, and generating the cohort returns 9,500 patients.
As mentioned in the paper and videos, it is easy to compute accuracy, which here would be 9,500/10,000 = 95%.
But the class proportions in our dataset are imbalanced, so accuracy may not be a reliable measure of the algorithm's performance. Since my dataset is a mix of lung cancer and non-lung cancer patients with such a skewed split, I am interested in the overall performance of the phenotype algorithm: my objective is to assess the lung cancer algorithm and obtain characteristics like sensitivity and specificity.
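To make the terms concrete, here is a small R sketch of the metrics I am after. The true/false positive breakdown is hypothetical (I do not actually know how my 9,500 splits), chosen only so that accuracy comes out at 95% as in my example:

```r
# Hypothetical confusion-matrix counts for illustration only
TP <- 9450  # flagged by the algorithm, truly lung cancer
FP <- 50    # flagged by the algorithm, truly not
FN <- 450   # true case missed by the algorithm
TN <- 50    # non-case correctly not flagged

accuracy    <- (TP + TN) / (TP + FP + FN + TN)  # 0.95
sensitivity <- TP / (TP + FN)                   # ~0.955, recall on true cases
specificity <- TN / (TN + FP)                   # 0.50, recall on non-cases
ppv         <- TP / (TP + FP)                   # ~0.995, positive predictive value

c(accuracy = accuracy, sensitivity = sensitivity,
  specificity = specificity, ppv = ppv)
```

Note how in these made-up numbers accuracy is 95% while specificity is only 50%, which is exactly why I want the full set of characteristics rather than accuracy alone.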
However, I have a few questions, as I am just getting started and learning through this forum.
- xSpec cohort (9,500 patients)
I understand that we create cohorts in Atlas with criteria that give us near-certain positives (for example, if I require 10 lung cancer codes, we can be confident the person has lung cancer). But wouldn't this filter out other genuine lung cancer cases? For example, someone recently diagnosed in hospital may have only 1 or 2 codes. My xSpec cohort identifies only 9,500 patients instead of 9,900 because I chose to keep only people with 10 lung cancer codes, so aren't we losing patients like that as cases? Does this drop in records have any impact? I understand there may not be one right answer, but I would like to know how you would approach this (see the sketch after the xSens item below for what I mean).
- xSens cohort (90 patients)
Suppose I create a cohort that excludes all lung cancer concepts. Can that be a valid definition for the sensitive cohort? It should give a list of patients with a high probability of not being a lung cancer case, yet I capture only 90 of the 100 actual non-cases in the xSens cohort, so again a drop of 10 records. Is it necessary to try our best to capture as many records as possible under each cohort definition? What is the impact of dropped records, and is there a recommended way to do this?
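Here is a toy R sketch of the record drops I am describing in both items above. All the code counts are made up purely to reproduce my numbers:

```r
patients <- data.frame(
  person_id  = 1:10000,
  true_case  = c(rep(TRUE, 9900), rep(FALSE, 100)),
  # pretend 400 true cases were diagnosed recently and have < 10 codes
  n_lc_codes = c(rep(12, 9500), rep(2, 400), rep(0, 100))
)

# xSpec as I defined it: require >= 10 lung cancer codes -> very specific,
# but the 400 recently diagnosed cases are dropped
xSpec <- subset(patients, n_lc_codes >= 10)

# my proposed "exclude all lung cancer concepts" definition
xSens_candidate <- subset(patients, n_lc_codes == 0)

nrow(xSpec)                            # 9500
sum(patients$true_case) - nrow(xSpec)  # 400 true cases lost to the 10-code rule
```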
- Prevalence cohort
I see in the docs that xSens can be used as the prevalence cohort, but I am confused here. As I have 10,000 people in my population and 9,900 of them have lung cancer, to know the prevalence of the disease I need to consider the entire population (9,900/10,000 = 99%). Am I right? So I am trying to understand why the default value for this field is the xSens cohort, since it may not give the prevalence of the disease. How do I create this cohort? Should I create a cohort that identifies people with lung cancer from a population of something like xSpec + xSens? Since I have both criteria (xSpec and xSens), can that give an accurate prevalence value? Is there a recommended way to do this? Could you help me with this, please?
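As a quick sanity check with my own numbers (just arithmetic, to show where my confusion comes from):

```r
# prevalence as I understand it: true cases over the whole population
9900 / 10000                   # 0.99

# what I would get from my two cohorts alone (xSpec + xSens as the population)
xSpec_n <- 9500
xSens_n <- 90
xSpec_n / (xSpec_n + xSens_n)  # ~0.9906 -- close here, but only because these
                               # two cohorts happen to cover almost everyone
```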
- PLP model
I see that after the three cohorts above are created, the package builds the diagnostic model. But in the documentation for createPhenotypeModel I don't see any settings/parameters to tune the train/test split ratio. As my data is heavily imbalanced, how do we handle such scenarios during model creation? I know PatientLevelPrediction has settings to define train and test fractions, but PheValuator doesn't seem to expose them. How do we handle scenarios like this?
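To be clear about what I mean by handling the imbalance, here is a plain-R sketch of a stratified train/test split that preserves class proportions. This is not PheValuator's internal behavior, and the function name is my own:

```r
# split indices into train/test while keeping the class mix identical
stratified_split <- function(labels, test_fraction = 0.25, seed = 42) {
  set.seed(seed)
  test_idx <- unlist(lapply(split(seq_along(labels), labels), function(idx) {
    sample(idx, size = round(length(idx) * test_fraction))
  }))
  list(train = setdiff(seq_along(labels), test_idx), test = test_idx)
}

labels <- c(rep(1, 9900), rep(0, 100))   # my imbalanced example
s <- stratified_split(labels)
prop.table(table(labels[s$test]))        # same 99:1 mix as the full data
```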
- Evaluation cohort
I see in the docs that the data for this cohort comes from xSpec, but shouldn't this be unseen data for the model? We have already created the three cohorts above, and from my reading of this package it creates the evaluation cohort from the xSpec cohort. Could you help me understand why xSpec is used here? How can we find unseen data for evaluation when we have already used our data to create the three cohorts above? I would like to understand this package better so that I can use it correctly and interpret my results accordingly.
- Creating PA for evaluation
I understand that this is the step where we implement our phenotype algorithm in Atlas and use its cohort definition id in testPhenotypeAlgorithm for assessment. But may I know what "EV" under cutPoints is?
I understand we usually use 0.5 as the threshold to discriminate the classes, but what does "EV" mean, and how does it differ from other threshold values like 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, etc.?
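From my reading of the paper, my guess is that EV stands for "expected value": instead of dichotomizing at a cut point, each subject contributes fractionally according to the predicted probability. Here is a sketch of my understanding (hypothetical data, and certainly not PheValuator's actual code); please correct me if this is wrong:

```r
p     <- c(0.95, 0.80, 0.30, 0.10, 0.65)   # model-predicted P(case)
in_pa <- c(TRUE, TRUE, FALSE, FALSE, TRUE) # flagged by my phenotype algorithm?

# a fixed cut point, e.g. 0.5: dichotomize first, then count whole subjects
pred_case <- p >= 0.5
TP <- sum(in_pa & pred_case);   FP <- sum(in_pa & !pred_case)
FN <- sum(!in_pa & pred_case);  TN <- sum(!in_pa & !pred_case)

# "EV" as I understand it: no dichotomizing; sum the probabilities themselves
TP_ev <- sum(p[in_pa]);   FP_ev <- sum(1 - p[in_pa])
FN_ev <- sum(p[!in_pa]);  TN_ev <- sum(1 - p[!in_pa])
```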
Could you please help me with these questions?