I’m new and want to confirm my understanding of the steps for a new PatientLevelPrediction study. I have already pulled my dataset of risk factors, protective factors, and outcomes from our EPIC EHR (Clarity); it resides in Oracle, I have Java installed, and I have a .csv of my records of interest. Are these the steps? Please correct me if this isn’t right, or provide clarification.
1a. Navigate the online Atlas tool to get familiar with the major components. It comes with two exploratory data sources already installed. I don’t think I can load my own CDM-formatted data sources there, correct?
1b. Download and install the OHDSI-in-a-Box virtual machine. It ships with a Medicare data set, but can I use it to build a new data set of my own, or is it only for tutorial purposes like the online Atlas?
1c. Download and install WhiteRabbit/Rabbit-In-a-Hat, Achilles, Usagi, CommonDataModel, the Vocabulary, Atlas, WebAPI, and PatientLevelPrediction.
Run WhiteRabbit against the source data to produce a scan report, then use Rabbit-In-a-Hat to design the source-to-CDM mapping. (My understanding is that Achilles profiles the data only after it has been loaded into the CDM, so it would actually come later — is that right?)
Use the Oracle DDL (from CommonDataModel-master.zip) to create the OMOP CDM and Results tables in my Oracle schema.
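For that DDL step, here is how I was planning to run it from R with DatabaseConnector/SqlRender rather than sqlplus — the server, credentials, and DDL file name are placeholders for my environment, so please correct me if this isn’t the usual approach:

```r
# Sketch: run the OMOP CDM DDL against Oracle via DatabaseConnector.
# Server, user, and the DDL file name are placeholders for my setup.
library(DatabaseConnector)

connectionDetails <- createConnectionDetails(
  dbms     = "oracle",
  server   = "myhost/myservice",        # placeholder
  user     = "cdm_owner",               # placeholder
  password = Sys.getenv("ORACLE_PWD")   # kept out of the script
)

conn <- connect(connectionDetails)
ddl  <- SqlRender::readSql("OMOP CDM oracle ddl.txt")  # file name inside the zip may differ
executeSql(conn, ddl)
disconnect(conn)
```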
Use Usagi to map my source-system codes to OMOP standard concepts, then export the mapping (e.g., in SOURCE_TO_CONCEPT_MAP format). As I understand it, Usagi never writes to the database itself, so I load the exported mapping and my code-standardized data into the Oracle tables manually, then add the indices/PKs/FKs. I don’t think I need to use HERMES for vocabulary exploration if I’m using Usagi.
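Since Usagi only exports a file, I was going to load its mapping myself like this — assuming the source_to_concept_map export option and my own schema/file names, which are placeholders:

```r
# Sketch: load a Usagi source_to_concept_map export into the CDM myself,
# since Usagi writes a file and does not touch the database.
library(DatabaseConnector)

mapping <- read.csv("usagi_source_to_concept_map.csv")  # placeholder file name

conn <- connect(connectionDetails)  # connectionDetails as defined for the DDL step
insertTable(
  conn,
  tableName         = "cdm.source_to_concept_map",  # schema-qualified; adjust to my schema
  data              = mapping,
  dropTableIfExists = FALSE,  # table already exists from the DDL
  createTable       = FALSE
)
disconnect(conn)
```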
Use the local Atlas install to define cohorts. As I understand it, this doesn’t physically break my CDM tables into subsets; Atlas generates SQL that writes cohort membership rows into the cohort table in my results schema, which the R packages then reference. I’d run that generated SQL on my local Oracle database. Since Circe-be is the library Atlas uses internally to build the cohort SQL, I don’t think I need to use it directly?
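For running the Atlas-exported cohort SQL, this is what I had in mind — the schema names, cohort id, and parameter names are placeholders (the @parameter names need to match whatever appears in the exported file, which I understand varies by Atlas version):

```r
# Sketch: execute cohort SQL exported from Atlas against my Oracle CDM.
# Atlas exports templated SQL; SqlRender fills the @parameters and
# translates to the Oracle dialect.
library(DatabaseConnector)
library(SqlRender)

sql <- readSql("atlas_cohort_export.sql")  # placeholder: SQL exported from Atlas
sql <- render(
  sql,
  cdm_database_schema    = "cdm",       # placeholders for my schemas;
  target_database_schema = "results",   # parameter names must match the file
  target_cohort_table    = "cohort",
  target_cohort_id       = 1
)
sql <- translate(sql, targetDialect = "oracle")

conn <- connect(connectionDetails)  # connectionDetails as defined earlier
executeSql(conn, sql)
disconnect(conn)
```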
Use RStudio to run the R packages (PatientLevelPrediction etc.) pointed at my local Oracle data source, then analyze the algorithm outputs.
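And for that last step, my rough plan looks like the sketch below. The cohort/outcome ids and schemas are placeholders, and the argument names follow the PatientLevelPrediction vignette as I understand it — I’ll verify them against the package documentation for whichever version I install:

```r
# Sketch: extract data and fit a model with PatientLevelPrediction.
# Ids 1 (target) and 2 (outcome) are placeholder cohort ids from Atlas.
library(PatientLevelPrediction)
library(FeatureExtraction)

covariateSettings <- createCovariateSettings(
  useDemographicsGender = TRUE,
  useDemographicsAge    = TRUE
)

plpData <- getPlpData(
  connectionDetails     = connectionDetails,  # as defined earlier
  cdmDatabaseSchema     = "cdm",
  cohortDatabaseSchema  = "results",
  cohortTable           = "cohort",
  cohortId              = 1,
  outcomeDatabaseSchema = "results",
  outcomeTable          = "cohort",
  outcomeIds            = 2,
  covariateSettings     = covariateSettings
)

population <- createStudyPopulation(
  plpData         = plpData,
  outcomeId       = 2,
  riskWindowStart = 1,    # predict the outcome in days 1..365
  riskWindowEnd   = 365
)

results <- runPlp(
  population    = population,
  plpData       = plpData,
  modelSettings = setLassoLogisticRegression()
)
```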
I’m still unsure whether I need to use HERMES or Circe directly.