
New Version of Olympus Posted (Postgres Fixes)

Hi all,

I have just posted a new version of Olympus. It works successfully with Postgres. I have tested it against the public OHDSI SynPUF data, including generating cohorts and running Heracles analyses.

Below is an example configuration for Postgres.

In this example, the CDM data and vocabulary data are in the “public” schema, which needs at least read permission. The “ohdsi” schema will contain all the other tables for creating cohorts, jobs, etc., and requires read/write permission.
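
For reference, granting those permissions in Postgres might look something like the sketch below. The role name “ohdsi_app” is just a placeholder for whatever account Olympus connects as, so adjust it (and the grants) to your environment.

    -- Read access to the CDM and vocabulary tables in "public":
    GRANT USAGE ON SCHEMA public TO ohdsi_app;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO ohdsi_app;  -- Postgres 9.0+; grant per table on older versions

    -- Read/write schema for cohorts, jobs, etc.
    -- (skip this if Olympus/Flyway creates the schema for you):
    CREATE SCHEMA ohdsi AUTHORIZATION ohdsi_app;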

Note: this version does not yet incorporate the new multihoming configuration. We will send an update when that is released. If you are running multiple CDMs, please grab the individual applications (e.g., Circe) manually from GitHub.

As usual, please report any bugs you find with Olympus.

Thanks

@jduke

I have this running against my local Postgres.

Works:

  • “ohdsi” schema created
  • Athena: OK
  • Hermes: OK
  • Heracles: not completely tested due to Circe issues
  • Achilles: not yet tested

Issues:

  • Circe: No current definitions presented and the “New Definition” button does not appear.
    – Caused by: org.postgresql.util.PSQLException: ERROR: column cohortdefi0_.id does not exist
  • Calypso: No feasibility definition table presented
    – Caused by: org.postgresql.util.PSQLException: ERROR: relation “feasibility_study” does not exist

Bill

Can you send a screenshot of your configuration screen? Is ohdsi listed as both your results and cohort schemas?

What tables have shown up in your ohdsi schema?

Hi Jon,

FYI

Olympus isn't deployed in the cloud, so I don't have a configuration screen. Each app is deployed to a Tomcat web server as a WAR file.

The multihomed WebAPI is deployed on the cloud and configured directly using the ohdsi schema daimon tables.

Lee

Tables in ohdsi schema:
schema_version
batch_job_instance
batch_job_execution_params
batch_step_execution
batch_step_execution_context
batch_job_execution
batch_job_execution_context
exampleapp_widget
cohort_generation_info
cohort_definition_details
cohort
heracles_analysis
heracles_results
heracles_results_dist
heracles_heel_results
cohort_definition
feas_study_generation_info
feas_study_result
feas_study_index_stats
feas_study_inclusion_stats
feasibility_study
feasibility_inclusion
laertes_summary
drug_hoi_relationship
evidence_sources
drug_hoi_evidence

Configuration screenshot:

Hi all,

After all the problems I had with the combination of Olympus and Postgres over the past few months, I was of course very curious to test this new version. I performed a very quick test just to see if it works now. Good news: Heracles showed me some nice graphs on screen. I hope to play with these tools soon.

Many thanks for all these fixes,
Marcel


Great news! I have made several more improvements to Heracles since then, mainly around speed.

We’ll have to get Olympus going with the multihomed change before we can get those out, but we have a new engineer coming on board. Hopefully she’ll help move it forward.

Thanks!

Hi,

I am trying to deploy Olympus for testing here at Pitt and ran into some problems. The issue seems to be related to flyway. The link below is a document with the details along with an error report. Would anyone be able to suggest how I can proceed? Thanks!

I believe you are running into a postgres version / flyway script mismatch. Is it possible for you to upgrade your postgres db?

That is the version provided by RHEL 6. I would really prefer not to have to update my Postgres, since I expect that it may be challenging. Any way to avoid that would be nice. Please advise, though, if you think upgrading is the only solution.

Also, once this is resolved, will I need to drop all tables from the ohdsi schema before re-launching?

thanks!

I believe the only current option is to add -Dflyway.enabled=false to the Java process and run the Postgres db scripts manually (removing the ‘IF NOT EXISTS’ clauses). That should work…
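
To illustrate the kind of edit that involves (the table and columns here are made up, not copied from the actual migration scripts): Postgres 8.x, which RHEL 6 ships, does not support CREATE TABLE IF NOT EXISTS (that clause arrived in 9.1), so each such statement has to be trimmed to a plain CREATE TABLE before running it by hand.

    -- As written in the migration scripts (fails on Postgres 8.x):
    -- CREATE TABLE IF NOT EXISTS cohort_definition ( ... );

    -- Edited by hand to run on 8.x (columns are illustrative only):
    CREATE TABLE cohort_definition (
        id   INTEGER NOT NULL,
        name VARCHAR(255),
        PRIMARY KEY (id)
    );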

Adding @Chris_Knoll & @Frank so they are aware. A future major version of WebAPI should declare which database versions are supported and update the Flyway migrations accordingly (removing the checks for pre-existing tables). That would be the cleanest approach, but it would require an upgrade path for those who have already installed WebAPI.

I think there are two things we can do with the Flyway scripts in the next release:

  1. Remove the IF NOT EXISTS checks and let the Flyway migration fail if it tries to create a table that already exists. This should be treated as an error condition anyway.

  2. Add schema support. There have been a number of reports where users try to use the default schema for both the CDM and WebAPI installs. We should allow the install to specify the target schema. Flyway will manage creating the schemas you specify, and we’ll just have to go back to our scripts and update the CREATE TABLE statements to specify the schema (see the sketch below).
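
Purely as an illustration of item 2 (this is not the actual migration DDL), a schema-aware migration would qualify the table names with the target schema, something like:

    -- Sketch only: assumes the WebAPI/results schema is named "ohdsi" and that
    -- Flyway is configured to manage that schema (e.g. via its flyway.schemas
    -- setting); the column list is invented for the example.
    CREATE TABLE ohdsi.feasibility_study (
        id   INTEGER      NOT NULL,
        name VARCHAR(255) NOT NULL,
        PRIMARY KEY (id)
    );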

-Chris

Trying not to ask a stupid question, but…

I have Olympus working on Postgres with the flyway=false flag. There are three schemas listed in the configuration file: CDM, Results, and Cohort. I assume the CDM is my OMOP schema, but what are the other two for? In which schema do I run the above Postgres scripts? Do I run all 18 scripts or just the most recent version of each (V1.0.0.6.*, for example)?

This is a fresh v5 install of the CDM with the vocabulary and a test set of patient data loaded. I am trying to create a system to showcase the OHDSI features to faculty outside of just the CDM.

Thanks!

Hi, Richard,
Thanks for putting in the effort to get this installed on your environment. Once you get it hooked up, everything should be gold, but getting there does have some challenges:

Re: three schemas listed
The CDM schema is where Olympus looks for your CDM tables (all the tables you find in the OMOP CDM v5 schema DDL). The Results schema is the schema that Olympus will use to create the tables that support the functions in WebAPI. I don’t believe we use the Cohort schema anymore: it was there to let you specify whether cohort results should be written into the CDM’s cohort table or into our OHDSI results schema’s cohort table. We now always write into the OHDSI results schema.

Re: migration scripts: I am not sure you want to run with flyway=false, because when Flyway is enabled it will run all 18 scripts found in the migration directory in the correct order and keep track of which scripts have been executed, so that future updates apply only the db migrations you are missing. However, if you want to take that responsibility on yourself, the answer is ‘yes’: you will need to run all 18 scripts, in order, on the database.
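
If you do let Flyway manage it, you can see what it has applied by querying the schema_version table it creates in the results schema (the one that showed up in the table list earlier in this thread). Something like the query below should work, though the column names differ a bit between Flyway versions, so treat it as a sketch:

    -- List the migrations Flyway has recorded as applied
    -- (adjust schema and column names to your setup / Flyway version):
    SELECT version, description, script, installed_on, success
    FROM   ohdsi.schema_version
    ORDER  BY installed_on;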

-Chris

@Richard_Starr, I would add that if you want to keep it super simple and use the same schema for CDM and Results, that should still work, but it would be cleaner to keep the CDM and Results schemas separate.

As @Chris_Knoll pointed out, Olympus does not use the Cohort schema at this time.

Thanks @Chris_Knoll and @jon_duke, I was going to stay with the default postgres 8 version on RHEL6, but I think I will go ahead and upgrade to postgres 9. That will remove any manual script work and make life easier.
Thanks again!

Hi, Jon, sorry to say, but using the same schema for your CDM as your results schema will lead to conflicts. So, I recommend putting the CDM tables into their own schema (call it ‘cdm’ or what have you); the results schema can use the default schema (which should be ‘public’ in Postgres, or ‘dbo’ in MS SQL).
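
If the CDM has already been loaded into the default schema, Postgres lets you move the tables into their own schema rather than reloading. Roughly (just a sketch; repeat for each CDM table):

    -- Carve the CDM tables out into their own schema, leaving the default
    -- schema free for the WebAPI/results tables.
    CREATE SCHEMA cdm;
    ALTER TABLE person SET SCHEMA cdm;
    ALTER TABLE observation_period SET SCHEMA cdm;
    -- ...and so on for the remaining CDM tables.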

I think this is the week (or perhaps next week is the week) to put in the required fixes to the migration scripts and configuration to specify the database schemas properly. I will need some help testing this across platforms, but I think once this is settled, we won’t have any more confusion about database schemas.

In fact, with the advent of the ‘Multi-homed’ branch, the CDM-specific connection strings and table qualifiers are all database driven. Is Olympus using the Multi-homed branch?

-Chris

Thanks Chris. @lee_evans and I are working on a Docker-based Olympus-like solution that will handle the multi-homed configuration and provide more future-proofing for the new stuff coming down the pipeline (like R-based web apps).
