
Cohort Generation in Atlas with Generation status Failed

The ‘Invalidated By System’ message appears when you restart the service while a job is in a ‘running’ or ‘pending’ state. Jobs that were running at the time of the restart need to be reset (the thread executing the analysis is killed, so on restart all you can do is mark all running jobs ‘failed’ with ‘Invalidated by system’).

The original problem you were having is strange: you say that you click ‘generate’ but the job just sits in pending status? It’s possible you have another error happening between starting the job and the job starting the execution. Do you have anything in your logs? You mentioned that it did run successfully once. Can you get a simple cohort to generate or revert to the version that was working?

@Chris_Knoll
I deleted all the entries associated with cohort generation from the webapi and results schemas. Then I retried generating a simple cohort and am facing the same issue: the status has been “PENDING” for a long time. I’ve attached a screenshot.
I am also attaching a screenshot of the fail_message column.


I am on Atlas version 2.7.1 and WebAPI 2.7.1.

I will try to redo the cohort generation using a previous version of WebAPI and Atlas.
Also, once generated, the cohort doesn’t delete in one click; I had to reload the page repeatedly to make it work.

I don’t see any logs associated with the Atlas issues.


Kindly suggest.

Thank you

Hi,
I explained in the prior message: the ‘Invalidated by System’ message happens when the WebAPI service is restarted, and any job in a ‘running’ state is reset with that failure message.

It looks like you did have a successful generation on cohort 175, so I’m not sure what the issue is with that cohort definition. Can you provide a screenshot of cohort 176? Also, did you check your log files for an exception? Maybe there was an error in preparing the execution that caused the job to appear in the ‘starting’ state but never actually start executing, due to some pre-launch error.

@Chris_Knoll
Hi Chris,

Here is what I did-

  1. Restarted tomcat service.
  2. Checked if WebAPI and Atlas are properly deployed and accessible via browser.
  3. Created cohort #175, clicked generate > FAILED
  4. Clicked generate again > COMPLETED
  5. Created cohort #176, clicked generate > PENDING > FAILED
  6. Clicked generate again > COMPLETED

The logs I can see for the time when it failed are below:

Encountered one more scenario without touching WebAPI services:
Created cohort #195 > generate > PENDING for the past 15 minutes


Meanwhile, the entry in the DB table:

Additional logs I found today show an error when it gets stuck in the PENDING state.

The log below is from when it FAILED on clicking Generate Cohort:

Need your assistance on this.
Thank You

The issue is that the tables are created by the Flyway migrations. So, the cohort_inclusion_result table is created as follows:

Note the missing mode_id column. The same is true for the cohort_inclusion_stats and cohort_summary_stats tables.

The v2.5.0 release contains instructions on how to manually resolve the issue, but a Flyway migration was not included to automate this schema change. One possible solution is to provide a Flyway migration that brings the schema up to date, which is what I’ve done here.

For convenience, in Postgres, the solution is to run the following:

ALTER TABLE @resultsSchema.cohort_inclusion_result ADD mode_id  int NOT NULL DEFAULT 0;
ALTER TABLE @resultsSchema.cohort_inclusion_stats ADD mode_id  int NOT NULL DEFAULT 0;
ALTER TABLE @resultsSchema.cohort_summary_stats ADD mode_id  int NOT NULL DEFAULT 0;

where @resultsSchema is the name of the schema containing the WebAPI tables.

Hi,
The tables created by Flyway in the WebAPI database are no longer the source of the DDL for the results schema (which is hosted in the same database as your CDM schema).

The way to get the DDL for the results schema is to start your WebAPI instance and open the endpoint: /WebAPI/ddl/results?dialect={your dialect}. See SqlRender for your choice of dialects.

Since it may be difficult to understand which tables are new/altered from your current deployment, one way to figure it out is to deploy the latest results schema into a fresh/empty schema, and then use a DB schema compare utility to indicate which tables and columns are different between the new version and the current version. Then, you can plan your migrations based on your current situation.
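To make that comparison concrete, here is one hedged sketch of a Postgres-only approach instead of a dedicated schema-compare utility. The schema names results_new (the fresh deployment) and results_current (your existing one) are placeholders; substitute your own:

```sql
-- Columns present in the freshly deployed results schema but missing from the
-- current one. Schema names 'results_new' and 'results_current' are placeholders.
SELECT table_name, column_name, data_type
  FROM information_schema.columns
 WHERE table_schema = 'results_new'
EXCEPT
SELECT table_name, column_name, data_type
  FROM information_schema.columns
 WHERE table_schema = 'results_current'
 ORDER BY table_name, column_name;
```

Each row returned is a table/column pair you would need to add during your migration planning.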


Okay, good to know. Is that the long term plan as well? My use case might be atypical, but I am trying to automate the setup of a WebAPI v2.6.0 development environment, so I was hoping to avoid any manual steps. I ended up adding the migration so I could get my image to work (although I’ve only tested basic cohort generation). I’ll create a PR to show what I did.

EDIT: PR up here if anyone is interested:

Feel free to close it.

Yes, I think using the DDL is the long-term plan. @JamesSWiggins uses it to automate deployments of WebAPI and Atlas onto the Amazon Web Services platform. I’m pretty sure he uses the DDL directly from the WebAPI endpoint.

We won’t be maintaining the results schema via WebAPI in the future. In fact, we’d like to clean those tables out of the WebAPI schema, but we aren’t sure what impact that may cause, so we’re waiting for a WebAPI 3.0 release so that we can do some of that housekeeping.

-Chris


Hi,

Yes, please feel encouraged to check out this automation in the GitHub repo below. Even if you aren’t deploying on AWS, the shell scripts it contains may be useful:

James


Hi All,
I am also facing a similar issue.
I am currently analyzing the features offered by the ATLAS tool. We are using sample SynPuf data for this activity, with PostgreSQL-based AWS RDS as the database.
Observation: the ATLAS tool created the cohorts and the reports were generated successfully. However, when we went to perform characterization and build cohort pathways, the process failed with the below error:

java.util.concurrent.CompletionException: org.springframework.jdbc.BadSqlGrammarException: StatementCallback; bad SQL grammar
Please refer to the image below :

We picked out the queries mentioned in the error and executed them manually on our database; they ran without any issues.
We are looking for some help to resolve this. Please do let us know if you have any suggestions.

Thanks!

Thank you @psbrandt for posting this solution as I was coming up against this error. Also thanks to @Chris_Knoll and others on this thread as it has contained super useful information.

I have been using Broadsea (another thanks goes to @lee_evans) to run Atlas for the Synpuf data (CDM v5.3.1) contained in a Postgres database.

However (there’s always a but…), I run into issues in Atlas on the Cohort Definitions page, Generation tab.

@psbrandt’s solution:

-- I use @resultsSchema = ohdsi as that seems to be what Broadsea uses as the results schema
ALTER TABLE ohdsi.cohort_inclusion_result ADD mode_id  int NOT NULL DEFAULT 0;
ALTER TABLE ohdsi.cohort_inclusion_stats  ADD mode_id  int NOT NULL DEFAULT 0;
ALTER TABLE ohdsi.cohort_summary_stats    ADD mode_id  int NOT NULL DEFAULT 0;

removes the initial error when clicking Generate for the Cohort Definition, but now clicking Generate gives a new error:

# from the Generation Status FAILED hyperlink:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.springframework.dao.DuplicateKeyException: StatementCallback; SQL [CREATE TEMP TABLE Codesets  (codeset_id int NOT NULL,
  concept_id bigint NOT NULL
)
; INSERT INTO Codesets (codeset_id, concept_id)
SELECT 0 as codeset_id, c.concept_id FROM (select distinct I.concept_id FROM
(
  select concept_id from synpuf.CONCEPT where concept_id in (1312706,19087090,710062,715233,729855,740275,798874,711714,723013,43526424)

) I
) C; CREATE TEMP TABLE qualified_events

AS
WITH primary_events (event_id, person_id, start_date, end_date, op_start_date, op_end_date, visit_occurrence_id)  AS (
-- Begin Primary Events
select P.ordinal as event_id, P.person_id, P.start_date, P.end_date, op_start_date, op_end_date, cast(P.visit_occurrence_id as bigint) as visit_occurrence_id
FROM
(
  select E.person_id, E.start_date, E.end_date, row_number() OVER (PARTITION BY E.person_id ORDER BY E.start_date ASC) ordinal, OP... [truncated] ...t sr
  CROSS JOIN (select count(*) as total_rules from ohdsi.cohort_inclusion where cohort_definition_id = 2) RuleTotal
  where sr.mode_id = 1 and sr.cohort_definition_id = 2 and sr.inclusion_rule_mask = POWER(cast(2 as bigint),RuleTotal.total_rules)-1
) FC
; TRUNCATE TABLE best_events; DROP TABLE best_events; TRUNCATE TABLE cohort_rows; DROP TABLE cohort_rows; TRUNCATE TABLE final_cohort; DROP TABLE final_cohort; TRUNCATE TABLE inclusion_events; DROP TABLE inclusion_events; TRUNCATE TABLE qualified_events; DROP TABLE qualified_events; TRUNCATE TABLE included_events; DROP TABLE included_events; TRUNCATE TABLE Codesets; DROP TABLE Codesets]; ERROR: duplicate key value violates unique constraint "cohort_inclusion_stats_pkey"
  Detail: Key (cohort_definition_id)=(2) already exists.; nested exception is org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "cohort_inclusion_stats_pkey"
  Detail: Key (cohort_definition_id)=(2) already exists.

I can then get rid of this error by removing the cohort_inclusion_stats_pkey constraint from the cohort_inclusion_stats table in the ohdsi schema in pgAdmin (which feels like a yucky thing to do!). There are subsequent errors for other cohort_inclusion_* table constraints that I can delete too; after that the Generation Status becomes “COMPLETE” in Atlas. However, it seems to break the inclusion report, which no longer loads:

as well as giving a “The report you requested is not available. Please click the generate button to generate cohort reports for this data source” message on the Cohort Definitions/Reporting tab:

Is this a known issue/is there a solution/am I doing something wrong?

Also please let me know if you need other information about my setup (Win 10 Pro, Broadsea in Docker, Synpuf data (CDM v5.3.1) in a Postgres database).

Thanks for any help
Ty

Re: the cohort_inclusion_stats_pkey: I can’t find a reference to it in the codebase (or any of our migration scripts), so I’m not sure how those PKs got onto that table, but it doesn’t make sense to put a PK on cohort_definition_id, because those tables are one-to-many…
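For reference, a SQL equivalent of removing that constraint (rather than clicking through pgAdmin) might look like the sketch below. The constraint name is taken from the error message earlier in this thread; verify it against your own database before running anything:

```sql
-- Drop the primary key that should not exist on this one-to-many table.
-- Constraint name taken from the duplicate-key error message; verify first.
ALTER TABLE ohdsi.cohort_inclusion_stats
  DROP CONSTRAINT IF EXISTS cohort_inclusion_stats_pkey;
```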

I’m not sure why the inclusion rule report isn’t showing when you click ‘view report’, but the endpoint for it is:
http://localhost:8080/WebAPI/cohortdefinition/{cohortId}/report/{sourceKey}?mode=1

You can adjust the above to your specific cohort ID and source key, and see what the exception is.

Thank you for such a quick reply @Chris_Knoll!

So I used

# (I didn't get very inventive with the source key)
http://localhost:8080/WebAPI/cohortdefinition/3/report/OHDSI-CDMV5?mode=1

and the last few lines are:

{"methodName":"runWorker","fileName":"ThreadPoolExecutor.java","lineNumber":1149,"className":"java.util.concurrent.ThreadPoolExecutor","nativeMethod":false},
{"methodName":"run","fileName":"ThreadPoolExecutor.java","lineNumber":624,"className":"java.util.concurrent.ThreadPoolExecutor$Worker","nativeMethod":false},
{"methodName":"run","fileName":"TaskThread.java","lineNumber":61,"className":"org.apache.tomcat.util.threads.TaskThread$WrappingRunnable","nativeMethod":false},
{"methodName":"run","fileName":"Thread.java","lineNumber":748,"className":"java.lang.Thread","nativeMethod":false}],
"serverErrorMessage":{"line":1173,"column":null,"where":null,"schema":null,"sqlstate":"42P01","position":98,"severity":"ERROR","detail":null,"hint":null,"datatype":null,"constraint":null,"routine":"parserOpenTable","internalQuery":null,"internalPosition":0,
"message":"relation \"ohdsi.cohort_censor_stats\" does not exist","file":"d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\parser\\parse_relation.c","table":null},
"sqlstate":"42P01","nextException":null,"errorCode":0,"message":"ERROR: relation \"ohdsi.cohort_censor_stats\" does not exist\n  Position: 98",
"localizedMessage":"ERROR: relation \"ohdsi.cohort_censor_stats\" does not exist\n  Position: 98","suppressed":[]},
"message":"PreparedStatementCallback; bad SQL grammar [select cs.base_count, cs.final_count, cc.lost_count from ohdsi.cohort_summary_stats cs left join ohdsi.cohort_censor_stats cc on cc.cohort_definition_id = cs.cohort_definition_id where cs.cohort_definition_id = ? and cs.mode_id = ?]; nested exception is org.postgresql.util.PSQLException: ERROR: relation \"ohdsi.cohort_censor_stats\" does not exist\n  Position: 98",
"localizedMessage":"PreparedStatementCallback; bad SQL grammar [select cs.base_count, cs.final_count, cc.lost_count from ohdsi.cohort_summary_stats cs left join ohdsi.cohort_censor_stats cc on cc.cohort_definition_id = cs.cohort_definition_id where cs.cohort_definition_id = ? and cs.mode_id = ?]; nested exception is org.postgresql.util.PSQLException: ERROR: relation \"ohdsi.cohort_censor_stats\" does not exist\n  Position: 98",
"suppressed":[]},
"headers":{"id":"6a055392-11d5-47f0-2222-22c710d88e6e","timestamp":1599183741355}}

So it seems the table cohort_censor_stats doesn’t exist when it’s expected to? (I can confirm I cannot find it in pgAdmin.)

Does the fact that the cohort_inclusion_stats_pkey from my last post exists mean there’s a versioning issue within Broadsea?

Thanks again
Ty

Not sure; I’m not familiar with Broadsea, but there’s a specific endpoint you can hit to get the CDM results schema DDL with all the necessary tables:

@lee_evans: do you think Broadsea can be updated to include a standalone WebAPI, with a separate CDM instance created and pre-populated with the CDM schema and results schema? I think there is some confusion with those legacy WebAPI tables being in the WebAPI schema, and we should really demonstrate the proper configuration of a single WebAPI (to store configuration) and multiple CDM instances (each with their own CDM and results schemas).

Thanks again @Chris_Knoll.

I’m not sure I follow with this:

but there’s a specific endpoint that you can get to to find the CDM results schema DDL that has all the necessary tables

but it does seem to be a Broadsea idiosyncrasy I’m dealing with? If so, any ideas from @lee_evans would be most appreciated.

Right, Broadsea has everything pre-set up, but I was just saying that if you go to the endpoint /WebAPI/ddl/results, you’ll get a DDL script that will initialize a results schema (but yes, this is something Broadsea saves you from having to do yourself).

I’m just not sure the CDMs that come with broadsea have been initialized properly with the results schema.

Ah I see, thank you. Sounds like installing WebAPI and going through that process might be the way to go.

As an aside, I went to http://localhost:8080/WebAPI/ddl/results and there is DDL-like code; the below is the first statement.

IF OBJECT_ID('ohdsi.cohort', 'U') IS NULL
CREATE TABLE ohdsi.cohort
(
	COHORT_DEFINITION_ID int NOT NULL,
	SUBJECT_ID bigint NOT NULL,
	cohort_start_date date NOT NULL,
	cohort_end_date date NOT NULL
);

It doesn’t work in pgAdmin, but the disclaimer is that I’m pretty amateur when it comes to Postgres. I suspect I need to wrap it in a function/stored procedure to handle the conditional “IF” (should there be a “THEN” too?). Any suggestions on this front are welcome too, but equally feel free to ignore this if it’s beyond the scope of this forum.

Sorry, I gave you incomplete information:

The endpoint I gave you defaults to MSSQL Server; there are other params you can pass to do other helpful things (like specifying a dialect and a results schema name):

http://localhost:8080/WebAPI/ddl/results?dialect=postgresql&results=yourResultSchema&vocab=CdmSchema

This should give you a valid PostgreSQL DDL script that you can execute directly in a pgAdmin console (no stored procedures required). It also creates some default indexes, but you may want to read up on what those indexes do before blindly applying them. There are also statements that initialize a ‘concept_hierarchy’ table, which depends on having fully populated vocabulary tables in a CDM schema (see the vocab param above); if you want Achilles results to show up in Atlas, you’ll need this table. Also worth noting: each CDM schema should be paired with its own results schema. You don’t have one results schema for five different CDMs; that would lead to results from one CDM overwriting another’s. So think of it as each CDM needing its own results schema for WebAPI to record the results of an analysis query.

Remember: the results schema is supposed to be co-located in the same database (but a separate schema) as your CDM data. This is because we do joins across the CDM tables (e.g. vocabulary) and the results schema tables (e.g. to resolve a concept_id back to its name via the CONCEPT table). So if you were thinking you would have all your CDMs on one database server and set up your results schemas on another database server, that’s not how it works: you need to put those schemas where they can ‘talk’ to each other.
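As an illustration of why co-location matters, a generated query will routinely join both schemas in a single statement. Here is a minimal sketch using the schema names from this thread (synpuf for the CDM, ohdsi for results; adjust for your own deployment):

```sql
-- Sketch: results schema (ohdsi) joined to the CDM schema (synpuf) in one
-- statement, which only works if both schemas live in the same database.
SELECT co.subject_id, co.cohort_start_date, p.year_of_birth
  FROM ohdsi.cohort co
  JOIN synpuf.person p ON p.person_id = co.subject_id
 WHERE co.cohort_definition_id = 175;
```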

Also remember that the WebAPI database is different from the CDM database. So if you are going to go through your own WebAPI setup (which I’m happy to help you with), remember that you’ll set up a separate database to host the WebAPI tables (these are created automatically when you launch the WebAPI service, provided you follow the install guide carefully), and you’ll insert JDBC connection information into the WebAPI source table, which WebAPI uses to connect to the (external) CDM databases to perform analytical tasks. I recommend PostgreSQL for your WebAPI database, but know that your CDM data can be housed in Redshift or many other large-scale DBMSs. WebAPI performs only simple insert/update/query operations on a CDM, so there’s more flexibility in what the CDM side can be, whereas WebAPI itself has very limited RDBMS support; PostgreSQL is one of the supported options.
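As a hedged sketch of that source-registration step: in WebAPI 2.x this typically means inserting rows like the following into the WebAPI schema. The daimon_type codes shown (0 = CDM, 1 = vocabulary, 2 = results) and all names, IDs, and the JDBC string are placeholders to verify against your version’s configuration documentation before running:

```sql
-- Hedged sketch: register a CDM source with WebAPI. Verify table/column names
-- against your WebAPI version; all values below are placeholders.
INSERT INTO webapi.source
  (source_id, source_name, source_key, source_connection, source_dialect)
VALUES
  (1, 'My SynPuf CDM', 'SYNPUF',
   'jdbc:postgresql://cdm-host:5432/cdm_db?user=ohdsi_user&password=changeme',
   'postgresql');

INSERT INTO webapi.source_daimon
  (source_daimon_id, source_id, daimon_type, table_qualifier, priority)
VALUES
  (1, 1, 0, 'synpuf', 1),  -- daimon_type 0: CDM schema
  (2, 1, 1, 'synpuf', 1),  -- daimon_type 1: vocabulary schema
  (3, 1, 2, 'ohdsi',  1);  -- daimon_type 2: results schema
```

After inserting (or changing) source rows, restarting WebAPI or refreshing its source cache is generally needed for Atlas to pick up the new source.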

For your own small test environment, I would just take a cut of your patient-level data, load it into a PostgreSQL CDM schema, initialize a results schema in it, and configure a WebAPI source to point to it. This is so you can see the WebAPI stack work end-to-end through Atlas before doing the hard work of building out a huge RDBMS platform to host all of your patient-level data, with multiple CDM sources (if that’s what you have).

Thanks @Chris_Knoll! Once again super informative and I appreciate the time you’ve spent on this.

I have started the process of setting up WebAPI as per the install guide link you gave. I have been going step by step and am now up to the PostgreSQL Installation Guide.

The instructions are based on pgAdmin 3, and I have done my best to apply the logic to the slightly different pgAdmin 4. This has seemingly gone well, with one exception: in the webapi schema Default Privileges, I cannot seem to grant USAGE to the group ohdsi_app on TYPES. I’ve taken a series of screenshots to demonstrate (the “+” button on the Types sub-tab is greyed out; see screenshot #6 below).

(1)


(2)

(3)

(4)

(5)

(6) {this is where the “+” button on the Types sub-tab is greyed out}

(7)

I’ll no doubt find out in due course whether not having this setting breaks something, but if you see anything obvious I’m doing wrong, please let me know.

Once I’ve got to a stage where things are working, I’m happy to provide documentation of the pgAdmin 4 differences from pgAdmin 3 if that’s of use.

@Chris_Knoll I agree that if Broadsea is still actively used in the OHDSI community, then we should consider developing and releasing an OHDSI Broadsea web-tools 2.0 with improvements like this.

t