Optimizing PostgreSQL server for OHDSI?

I have locally installed PostgreSQL and loaded it with Synpuf data in CDM format. However, some queries (especially those for cohort definitions) take a really long time (days). Does anyone have suggestions for improving the performance of the PostgreSQL configuration? (I have created all the indices specified in the SQL provided in the CommonDataModel repo.) Can I somehow increase the query cache? Temp space? Things like that?

I have the same issue. Sorry @schuemie, I’m no help, but I feel somewhat relieved. :laughing:
I’m so curious, too.

I did a little digging and found an article describing different configuration options for PostgreSQL. I think the default configuration probably sets up memory settings to handle relatively light workloads, nothing like the types of queries we perform on a CDM (or at least not like cohort definition queries). This is from: Scaling PostgreSQL for Large Amounts of Data | Severalnines

For vertical scaling, it may be necessary to change some configuration parameters to let PostgreSQL use new or better hardware resources. Let’s look at some of these parameters from the PostgreSQL documentation.

  • work_mem: Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Several running sessions could be doing such operations concurrently, so the total memory used could be many times the value of work_mem.
  • maintenance_work_mem: Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. Larger settings might improve performance for vacuuming and for restoring database dumps.
  • autovacuum_work_mem: Specifies the maximum amount of memory to be used by each autovacuum worker process.
  • autovacuum_max_workers: Specifies the maximum number of autovacuum processes that may be running at any one time.
  • max_worker_processes: Sets the maximum number of background processes that the system can support. This limits processes such as vacuuming, checkpointing, and other maintenance jobs.
  • max_parallel_workers: Sets the maximum number of workers that the system can support for parallel operations. Parallel workers are taken from the pool of worker processes established by the previous parameter.
  • max_parallel_maintenance_workers: Sets the maximum number of parallel workers that can be started by a single utility command. Currently, the only parallel utility command that supports the use of parallel workers is CREATE INDEX, and only when building a B-tree index.
  • effective_cache_size: Sets the planner’s assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used.
  • shared_buffers: Sets the amount of memory the database server uses for shared memory buffers. Settings significantly higher than the minimum are usually needed for good performance.
  • temp_buffers: Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables.
  • effective_io_concurrency: Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. Currently, this setting only affects bitmap heap scans.
  • max_connections: Determines the maximum number of concurrent connections to the database server. Increasing this parameter allows PostgreSQL to run more backend processes simultaneously.
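
To make that concrete: as a minimal sketch (the values below are placeholders for illustration, not recommendations), these parameters can be changed with ALTER SYSTEM and a reload, rather than editing postgresql.conf by hand:

ALTER SYSTEM SET work_mem = '64MB';              -- per sort/hash operation, per session
ALTER SYSTEM SET effective_cache_size = '24GB';  -- a planner hint only; allocates nothing
ALTER SYSTEM SET shared_buffers = '8GB';         -- takes effect only after a server restart
SELECT pg_reload_conf();                         -- applies the reloadable settings
SHOW work_mem;                                   -- verify the new value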


I seem to recall that when we were setting up a demo of some OHDSI tools on PG, we did play with the mem_ settings so that joins and other queries would use local memory rather than spilling to disk. That may be a good place to start.


This website can be useful. Choose DB type: Data Warehouse:

https://pgtune.leopard.in.ua/#/


Also… EnterpriseDB (a supported distribution of PostgreSQL) has some tools that make configuring Postgres much easier. Most of these tools are either free or cheap, or at least they were the last time I had to configure a Postgres instance.


The lessons I’ve come across these past few months (even the seniors didn’t know them; they knew MySQL a lot better):

  • postgres.conf and pgtune
  • indexing
  • vacuum, especially if you delete a lot.

The defaults in a local PostgreSQL install are very conservative and are never set automatically or prompted for; they need quite some tuning.

Thanks everyone for your input! Unfortunately, I still haven’t resolved my problem. Here’s what I did:

PGTune

Following @admin’s suggestion, I went to PGTune and created this configuration:

# DB Version: 13
# OS Type: windows
# DB Type: dw
# Total Memory (RAM): 58 GB
# CPUs num: 16
# Data Storage: hdd

ALTER SYSTEM SET
 max_connections = '40';
ALTER SYSTEM SET
 shared_buffers = '14848MB';
ALTER SYSTEM SET
 effective_cache_size = '44544MB';
ALTER SYSTEM SET
 maintenance_work_mem = '2047MB';
ALTER SYSTEM SET
 checkpoint_completion_target = '0.9';
ALTER SYSTEM SET
 wal_buffers = '16MB';
ALTER SYSTEM SET
 default_statistics_target = '500';
ALTER SYSTEM SET
 random_page_cost = '4';
ALTER SYSTEM SET
 work_mem = '23756kB';
ALTER SYSTEM SET
 min_wal_size = '4GB';
ALTER SYSTEM SET
 max_wal_size = '16GB';
ALTER SYSTEM SET
 max_worker_processes = '16';
ALTER SYSTEM SET
 max_parallel_workers_per_gather = '8';
ALTER SYSTEM SET
 max_parallel_workers = '16';
ALTER SYSTEM SET
 max_parallel_maintenance_workers = '4';

I ran the commands, restarted the server, and verified the settings took hold using SHOW. However, speed did not increase.

ANALYZE

I ran ANALYZE on all the tables the code touched. No difference in speed.
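
(For reference, the standard pg_stat_user_tables view shows when each table was last analyzed, so you can confirm the statistics are actually current:)

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE schemaname = 'synpuf';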

Extra indices

I noticed the indices provided with the DDL only index ID fields. I added extra indices on DATE fields, since these are used heavily in the query that is causing the problem, for example:

SET SEARCH_PATH TO synpuf;
CREATE INDEX idx_observation_period_start_date ON observation_period (observation_period_start_date ASC);
CREATE INDEX idx_observation_period_end_date ON observation_period (observation_period_end_date ASC);

This increased the speed of most queries, but I’m still stuck with the one query.

Problematic query

The problem I have is that one query never finishes. It is a query belonging to this heart failure cohort definition. It has now been running for over 12 hours, while the other cohort definitions I ran took minutes at most. I’ve pasted the offending query below.

Some things I noticed:

  • The concept set used contains only 111 concepts, so it is not very heavy at all.
  • Despite all the settings mentioned above, the PostgreSQL server process uses only 1 core and 100 MB of memory while running this query.

Any thoughts on how to proceed would be greatly appreciated!

CREATE TEMP TABLE qualified_events AS
	WITH primary_events(event_id, person_id, start_date, end_date, op_start_date, op_end_date, visit_occurrence_id) AS (
			-- Begin Primary Events
			SELECT P.ordinal AS event_id,
				P.person_id,
				P.start_date,
				P.end_date,
				op_start_date,
				op_end_date,
				cast(P.visit_occurrence_id AS BIGINT) AS visit_occurrence_id
			FROM (
				SELECT E.person_id,
					E.start_date,
					E.end_date,
					row_number() OVER (
						PARTITION BY E.person_id ORDER BY E.sort_date ASC
						) ordinal,
					OP.observation_period_start_date AS op_start_date,
					OP.observation_period_end_date AS op_end_date,
					cast(E.visit_occurrence_id AS BIGINT) AS visit_occurrence_id
				FROM (
					-- Begin Condition Occurrence Criteria
					SELECT C.person_id,
						C.condition_occurrence_id AS event_id,
						C.condition_start_date AS start_date,
						COALESCE(C.condition_end_date, (C.condition_start_date + 1 * INTERVAL '1 day')) AS end_date,
						C.visit_occurrence_id,
						C.condition_start_date AS sort_date
					FROM (
						SELECT co.*
						FROM synpuf.CONDITION_OCCURRENCE co
						JOIN Codesets cs
							ON (
									co.condition_concept_id = cs.concept_id
									AND cs.codeset_id = 1
									)
						) C
						-- End Condition Occurrence Criteria
					) E
				JOIN synpuf.observation_period OP
					ON E.person_id = OP.person_id
						AND E.start_date >= OP.observation_period_start_date
						AND E.start_date <= op.observation_period_end_date
				WHERE (OP.OBSERVATION_PERIOD_START_DATE + 0 * INTERVAL '1 day') <= E.START_DATE
					AND (E.START_DATE + 0 * INTERVAL '1 day') <= OP.OBSERVATION_PERIOD_END_DATE
				) P
			)

-- End Primary Events
SELECT event_id,
	person_id,
	start_date,
	end_date,
	op_start_date,
	op_end_date,
	visit_occurrence_id
FROM (
	SELECT pe.event_id,
		pe.person_id,
		pe.start_date,
		pe.end_date,
		pe.op_start_date,
		pe.op_end_date,
		row_number() OVER (
			PARTITION BY pe.person_id ORDER BY pe.start_date ASC
			) AS ordinal,
		cast(pe.visit_occurrence_id AS BIGINT) AS visit_occurrence_id
	FROM primary_events pe
	JOIN (
		-- Begin Criteria Group
		SELECT 0 AS index_id,
			person_id,
			event_id
		FROM (
			SELECT E.person_id,
				E.event_id
			FROM primary_events E
			INNER JOIN (
				-- Begin Correlated Criteria
				SELECT 0 AS index_id,
					cc.person_id,
					cc.event_id
				FROM (
					SELECT p.person_id,
						p.event_id
					FROM primary_events P
					JOIN (
						-- Begin Condition Occurrence Criteria
						SELECT C.person_id,
							C.condition_occurrence_id AS event_id,
							C.condition_start_date AS start_date,
							COALESCE(C.condition_end_date, (C.condition_start_date + 1 * INTERVAL '1 day')) AS end_date,
							C.visit_occurrence_id,
							C.condition_start_date AS sort_date
						FROM (
							SELECT co.*
							FROM synpuf.CONDITION_OCCURRENCE co
							JOIN Codesets cs
								ON (
										co.condition_concept_id = cs.concept_id
										AND cs.codeset_id = 1
										)
							) C
							-- End Condition Occurrence Criteria
						) A
						ON A.person_id = P.person_id
							AND A.START_DATE >= P.OP_START_DATE
							AND A.START_DATE <= P.OP_END_DATE
							AND A.START_DATE >= (P.START_DATE + 1 * INTERVAL '1 day')
							AND A.START_DATE <= (P.START_DATE + 365 * INTERVAL '1 day')
					) cc
				GROUP BY cc.person_id,
					cc.event_id
				HAVING COUNT(cc.event_id) >= 1
					-- End Correlated Criteria
				) CQ
				ON E.person_id = CQ.person_id
					AND E.event_id = CQ.event_id
			GROUP BY E.person_id,
				E.event_id
			HAVING COUNT(index_id) = 1
			) G
			-- End Criteria Group
		) AC
		ON AC.person_id = pe.person_id
			AND AC.event_id = pe.event_id
	) QE
WHERE QE.ordinal = 1;

Additional information: Output of EXPLAIN for the query above

Subquery Scan on qe  (cost=3195218.95..3195218.98 rows=1 width=40)
  Filter: (qe.ordinal = 1)
  CTE primary_events
    ->  Subquery Scan on p_1  (cost=2096426.76..2096697.21 rows=7212 width=40)
          ->  WindowAgg  (cost=2096426.76..2096625.09 rows=7212 width=44)
                ->  Sort  (cost=2096426.76..2096444.79 rows=7212 width=24)
                      Sort Key: co_1.person_id, co_1.condition_start_date
                      ->  Hash Join  (cost=71483.92..2095964.61 rows=7212 width=24)
                            Hash Cond: (co_1.person_id = op.person_id)
                            Join Filter: ((co_1.condition_start_date >= op.observation_period_start_date) AND (co_1.condition_start_date <= op.observation_period_end_date) AND ((op.observation_period_start_date + '00:00:00'::interval) <= co_1.condition_start_date) AND ((co_1.condition_start_date + '00:00:00'::interval) <= op.observation_period_end_date))
                            ->  Nested Loop  (cost=653.34..1991654.70 rows=584211 width=16)
                                  ->  Seq Scan on codesets cs_1  (cost=0.00..35.50 rows=10 width=8)
                                        Filter: (codeset_id = 1)
                                  ->  Bitmap Heap Scan on condition_occurrence co_1  (cost=653.34..198577.71 rows=58421 width=20)
                                        Recheck Cond: (condition_concept_id = cs_1.concept_id)
                                        ->  Bitmap Index Scan on idx_condition_concept_id  (cost=0.00..638.73 rows=58421 width=0)
                                              Index Cond: (condition_concept_id = cs_1.concept_id)
                            ->  Hash  (cost=34352.15..34352.15 rows=2098515 width=12)
                                  ->  Seq Scan on observation_period op  (cost=0.00..34352.15 rows=2098515 width=12)
  ->  WindowAgg  (cost=1098521.73..1098521.76 rows=1 width=48)
        ->  Sort  (cost=1098521.73..1098521.74 rows=1 width=40)
              Sort Key: pe.person_id, pe.start_date
              ->  Nested Loop  (cost=1098206.33..1098521.72 rows=1 width=40)
                    Join Filter: ((pe.person_id = e.person_id) AND (pe.event_id = e.event_id))
                    ->  GroupAggregate  (cost=1098206.33..1098269.29 rows=1 width=12)
                          Group Key: e.person_id, e.event_id
                          Filter: (count((0)) = 1)
                          ->  Merge Join  (cost=1098206.33..1098268.93 rows=18 width=16)
                                Merge Cond: ((e.person_id = p.person_id) AND (e.event_id = p.event_id))
                                ->  Sort  (cost=606.39..624.42 rows=7212 width=12)
                                      Sort Key: e.person_id, e.event_id
                                      ->  CTE Scan on primary_events e  (cost=0.00..144.24 rows=7212 width=12)
                                ->  GroupAggregate  (cost=1097599.94..1097606.76 rows=101 width=16)
                                      Group Key: p.person_id, p.event_id
                                      Filter: (count(p.event_id) >= 1)
                                      ->  Sort  (cost=1097599.94..1097600.70 rows=303 width=12)
                                            Sort Key: p.person_id, p.event_id
                                            ->  Hash Join  (cost=36.20..1097587.45 rows=303 width=12)
                                                  Hash Cond: (co.condition_concept_id = cs.concept_id)
                                                  ->  Nested Loop  (cost=0.57..1096959.71 rows=157090 width=16)
                                                        ->  CTE Scan on primary_events p  (cost=0.00..144.24 rows=7212 width=24)
                                                        ->  Index Scan using idx_condition_person_id on condition_occurrence co  (cost=0.57..151.86 rows=22 width=12)
                                                              Index Cond: (person_id = p.person_id)
                                                              Filter: ((condition_start_date >= p.op_start_date) AND (condition_start_date <= p.op_end_date) AND (condition_start_date >= (p.start_date + '1 day'::interval)) AND (condition_start_date <= (p.start_date + '365 days'::interval)))
                                                  ->  Hash  (cost=35.50..35.50 rows=10 width=8)
                                                        ->  Seq Scan on codesets cs  (cost=0.00..35.50 rows=10 width=8)
                                                              Filter: (codeset_id = 1)
                    ->  CTE Scan on primary_events pe  (cost=0.00..144.24 rows=7212 width=40)
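
(Note that these numbers are plain EXPLAIN estimates. EXPLAIN (ANALYZE, BUFFERS) executes the statement and reports actual row counts next to the estimates, which shows where the planner goes wrong; here is a sketch on one of the joins from the query, since running it on the full statement is only practical once it finishes:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*)
FROM synpuf.condition_occurrence co
JOIN Codesets cs
	ON co.condition_concept_id = cs.concept_id
		AND cs.codeset_id = 1;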

So… if I call ANALYZE on the temp Codesets table right before running the query above:

  • Postgres spawns 8 additional processes to answer the query
  • The query completes in 3 minutes, instead of >15 hours.

SqlRender automatically adds an ANALYZE statement after a CTAS statement on PostgreSQL. However, because Codesets is created using CREATE TEMP TABLE and INSERT INTO, no ANALYZE statement is added. @Chris_Knoll: maybe we can add a hint in the OHDSI SQL generated by Circe that SqlRender translates to ANALYZE on PostgreSQL?

(Sorry for the spam)

I guess we don’t need a hint: we can use the UPDATE STATISTICS statement, which is already supported by SqlRender and is translated to ANALYZE on PostgreSQL.

Ok, so add an UPDATE STATISTICS call to each of our temp table creations?
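
Something like this, as a sketch (the single-row SELECT is just a stand-in for the real concept set query, and the exact generated SQL may differ):

-- OHDSI SQL (SQL Server dialect):
CREATE TABLE #Codesets (codeset_id INT NOT NULL, concept_id BIGINT NOT NULL);
INSERT INTO #Codesets (codeset_id, concept_id)
SELECT 0, 44784217;          -- stand-in for the real concept set query
UPDATE STATISTICS #Codesets; -- the new line

-- SqlRender's PostgreSQL translation:
CREATE TEMP TABLE Codesets (codeset_id INT NOT NULL, concept_id BIGINT NOT NULL);
INSERT INTO Codesets (codeset_id, concept_id)
SELECT 0, 44784217;
ANALYZE Codesets;            -- refreshes the planner statistics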

The interesting thing about indices is that more is not always better. One thing that helps when you have lots of indices is to put them in a separate schema and place that schema on a separate physical drive. This is a hack I have used in the past, granted that was with spinning rust and not modern solid-state drives.

It is clear that the query optimizer does not think spinning up more workers will help. There is a setting for this, but I cannot remember what it is right now. If you cannot find it, I can dig out some of my old Postgres performance books.
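
(The settings I may be thinking of are the parallel-query knobs; a sketch of the ones the planner consults, assuming a reasonably recent PostgreSQL:)

SHOW max_parallel_workers_per_gather;  -- workers the planner may attach to one Gather node
SHOW min_parallel_table_scan_size;     -- tables smaller than this are scanned serially
SHOW parallel_setup_cost;              -- planner's assumed cost of launching workers
SHOW parallel_tuple_cost;              -- planner's assumed cost per tuple passed back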

There are some nested loops in your EXPLAIN plan; those are always sore points for speed. Conditional WHEREs are a known hot spot to optimize manually. Unlike what I have seen with SQL Server, Postgres seems to prefer subselects over conditional WHEREs… or perhaps that has to do with the different kinds of data I deal with in the medical field instead of MDM; hard to say without direct tests.

Anyway, sorry for the ramble, hope it helps.

EDIT:
One other thing: if you are doing batch inserts/updates/deletes, it may help to turn off autovacuum and manually VACUUM after each batch. This is less important if you only do inserts, but indices may become bloated if many updates and/or deletes have been run. The only way to know is to test.
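
Something along these lines, as a sketch (table name borrowed from earlier in the thread):

ALTER TABLE synpuf.condition_occurrence SET (autovacuum_enabled = false);
-- ... run the batch of inserts/updates/deletes ...
VACUUM ANALYZE synpuf.condition_occurrence; -- reclaim dead tuples and refresh statistics
ALTER TABLE synpuf.condition_occurrence SET (autovacuum_enabled = true);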

OHDSI is the first time I have ever used SQL for data warehouse purposes. We used PostgreSQL because it is the trendy SQL for self-learners, and the skills can bridge into other careers.

The biggest comment I want to make is that it seems very hard to get PostgreSQL to run efficiently across multiple CPU cores (particularly for SELECT). Sometimes a query is simply fast. Sometimes a very simple CLUSTER (30 seconds) cuts query time by 50,000x (from no response in 5 days to a 5-minute query). Sometimes you just have to wait a day.
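
(By “cluster” I mean something like the following sketch, which physically rewrites the table in index order; note it takes an exclusive lock while it runs:)

CLUSTER synpuf.condition_occurrence USING idx_condition_person_id;
ANALYZE synpuf.condition_occurrence; -- re-gather statistics after the rewrite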

How PostgreSQL works under the hood is very mysterious and hard to understand.

Yes, or turn the CREATE TABLE + INSERT INTO into a CTAS (SELECT ... INTO in OHDSI SQL), which will have the same effect. For parallel databases like Redshift a CTAS will probably also be much faster. I think this shouldn’t be too difficult for Codesets, since there is always just one INSERT INTO statement (with UNION ALLs)?
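
Roughly this shape, as a sketch (the single-row SELECT stands in for the real UNION ALL query, and the exact rendered SQL may differ):

-- OHDSI SQL CTAS form:
SELECT 0 AS codeset_id, CAST(44784217 AS BIGINT) AS concept_id
INTO #Codesets;

-- SqlRender's PostgreSQL rendering, including the ANALYZE it already appends after a CTAS:
CREATE TEMP TABLE Codesets AS
SELECT 0 AS codeset_id, CAST(44784217 AS BIGINT) AS concept_id;
ANALYZE Codesets;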

I ran into another issue: even after implementing all the changes above, this cohort definition did not complete after running the entire weekend. Here’s the SQL (with the new ANALYZE for Codesets) up to the point where it got stuck:

CREATE TEMP TABLE Codesets (
	codeset_id INT NOT NULL,
	concept_id BIGINT NOT NULL
	);

INSERT INTO Codesets (
	codeset_id,
	concept_id
	)
SELECT 0 AS codeset_id,
	c.concept_id
FROM (
	SELECT DISTINCT I.concept_id
	FROM (
		SELECT concept_id
		FROM synpuf.CONCEPT
		WHERE concept_id IN (
				44784217,
				38001137,
				38001138,
				315078,
				444070
				)
		
		UNION
		
		SELECT c.concept_id
		FROM synpuf.CONCEPT c
		JOIN synpuf.CONCEPT_ANCESTOR ca
			ON c.concept_id = ca.descendant_concept_id
				AND ca.ancestor_concept_id IN (
					44784217,
					38001137,
					38001138,
					315078,
					444070
					)
				AND c.invalid_reason IS NULL
		) I
	) C

UNION ALL

SELECT 1 AS codeset_id,
	c.concept_id
FROM (
	SELECT DISTINCT I.concept_id
	FROM (
		SELECT concept_id
		FROM synpuf.CONCEPT
		WHERE concept_id IN (
				21600248,
				43013024,
				40228152,
				40241331,
				1310149,
				1309204,
				45892847
				)
		
		UNION
		
		SELECT c.concept_id
		FROM synpuf.CONCEPT c
		JOIN synpuf.CONCEPT_ANCESTOR ca
			ON c.concept_id = ca.descendant_concept_id
				AND ca.ancestor_concept_id IN (
					21600248,
					43013024,
					40228152,
					40241331,
					1310149,
					1309204,
					45892847
					)
				AND c.invalid_reason IS NULL
		) I
	) C

UNION ALL

SELECT 2 AS codeset_id,
	c.concept_id
FROM (
	SELECT DISTINCT I.concept_id
	FROM (
		SELECT concept_id
		FROM synpuf.CONCEPT
		WHERE concept_id IN (
				45890325,
				45890400,
				2107068,
				4051932
				)
		
		UNION
		
		SELECT c.concept_id
		FROM synpuf.CONCEPT c
		JOIN synpuf.CONCEPT_ANCESTOR ca
			ON c.concept_id = ca.descendant_concept_id
				AND ca.ancestor_concept_id IN (
					45890325,
					45890400,
					2107068,
					4051932
					)
				AND c.invalid_reason IS NULL
		) I
	) C;
	
ANALYZE Codesets;

CREATE TEMP TABLE qualified_events AS
	WITH primary_events(event_id, person_id, start_date, end_date, op_start_date, op_end_date, visit_occurrence_id) AS (
			-- Begin Primary Events
			SELECT P.ordinal AS event_id,
				P.person_id,
				P.start_date,
				P.end_date,
				op_start_date,
				op_end_date,
				cast(P.visit_occurrence_id AS BIGINT) AS visit_occurrence_id
			FROM (
				SELECT E.person_id,
					E.start_date,
					E.end_date,
					row_number() OVER (
						PARTITION BY E.person_id ORDER BY E.sort_date ASC
						) ordinal,
					OP.observation_period_start_date AS op_start_date,
					OP.observation_period_end_date AS op_end_date,
					cast(E.visit_occurrence_id AS BIGINT) AS visit_occurrence_id
				FROM (
					-- Begin Condition Occurrence Criteria
					SELECT C.person_id,
						C.condition_occurrence_id AS event_id,
						C.condition_start_date AS start_date,
						COALESCE(C.condition_end_date, (C.condition_start_date + 1 * INTERVAL '1 day')) AS end_date,
						C.visit_occurrence_id,
						C.condition_start_date AS sort_date
					FROM (
						SELECT co.*,
							row_number() OVER (
								PARTITION BY co.person_id ORDER BY co.condition_start_date,
									co.condition_occurrence_id
								) AS ordinal
						FROM synpuf.CONDITION_OCCURRENCE co
						JOIN Codesets cs
							ON (
									co.condition_concept_id = cs.concept_id
									AND cs.codeset_id = 0
									)
						) C
					WHERE C.ordinal = 1
						-- End Condition Occurrence Criteria
					) E
				JOIN synpuf.observation_period OP
					ON E.person_id = OP.person_id
						AND E.start_date >= OP.observation_period_start_date
						AND E.start_date <= op.observation_period_end_date
				WHERE (OP.OBSERVATION_PERIOD_START_DATE + 0 * INTERVAL '1 day') <= E.START_DATE
					AND (E.START_DATE + 0 * INTERVAL '1 day') <= OP.OBSERVATION_PERIOD_END_DATE
				) P
			WHERE P.ordinal = 1
			)

-- End Primary Events
SELECT event_id,
	person_id,
	start_date,
	end_date,
	op_start_date,
	op_end_date,
	visit_occurrence_id
FROM (
	SELECT pe.event_id,
		pe.person_id,
		pe.start_date,
		pe.end_date,
		pe.op_start_date,
		pe.op_end_date,
		row_number() OVER (
			PARTITION BY pe.person_id ORDER BY pe.start_date ASC
			) AS ordinal,
		cast(pe.visit_occurrence_id AS BIGINT) AS visit_occurrence_id
	FROM primary_events pe
	JOIN (
		-- Begin Criteria Group
		SELECT 0 AS index_id,
			person_id,
			event_id
		FROM (
			SELECT E.person_id,
				E.event_id
			FROM primary_events E
			INNER JOIN (
				-- Begin Correlated Criteria
				SELECT 0 AS index_id,
					cc.person_id,
					cc.event_id
				FROM (
					SELECT p.person_id,
						p.event_id
					FROM primary_events P
					JOIN (
						-- Begin Condition Occurrence Criteria
						SELECT C.person_id,
							C.condition_occurrence_id AS event_id,
							C.condition_start_date AS start_date,
							COALESCE(C.condition_end_date, (C.condition_start_date + 1 * INTERVAL '1 day')) AS end_date,
							C.visit_occurrence_id,
							C.condition_start_date AS sort_date
						FROM (
							SELECT co.*
							FROM synpuf.CONDITION_OCCURRENCE co
							JOIN Codesets cs
								ON (
										co.condition_concept_id = cs.concept_id
										AND cs.codeset_id = 0
										)
							) C
							-- End Condition Occurrence Criteria
						) A
						ON A.person_id = P.person_id
							AND A.START_DATE >= P.OP_START_DATE
							AND A.START_DATE <= P.OP_END_DATE
							AND A.START_DATE >= (P.START_DATE + 1 * INTERVAL '1 day')
							AND A.START_DATE <= P.OP_END_DATE
					) cc
				GROUP BY cc.person_id,
					cc.event_id
				HAVING COUNT(cc.event_id) >= 1
				-- End Correlated Criteria
				
				UNION ALL
				
				-- Begin Correlated Criteria
				SELECT 1 AS index_id,
					cc.person_id,
					cc.event_id
				FROM (
					SELECT p.person_id,
						p.event_id
					FROM primary_events P
					JOIN (
						-- Begin Drug Exposure Criteria
						SELECT C.person_id,
							C.drug_exposure_id AS event_id,
							C.drug_exposure_start_date AS start_date,
							COALESCE(C.DRUG_EXPOSURE_END_DATE, (DRUG_EXPOSURE_START_DATE + C.DAYS_SUPPLY * INTERVAL '1 day'), (C.DRUG_EXPOSURE_START_DATE + 1 * INTERVAL '1 day')) AS end_date,
							C.visit_occurrence_id,
							C.drug_exposure_start_date AS sort_date
						FROM (
							SELECT de.*
							FROM synpuf.DRUG_EXPOSURE de
							JOIN Codesets cs
								ON (
										de.drug_concept_id = cs.concept_id
										AND cs.codeset_id = 1
										)
							) C
							-- End Drug Exposure Criteria
						) A
						ON A.person_id = P.person_id
							AND A.START_DATE >= P.OP_START_DATE
							AND A.START_DATE <= P.OP_END_DATE
							AND A.START_DATE >= (P.START_DATE + 0 * INTERVAL '1 day')
							AND A.START_DATE <= P.OP_END_DATE
					) cc
				GROUP BY cc.person_id,
					cc.event_id
				HAVING COUNT(cc.event_id) >= 2
				-- End Correlated Criteria
				
				UNION ALL
				
				-- Begin Correlated Criteria
				SELECT 2 AS index_id,
					cc.person_id,
					cc.event_id
				FROM (
					SELECT p.person_id,
						p.event_id
					FROM primary_events P
					JOIN (
						-- Begin Procedure Occurrence Criteria
						SELECT C.person_id,
							C.procedure_occurrence_id AS event_id,
							C.procedure_date AS start_date,
							(C.procedure_date + 1 * INTERVAL '1 day') AS END_DATE,
							C.visit_occurrence_id,
							C.procedure_date AS sort_date
						FROM (
							SELECT po.*
							FROM synpuf.PROCEDURE_OCCURRENCE po
							JOIN Codesets cs
								ON (
										po.procedure_concept_id = cs.concept_id
										AND cs.codeset_id = 2
										)
							) C
							-- End Procedure Occurrence Criteria
						) A
						ON A.person_id = P.person_id
							AND A.START_DATE >= P.OP_START_DATE
							AND A.START_DATE <= P.OP_END_DATE
							AND A.START_DATE >= (P.START_DATE + 0 * INTERVAL '1 day')
							AND A.START_DATE <= P.OP_END_DATE
					) cc
				GROUP BY cc.person_id,
					cc.event_id
				HAVING COUNT(cc.event_id) >= 1
					-- End Correlated Criteria
				) CQ
				ON E.person_id = CQ.person_id
					AND E.event_id = CQ.event_id
			GROUP BY E.person_id,
				E.event_id
			HAVING COUNT(index_id) > 0
			) G
			-- End Criteria Group
		) AC
		ON AC.person_id = pe.person_id
			AND AC.event_id = pe.event_id
	) QE
WHERE QE.ordinal = 1;

Again the issue seems to be the query constructing the qualified_events table. This time, the Codesets table has 11K rows, and the CTE called primary_events has 1.4 million rows. Here’s the output of EXPLAIN:

  Filter: (qe.ordinal = 1)
  CTE primary_events
    ->  Subquery Scan on p_3  (cost=9179691.64..9559318.99 rows=3 width=40)
          Filter: (p_3.ordinal = 1)
          ->  WindowAgg  (cost=9179691.64..9559312.27 rows=538 width=44)
                ->  Merge Join  (cost=9179691.64..9559298.82 rows=538 width=24)
                      Merge Cond: (c.person_id = op.person_id)
                      Join Filter: ((c.condition_start_date >= op.observation_period_start_date) AND (c.condition_start_date <= op.observation_period_end_date) AND ((op.observation_period_start_date + '00:00:00'::interval) <= c.condition_start_date) AND ((c.condition_start_date + '00:00:00'::interval) <= op.observation_period_end_date))
                      ->  Subquery Scan on c  (cost=9179684.21..9484983.99 rows=43614 width=16)
                            Filter: (c.ordinal = 1)
                            ->  WindowAgg  (cost=9179684.21..9375948.36 rows=8722851 width=358)
                                  ->  Sort  (cost=9179684.21..9201491.34 rows=8722851 width=20)
                                        Sort Key: co_1.person_id, co_1.condition_start_date, co_1.condition_occurrence_id
                                        ->  Hash Join  (cost=202.51..7995209.38 rows=8722851 width=20)
                                              Hash Cond: (co_1.condition_concept_id = cs_3.concept_id)
                                              ->  Seq Scan on condition_occurrence co_1  (cost=0.00..6771415.08 rows=303030208 width=24)
                                              ->  Hash  (cost=198.53..198.53 rows=319 width=8)
                                                    ->  Seq Scan on codesets cs_3  (cost=0.00..198.53 rows=319 width=8)
                                                          Filter: (codeset_id = 0)
                      ->  Index Scan using idx_observation_period_id on observation_period op  (cost=0.43..67876.15 rows=2098515 width=12)
  ->  WindowAgg  (cost=1675.59..1675.62 rows=1 width=48)
        ->  Sort  (cost=1675.59..1675.60 rows=1 width=40)
              Sort Key: pe.person_id, pe.start_date
              ->  Nested Loop  (cost=1675.44..1675.58 rows=1 width=40)
                    Join Filter: ((pe.person_id = e.person_id) AND (pe.event_id = e.event_id))
                    ->  GroupAggregate  (cost=1675.44..1675.47 rows=1 width=12)
                          Group Key: e.person_id, e.event_id
                          Filter: (count((0)) > 0)
                          ->  Sort  (cost=1675.44..1675.45 rows=1 width=16)
                                Sort Key: e.person_id, e.event_id
                                ->  Hash Join  (cost=652.09..1675.43 rows=1 width=16)
                                      Hash Cond: ((p.person_id = e.person_id) AND (p.event_id = e.event_id))
                                      ->  Append  (cost=651.98..1675.30 rows=3 width=16)
                                            ->  GroupAggregate  (cost=651.98..652.03 rows=1 width=16)
                                                  Group Key: p.person_id, p.event_id
                                                  Filter: (count(p.event_id) >= 1)
                                                  ->  Sort  (cost=651.98..651.99 rows=2 width=12)
                                                        Sort Key: p.person_id, p.event_id
                                                        ->  Hash Join  (cost=203.09..651.97 rows=2 width=12)
                                                              Hash Cond: (co.condition_concept_id = cs.concept_id)
                                                              ->  Nested Loop  (cost=0.57..449.20 rows=65 width=16)
                                                                    ->  CTE Scan on primary_events p  (cost=0.00..0.06 rows=3 width=24)
                                                                    ->  Index Scan using idx_condition_person_id on condition_occurrence co  (cost=0.57..149.49 rows=22 width=12)
                                                                          Index Cond: (person_id = p.person_id)
                                                                          Filter: ((condition_start_date >= p.op_start_date) AND (condition_start_date <= p.op_end_date) AND (condition_start_date <= p.op_end_date) AND (condition_start_date >= (p.start_date + '1 day'::interval)))
                                                              ->  Hash  (cost=198.53..198.53 rows=319 width=8)
                                                                    ->  Seq Scan on codesets cs  (cost=0.00..198.53 rows=319 width=8)
                                                                          Filter: (codeset_id = 0)
                                            ->  GroupAggregate  (cost=468.59..468.83 rows=1 width=16)
                                                  Group Key: p_1.person_id, p_1.event_id
                                                  Filter: (count(p_1.event_id) >= 2)
                                                  ->  Sort  (cost=468.59..468.64 rows=20 width=12)
                                                        Sort Key: p_1.person_id, p_1.event_id
                                                        ->  Hash Join  (cost=162.54..468.16 rows=20 width=12)
                                                              Hash Cond: (cs_1.concept_id = de.drug_concept_id)
                                                              ->  Seq Scan on codesets cs_1  (cost=0.00..198.53 rows=10690 width=8)
                                                                    Filter: (codeset_id = 1)
                                                              ->  Hash  (cost=162.26..162.26 rows=22 width=16)
                                                                    ->  Nested Loop  (cost=0.57..162.26 rows=22 width=16)
                                                                          ->  CTE Scan on primary_events p_1  (cost=0.00..0.06 rows=3 width=24)
                                                                          ->  Index Scan using idx_drug_person_id on drug_exposure de  (cost=0.57..54.00 rows=7 width=12)
                                                                                Index Cond: (person_id = p_1.person_id)
                                                                                Filter: ((drug_exposure_start_date >= p_1.op_start_date) AND (drug_exposure_start_date <= p_1.op_end_date) AND (drug_exposure_start_date <= p_1.op_end_date) AND (drug_exposure_start_date >= (p_1.start_date + '00:00:00'::interval)))
                                            ->  GroupAggregate  (cost=554.36..554.39 rows=1 width=16)
                                                  Group Key: p_2.person_id, p_2.event_id
                                                  Filter: (count(p_2.event_id) >= 1)
                                                  ->  Sort  (cost=554.36..554.37 rows=1 width=12)
                                                        Sort Key: p_2.person_id, p_2.event_id
                                                        ->  Hash Join  (cost=200.01..554.35 rows=1 width=12)
                                                              Hash Cond: (po.procedure_concept_id = cs_2.concept_id)
                                                              ->  Nested Loop  (cost=0.57..354.70 rows=56 width=16)
                                                                    ->  CTE Scan on primary_events p_2  (cost=0.00..0.06 rows=3 width=24)
                                                                    ->  Index Scan using idx_procedure_person_id on procedure_occurrence po  (cost=0.57..118.02 rows=19 width=12)
                                                                          Index Cond: (person_id = p_2.person_id)
                                                                          Filter: ((procedure_date >= p_2.op_start_date) AND (procedure_date <= p_2.op_end_date) AND (procedure_date <= p_2.op_end_date) AND (procedure_date >= (p_2.start_date + '00:00:00'::interval)))
                                                              ->  Hash  (cost=198.53..198.53 rows=73 width=8)
                                                                    ->  Seq Scan on codesets cs_2  (cost=0.00..198.53 rows=73 width=8)
                                                                          Filter: (codeset_id = 2)
                                      ->  Hash  (cost=0.06..0.06 rows=3 width=12)
                                            ->  CTE Scan on primary_events e  (cost=0.00..0.06 rows=3 width=12)
                    ->  CTE Scan on primary_events pe  (cost=0.00..0.06 rows=3 width=40)

If I decouple the two (first create a temp table called primary_events, call ANALYZE on it, then run the second part of the query), the whole thing completes in 13 minutes (5 minutes for primary_events, 8 for qualified_events).
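
In outline, the decoupled version looks like this (the long query bodies are elided; see the full statement above):

CREATE TEMP TABLE primary_events AS
SELECT ... ;            -- the body of the primary_events CTE

ANALYZE primary_events; -- the planner now sees 1.4 million rows instead of estimating 3

CREATE TEMP TABLE qualified_events AS
SELECT ...              -- the remainder of the original statement, now reading the temp table
FROM primary_events pe
JOIN ( ... ) AC
	ON AC.person_id = pe.person_id
		AND AC.event_id = pe.event_id;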

In theory, a CTE (Common Table Expression) isn’t actually resolved to data; instead, you can think of its SQL as being copied into every place the CTE is referenced in the main query. Since this particular CTE is referenced 5 times in the main query, that is a lot of duplicated work.

@Chris_Knoll: would it make sense to turn the CTE into a temp table in Circe’s output?

That is the theory, but PG actually materializes the statement (this changed with the optimizations in PG 12 and later). You are correct that the CTE should just be a reusable expression applied across the query, with predicates ‘pushed down’ into it, but the PG devs (in their infinite wisdom) went a different way in order to implement something called ‘fencing’. It’s a long story…
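
(For what it’s worth, PG 12 also added per-CTE control over the fencing. With NOT MATERIALIZED the planner may inline the CTE at each reference instead of computing it once behind the fence; a sketch with a stand-in body, since the real one is long:)

WITH primary_events AS NOT MATERIALIZED (
	SELECT 1 AS person_id, 1 AS event_id -- stands in for the real CTE body
	)
SELECT pe.person_id
FROM primary_events pe
JOIN primary_events e
	ON e.event_id = pe.event_id;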

In any case, creating a temp table for primary events was considered, but the problem was cases like ‘primary events are all visits’, which lead to a complete copy of the VISIT_OCCURRENCE table into tempdb just so you can join it to the criteria in ‘qualified events’. Maybe the solution is to make the query builder place the CTE part into a sub-query and do the copy-paste of the CTE query into the parts of the query itself, instead of having the different query engines handle CTE materialization differently. I can look into this avenue… My other concern with temp tables is that as you create them, they lose all indexing (like sorting by dates, or indexes on concept_ids)…

The thing I notice in your plan, though, is a bit of sorting and a sequential table scan (Seq Scan on condition_occurrence):

 Hash Cond: (co_1.condition_concept_id = cs_3.concept_id)
                                              ->  Seq Scan on condition_occurrence co_1  (cost=0.00..6771415.08 rows=303030208 width=24)

I feel like there are sort orders you can apply to PG tables (i.e., I think the data should be sorted in date order within person_id), and I also think the standard concept_id columns should be part of the index. Maybe you could play with indexes on condition_occurrence alone to see if that changes the plan. Also, I think memory settings determine whether an index can be used or whether the planner just dredges through the table sequentially to find records. Did you play with memory options?


Thanks @Chris_Knoll! I hadn’t thought about all the limitations of using temp tables you bring up. I agree it probably is a bad idea.

I’m using the PG options described earlier in this thread, which allow a lot of memory to be used.

I’ve created all the required indices on the CDM tables, including the index on condition_occurrence for condition_concept_id (I can see it in pgAdmin). I have no clue why the plan doesn’t use an index scan for this field.
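
(A catalog query gives the same check without pgAdmin, for anyone following along:)

SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'synpuf'
	AND tablename = 'condition_occurrence';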

Right, so here’s what we have:

CREATE INDEX idx_condition_person_id ON condition_occurrence (person_id ASC);
CLUSTER condition_occurrence USING idx_condition_person_id;
CREATE INDEX idx_condition_concept_id ON condition_occurrence (condition_concept_id ASC);
CREATE INDEX idx_condition_visit_id ON condition_occurrence (visit_occurrence_id ASC);

One thing I see missing is that nothing sorts the records by date (i.e., start_date). Just about everything in cohort definitions seeks records by date (by both person_id and concept_id, ordered by date). I wonder if including that column in the index would lead to an index scan instead of a sequential table scan…
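
Something along these lines, as a sketch (the index name is made up; equality columns first, with the date last so range predicates can use it):

CREATE INDEX idx_condition_person_concept_date
	ON condition_occurrence (person_id, condition_concept_id, condition_start_date);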
