
Negative control selection

Here’s a proposal for getting to a set of negative controls. Let’s take

  • 4 exposures with 25 negative control outcomes each
  • 4 outcomes with 25 negative control exposures each

The reason for using both negative control exposures and outcomes is that stratifying results either by outcome or by exposure might tell us something about when a specific method is more or less appropriate. The choice of 4 and 25 keeps this somewhat tractable, given that we will probably use these 2*4*25 = 200 negative controls as the basis for synthesizing positive controls. If we inject signals at 1.5, 2, and 4, that means we’ll have 3*200 = 600 positive controls, so 800 exposure-outcome pairs on which to execute each method variation.
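As a sanity check on these counts (2 × 4 × 25 = 200 negative controls, 600 synthesized positive controls, 800 pairs in total), here is a quick sketch; the numbers come straight from the proposal above:

```python
# Count the exposure-outcome pairs implied by the proposal.
n_exposures, n_outcomes = 4, 4
n_controls_each = 25
effect_sizes = [1.5, 2, 4]  # signals injected to synthesize positive controls

negative_controls = (n_exposures + n_outcomes) * n_controls_each  # 2 * 4 * 25
positive_controls = negative_controls * len(effect_sizes)         # 3 * 200
total_pairs = negative_controls + positive_controls

print(negative_controls, positive_controls, total_pairs)  # 200 600 800
```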

For the 4 outcomes, we could simply pick those used in the OMOP Experiment: GI bleed, acute myocardial infarction, acute liver failure, and acute renal failure.

For the 4 exposures, we could pick 4 very different drugs: diclofenac, ciprofloxacin, metformin, and sertraline.

Let me know what you think! What would be your favorite exposures and outcomes to focus on, and why?


All the outcomes I mentioned are acute and considered adverse effects in almost any context. Maybe we should also have outcomes that are more related to effectiveness? The only one I can think of now is stroke (which for example anticoagulants try to prevent). Any other suggestions?

And maybe we should have a longer-term outcome as well? How about cancer?

@Patrick_Ryan, any suggestions?

I can help out with the stroke phenotype, as we are working on it now.

(And sorry for being slow to answer the previous post, which also looked reasonable.)

George

A couple of other outcomes that could be considered for increased risk or effectiveness (reduced risk):

  • hip fracture
  • infection
  • nuisance effects: headache, nausea

I just realized we’ll also have to pick comparators for each negative control exposure. We could use a heuristic similar to the one used in OMOP (most prevalent drug with the same indication but in a different class). The most important thing is that, even though they’re all negative controls, each comparator is realistic, meaning a comparator researchers might pick in a real study. I’ll see if we can mine ClinicalTrials.gov for target-comparator pairs.

Using ClinicalTrials.gov to find plausible comparators seems to work. Just because I can, I created the amazing comparator finder that, given a target drug, lists all comparators that were found for that drug in ClinicalTrials.gov. Using this tool, I came up with the following comparators for the four drugs of interest:
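The core of such a comparator finder can be sketched as follows. Assume the trial records have already been downloaded from ClinicalTrials.gov and reduced to a list of drug interventions per trial; the data structure and drug lists below are illustrative, not the actual tool:

```python
from collections import Counter

def find_comparators(target, trials):
    """Rank drugs that co-occur with `target` in the same trial's
    intervention list, as candidate comparators."""
    counts = Counter()
    for interventions in trials:
        drugs = {d.lower() for d in interventions}
        if target.lower() in drugs:
            counts.update(drugs - {target.lower()})
    return counts.most_common()

# Toy trial records (illustrative only):
trials = [
    ["Diclofenac", "Celecoxib"],
    ["Diclofenac", "Placebo"],
    ["Diclofenac", "Celecoxib", "Naproxen"],
]
print(find_comparators("diclofenac", trials))
# [('celecoxib', 2), ('placebo', 1), ('naproxen', 1)]
```

In the real setting one would then rank these candidates by prevalence in a claims database, as described below.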

diclofenac - celecoxib
ciprofloxacin - azithromycin
metformin - glipizide
sertraline - quetiapine

When starting with the four outcomes, things of course become more complicated: for each outcome we’ll have to find 25 negative control exposures and appropriate comparators, where the comparator is also believed not to cause the outcome.

I tried to be a bit more systematic, and recorded reasons for selecting these comparators. In doing so, I changed the comparator for sertraline to venlafaxine.

The process for selecting comparators was now as follows:

  1. For each target drug, list all comparators found in clinicaltrials.gov and rank them by prevalence in a US insurance claims database.

  2. Manually go down the list until an appropriate comparator is found. I rejected comparators that were either too similar (same class) or too dissimilar (mostly used for a completely different indication).

Here’s the final list with comments:

MethodEvalComparatorSelection.xlsx (14.5 KB)

Help needed!

So I selected 25 negative control outcomes for each of the four exposures (and their comparators). Here is the process I used for each exposure:

  1. In ATLAS, I created a concept set including the target and comparator drug.
  2. I went to ‘Explore evidence’ and generated the evidence on things related to these drugs.
  3. I restricted to the subset of negative control candidates: things with no evidence of any relationship to the two drugs in the literature, product labels, or spontaneous reports. (This list is already ranked by prevalence in a US insurance claims database.)
  4. I manually went through the list until I found 25 outcomes that seemed like true negative controls.
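The filter-and-rank part of this process can be sketched as below. The record structure and field names are made up for illustration; the real evidence comes from ATLAS’s ‘Explore evidence’ feature:

```python
def negative_control_candidates(evidence, top_n=25):
    """Keep outcomes with no evidence of a drug-outcome relationship in
    the literature, product labels, or spontaneous reports, ranked by
    prevalence, as a shortlist for manual review."""
    candidates = [
        rec for rec in evidence
        if rec["literature_count"] == 0
        and rec["label_count"] == 0
        and rec["spontaneous_report_count"] == 0
    ]
    candidates.sort(key=lambda rec: rec["prevalence"], reverse=True)
    return candidates[:top_n]

# Toy evidence records (invented numbers):
evidence = [
    {"outcome": "Ingrowing nail", "literature_count": 0, "label_count": 0,
     "spontaneous_report_count": 0, "prevalence": 0.02},
    {"outcome": "GI bleed", "literature_count": 42, "label_count": 1,
     "spontaneous_report_count": 7, "prevalence": 0.01},
]
print([r["outcome"] for r in negative_control_candidates(evidence)])
# ['Ingrowing nail']
```

The final manual step (judging whether a surviving candidate is a true negative control) is exactly the part that cannot be automated, which is why the review help below is needed.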

I stored my results in the attached Excel file (one sheet per target-comparator pair):

NegativeControlOutcomes.xlsx (504.0 KB)

Could someone else review these results?

Oh, for those willing to help review, remember that a good negative control should just not be causally related to the exposure, meaning that the drug(s) neither causes nor prevents the outcome from happening. It is totally fine if an outcome is related in a non-causal way, for example if both drug and outcome are more likely to appear in elderly or males.

As mentioned before, in addition to the four drugs with 25 negative control outcomes each, I’d also like to pick four outcomes with 25 negative control exposures each. After reviewing the Mini-Sentinel Health Outcome Algorithm Inventory, I’ve selected the following mix of outcomes, which I believe are not too rare (so most DBs should have some) and can be identified in most databases:

  1. Acute pancreatitis
  2. GI bleeding
  3. Acute stroke (ischemic or hemorrhagic)
  4. Inflammatory bowel disease (IBD)

Let me know your thoughts.

This is an interesting approach. I agree that the 4 comparators selected
following this approach are reasonable. They may not be ‘optimal’ but they
should be sufficient for purposes of this methods evaluation exercise. It
raises the question of whether such an approach should be considered for
comparator selection more generally. There, I’m hoping that @frank’s work
on empirical comparator selection can show some promise.

My aim here was modest: selecting comparators that no one would find completely unreasonable. I wouldn’t recommend this approach for picking the optimal comparator.

Yes, definitely achieved the objective of ‘not completely unreasonable’
selection. Independent from that, having an empirical measure of
‘comparability’ could be useful, and in this case, could provide further
evidence for why the choices you made seem rational.

In NegativeControlOutcomes.xlsx (http://forums.ohdsi.org/uploads/default/original/1X/8149c12c06bcb0ed3c41b050d2509dc6479b45b2.xlsx), you dropped some I might not have thought to drop, but that’s fine.

George

Thanks George!

Anyone else who can help review these negative controls? We really want to be sure about their negative status. I would like to prevent others from writing papers like this.

I started working on the negative control exposures for our outcomes of interest. I started with acute pancreatitis, and found it to be much more work than anticipated. The main reason was that many things appear to cause pancreatitis, or at least have an unclear link. I found this paper to be very helpful.

Here are the steps I took:

  1. In ATLAS, I created a concept set for acute pancreatitis
  2. I went to ‘Explore evidence’ and generated evidence on drugs related to this outcome
  3. I restricted to the subset of negative control candidates
  4. I then linked this information to the information extracted from clinicaltrials.gov to identify target-comparator pairs where both drugs are candidate negative controls for AP.
  5. I ordered the list by the minimum of the count of people on the target drug and on the comparator drug.
  6. I manually reviewed the list until I reached 25 negative control target-comparator pairs.
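Step 5 above can be sketched as follows: rank candidate target-comparator pairs by the smaller of the two drugs’ patient counts, so that both arms of a study would be reasonably sized. The pairs and counts here are invented for illustration:

```python
# Rank pairs by the minimum of the target and comparator patient counts,
# largest minimum first (invented data).
pairs = [
    {"target": "drug A", "comparator": "drug B",
     "target_count": 120_000, "comparator_count": 3_000},
    {"target": "drug C", "comparator": "drug D",
     "target_count": 50_000, "comparator_count": 40_000},
]
pairs.sort(key=lambda p: min(p["target_count"], p["comparator_count"]),
           reverse=True)
print([(p["target"], p["comparator"]) for p in pairs])
# [('drug C', 'drug D'), ('drug A', 'drug B')]
```

This ordering explains the concern below: pairs far down the list have a very small minimum count, i.e. at least one of the two drugs is rarely used.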

Here is my list:

NcExposurePairs.xlsx (16.5 KB)

I’m afraid some of the pairs lower on the list have very low prevalence, so we may need to rethink our strategy. Anyone want to help out and look at these?

Note that griseofulvin is mentioned in the review below. It seemed kind of suspicious, as it causes a lot of things.

George

Eur J Clin Pharmacol. 2001 Sep;57(6-7):517-21.
Spontaneous reports on drug-induced pancreatitis in Denmark from 1968 to 1999.
Andersen V1, Sonne J, Andersen M.

Abstract

OBJECTIVES:
To present an update on drug-induced pancreatitis reported to the Danish Committee on Adverse Drug Reactions.

DESIGN:
Retrospective study of spontaneous case reports to the Danish reporting system on adverse drug reactions.

METHODS:
All cases of suspected drug-induced pancreatitis reported to the Danish Committee on Adverse Drug Reactions from 1968 to 1999 were analysed. Three cases were excluded leaving 47 cases for analysis.

RESULTS:
Drug-induced pancreatitis made up 0.1% of all the reports to the committee from 1968 to 1999. The proportion seemed to increase and was 0.3% during the last 8 years. The 47 cases corresponded to 0.1% of the number of patients discharged due to pancreatic disease (without cancers) per year in Denmark. Serious courses were frequent as indicated by death and hospitalisation being reported in 4 (9%) and 32 (68%) cases, respectively. Death occurred after valproate (two cases), clomipramine (one case) and azathioprine (one case). Definite relationship was stated for mesalazine (three cases), azathioprine (two cases) and simvastatin (one case) on the basis of re-challenge. A possible or probable causality was considered for a further 30 drugs including 5-acetylsalicylic acid agents, angiotensin-converting enzyme inhibitors, estrogen preparations, didanosine, valproate, codeine, antiviral agents used in acquired immunodeficiency syndrome therapy, various lipid-reducing agents, interferon, paracetamol, griseofulvin, ticlopidine, allopurinol, lithium and the MMR (measles/mumps/rubella) vaccination.

CONCLUSION:
Drug-induced pancreatitis is rarely reported. The incidence may be increasing and the course is often serious. This is the first report on definite simvastatin-induced pancreatitis. Further studies on the pancreotoxic potential of drugs are warranted.

Urghh, according to this paper the griseofulvin link is based on a single spontaneous report with ‘possible or probable’ causality. But better safe than sorry, so I’ll take it out. Thanks!

I’ve added another heuristic for finding comparators, in the hope of finding ones with higher prevalence. The new heuristic is simply based on ATC codes: a comparator is a drug where the first four characters of the ATC code match (same indication) but the fifth doesn’t (different class). I’ve implemented this in the Amazing Comparator Finder, and it seems to work nicely.

(note: I restricted to one ATC code per ingredient, the most prevalent one)
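The ATC heuristic is easy to sketch: two ingredients are candidate comparators when their codes agree on the first four characters but differ at the fifth. The function below is an illustrative sketch, not the actual Amazing Comparator Finder; the codes are real WHO ATC codes:

```python
def same_indication_different_class(atc_a, atc_b):
    """ATC heuristic from the post: first four characters match
    (same indication) but the fifth differs (different class)."""
    return atc_a[:4] == atc_b[:4] and atc_a[4] != atc_b[4]

# Real WHO ATC codes:
print(same_indication_different_class("A10BA02", "A10BB07"))  # metformin vs glipizide -> True
print(same_indication_different_class("N06AB06", "N06AX16"))  # sertraline vs venlafaxine -> True
print(same_indication_different_class("N06AB06", "N06AB05"))  # sertraline vs paroxetine (same class) -> False
```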

I’ll proceed by using this to find additional negative control target-comparator pairs for acute pancreatitis.

First, ‘the Amazing Comparator Finder’ is truly amazing! Great work,
Martijn. Very impressive!

Second, it seems like ‘placebo’ is a really good comparator a lot of the
time. It’s a shame it doesn’t have an ATC code; otherwise, we should really
try to find that drug in the database :stuck_out_tongue:
