
Which meta-analysis model should I use for an OHDSI study: fixed-effects or random-effects?

Thanks to much help from @Patrick_Ryan and @jswerdel, I’ve got results from the Medicaid and Medicare databases for the comparison of combination treatments in our hypertension study.

I’m trying to summarize this multi-national, multi-center result using meta-analysis, but I’ve never studied or used meta-analysis before.
So I have a question: which meta-analysis model should I use for this study, fixed-effects or random-effects?

I noticed that many previous papers start with a fixed-effects model and then move to a random-effects model if the test for heterogeneity is significant.

The assumption of the fixed-effects model is that all studies in the analysis share a common effect size, while the random-effects model assumes there is a distribution of true effect sizes.
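In symbols (a standard textbook formulation, not taken from the original post): writing \hat{\theta}_i for the log hazard ratio estimated in database i, the two models differ only in whether a between-study variance term \tau^2 is present:

```latex
% Fixed-effects: every database shares one true effect \theta,
% and estimates differ only by sampling error:
\hat{\theta}_i = \theta + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma_i^2)

% Random-effects: each database has its own true effect, drawn from a
% distribution with between-study variance \tau^2:
\hat{\theta}_i = \theta + u_i + \varepsilon_i, \qquad u_i \sim N(0, \tau^2)
```

When \tau^2 = 0 the random-effects model reduces to the fixed-effects model.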

I have results from a representative sample of the whole Korean population, and from the US Medicaid and US Medicare populations. I prefer using a random-effects model because I don’t believe there is a common hazard ratio across these populations.

The characteristics and outcomes of the Korean population would differ from those of other countries. Generally, the Medicare population is much older than the general population, and the Medicaid population has its own characteristics, too.

I don’t know which is the best approach.
I think we need to develop a best practice for meta-analysis in OHDSI.
Is there anybody who can help me with this problem?
@schuemie @msuchard

In my opinion, a fixed-effects assumption is hard to defend even under the best of circumstances. Unless the populations are truly similar, we should assume some distribution of effects.

(Remember, our studies estimate an average effect, but never assume the effect is constant on a person-by-person level. With different populations, the average will therefore differ, and the random effects model should account for that.)

I recommend we always use random-effects models. There is some ready-to-use R code in our EvidenceSynthesis package.
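As a minimal sketch of what that looks like in practice, here is a random-effects pooling with the `meta` package’s `metagen` function (which, as noted later in this thread, is what EvidenceSynthesis builds on); the per-database log hazard ratios and standard errors below are made-up placeholders, not results from the actual study:

```r
library(meta)

# Hypothetical per-database estimates: log hazard ratios and their
# standard errors (placeholder values, not from the actual study)
logHr   <- c(0.18, 0.25, 0.05)
seLogHr <- c(0.07, 0.10, 0.12)
labels  <- c("Korea", "US Medicaid", "US Medicare")

# Inverse-variance meta-analysis; metagen reports both the
# fixed-effects and the random-effects pooled estimate by default
result <- metagen(TE = logHr, seTE = seLogHr, studlab = labels, sm = "HR")
summary(result)
```

The summary shows the pooled hazard ratio under both models side by side, together with the estimated between-study heterogeneity.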

By the way, one of the key research areas for the population-level estimation workgroup this year is evidence synthesis. Hopefully we can develop even better methods for combining estimates that are more appropriate for observational studies done in a distributed network of databases.

Thank you, @schuemie, I knew you’d be there!!

I agree with you. I’ll use the random-effects model for the meta-analysis regardless of the result of the heterogeneity test.

I have another quick question. In the EvidenceSynthesis package, the ‘metagen’ function is used, which is based on inverse variance weighting.

Is there any specific reason for using this weighting method?
(Sorry, as I said, I don’t know anything about meta-analysis…)

Inverse variance weighting is the most commonly used approach, and it seems appropriate here as well. I’m not aware of other weighting methods that are applicable when, for example, using a proportional hazards model conditioned on propensity score strata. (E.g., weighting by sample size would not take the conditioning into account and would therefore not be appropriate.) But I would be happy to hear about alternatives that could be considered.
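To make the weighting concrete, here is the inverse-variance arithmetic in a few lines of base R (a sketch with the same placeholder numbers as above; this reproduces the fixed-effects pooled estimate that metagen would report):

```r
# Inverse-variance weighting by hand (placeholder values):
# each estimate is weighted by the reciprocal of its variance,
# so more precise estimates contribute more to the pooled result
logHr   <- c(0.18, 0.25, 0.05)
seLogHr <- c(0.07, 0.10, 0.12)

w         <- 1 / seLogHr^2              # inverse-variance weights
pooledLog <- sum(w * logHr) / sum(w)    # pooled log hazard ratio
pooledSe  <- sqrt(1 / sum(w))           # standard error of pooled estimate

exp(pooledLog)                               # pooled HR
exp(pooledLog + c(-1, 1) * 1.96 * pooledSe)  # 95% confidence interval
```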

Perhaps also important: for empirical calibration, I recommend generating meta-analysis estimates of the negative (and positive) controls and then performing empirical calibration, rather than performing a meta-analysis on already-calibrated statistics. The reason is that meta-analysis assumes independence of the (random) errors. If systematic error is also included in the statistics (through empirical calibration), that assumption is violated, since systematic error at different databases is likely correlated.
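As an illustration of that ordering, a sketch of the recommended workflow using the EmpiricalCalibration package; all the negative-control and outcome-of-interest estimates below are hypothetical placeholders:

```r
library(EmpiricalCalibration)

# Step 1: meta-analyze each negative control across databases first
# (e.g., with metagen as above), yielding one pooled logRr and seLogRr
# per negative control. These vectors are hypothetical placeholders:
ncLogRr   <- c(0.02, -0.10, 0.15, 0.08, -0.03)
ncSeLogRr <- c(0.05, 0.08, 0.06, 0.09, 0.07)

# Step 2: fit the empirical null distribution on the pooled
# negative-control estimates
null <- fitNull(logRr = ncLogRr, seLogRr = ncSeLogRr)

# Step 3: calibrate the pooled estimate for the outcome of interest
# (again placeholder values)
calibrateP(null, logRr = 0.22, seLogRr = 0.06)
```

The key point is that calibration happens once, on the pooled estimates, after the meta-analysis step.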

Thank you @schuemie. This is really helpful!

Definitely go with random-effects models when comparing results from across the OHDSI network. In my study, I used the DerSimonian and Laird method (easily implemented in R), although there are other random-effects methods…
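For reference, one way the DerSimonian-Laird estimator can be invoked in R, here via the metafor package (the study estimates are placeholders, matching the earlier examples):

```r
library(metafor)

# DerSimonian-Laird random-effects meta-analysis on log hazard ratios
# (placeholder estimates and standard errors)
rma(yi  = c(0.18, 0.25, 0.05),
    sei = c(0.07, 0.10, 0.12),
    method = "DL")
```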
