I'm hereby announcing the formation of the Method Evaluation Task Force!
When designing a study there are many study designs to choose from, and many additional choices to make, and it is often unclear how these choices affect the accuracy of the estimate. (For example: if I match on propensity scores, will that lead to more or less bias than if I stratify? What about power?) The literature contains many papers evaluating one design choice at a time, but (to me) with unsatisfactory scientific rigor; often a method is evaluated on one or two exemplar studies from which we cannot generalize, or using simulations whose relationship to the real world is unclear. The OMOP Experiment was a first attempt at a systematic empirical evaluation of method performance, from which we have learned many insights (mostly on how to better evaluate methods).
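To make the kind of question above concrete, here is a minimal toy simulation (not part of any task force plan; the data-generating process and all numbers are invented for illustration) showing how one design choice, stratifying on the propensity score, can be evaluated against a known true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
true_effect = 1.0  # simulated treatment effect (chosen for this sketch)

# Simulate a confounder, treatment assignment, and outcome.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-x))  # true propensity score
t = rng.binomial(1, p_treat)
y = true_effect * t + 2.0 * x + rng.normal(size=n)

# Crude (unadjusted) estimate is confounded by x.
crude = y[t == 1].mean() - y[t == 0].mean()

# Stratify on the (here, known) propensity score into quintiles and
# take a size-weighted average of the within-stratum differences.
strata = np.digitize(p_treat, np.quantile(p_treat, [0.2, 0.4, 0.6, 0.8]))
diffs, weights = [], []
for s in range(5):
    m = strata == s
    if (t[m] == 1).any() and (t[m] == 0).any():
        diffs.append(y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean())
        weights.append(m.sum())
stratified = np.average(diffs, weights=weights)

print(f"true {true_effect:.2f}  crude {crude:.2f}  stratified {stratified:.2f}")
```

Because the true effect is known by construction, the bias of each estimator can be measured directly; the open question the task force aims to address is how well conclusions from such simulations carry over to real observational data.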
Task force objectives
1. Develop the methodology for evaluating methods (for estimating population-level effects).
2. Use the developed methodology to systematically evaluate a large set of study designs and design choices.
Tentative plan
1. Call for collaborators (this post)
2. Further refinement of the objectives (e.g. which gold standards to use and which methods to include in the evaluation)
3. Determine funding requirements, and apply for funding if needed
4. Do the research
Call for collaborators
Let us know if you're interested in actively participating in this research. Warning: this will be hard work with (probably) no pay. Either send me an e-mail or respond to this post.
Tentative members of this task force are currently: @nicolepratt, @jweave17, Alejandro Schuler, @nigam, @yuxitian, and me.