Dear fellow meta-analysts:
It has been 7 years since we drafted the current reporting guidelines. I believe the time is ripe for updating them – and based on our debate at the MAER-Net Open Forum in Greenwich, I think at least some of you share that belief. Let me kick off a discussion on the shape of the possible revision.
The intention is not to create a list of rules and then cast them in stone. Rather, apart from defining the minimum standard for conducting a meta-analysis in economics (well covered by the current guidelines), the guidelines should offer a set of practical recommendations. These recommendations may change again in a couple of years as our tools evolve.
My motivation for proposing the update is that since I started to serve as an associate editor at the Journal of Economic Surveys, I’ve handled quite a few meta-analyses with basic econometric and interpretation errors, studies that do not enhance the reputation of meta-analysis in economics. I find myself providing similar feedback over and over again (and the same, I assume, goes for Tom and Chris at the JoES and for many of you who write referee reports), so I believe that a concrete set of recommendations published in the JoES would help.
Below I offer 12 subjective recommendations that I miss in the current guidelines. You may disagree about some points or may want to add others. We will prepare the revision of the guidelines (if any) based on the discussion that will follow. Thank you for contributing!
Weights. If the meta-analyst doesn't use inverse-variance weights, she or he should explain why. The requirement is in line with the recent review paper on meta-analysis in Nature: “Meta-analyses that are not weighted by inverse variances are common and often poorly justified.”
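As a minimal illustration (hypothetical numbers, NumPy assumed), the fixed-effect inverse-variance weighted mean and its standard error can be computed in a few lines; note how far it can drift from the unweighted mean when precision varies across estimates:

```python
# Sketch with hypothetical data: inverse-variance weighted mean of reported
# effects versus the naive unweighted mean.
import numpy as np

effects = np.array([0.12, 0.35, 0.08, 0.22, 0.15])   # reported estimates
se = np.array([0.05, 0.20, 0.03, 0.10, 0.06])        # their standard errors

w = 1.0 / se**2                                      # inverse-variance weights
weighted_mean = np.sum(w * effects) / np.sum(w)
weighted_se = np.sqrt(1.0 / np.sum(w))               # SE of the weighted mean

print(f"unweighted mean: {effects.mean():.3f}")
print(f"inverse-variance weighted mean: {weighted_mean:.3f} (SE {weighted_se:.3f})")
```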
Outliers. The meta-analyst is encouraged to specify how outliers, both in the estimated effects and standard errors, are treated: whether all observations are included, or which rule was used to omit outlying observations (for example, Hadi or winsorizing and the respective thresholds).
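For instance, winsorizing at the 2.5th and 97.5th percentiles (the thresholds and data here are purely illustrative, not a prescription) takes only a few lines:

```python
# Illustrative sketch on simulated data: winsorize effect sizes at the
# 2.5% / 97.5% percentiles to tame two gross outliers.
import numpy as np

rng = np.random.default_rng(0)
effects = np.concatenate([rng.normal(0.2, 0.1, 200), [5.0, -4.0]])  # two outliers

lo, hi = np.percentile(effects, [2.5, 97.5])
winsorized = np.clip(effects, lo, hi)   # cap values at the chosen percentiles

print(f"raw mean: {effects.mean():.3f}, winsorized mean: {winsorized.mean():.3f}")
```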
Reconstructed standard errors. If standard errors are not directly reported in some primary studies, the meta-analyst should state how the standard errors were obtained (for example, using the delta method with the assumption of zero covariance). A robustness check is encouraged that excludes observations with reconstructed standard errors.
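A sketch of the delta method under the zero-covariance assumption, for the common case of a long-run effect theta = beta / (1 - rho) recovered from a reported short-run coefficient and persistence parameter (the numbers are hypothetical):

```python
# Delta method with cov(beta, rho) assumed zero -- a strong assumption that
# should be stated explicitly in the paper.
import math

beta, se_beta = 0.30, 0.10    # reported short-run coefficient and its SE
rho, se_rho = 0.50, 0.05      # reported persistence parameter and its SE

theta = beta / (1.0 - rho)                 # long-run effect
d_beta = 1.0 / (1.0 - rho)                 # d(theta)/d(beta)
d_rho = beta / (1.0 - rho) ** 2            # d(theta)/d(rho)
se_theta = math.sqrt((d_beta * se_beta) ** 2 + (d_rho * se_rho) ** 2)

print(f"theta = {theta:.3f}, delta-method SE = {se_theta:.3f}")
```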
Study-level dummies. When using meta-regression analysis to investigate the extent of publication bias, the meta-analyst is encouraged to include a robustness check with study-level dummies (fixed effects in the econometric sense), thus controlling for unobserved characteristics of individual studies. Note that such a specification captures only within-study bias (p-hacking).
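A minimal sketch of this check on simulated data (NumPy only; in practice one would use a canned regression routine): regress the reported effects on their standard errors, adding one dummy per study so that the bias coefficient is identified from within-study variation.

```python
# FAT-PET regression with study-level dummies on simulated data in which
# reported effects rise with their standard errors (the bias pattern).
import numpy as np

rng = np.random.default_rng(1)
n_studies, per_study = 20, 10
study = np.repeat(np.arange(n_studies), per_study)
se = rng.uniform(0.05, 0.5, n_studies * per_study)
effect = 0.1 + 0.8 * se + rng.normal(0, se)   # true bias coefficient: 0.8

# Design matrix: SE plus one dummy per study (the dummies absorb the constant)
X = np.column_stack([se] + [(study == s).astype(float) for s in range(n_studies)])
coef, *_ = np.linalg.lstsq(X, effect, rcond=None)
print(f"within-study publication-bias coefficient on SE: {coef[0]:.2f}")
```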
Random effects. Study-level random effects in economics meta-analyses can be correlated with publication bias or other aspects of studies. The meta-analyst should exercise caution when adding random effects to multiple MRA, because doing so likely violates the exogeneity condition.
Clustering or bootstrapping. Standard errors in meta-regression analysis should be clustered or bootstrapped. Bootstrapping is the only viable option when the number of studies is small.
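To make the mechanics concrete, here is a by-hand sketch of cluster-robust (by study) standard errors with the CR1 small-sample correction, on simulated data with within-study dependence; in applied work one would typically rely on a canned clustered or wild-bootstrap routine rather than this manual version:

```python
# Cluster-robust standard errors for a simple FAT-PET meta-regression,
# computed by hand (CR1 correction) on simulated data.
import numpy as np

rng = np.random.default_rng(2)
G, per = 15, 8                                  # 15 studies, 8 estimates each
study = np.repeat(np.arange(G), per)
se = rng.uniform(0.05, 0.4, G * per)
study_shock = rng.normal(0, 0.1, G)[study]      # within-study dependence
y = 0.1 + 0.5 * se + study_shock + rng.normal(0, 0.05, G * per)

X = np.column_stack([np.ones_like(se), se])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Cluster-robust "meat": sum over studies of X_g' u_g u_g' X_g
meat = np.zeros((2, 2))
for g in range(G):
    m = study == g
    Xg, ug = X[m], resid[m]
    meat += np.outer(Xg.T @ ug, Xg.T @ ug)

n, k = X.shape
adj = (G / (G - 1)) * ((n - 1) / (n - k))       # CR1 small-sample correction
V = adj * XtX_inv @ meat @ XtX_inv
se_clustered = np.sqrt(np.diag(V))
print(f"slope: {beta[1]:.3f}, clustered SE: {se_clustered[1]:.3f}")
```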
Sensitivity of model averaging. If the meta-analyst uses Bayesian or frequentist model averaging, she or he should report robustness checks that show how the results depend on the selected priors (in the Bayesian case) or the selected weights (Mallows or other; in the frequentist case). The procedure employed to simplify model space (Markov Chain Monte Carlo or orthogonalization) should be mentioned.
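A toy illustration of prior sensitivity on simulated data: posterior inclusion probabilities by full enumeration of the model space (feasible here, since MCMC is needed only when the space is large), with the marginal likelihood approximated by BIC, under a uniform model prior versus a prior that penalizes larger models. All choices here are illustrative, not a recommended default.

```python
# BIC-approximated Bayesian model averaging over 2^3 candidate models,
# comparing inclusion probabilities under two model priors.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 120
Z = rng.normal(size=(n, 3))                      # three candidate moderators
y = 0.5 * Z[:, 0] + rng.normal(size=n)           # only the first one matters

def bic(y, X):
    if X.shape[1] == 0:                          # null model: intercept only
        rss, k = np.sum((y - y.mean()) ** 2), 1
    else:
        Xc = np.column_stack([np.ones(len(y)), X])
        b, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        rss, k = np.sum((y - Xc @ b) ** 2), Xc.shape[1]
    return len(y) * np.log(rss / len(y)) + k * np.log(len(y))

models = list(itertools.product([0, 1], repeat=3))
bics = np.array([bic(y, Z[:, [j for j in range(3) if m[j]]]) for m in models])

for label, prior in [("uniform", np.ones(8)),
                     ("size-penalized", np.array([0.5 ** sum(m) for m in models]))]:
    w = np.exp(-0.5 * (bics - bics.min())) * prior
    w /= w.sum()
    pip = [sum(w[i] for i, m in enumerate(models) if m[j]) for j in range(3)]
    print(label, "inclusion probabilities:", np.round(pip, 2))
```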
Collinearity. The meta-analyst is encouraged to report collinearity statistics for multiple MRA, for example the correlation matrix or variance-inflation factors. Note that collinearity increases when inverse-variance weights are used.
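Variance-inflation factors are straightforward to compute by regressing each moderator on the others; a sketch on simulated data with one deliberately collinear pair:

```python
# VIF for each moderator: 1 / (1 - R^2) from regressing it on the others.
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)        # deliberately collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    others = np.delete(X, j, axis=1)
    Xc = np.column_stack([np.ones(len(X)), others])
    b, *_ = np.linalg.lstsq(Xc, X[:, j], rcond=None)
    resid = X[:, j] - Xc @ b
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 1) for j in range(3)])   # x1 and x2 should be inflated
```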
Robustness checks. The meta-analyst should report robustness checks to the baseline test of publication bias and the underlying effect. Note that different estimators perform differently in different environments (as shown by Carter et al.). Choose several robustness checks, for example PET-PEESE, WAAP, Bom & Rachinger, Furukawa, Hedges (and variants thereof), Andrews & Kasy, p-curve, or p-uniform.
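As one example, PET-PEESE can be sketched as two weighted least squares regressions on simulated data (the usual practice, which the comment notes but the sketch does not implement, is to report the PEESE intercept only when the PET intercept is statistically different from zero):

```python
# PET and PEESE intercepts via inverse-variance WLS on simulated data.
import numpy as np

rng = np.random.default_rng(5)
n = 300
se = rng.uniform(0.02, 0.4, n)
effect = 0.15 + 0.6 * se + rng.normal(0, se)     # true underlying effect: 0.15

def wls(y, X, w):
    Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
    b, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return b

w = 1.0 / se**2
pet = wls(effect, np.column_stack([np.ones(n), se]), w)      # effect on SE
peese = wls(effect, np.column_stack([np.ones(n), se**2]), w) # effect on SE^2
print(f"PET intercept: {pet[0]:.3f}, PEESE intercept: {peese[0]:.3f}")
# Conventional rule: report the PEESE intercept if the PET intercept is
# statistically different from zero, otherwise the PET intercept.
```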
Data. The meta-analyst is encouraged to provide the data to the editor and referees so that the structure of the dataset can be checked. This can be done either through the journal's submission system or (preferably) publicly through the author's website.
Economic significance. The meta-analyst should discuss the economic significance of results. For example, publication bias or the underlying effect can be significant statistically, but not material in practice. If partial correlation coefficients are used, Doucouliagos's guidelines for the practical strength of the effect should be consulted.
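For readers less familiar with this metric: the partial correlation coefficient can be recovered from a reported t-statistic and degrees of freedom, after which its practical magnitude can be judged against such guidelines.

```python
# Partial correlation coefficient from a reported t-statistic:
# r = t / sqrt(t^2 + df).
import math

def partial_corr(t, df):
    return t / math.sqrt(t**2 + df)

# A statistically significant estimate can still be small in practical terms:
print(round(partial_corr(2.5, 100), 3))
```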