
New Challenges for Meta-Analysis: Attenuation Bias, P-Hacking, Preferred Estimates

My colleagues and I have recently published or revised three meta-analyses, each raising issues that may matter for how we do meta-analysis in economics and related fields. I’d be grateful for thoughts or feedback -- here, by e-mail, or in person at our colloquium in Ottawa.


Attenuation bias, also known as regression dilution, arises when explanatory variables are measured with classical random error, which biases regression coefficients toward zero. We show that attenuation bias can be quantitatively important in estimating the elasticity of substitution between skilled and unskilled labor, although publication bias is still the bigger problem. We're now working on comparing the two biases more broadly in economics. Is it possible that, on average, the two wrongs make a right? On a technical note, we also argue that it's risky to meta-analyze inverted regression coefficients, a practice that is common when elasticities are reported only in their inverse form. Working with these transformed estimates can violate key meta-analysis assumptions, so we should meta-analyze the originally reported regression coefficients.
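
For readers who want the mechanics, here is a minimal simulation sketch of attenuation bias. It is purely illustrative -- the parameter values and variable names are arbitrary assumptions, not taken from our paper -- and it shows the OLS slope shrinking toward zero by the standard reliability ratio var(x) / (var(x) + var(error)) as measurement error grows.

```python
# Minimal simulation of attenuation bias (regression dilution).
# Illustrative only; parameter values are assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, true_beta = 10_000, 0.5

x = rng.normal(size=n)                                   # true explanatory variable
y = true_beta * x + rng.normal(scale=0.5, size=n)        # outcome

for noise_sd in (0.0, 0.5, 1.0):
    x_obs = x + rng.normal(scale=noise_sd, size=n)       # classical measurement error
    beta_hat = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)  # OLS slope
    reliability = 1.0 / (1.0 + noise_sd**2)              # var(x) / (var(x) + var(error))
    print(f"noise_sd={noise_sd:.1f}  beta_hat={beta_hat:.3f}  "
          f"expected={true_beta * reliability:.3f}")
```

The larger the measurement error relative to the true variation in x, the more the estimated slope is pulled toward zero -- the opposite direction of the usual inflation from publication bias.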


Publication bias and p-hacking are often treated as the same problem, and in many settings this is defensible: the two are frequently observationally equivalent, and both create a correlation between effect sizes and standard errors. But the distinction can matter for methods. Selection models don't handle p-hacking, while meta-regression (PET-PEESE) is robust to some forms of it. In this paper we apply MAIVE, a new extension of PET-PEESE forthcoming in Nature Communications that is also robust to p-hacking strategies working through reported precision (such as choices about clustering or control variables). This suggests a need for more attention to the mechanisms behind selective reporting and for broader adoption of estimators robust to p-hacking, such as RTMA, developed by Maya Mathur.
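
To fix ideas, here is a minimal sketch of the PET step of PET-PEESE on simulated, selectively reported data: regress reported effects on their standard errors with inverse-variance weights, and read the bias-corrected effect off the intercept. This is not the MAIVE estimator itself (MAIVE additionally instruments reported precision, for example with inverse sample size); the data-generating process and cutoffs below are my own illustrative assumptions.

```python
# Sketch of the PET step of PET-PEESE on simulated data with crude
# selective reporting. Illustrative assumptions throughout; not MAIVE.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
k, true_effect = 200, 0.2

se = rng.uniform(0.05, 0.5, size=k)              # reported standard errors
effect = true_effect + rng.normal(scale=se)      # estimates with sampling error
# crude selection: keep "significant" estimates, plus a random 30% of the rest
published = (np.abs(effect / se) > 1.96) | (rng.random(k) < 0.3)
eff_p, se_p = effect[published], se[published]

X = sm.add_constant(se_p)                        # PET: effect = b0 + b1 * SE
pet = sm.WLS(eff_p, X, weights=1.0 / se_p**2).fit()
print(pet.params)  # intercept b0 is the PET estimate of the corrected effect
```

PEESE replaces the standard error with its square on the right-hand side; MAIVE's extra step is to instrument that precision term, which is what buys robustness to p-hacking that manipulates reported standard errors.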


Most economics meta-analyses collect all reported estimates -- a good default. But some estimates are clearly marked by the original authors as less trustworthy. In our class size meta-analysis, we classified estimates as "preferred," "neutral," or "discounted" according to how the study authors described them. Preferred estimates were systematically larger, and this could not be explained by publication bias (based on tests we did not include in the final version of the paper because they were not relevant for the JOLE readership, though the finding could be relevant for MAER-Net). Takeaway: it's worth coding which results the authors of primary studies prefer, as these judgments may capture information that is otherwise hard to quantify.
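
In practice, the coding can be used in a simple meta-regression with dummies for the author labels. The sketch below uses simulated data (not our class size dataset) and inverse-variance weights, with the standard error included to partial out the usual precision correlation; all names and magnitudes are illustrative assumptions.

```python
# Illustrative sketch (simulated data, not the class size dataset): test whether
# author-preferred estimates differ systematically from neutral ones.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
k = 300
df = pd.DataFrame({
    "se": rng.uniform(0.02, 0.3, size=k),
    "label": rng.choice(["preferred", "neutral", "discounted"], size=k),
})
# assumed shifts: preferred estimates larger, discounted ones smaller
shift = df["label"].map({"preferred": 0.10, "neutral": 0.0, "discounted": -0.05})
df["effect"] = 0.05 + shift + rng.normal(scale=df["se"])

# WLS meta-regression with inverse-variance weights; "neutral" is the baseline
model = smf.wls("effect ~ C(label, Treatment(reference='neutral')) + se",
                data=df, weights=1.0 / df["se"]**2).fit()
print(model.summary().tables[1])
```

The coefficient on the "preferred" dummy then summarizes how much larger author-preferred estimates are, conditional on precision.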


Comments, questions, or counterexamples are welcome -- especially over coffee in Ottawa.


Links to papers and methods are at meta-analysis.cz.
