# Past Discussions on Common Pitfalls in Conducting Meta-Regression Analysis in Economics

Topics covered previously:

using t-values as effect sizes

reducing economic effects or tests to categories of statistical significance for the purpose of probit (or logit) meta-regression analysis (MRA).

There is a consensus among MAER-Net members that these are ‘pitfalls’ in the sense that they are often misinterpreted and/or poorly modelled. MAER-Net does not wish to ‘prohibit’ the use of logit/probit or t-values in meta-analysis. We merely caution those who choose to use them to exercise greater care in interpreting the results from their MRAs.

Why issue this caution? A full justification is beyond the scope of any internet post; however, a brief sketch might look something like the following:

*Probit/Logit MRA:*

reducing any statistical effect or test to crude categories (statistically significant and positive; statistically insignificant; statistically significant and negative; or similar) necessarily loses much of the information needed to reliably identify the main drivers of reported research findings. This loss of information is often fatal and almost always unnecessary.
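A tiny illustration of this information loss (illustrative numbers, not from the original post): collapsing t-statistics into significance categories merges magnitudes that differ tenfold while splitting apart values that are nearly identical.

```python
# Sketch: mapping t-statistics to crude significance categories discards
# the magnitude information an MRA needs. The 1.96 critical value assumes
# a conventional two-sided 5% test.
def category(t, crit=1.96):
    if t > crit:
        return "sig+"
    if t < -crit:
        return "sig-"
    return "insig"

for t in (2.0, 20.0, 1.9, 0.0):
    print(f"t = {t:>5}: {category(t)}")
# t = 2.0 and t = 20.0 land in the same 'sig+' bin, although one is ten
# times the other; t = 2.0 and t = 1.9, nearly identical, are split apart.
```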

doing so inextricably conflates selective reporting bias with evidence of a genuine economic effect. It is not possible to separate whether a statistically significant result is due to the researchers’ desire to find such an effect or to some underlying genuine economic phenomenon. Logit/probit MRAs are just as likely to identify factors related to bad science as factors related to the economic phenomenon under investigation. Yet this is not how logit/probit MRAs are interpreted; rather, they are claimed to identify structure in the underlying economic phenomenon.

using better statistical methods is almost always possible whenever the research that is being systematically reviewed is the result of a statistical test or estimate.

conducting these logit/probit MRAs is little more than sophisticated ‘vote-counting,’ which is considered bad practice in the broader community of meta-analysts. For example, Hedges and Olkin (1985) prove that vote counts are more likely to come to the *wrong* conclusion as more research accumulates, just the opposite of the desirable statistical property of consistency.
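The inconsistency of vote counting can be seen in a short simulation (a sketch with assumed numbers: a genuine standardized effect of 0.2 and per-study samples of 50, giving each study power well below 50%; the Hedges and Olkin result applies to exactly this underpowered setting).

```python
# Why vote counting is inconsistent when individual studies are underpowered:
# with power below 0.5, the share of 'significant' studies converges to a
# value below one half, so a majority vote concludes 'no effect' ever more
# decisively as research accumulates, despite a genuine effect.
import numpy as np

rng = np.random.default_rng(0)
delta, n = 0.2, 50            # true standardized effect; per-study sample size

def vote_count(k):
    """Fraction of k simulated studies reporting t > 1.96 (one-arm z-test sketch)."""
    estimates = delta + rng.standard_normal(k) / np.sqrt(n)  # est ~ N(delta, 1/n)
    t = estimates * np.sqrt(n)
    return np.mean(t > 1.96)

for k in (10, 100, 10_000):
    print(f"k = {k:>6}: share significant = {vote_count(k):.2f}")
# With delta = 0.2 and n = 50, per-study power is roughly 0.29, so the share
# of significant studies settles near 0.29 < 0.5 as k grows.
```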

*t-values:*

When t-values are used as the dependent variable, all the moderator variables need to be divided by the estimate’s standard error (SE). If they are not, their MRA coefficients reflect differential publication bias, not a genuine economic effect.
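A minimal sketch of this specification, using simulated data and hypothetical variable names (the moderator `z` and the coefficient labels are assumptions for illustration): with t as the dependent variable, precision (1/SE) carries the baseline effect, each moderator enters divided by SE, and the intercept picks up selective-reporting bias.

```python
# Sketch of the correct t-value MRA: t_i = b0 + b1*(1/SE_i) + g1*(Z_i/SE_i) + e_i.
# Simulated data: baseline effect 0.1, a binary moderator adding 0.05.
import numpy as np

rng = np.random.default_rng(1)
m = 200                                # number of primary-study estimates
se = rng.uniform(0.05, 0.5, m)         # reported standard errors
z = rng.integers(0, 2, m)              # a binary moderator (hypothetical)
true_effect = 0.1 + 0.05 * z           # genuine effect varies with the moderator
est = true_effect + se * rng.standard_normal(m)
t = est / se                           # reported t-values

# Moderator divided by SE, alongside precision 1/SE and an intercept:
X = np.column_stack([np.ones(m), 1 / se, z / se])
b0, b1, g1 = np.linalg.lstsq(X, t, rcond=None)[0]
print(f"bias term b0 = {b0:.2f}, effect b1 = {b1:.2f}, moderator g1 = {g1:.2f}")
# b1 recovers the baseline effect (~0.1) and g1 the moderator's effect (~0.05);
# entering z without dividing by SE would instead load onto the bias component.
```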

t-values cannot be considered an ‘effect size.’ Treating them as one inevitably runs into any number of paradoxes or problems of interpretation. As long as the underlying economic effect is anything other than 0, t-values must increase proportionally with sqrt(n) and precision (1/SE). So which value of precision or sqrt(n) should the meta-analyst choose? The perfect study has precision and sqrt(n) approaching infinity. But then the t-value will also approach infinity, even when the effect is tiny. Nor is the average t-value a meaningful summary of a research literature. For example, suppose the average t-value of the price elasticity of prescription drugs is -2 (or -1, -3, or any number). Can we infer that demand for prescription drugs is highly sensitive (or insensitive) to prices? Depending on the typical sample size, any of these average t-values is consistent with an elastic or an inelastic demand for prescription drugs. Worse still, any average absolute t-value a little larger or smaller than 2 is compatible with a perfectly inelastic demand for prescription drugs plus some degree of selection for a statistically significant price effect. Nothing about this important economic phenomenon can be inferred from the typical, or the ideal, t-value.
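The sqrt(n) paradox above can be made concrete with a few lines of arithmetic (illustrative numbers: a tiny standardized effect of 0.05 with unit error variance):

```python
# For any non-zero effect, the expected t-statistic grows with sqrt(n),
# so the size of a t-value says nothing about the size of the effect.
import numpy as np

effect, sigma = 0.05, 1.0              # a tiny standardized effect
for n in (100, 10_000, 1_000_000):
    se = sigma / np.sqrt(n)            # standard error shrinks with sqrt(n)
    t = effect / se                    # expected t-value
    print(f"n = {n:>9}: expected t = {t:.1f}")
# n = 100       : t = 0.5  (looks 'insignificant')
# n = 1,000,000 : t = 50.0 (looks 'huge'), yet the effect is the same tiny
# 0.05 throughout; only the sample size changed.
```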