Marriott Marquis, Grand Ballroom 13
Hosted By:
American Economic Association
Improving the Transparency and Credibility of Economics Research
Paper Session
Saturday, Jan. 4, 2020 2:30 PM - 4:30 PM (PST)
- Chair: Edward Miguel, University of California-Berkeley
A Proposed Specification Check for P-Hacking
Abstract
This paper proposes a specification check for p-hacking. Specifically, we advocate reporting t-curves and mu-curves (the t-statistics and estimated effect sizes from regressions run with every possible combination of control variables in the researcher's set) and introduce a standardized, accessible implementation. The specification check allows researchers, referees, and editors to visually inspect variation in effect sizes, statistical significance, and sensitivity to the inclusion of control variables. We provide a Stata command that implements the specification check. Given the growing interest in estimating causal effects in the social sciences, the potential applicability of this specification check to empirical studies is very large.
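The abstract describes an exhaustive specification search: estimate the treatment effect under every subset of candidate controls and collect the resulting coefficients and t-statistics. As a rough illustration of that idea only (a minimal Python sketch, not the authors' Stata command; the variable names `y`, `d`, `controls` and the use of statsmodels are assumptions), the curves could be built as follows:

```python
# Sketch of a specification check: regress y on treatment d for every
# possible subset of the candidate controls, recording the estimated
# effect size (mu-curve) and its t-statistic (t-curve).
from itertools import combinations

import pandas as pd
import statsmodels.api as sm

def spec_curves(df: pd.DataFrame, y: str, d: str, controls: list[str]) -> pd.DataFrame:
    """Return one row per control subset with the effect size and t-statistic."""
    rows = []
    for k in range(len(controls) + 1):
        for subset in combinations(controls, k):
            X = sm.add_constant(df[[d, *subset]])   # treatment + chosen controls
            fit = sm.OLS(df[y], X).fit()
            rows.append({
                "controls": subset,
                "mu": fit.params[d],    # estimated effect size
                "t": fit.tvalues[d],    # t-statistic on the treatment
            })
    return pd.DataFrame(rows).sort_values("mu").reset_index(drop=True)
```

Sorting the rows by effect size and plotting the `mu` and `t` columns gives the curves the abstract refers to, making it easy to see how the estimated effect and its significance move with the choice of controls.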
Do Pre-analysis Plans Hamper Publication?
Abstract
A critique of pre-analysis plans (PAPs) is that they generate boring, lab-report-style papers that are disfavored by reviewers and journal editors, and hence hampered in the publication process. To assess whether this is the case, we compare the publication rates of experimental NBER working papers with and without PAPs. We find that papers with PAPs are, in fact, slightly less likely to be published. However, conditional on being published, papers with PAPs are significantly more likely to land in top-5 journals. We also find that journal articles based on pre-registered analyses generate more citations. Our findings suggest that the alleged trade-off between career concerns and the scientific credibility gained from registering and adhering to a PAP is less stark than sometimes claimed, and may even tilt in favor of pre-registration for researchers most concerned with publishing in the most prestigious journals and maximizing citations to their work.
Forecasting the Results of Economic Research
Abstract
Credible identification of social science findings is central to the transparency revolution. We argue that collecting forecasts of research results is a useful addition to the transparency toolkit. Were the experimental results anticipated? How much would experts update their initial forecasts, given the results? We briefly discuss these and other uses of research forecasts, drawing on examples from this nascent literature. We then consider practical decisions in eliciting forecasts, such as the unit of elicitation. Finally, we provide evidence on these decisions from a sample of non-expert forecasters.
Discussant(s)
Fiona Burlig, University of Chicago
Michael Gechter, Pennsylvania State University
David McKenzie, World Bank
JEL Classifications
- B4 - Economic Methodology
- C1 - Econometric and Statistical Methods and Methodology: General