Treatment Effects and Causal Inference

Paper Session

Friday, Jan. 3, 2020 10:15 AM - 12:15 PM (PST)

Marriott Marquis, Marina Ballroom F
Hosted By: Econometric Society
  • Chair: Alberto Abadie, Massachusetts Institute of Technology

Synthetic Difference In Differences

Dmitry Arkhangelsky, CEMFI-Madrid
Susan Athey, Stanford University
David A. Hirshberg, Stanford University
Guido Imbens, Stanford University
Stefan Wager, Stanford University

Abstract

We present a new perspective on the Synthetic Control (SC) method as a weighted least squares regression estimator with time fixed effects and unit weights. This perspective suggests a generalization with two-way (both unit and time) fixed effects and both unit and time weights, which can be interpreted as a unit- and time-weighted version of the standard Difference In Differences (DID) estimator. We find that this new Synthetic Difference In Differences (SDID) estimator has attractive properties compared to the SC and DID estimators. Formally, we show that our approach has double robustness properties: the SDID estimator is consistent under a wide variety of weighting schemes given a well-specified fixed effects model, and SDID is consistent with appropriately penalized SC weights when the basic fixed effects model is misspecified and the true data generating process instead involves a more general low-rank structure (e.g., a latent factor model). We also present results that justify standard inference based on weighted DID regression. Further generalizations include unit- and time-weighted factor models.
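The weighted double-difference structure of the estimator can be illustrated with a small sketch. The helper `sdid_simple` below is hypothetical (not from the paper) and uses uniform weights, which reduce the estimator to standard DID; the paper's contribution is choosing the unit and time weights in a data-driven, penalized way.

```python
import numpy as np

def sdid_simple(Y, unit_w, time_w):
    """Weighted double difference for a panel with one treated unit
    (last row) and one treated period (last column). unit_w weights the
    control units, time_w the pre-treatment periods; uniform weights
    recover the standard DID estimator. Illustrative sketch only."""
    pre_treated = time_w @ Y[-1, :-1]     # weighted pre-period level, treated unit
    pre_controls = Y[:-1, :-1] @ time_w   # weighted pre-period levels, controls
    post_controls = Y[:-1, -1]
    return (Y[-1, -1] - pre_treated) - unit_w @ (post_controls - pre_controls)

# toy two-way fixed effects panel: Y[i, t] = a_i + b_t + tau * treated
N, T, tau = 6, 5, 2.0
a = np.arange(N, dtype=float)             # unit fixed effects
b = np.linspace(0.0, 1.0, T)              # time fixed effects
Y = a[:, None] + b[None, :]
Y[-1, -1] += tau                          # treatment effect on the treated cell

tau_hat = sdid_simple(Y, np.full(N - 1, 1 / (N - 1)), np.full(T - 1, 1 / (T - 1)))
# under an exact two-way fixed effects model, any weights summing to one
# cancel the fixed effects and recover tau exactly
```

This exactness under a correct fixed effects model is one half of the double robustness the abstract describes; the other half concerns misspecified fixed effects with penalized SC weights.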

Double-Robust Identification for Causal Panel Data Models

Dmitry Arkhangelsky, CEMFI
Guido Imbens, Stanford University

Abstract

We develop a new approach for estimating average treatment effects in observational studies with unobserved group-level heterogeneity. A common approach in such settings is to use linear fixed effects specifications estimated by least squares regression. Such methods severely limit the extent of heterogeneity between groups by making the restrictive assumption that linearly adjusting for differences in average covariate values between groups addresses all concerns with cross-group comparisons. We start by making two observations. First, we note that the fixed effects method in effect adjusts for differences between groups only through the average covariate values and the average treatment. Second, we note that weighting by the inverse of the propensity score would remove biases in comparisons between treated and control units under the fixed effects setup. We then develop three generalizations of the fixed effects approach based on these two observations. First, we suggest more general, nonlinear adjustments for the average covariate values. Second, we suggest robustifying the estimators by using propensity score weighting. Third, we motivate and develop implementations for adjustments that also account for group characteristics beyond the average covariate values.
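The second observation can be made concrete with a toy example. In the sketch below, the propensity scores are taken as known (an assumption for illustration; in practice they would be estimated, e.g. within groups), and `ipw_ate` is a hypothetical helper, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ipw_ate(y, d, p):
    """Hajek-normalized inverse-propensity-weighted ATE estimator.
    Sketch only: p is treated as known here."""
    w1 = d / p
    w0 = (1 - d) / (1 - p)
    return (w1 @ y) / w1.sum() - (w0 @ y) / w0.sum()

# toy data: groups differ both in outcome levels (unobserved group
# heterogeneity) and in treatment propensities, so a naive difference
# in means is biased; weighting removes the bias
n_per_group = 5000
alphas = np.array([0.0, 3.0, 6.0])      # group-level outcome heterogeneity
props  = np.array([0.2, 0.5, 0.8])      # group-level treatment propensities
tau = 2.0
g = np.repeat(np.arange(3), n_per_group)
p = props[g]
d = rng.binomial(1, p)
y = alphas[g] + tau * d

naive = y[d == 1].mean() - y[d == 0].mean()   # biased: treated cluster in high-alpha groups
ate = ipw_ate(y, d, p)                         # close to tau = 2.0
```

The same bias would also be removed here by group fixed effects; the abstract's point is that propensity weighting remains valid under richer, nonlinear group heterogeneity where the linear fixed effects adjustment fails.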

Revisiting Regression Adjustment in Experiments with Heterogeneous Treatment Effects

Akanksha Negi, Michigan State University
Jeffrey Wooldridge, Michigan State University

Abstract

Regression adjustment with covariates in experiments is intended to improve precision over a simple difference in means between treated and control outcomes. The efficiency argument in favor of regression adjustment has recently come under criticism; Freedman (2008a,b), for example, finds no systematic gain in asymptotic efficiency from the covariate-adjusted estimator. In this paper, we verify that when treatment effects are heterogeneous, additively controlling for covariates in a regression is not guaranteed to bring additional efficiency gains. We then show that, as in Lin (2013), estimating separate regressions for the control and treated groups is guaranteed to do no worse than both the simple difference-in-means estimator and the estimator that includes the covariates additively. Usually, the estimator that includes a full set of interactions strictly improves asymptotic efficiency. Unlike Imbens and Rubin (2015), who assume full knowledge of the population means of the covariates in a random sampling context, we show that the fully interacted estimator improves asymptotic efficiency even when one accounts for the sampling variation in the sample means of the covariates. This result appears to be new, and simulations show that the efficiency gains can be substantial. We also show that in some important cases (applicable to binary, fractional, count, and other nonnegative responses) nonlinear regression adjustment is consistent without any restrictions on the conditional mean functions.
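The fully interacted estimator amounts to running separate regressions in each arm with covariates centered at the full-sample mean, so the intercept difference estimates the ATE. The sketch below is a minimal illustration under simulated heterogeneous effects; `interacted_adjustment` is a hypothetical helper, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def interacted_adjustment(y, d, X):
    """Fully interacted regression adjustment (Lin 2013 style):
    fit OLS separately in the treated and control groups with
    covariates centered at the overall sample mean; the difference
    in intercepts estimates the ATE. Illustrative sketch."""
    Xc = X - X.mean(axis=0)                 # center at full-sample means
    est = {}
    for arm in (0, 1):
        m = d == arm
        Z = np.column_stack([np.ones(m.sum()), Xc[m]])
        beta, *_ = np.linalg.lstsq(Z, y[m], rcond=None)
        est[arm] = beta[0]                  # fitted mean at x = x-bar
    return est[1] - est[0]

# simulated experiment with heterogeneous treatment effects
n = 20000
X = rng.normal(size=(n, 2))
d = rng.binomial(1, 0.5, size=n)
tau_i = 1.0 + 0.5 * X[:, 0]                 # individual effects, mean 1.0
y = X @ np.array([1.0, -1.0]) + tau_i * d + rng.normal(size=n)

ate_hat = interacted_adjustment(y, d, X)    # targets E[tau_i] = 1.0
```

The paper's result concerns the relative asymptotic variances of this estimator, the additive-adjustment estimator, and the raw difference in means; the sketch only shows that all of them target the same average effect.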

Identification of Causal Effects with Multiple Instruments: Problems and Some Solutions

Magne Mogstad, University of Chicago
Alexander Torgovitsky, University of Chicago
Christopher Walters, University of California-Berkeley

Abstract

Empirical researchers often combine multiple instruments for a single treatment using two-stage least squares (2SLS). When treatment effects are heterogeneous, a common justification for including multiple instruments is that the 2SLS estimand can still be interpreted as a positively weighted average of local average treatment effects (LATEs). This justification requires the well-known monotonicity condition. However, we show that with more than one instrument, this condition can only be satisfied if choice behavior is effectively homogeneous. Based on this finding, we consider the use of multiple instruments under a weaker, partial monotonicity condition. This condition is implied by standard choice theory and allows for richer heterogeneity. First, we show that the weaker partial monotonicity condition can still suffice for the 2SLS estimand to be a positively weighted average of LATEs. We characterize a simple necessary and sufficient condition that empirical researchers can check to ensure positive weights. Second, we develop a general method for using multiple instruments to identify a wide range of causal parameters other than LATEs. The method allows researchers to combine multiple instruments to obtain more informative empirical conclusions than one would obtain by using each instrument separately.
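The mechanics of combining instruments via 2SLS can be sketched in a few lines. The simulation below uses a homogeneous treatment effect, where the interpretation issues the paper studies do not arise, so 2SLS with both instruments recovers the effect while OLS is confounded; `tsls` is a hypothetical helper, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def tsls(y, d, Z):
    """Two-stage least squares with an intercept: project the treatment
    on the instruments, then regress the outcome on the projection.
    Illustrative sketch for combining multiple instruments."""
    W = np.column_stack([np.ones(len(d)), Z])
    dhat = W @ np.linalg.lstsq(W, d, rcond=None)[0]       # first stage
    X = np.column_stack([np.ones(len(d)), dhat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]        # second stage

n = 50000
Z = rng.binomial(1, 0.5, size=(n, 2))        # two binary instruments
u = rng.normal(size=n)                        # unobserved confounder
d = (0.4 * Z[:, 0] + 0.6 * Z[:, 1] + u > 0.5).astype(float)
tau = 1.0
y = tau * d + u + rng.normal(size=n)

b_2sls = tsls(y, d, Z)                        # close to tau = 1.0
W = np.column_stack([np.ones(n), d])
b_ols = np.linalg.lstsq(W, y, rcond=None)[0][1]  # biased upward by u
```

With heterogeneous effects, the same 2SLS estimand is a weighted combination of instrument-specific LATEs, and the paper's conditions determine when those weights are all positive.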

Statistical Non-Significance in Empirical Economics

Alberto Abadie, Massachusetts Institute of Technology

Abstract

Statistical significance is often interpreted as providing greater information than non-significance. In this article we show, however, that rejection of a point null often carries very little information, while failure to reject may be highly informative. This is particularly true in empirical contexts common in economics, where data sets are large and there are rarely reasons to put substantial prior probability on a point null. Our results challenge the usual practice of according point null rejections a higher level of scientific significance than non-rejections. We therefore advocate visible reporting and discussion of non-significant results.
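The asymmetry can be seen with a back-of-the-envelope calculation (the numbers below are hypothetical illustrations, not from the paper): with a very large sample, a significant estimate may still be economically negligible, while a non-significant one confines the parameter to a narrow band around zero.

```python
import numpy as np

n = 1_000_000
se = 1.0 / np.sqrt(n)          # standard error of a mean with unit variance

# "significant" yet nearly uninformative: we learn the effect is
# nonzero, but it may be tiny
tau_hat_reject = 0.004         # t-statistic of 4
ci_reject = (tau_hat_reject - 1.96 * se, tau_hat_reject + 1.96 * se)

# "non-significant" yet highly informative: the effect is pinned
# into a narrow interval around zero
tau_hat_accept = 0.001         # t-statistic of 1
ci_accept = (tau_hat_accept - 1.96 * se, tau_hat_accept + 1.96 * se)
```

With n this large, the non-rejection restricts the effect to roughly (-0.001, 0.003), which is arguably the more scientifically useful finding.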
JEL Classifications
  • C1 - Econometric and Statistical Methods and Methodology: General
  • C2 - Single Equation Models; Single Variables