Econometrics of Treatment Effects
Friday, Jan. 5, 2024
8:00 AM - 10:00 AM (CST)
University of Southern California
Optimal Tests Following Sequential Experiments
Recent years have seen tremendous advances in the theory and application of sequential experiments. Even though such experiments are not always performed with hypothesis testing in mind, one may nevertheless wish to conduct a test once the experiment is over. The aim of this paper is to help develop optimal tests for sequential experiments by studying their asymptotic properties. Our main result is that the asymptotic power function of any test can be matched by that of a test in a limit experiment in which one observes a Gaussian process for each treatment, and the aim is to conduct inference on the drifts of these Gaussian processes. This implies a powerful sufficiency result: any candidate test need only depend on a fixed set of statistics, irrespective of the type of sequential experiment. These sufficient statistics are the number of times each treatment has been sampled by the end of the experiment, along with the final value of the score process (for parametric models) or the efficient influence function process (for non-parametric models) for each treatment. We also characterize asymptotically optimal tests under various restrictions, such as unbiasedness and α-spending constraints. Finally, we apply our results to three important examples of sequential experiments: costly sampling, group sequential trials, and bandit experiments, and show how one can conduct optimal inference in each.
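As a rough illustration of the sufficiency result described in the abstract, consider the simplest case of Gaussian outcomes with known variance, where the score process for an arm is the cumulative standardized residual. The function below (illustrative notation, not the paper's) collapses an arbitrary sequential sampling history to the stated sufficient statistics: per-arm sample counts and final score values.

```python
import numpy as np

def sufficient_stats(arms, outcomes, theta0, sigma2=1.0):
    """Per-arm sufficient statistics for a sequential experiment with
    Gaussian outcomes: the number of times each arm was sampled, N_k,
    and the final value of the score process sum_t (Y_t - theta0_k) / sigma2.
    Illustrative sketch only; theta0 holds hypothesized arm means."""
    arms = np.asarray(arms)
    outcomes = np.asarray(outcomes, dtype=float)
    K = len(theta0)
    counts = np.array([(arms == k).sum() for k in range(K)])
    scores = np.array([((outcomes[arms == k] - theta0[k]) / sigma2).sum()
                       for k in range(K)])
    return counts, scores
```

However the sampling rule chose the arms, a candidate test would only need `(counts, scores)`, not the full history.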
Optimal Stratification of Survey Experiments
This paper studies a two-stage model of experimentation, where the researcher first samples representative units from an eligible pool, then assigns each sampled unit to treatment or control. To implement balanced sampling and assignment, we introduce a new family of finely stratified designs that generalize matched pairs randomization to propensities p(x) not equal to 1/2. We show that two-stage stratification nonparametrically dampens the variance of treatment effect estimation. We formulate and solve the optimal stratification problem with heterogeneous costs and fixed budget, providing simple heuristics for the optimal design. In settings with pilot data, we show that implementing a consistent estimate of this design is also efficient, minimizing asymptotic variance subject to the budget constraint. We also provide new asymptotically exact inference methods, allowing experimenters to fully exploit the efficiency gains from both stratified sampling and assignment. An application to nine papers recently published in top economics journals demonstrates the value of our methods.
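To make the design concrete: matched pairs randomization treats exactly one unit in each covariate-sorted pair, and the finely stratified family described above generalizes this to rational propensities p = a/b by treating exactly a units in each block of b. A minimal sketch (index order stands in for sorting on covariates; names are hypothetical):

```python
import random

def finely_stratified_assign(n_units, a, b, seed=0):
    """Hypothetical sketch of a finely stratified assignment: partition
    units into consecutive blocks of size b (assumed pre-sorted on
    covariates) and randomly treat exactly a units per block. With
    a=1, b=2 this reduces to matched pairs; in general it implements
    propensity p = a/b with exact within-block balance."""
    rng = random.Random(seed)
    assign = [0] * n_units
    for start in range(0, n_units, b):
        block = list(range(start, min(start + b, n_units)))
        for i in rng.sample(block, min(a, len(block))):
            assign[i] = 1
    return assign
```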
Policy Learning with New Treatments
I study the problem of a decision maker choosing a policy that allocates treatment to a heterogeneous population on the basis of experimental data that includes only a subset of the possible treatment values. The effects of new treatments are partially identified by shape restrictions on treatment response. Policies are compared according to the minimax regret criterion, and I show that the empirical analog of the population decision problem has a tractable linear- and integer-programming formulation. I prove that the maximum regret of the estimated policy converges to the lowest possible maximum regret at a rate equal to the maximum of N^(-1/2) and the rate at which conditional average treatment effects are estimated in the experimental data. I apply my results to design targeted subsidies for electrical grid connections in rural Kenya, and estimate that 97% of the population should be given a treatment not implemented in the experiment.
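The minimax regret logic can be sketched in a stripped-down form. Suppose each treatment's welfare is only interval-identified (for instance because it was never implemented in the experiment, with bounds coming from shape restrictions). The worst-case regret of choosing treatment d is then max(0, max over other treatments of their upper bound minus d's lower bound), and the rule picks the minimizer. This is an assumed simplification for a single covariate group, not the paper's linear/integer-programming formulation:

```python
def minimax_regret_choice(bounds):
    """Choose a treatment under minimax regret when welfare of each
    treatment d is only known to lie in bounds[d] = (lo_d, hi_d).
    Worst-case regret of d is max(0, max_{j != d} hi_j - lo_d)."""
    def worst_regret(d):
        lo_d = bounds[d][0]
        others_hi = [hi for j, (_, hi) in enumerate(bounds) if j != d]
        return max(0.0, max(others_hi) - lo_d)
    return min(range(len(bounds)), key=worst_regret)
```

Note how a new treatment with a wide identified interval can still be chosen when its lower bound is not too far below the known treatment's value.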
A Practical Approach To Estimating Treatment Effects With Measurement Error in Confounders Using Repeat Measurements
Estimating the effects of a treatment, such as evaluating a policy, is a central task in the social sciences. Oftentimes, the treatment is only randomly assigned conditional on covariates. Yet when these covariates are observed with error, standard estimators of treatment effects are biased in an ex-ante unknown direction. In this paper, I propose a new estimator for this setting that offers a practical approach under interpretable assumptions when repeat measurements are available or obtainable. The approach builds on familiar ideas of linear instrumental variables, augmented inverse probability weighting, and LASSO-type regularization. To allow for flexible functional forms and high-dimensional covariates under linearity and sparsity assumptions, I adapt results on LASSO estimation in the presence of measurement error to doubly robust estimation of treatment effects. Simulations using real data suggest the estimator performs well compared to alternatives irrespective of the degree of (possibly heteroskedastic) measurement error and heterogeneity in treatment effects.
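For readers unfamiliar with the doubly robust building block the abstract refers to, the augmented inverse probability weighting (AIPW) estimator of the average treatment effect is shown below in its standard form, without the measurement-error correction or LASSO machinery that the paper adds on top:

```python
import numpy as np

def aipw_ate(y, d, prop, mu1, mu0):
    """Standard AIPW estimate of the average treatment effect.
    y: outcomes; d: binary treatment indicators; prop: estimated
    propensity scores; mu1/mu0: estimated outcome regressions under
    treatment/control. Consistent if either the propensity model or
    the outcome model is correct (hence 'doubly robust')."""
    y, d = np.asarray(y, float), np.asarray(d, float)
    prop, mu1, mu0 = (np.asarray(a, float) for a in (prop, mu1, mu0))
    psi = (mu1 - mu0
           + d * (y - mu1) / prop
           - (1 - d) * (y - mu0) / (1 - prop))
    return psi.mean()
```

With mismeasured covariates, both the propensity and outcome models are estimated on error-ridden inputs, which is exactly the bias the paper's repeat-measurement approach targets.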
Ranking Treatments Using Instrumental Variables and Alternative Monotonicity Restrictions
This paper studies sharp identified sets for, and inference on, treatment effect parameters in an instrumental variable framework while imposing alternative monotonicity restrictions. In particular, we consider a discrete, multi-valued treatment, a binary outcome, and a discrete, possibly multi-valued instrument. We use a linear programming formulation to present a flexible framework and develop general results for characterizing testable restrictions and sharp identified sets of treatment effect parameters that follow from imposing instrument exogeneity while additionally imposing alternative monotonicity restrictions on how the treatments depend on the instruments and how the outcomes depend on the treatments. Our framework nests both ordered and unordered treatments. We further characterize leading special cases of our general analysis, including encouragement designs, RCTs with one-sided noncompliance, RCTs with close substitutes, and unordered and ordered monotonicity in Heckman and Pinto (2018). In particular, we illustrate our methodology with empirical applications to the encouragement design of Behaghel et al. (2014) investigating the effects of public vs private job search assistance; the RCTs with one-sided noncompliance of Angrist et al. (2009) investigating the effects of alternative strategies on academic performance of college students and of Blattman et al. (2017) investigating the effects of cash incentives and therapy on reducing crime in Liberia; and the RCT with close substitutes of Kline and Walters (2016) investigating the effects of alternative early childhood programs.
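The simplest special case nested by this framework is the classic binary-instrument, binary-treatment encouragement design, where instrument exogeneity plus monotonicity (no defiers) point-identifies the local average treatment effect via the Wald ratio, and monotonicity carries a testable implication on first-stage shares. A sketch of that baseline case (the paper's general multi-valued analysis requires the full linear programming machinery):

```python
def wald_late(p_y_z1, p_y_z0, p_d_z1, p_d_z0):
    """Wald/LATE estimand for a binary instrument Z and binary
    treatment D under exogeneity and monotonicity (no defiers).
    Inputs are P(Y=1|Z=z) and P(D=1|Z=z). The check below is the
    testable implication that encouragement weakly raises take-up."""
    if p_d_z1 < p_d_z0:
        raise ValueError("monotonicity's testable implication fails: "
                         "P(D=1|Z=1) must be >= P(D=1|Z=0)")
    return (p_y_z1 - p_y_z0) / (p_d_z1 - p_d_z0)
```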
C1 - Econometric and Statistical Methods and Methodology: General