Marriott Philadelphia Downtown, Meeting Room 410
Hosted By: Econometric Society
Inference and Identification Issues in Econometrics
Paper Session
Sunday, Jan. 7, 2018 8:00 AM - 10:00 AM
- Chair: Frank Kleibergen, University of Amsterdam
Estimation and Inference with a (Nearly) Singular Jacobian
Abstract
This paper develops extremum estimation and inference results for nonlinear models with very general forms of potential identification failure when the source of this identification failure is known. We examine models that may have a general deficient rank Jacobian in certain parts of the parameter space. When identification fails in one of these models, it becomes under-identified and the identification status of individual parameters is not generally straightforward to characterize. We provide a systematic reparameterization procedure that leads to a reparameterized model with straightforward identification status. Using this reparameterization, we determine the asymptotic behavior of standard extremum estimators and Wald statistics under a comprehensive class of parameter sequences characterizing the strength of identification of the model parameters, ranging from non-identification to strong identification. Using the asymptotic results, we propose hypothesis testing methods that make use of a standard Wald statistic and data-dependent critical values, leading to tests with correct asymptotic size regardless of identification strength and good power properties. Importantly, this allows one to directly conduct uniform inference on low-dimensional functions of the model parameters, including one-dimensional subvectors. The paper illustrates these results in three examples: a sample selection model, a triangular threshold crossing model and a collective model for household expenditures.
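To make the rank-deficiency problem concrete, the following is a minimal numerical sketch, not the paper's reparameterization or testing procedure, assuming an illustrative nonlinear regression y = b·exp(g·x) + u in which g is unidentified when b = 0, so the Jacobian of the moment conditions loses rank there.

```python
# Illustrative sketch only (hypothetical model, not the paper's procedure):
# in y = b*exp(g*x) + u with instruments (1, x), the parameter g is
# unidentified when b = 0, so the Jacobian of the moment conditions
# E[(y - b*exp(g*x)) * (1, x)'] is singular there and nearly singular nearby.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000)

def jacobian(b, g):
    """Sample Jacobian of the moment conditions with respect to (b, g);
    y drops out because the derivatives do not involve it."""
    e = np.exp(g * x)
    d_b = np.column_stack([-e, -e * x])                 # derivative w.r.t. b
    d_g = np.column_stack([-b * x * e, -b * x**2 * e])  # derivative w.r.t. g
    return np.column_stack([d_b.mean(axis=0), d_g.mean(axis=0)])

for b in (1.0, 0.1, 0.0):  # strong, weak, and failed identification of g
    s = np.linalg.svd(jacobian(b, g=0.5), compute_uv=False)
    print(f"b = {b:4.1f}  singular values of the Jacobian: {np.round(s, 4)}")
```

At b = 0 one singular value is exactly zero, and for small b it is close to zero; this is the "(nearly) singular Jacobian" regime that the data-dependent critical values described above are designed to handle.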
Mis-classified, Binary, Endogenous Regressors: Identification and Inference
Abstract
This paper studies identification and inference for the effect of a mis-classified, binary, endogenous regressor when a discrete-valued instrumental variable is available.
We begin by showing that the only existing point identification result for this model
is incorrect. We go on to derive the sharp identified set under mean independence
assumptions for the instrument and measurement error, and find that these fail to point
identify the effect of interest. This motivates us to consider alternative and slightly
stronger assumptions: we show that adding second and third moment independence
assumptions suffices to identify the model. We then turn our attention to inference.
We show that both our model, and related models from the literature that assume
regressor exogeneity, suffer from weak identification when the effect of interest is small. To address this difficulty, we exploit the inequality restrictions that emerge from our derivation of the sharp identified set under mean independence only. These restrictions remain informative irrespective of the strength of identification. Combining these with the moment equalities that emerge from our identification result, we propose a robust inference procedure using tools from the moment inequality literature. Our method
performs well in simulations.
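As a rough illustration of the bias the abstract describes, here is a simulation sketch under hypothetical assumptions, not the authors' estimator or inference procedure: a valid binary instrument, an endogenous binary treatment, and non-differential mis-classification that flips the observed treatment with probability p.

```python
# Simulation sketch (hypothetical design, not the paper's estimator):
# a valid binary instrument z, a true binary treatment T*, and an observed
# treatment T that is flipped with probability p independently of everything
# else. The usual Wald/IV ratio based on T over-states the true effect.
import numpy as np

rng = np.random.default_rng(1)
n, beta, p_flip = 200_000, 0.5, 0.15

z = rng.binomial(1, 0.5, n)                         # binary instrument
v = rng.normal(size=n)
t_true = (0.3 + 0.8 * z + v > 0.7).astype(float)    # endogenous binary treatment
u = 0.5 * v + rng.normal(size=n)                    # error correlated with treatment
y = 1.0 + beta * t_true + u

flip = rng.binomial(1, p_flip, n)                   # non-differential mis-classification
t_obs = np.where(flip == 1, 1.0 - t_true, t_true)

def wald_iv(y, t, z):
    """Wald / IV estimate: cov(y, z) / cov(t, z)."""
    return np.cov(y, z)[0, 1] / np.cov(t, z)[0, 1]

print("true effect:               ", beta)
print("IV using true T*:          ", round(wald_iv(y, t_true, z), 3))
print("IV using mis-classified T: ", round(wald_iv(y, t_obs, z), 3))
```

Under this form of mis-classification cov(T, z) is attenuated by a factor of (1 − 2p), so the naive Wald/IV ratio is inflated by roughly 1/(1 − 2p); the exercise does not use the moment restrictions derived in the paper, it only shows why they are needed.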
A More Powerful Subvector Anderson Rubin Test in Linear Instrumental Variable Regression
Abstract
We study subvector inference in the linear instrumental variables model assuming homoskedasticity but allowing for weak instruments. The subvector Anderson and Rubin (1949) test that uses chi-square critical values with degrees of freedom reduced by the number of parameters not under test, proposed by Guggenberger et al. (2012), controls size but is generally conservative. We propose a conditional subvector Anderson and Rubin test that uses data-dependent critical values that adapt to the strength of identification of the parameters not under test. This test has correct size and strictly higher power than the subvector Anderson and Rubin test of Guggenberger et al. (2012). We provide tables with conditional critical values so that the new test is quick and easy to use.
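For reference, the following is a sketch of the unconditional subvector Anderson-Rubin statistic together with the chi-square(k − m_W) critical value of Guggenberger et al. (2012); the conditional, data-dependent critical values proposed in this paper come from the authors' tables and are not reproduced here. The data below are purely hypothetical.

```python
# Sketch of the unconditional subvector Anderson-Rubin test: the statistic is
# the smallest generalized eigenvalue of (Ybar' P_Z Ybar, Ybar' M_Z Ybar/(n-k))
# with Ybar = [y - Y*beta0, W], compared with a chi-square(k - m_W) critical
# value. The conditional critical values of this paper are not implemented.
import numpy as np
from scipy.linalg import eigh
from scipy.stats import chi2

def subvector_ar(y, Y, W, Z, beta0):
    """AR statistic for H0: beta = beta0 in y = Y*beta + W*gamma + eps,
    minimized over the nuisance coefficients gamma on W."""
    n, k = Z.shape
    Ybar = np.column_stack([y - Y @ beta0, W])
    PZ_Ybar = Z @ np.linalg.solve(Z.T @ Z, Z.T @ Ybar)   # projection of Ybar on Z
    A = Ybar.T @ PZ_Ybar                                 # Ybar' P_Z Ybar
    B = (Ybar.T @ Ybar - A) / (n - k)                    # Ybar' M_Z Ybar / (n - k)
    ar_stat = eigh(A, B, eigvals_only=True)[0]           # smallest generalized eigenvalue
    return ar_stat, k - W.shape[1]                       # statistic and degrees of freedom

# Hypothetical data: one coefficient under test, one nuisance regressor, 4 instruments.
rng = np.random.default_rng(2)
n = 5_000
Z = rng.normal(size=(n, 4))
Y = (Z @ np.array([0.6, 0.0, 0.0, 0.0]) + rng.normal(size=n)).reshape(-1, 1)
W = (Z @ np.array([0.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)).reshape(-1, 1)
y = 1.0 * Y[:, 0] + 2.0 * W[:, 0] + rng.normal(size=n)

ar, dof = subvector_ar(y, Y, W, Z, beta0=np.array([1.0]))
print(f"AR = {ar:.2f}, chi2({dof}) 5% critical value = {chi2.ppf(0.95, dof):.2f}")
```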
JEL Classifications
- C12 - Hypothesis Testing: General
- C26 - Instrumental Variables (IV) Estimation