Abstract

Concerns that algorithms may discriminate against certain groups have led to numerous efforts to 'blind' the algorithm to race. We argue that this intuitive perspective is misleading and may do harm. Our primary result is exceedingly simple, yet often overlooked. A preference for fairness should not change the choice of estimator. Equity preferences can change how the estimated prediction function is used (e.g., a different threshold for different groups), but the function itself should not change. We show in an empirical example for college admissions that the inclusion of variables such as race can increase both equity and efficiency.
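The abstract's central point, that equity preferences enter only in how a single estimated prediction function is used, not in the estimator itself, can be illustrated with a minimal sketch. Everything below (the toy linear score, the applicant data, the group labels, and the threshold values) is hypothetical and not drawn from the paper's empirical example.

```python
# Sketch of the paper's argument: estimate ONE prediction function for
# all applicants, then express equity preferences only in how its
# scores are used (group-specific admission thresholds).

def predict_success(gpa, test_score):
    """A single shared prediction function (toy linear score).
    The paper's point: fairness concerns should not change this."""
    return 0.6 * gpa / 4.0 + 0.4 * test_score / 100.0

# Hypothetical applicants: (id, group, gpa, test_score)
applicants = [
    ("a", "group_1", 3.6, 85),
    ("b", "group_1", 2.9, 70),
    ("c", "group_2", 3.2, 75),
    ("d", "group_2", 2.8, 65),
]

# Equity preferences enter HERE, not in the estimator:
# different thresholds per group (illustrative values only).
thresholds = {"group_1": 0.80, "group_2": 0.70}

admitted = [
    applicant_id
    for applicant_id, group, gpa, test in applicants
    if predict_success(gpa, test) >= thresholds[group]
]
print(admitted)  # applicants whose shared score clears their group's threshold
```

Note that the prediction function is identical for everyone; only the decision rule applied to its output differs by group, which is the separation between estimation and use that the abstract describes.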
Citation

Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Ashesh Rambachan. 2018. "Algorithmic Fairness." AEA Papers and Proceedings, 108: 22-27. DOI: 10.1257/pandp.20181018
JEL Classification
- C38 Multiple or Simultaneous Equation Models: Classification Methods; Cluster Analysis; Principal Components; Factor Models
- D63 Equity, Justice, Inequality, and Other Normative Criteria and Measurement
- I23 Higher Education; Research Institutions
- J15 Economics of Minorities, Races, Indigenous Peoples, and Immigrants; Non-labor Discrimination