The Risk of Machine Learning
Many applied settings in empirical economics involve the simultaneous estimation of a large number of parameters. In particular, applied economists are often interested in estimating the effects of many treatments (such as teacher effects or location effects), treatment effects for many groups, and prediction models with many regressors. In these settings, machine learning methods such as ridge, lasso, and pretest, which combine regularized estimation with data-driven choices of regularization parameters, are useful for avoiding overfitting. In this article, we analyze the performance of machine learning methods in contexts that require the simultaneous estimation of many parameters. Our analysis aims to provide guidance to applied researchers on (i) the choice between machine learning estimators in practice and (ii) data-driven selection of regularization parameters. To address (i), we characterize the risk (mean squared error) of machine learning estimators and derive their relative performance as a function of simple features of the data generating process. To address (ii), we show that data-driven choices of regularization parameters, based on Stein’s unbiased risk estimate or on cross-validation, yield estimators with risk uniformly close to the risk attained under the optimal (infeasible) choice of regularization parameters. We use data from recent examples in the empirical economics literature to illustrate the practical applicability of our results.
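As a concrete illustration of the estimators and the tuning problem described above, the following minimal sketch works in the canonical many-normal-means setting: one noisy observation X_i ~ N(mu_i, sigma^2) per parameter, with sigma^2 = 1 for simplicity. The componentwise forms of ridge, lasso, and pretest and the SURE formulas used here are standard, but the code itself and the simulated design are illustrative assumptions, not the article's own implementation.

```python
# A minimal sketch (illustrative, not the article's replication code):
# componentwise shrinkage in the many-normal-means model X_i ~ N(mu_i, sigma^2),
# one observation per parameter, with the regularization parameter chosen
# by minimizing Stein's unbiased risk estimate (SURE) over a grid.
import numpy as np

def ridge(x, lam):
    # Linear shrinkage toward zero: x / (1 + lambda)
    return x / (1.0 + lam)

def lasso(x, lam):
    # Soft thresholding: sign(x) * max(|x| - lambda, 0)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pretest(x, lam):
    # Hard thresholding: keep x_i only when |x_i| > lambda
    return np.where(np.abs(x) > lam, x, 0.0)

def sure_ridge(x, lam, sigma2=1.0):
    # SURE for ridge: (lam/(1+lam))^2 * mean(x^2) + sigma2*(1-lam)/(1+lam),
    # an unbiased estimate of the average squared-error risk.
    s = lam / (1.0 + lam)
    return s**2 * np.mean(x**2) + sigma2 * (1.0 - lam) / (1.0 + lam)

def sure_lasso(x, lam, sigma2=1.0):
    # SURE for soft thresholding (Donoho-Johnstone form):
    # mean(min(x^2, lam^2)) + 2*sigma2*mean(|x| > lam) - sigma2
    return (np.mean(np.minimum(x**2, lam**2))
            + 2.0 * sigma2 * np.mean(np.abs(x) > lam) - sigma2)

def pick_lambda(x, sure_fn, grid):
    # Data-driven regularization: minimize estimated risk over the grid.
    return min(grid, key=lambda lam: sure_fn(x, lam))

# Simulated sparse design (hypothetical numbers for illustration only):
# 50 nonzero means among 1,000 parameters.
rng = np.random.default_rng(0)
mu = np.concatenate([rng.normal(0.0, 2.0, 50), np.zeros(950)])
x = mu + rng.standard_normal(mu.size)

grid = np.linspace(0.0, 5.0, 201)
for name, est, sure in [("ridge", ridge, sure_ridge),
                        ("lasso", lasso, sure_lasso)]:
    lam = pick_lambda(x, sure, grid)
    risk = np.mean((est(x, lam) - mu)**2)
    print(f"{name}: SURE-chosen lambda = {lam:.2f}, realized risk = {risk:.3f}")
```

Note that SURE requires the estimator to be (weakly) differentiable in the data, which holds for the ridge and lasso rules above but not for the discontinuous pretest rule; the pretest threshold would instead be tuned by cross-validation, the second data-driven approach mentioned in the abstract.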