Minimizing Generalization Error in Mean-Variance Optimization
Mean-variance optimization (MVO) is often criticized for its sensitivity to sampling error. In particular, MVO portfolios tend to overfit the sample data and often underperform simple linear models out of sample. We present mean-variance optimization as a regression problem with the objective of minimizing the expected generalization error. It follows that methods for optimizing the bias-variance trade-off in regression extend to the portfolio optimization setting. We follow the work of López de Prado (2016) and Kinn (2018) and provide two applications: Hierarchical Random Subspace Optimization (HRSO) and Dynamically Regularized MVO (DRMVO). HRSO applies unsupervised subset resampling to mitigate overfitting and sparsity in the sample data. DRMVO introduces regularization terms and applies cross-validation to attenuate estimation error in the sample moments. We conclude with a small simulation case study.
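To make the regression framing concrete, the following is a minimal sketch, not the paper's implementation: it uses the well-known Britten-Jones-style formulation in which tangency-portfolio weights are obtained by regressing a vector of ones on asset returns, adds a ridge penalty as a stand-in for the regularization terms mentioned above, and selects the penalty by k-fold cross-validation on out-of-fold regression error. The function names and the candidate penalty grid are illustrative assumptions.

```python
import numpy as np

def _ridge_solve(R, lam):
    """Unnormalized ridge solution of the MVO-as-regression problem:
    regress a vector of ones on returns R with penalty lam."""
    T, N = R.shape
    # closed-form ridge: (R'R + lam * I)^-1 R' 1
    return np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ np.ones(T))

def ridge_mvo_weights(R, lam):
    """Normalize the ridge solution to a fully invested portfolio."""
    w = _ridge_solve(R, lam)
    return w / w.sum()

def cv_select_lambda(R, lambdas, k=5):
    """Pick the penalty minimizing out-of-fold squared error,
    i.e. an estimate of the generalization error of the fit."""
    T = R.shape[0]
    folds = np.array_split(np.arange(T), k)
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        err = 0.0
        for hold in folds:
            train = np.setdiff1d(np.arange(T), hold)
            w = _ridge_solve(R[train], lam)
            err += np.mean((1.0 - R[hold] @ w) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Illustrative synthetic data: 120 periods, 5 assets.
rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(120, 5))
lam = cv_select_lambda(R, [0.01, 0.1, 1.0, 10.0])
w = ridge_mvo_weights(R, lam)
print(lam, np.round(w, 3))
```

Larger penalties shrink the weights toward a more diversified allocation, which is the bias-variance trade-off at work: some in-sample fit is sacrificed to reduce the variance driven by estimation error in the sample moments.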