The Extreme of Interpretability in Machine Learning
With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice, flawed models in healthcare, and black box loan decisions in finance. Transparency and interpretability of machine learning models are critical in high-stakes decisions. In this talk, I will focus on two of the most fundamental problems in the field of interpretable machine learning: optimal sparse decision trees and optimal sparse generalized additive models. I will also discuss a hypothesis for why interpretable models can often match the accuracy of black box models, as well as recent and future work on dimension reduction for data visualization and model class visualization in variable importance space.
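To give a concrete sense of what "sparse" means here: a sparse decision tree is one small enough to read in its entirety. The sketch below is a minimal illustration using scikit-learn's greedy tree learner as a stand-in; it is not the optimal-tree method from the papers listed below (e.g., GOSDT), which searches for provably optimal sparse trees rather than growing one greedily. The dataset and the leaf budget are arbitrary choices for illustration only.

    # Minimal sketch: a "sparse" decision tree small enough to print and read.
    # NOTE: scikit-learn's learner is greedy; the talk's papers (e.g., GOSDT)
    # search for provably *optimal* sparse trees instead. This is only a
    # conceptual stand-in, with an arbitrary dataset and leaf budget.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X, y = data.data, data.target

    # Cap the tree at 8 leaves: the entire model fits on one screen.
    tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(X, y)

    # Print the whole model; a human can audit every decision path.
    print(export_text(tree, feature_names=list(data.feature_names)))
    print(f"training accuracy: {tree.score(X, y):.3f}")

Sparsity (few leaves, or few terms in a generalized additive model) is what makes the model auditable; the research question in the papers below is how to get that sparsity without sacrificing accuracy or tractability.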
I will mainly discuss work from these papers:
Jiachang Liu, Chudi Zhong, Margo Seltzer, and Cynthia Rudin.
Fast Sparse Classification for Generalized Linear and Additive Models. AISTATS, 2022.
https://arxiv.org/abs/2202.11389
Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer.
Fast Sparse Decision Tree Optimization via Reference Ensembles. AAAI, 2022.
https://arxiv.org/abs/2112.00798
Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, and Margo Seltzer.
Generalized and Scalable Optimal Sparse Decision Trees. ICML, 2020.
https://arxiv.org/abs/2006.08690
Bio: Cynthia Rudin is a professor of computer science and engineering at Duke University, where she directs the Interpretable Machine Learning Lab. She holds degrees from the University at Buffalo and Princeton. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from AAAI (often called the "Nobel Prize of AI"). She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award and a 2022 Guggenheim Fellow, and she is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and AAAI. Her goal is to design predictive models that people can understand.