Transparency in predictive modeling is extremely important in many application domains. Domain experts tend not to prefer "black box" predictive models. They want to understand how predictions are made, and often prefer models that emulate the way a human expert might make a decision: using a few important variables and giving a clear, convincing reason for each particular prediction.
I will discuss recent work, performed in large part by ORC students, on interpretable predictive modeling with decision lists. I will describe several approaches, including:
- an algorithm where not only the predictions but also the algorithm itself is interpretable to a human
- an algorithm based on Bayesian analysis
- an algorithm based on optimization
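To make the decision-list idea concrete, here is a minimal sketch of how such a model makes a prediction: an ordered sequence of if-then rules is checked from top to bottom, and the first rule that fires determines the output. The rules and data below are entirely hypothetical, for illustration only; they are not from the work described in the talk.

```python
def predict(rules, default, x):
    """Return the label of the first rule whose condition fires on x,
    or the default label if no rule applies."""
    for condition, label in rules:
        if condition(x):
            return label
    return default

# Hypothetical toy rules for a loan-screening example.
rules = [
    (lambda x: x["income"] < 20000, "deny"),
    (lambda x: x["credit_score"] >= 700, "approve"),
]

print(predict(rules, "review", {"income": 50000, "credit_score": 720}))  # approve
print(predict(rules, "review", {"income": 15000, "credit_score": 720}))  # deny
print(predict(rules, "review", {"income": 50000, "credit_score": 600}))  # review
```

Because the model is just an ordered rule list, every prediction comes with its own short explanation: the single rule that fired.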
This talk will feature work by ORC student Ben Letham, ex-ORC student Allison Chang, and ORC co-director Dimitris Bertsimas. It will also feature work by MIT undergraduate Shawn Qian. Other collaborators are David Madigan (Columbia), Tyler McCormick (U. Washington), and Gene Kogan (Independent).