I am a major developer of the following software toolkits for interpretable machine learning:
- L0Learn: Fast, approximate algorithms for L0-regularized learning.
Handles sparse regression and classification problems in running times comparable to fast Lasso solvers.
Downloaded over 15,000 times.
Based on “Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms”. Link.
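To make the problem concrete: L0Learn targets the L0-regularized (best subset) objective min_b ||y - Xb||^2 + lam * ||b||_0. The sketch below brute-forces that objective on a tiny problem, purely to illustrate what is being optimized; L0Learn itself uses fast coordinate descent and local combinatorial search, and the function name here is illustrative, not L0Learn's API.

```python
# Toy illustration of the L0-regularized regression objective:
#   min_b ||y - X b||^2 + lam * ||b||_0
# solved by brute force over all supports (feasible only for tiny p).
from itertools import combinations
import numpy as np

def best_subset(X, y, lam):
    """Exhaustively minimize ||y - X b||^2 + lam * |support(b)|."""
    n, p = X.shape
    best_obj, best_beta = float(y @ y), np.zeros(p)  # start from empty support
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            b_S, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
            resid = y - X[:, S] @ b_S
            obj = float(resid @ resid) + lam * k
            if obj < best_obj:
                best_obj = obj
                best_beta = np.zeros(p)
                best_beta[list(S)] = b_S
    return best_beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.standard_normal(50)
beta = best_subset(X, y, lam=1.0)
print(np.nonzero(beta)[0])  # expected to recover the true support {0, 2}
```

The exhaustive loop is exponential in p; the point of L0Learn is to get near-optimal solutions to this objective at Lasso-like speed.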
- L0BnB: A specialized exact algorithm for L0-regularized regression, based on branch-and-bound.
Can solve instances with 10^7 features to optimality in minutes to hours (assuming highly sparse solutions).
Based on “Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization”. Link.
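A toy branch-and-bound conveys the idea behind exact L0 solvers: branch on whether each feature is in the support, and prune a subtree when a lower bound on its best objective already exceeds the incumbent. This is only a conceptual sketch; L0BnB's actual bounds come from first-order optimization of a convex relaxation, and none of the names below are L0BnB's API.

```python
# Toy branch-and-bound for min_b ||y - X b||^2 + lam * ||b||_0.
# Node state: forced-in set I, features j..p-1 still free.
# Valid lower bound for the subtree: least-squares loss over I union the
# free features (best achievable loss) + lam * |I| (smallest penalty).
import numpy as np

def ls_loss(X, y, cols):
    """Least-squares loss using only the given columns."""
    if not cols:
        return float(y @ y)
    b, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    r = y - X[:, cols] @ b
    return float(r @ r)

def l0_bnb(X, y, lam):
    p = X.shape[1]
    best = {"obj": float(y @ y), "support": []}  # incumbent: empty model

    def recurse(j, forced_in):
        if j == p:  # all features decided: evaluate this support
            obj = ls_loss(X, y, forced_in) + lam * len(forced_in)
            if obj < best["obj"]:
                best["obj"], best["support"] = obj, list(forced_in)
            return
        bound = ls_loss(X, y, forced_in + list(range(j, p))) + lam * len(forced_in)
        if bound >= best["obj"]:
            return  # prune: no completion of this node can beat the incumbent
        recurse(j + 1, forced_in + [j])  # branch: feature j in the support
        recurse(j + 1, forced_in)        # branch: feature j excluded

    recurse(0, [])
    return best

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8))
y = 2.0 * X[:, 1] + 1.5 * X[:, 5] + 0.1 * rng.standard_normal(60)
sol = l0_bnb(X, y, lam=1.0)
print(sorted(sol["support"]))  # expected to recover the true support [1, 5]
```

The bound is valid because any completion's support contains I and is contained in I plus the free set, so its loss and penalty are each at least the two bound terms; tighter relaxation-based bounds are what let L0BnB prune aggressively at scale.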
- hierScale: A scalable convex optimization algorithm for regression with pairwise interactions.
Based on “Learning Hierarchical Interactions at Scale: A Convex Optimization Approach”. Link.
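To illustrate the model class hierScale addresses: linear regression with all pairwise interaction terms, under a strong-hierarchy constraint (an interaction x_i * x_j may be selected only if both main effects x_i and x_j are). The sketch below just builds the expanded design matrix and checks a candidate support for strong hierarchy; hierScale itself solves a convex relaxation of this selection problem at scale, and these helper names are illustrative only.

```python
# Pairwise-interaction design matrix and a strong-hierarchy check.
from itertools import combinations
import numpy as np

def interaction_design(X):
    """Return [X, all x_i * x_j columns] and the list of (i, j) pairs."""
    pairs = list(combinations(range(X.shape[1]), 2))
    inter = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
    return np.hstack([X, inter]), pairs

def obeys_strong_hierarchy(main_support, pair_support):
    """Every selected interaction (i, j) needs both main effects selected."""
    return all(i in main_support and j in main_support for i, j in pair_support)

X = np.arange(12.0).reshape(4, 3)
Z, pairs = interaction_design(X)
print(Z.shape, pairs)  # 3 main effects + 3 interactions -> (4, 6)
print(obeys_strong_hierarchy({0, 2}, [(0, 2)]))  # True
print(obeys_strong_hierarchy({0}, [(0, 2)]))     # False: x_2 main effect missing
```

With p main effects there are p(p-1)/2 interactions, so the design grows quadratically; that blow-up is why a scalable convex formulation is needed.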
- Tree Ensemble Layer: A layer of differentiable trees that can be used within neural networks.
The layer can make neural networks faster and more interpretable.
Joint work with Google collaborators.
Based on “The Tree Ensemble Layer: Differentiability meets Conditional Computation”. Link.
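The core idea of a differentiable tree can be sketched in a few lines: internal nodes route a sample left or right with a smooth gating function, so each leaf contributes with a probability and the whole tree admits gradients. Note the paper's layer uses a smooth-step activation with true conditional computation; this minimal sketch uses a plain sigmoid and dense routing, purely as an illustration, and is not the Tree Ensemble Layer's API.

```python
# Forward pass of a "soft" (differentiable) decision tree.
# Nodes are stored breadth-first: internal nodes 0..n-1, then n+1 leaves.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_forward(x, W, b, leaves):
    """W, b hold one routing hyperplane per internal node; leaves holds
    one scalar output per leaf of the complete binary tree."""
    n_internal = W.shape[0]
    probs = np.ones(2 * n_internal + 1)  # probability of reaching each node
    for node in range(n_internal):
        g = sigmoid(W[node] @ x + b[node])       # prob. of routing left
        probs[2 * node + 1] = probs[node] * g        # left child
        probs[2 * node + 2] = probs[node] * (1 - g)  # right child
    leaf_probs = probs[n_internal:]  # the last n_internal + 1 nodes are leaves
    return float(leaf_probs @ leaves)  # expectation over leaf outputs

# Depth-2 tree on 3-dim inputs: 3 internal nodes, 4 leaves.
rng = np.random.default_rng(2)
W, b = rng.standard_normal((3, 3)), rng.standard_normal(3)
leaves = np.array([1.0, 2.0, 3.0, 4.0])
print(soft_tree_forward(rng.standard_normal(3), W, b, leaves))
```

Because the leaf-reach probabilities always sum to one, the output is a convex combination of the leaf values, and every parameter receives a gradient, so the tree can sit inside a network trained end-to-end.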