Vishal Gupta - Research


My research interests focus on using data to formulate new, tractable models for uncertainty and behavior in optimization. I am particularly interested in applications in energy, risk management, consumer preferences, and data analytics.


Journal Papers

Advanced Software Tools for Operations Research
(with Iain Dunning, Angela King, Jerry Kung, Miles Lubin and Jon Silberholz.)
Submitted to INFORMS Transactions on Education, under review.
[Abstract] [PDF] [e-companion]
In the age of big data analytics, it is increasingly important for researchers and practitioners to be familiar with methods and software tools for analyzing large data sets, formulating and solving large-scale mathematical optimization models, and sharing solutions using interactive media. Unfortunately, advanced software tools are seldom included in curricula of graduate-level operations research (OR) programs. We describe a course consisting of eight three-hour modules intended to introduce Master's and PhD students to advanced software tools for OR: Machine Learning in R, Data Wrangling, Visualization, Big Data, Algebraic Modeling with JuMP, High-Performance and Distributed Computing, Internet and Databases, and Advanced Mixed Integer Linear Programming (MILP) Techniques. For each module, we outline content, provide course materials, summarize student feedback, and share lessons learned from two iterations of the course. Student feedback was very positive, and all students reported that the course equipped them with software skills useful for their own research. We believe our course materials could serve as a template for the development of effective OR software tools courses and discuss how they could be adapted to other educational settings.
Data-Driven Robust Optimization
Finalist in the 2013 George Nicholson Student Paper Competition.
(with Dimitris Bertsimas and Nathan Kallus.)
Submitted to Operations Research, under review.
[Abstract] [PDF] [arXiv : 1401.0212]
The last decade has seen an explosion in the availability of data for operations research applications as part of the Big Data revolution. Motivated by this data-rich paradigm, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using statistical hypothesis tests. The approach is flexible and widely applicable, and robust optimization problems built from our new sets are computationally tractable, both theoretically and practically. Furthermore, optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. We also propose concrete guidelines for practitioners and illustrate our approach with applications in portfolio management and queueing. Computational evidence confirms that our data-driven sets significantly outperform conventional robust optimization techniques whenever data is available.
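The paper's uncertainty sets are built from formal hypothesis tests; as a much-simplified illustration of the same data-in, set-out idea, the sketch below forms a box uncertainty set from per-asset confidence intervals for the mean return and evaluates a long-only portfolio's worst-case return over that box. All function names and data here are hypothetical, not taken from the paper.

```python
import math
import statistics

def box_uncertainty_set(samples_by_asset, z=1.96):
    """Per-asset confidence interval for the mean return,
    combined into a box uncertainty set of (lo, hi) pairs."""
    box = []
    for samples in samples_by_asset:
        n = len(samples)
        mu = statistics.mean(samples)
        se = statistics.stdev(samples) / math.sqrt(n)
        box.append((mu - z * se, mu + z * se))
    return box

def worst_case_return(weights, box):
    """For a long-only portfolio, nature picks the lower endpoint
    of each interval, so the worst case is separable by asset."""
    return sum(w * lo for w, (lo, _hi) in zip(weights, box))

# Two assets, a few return observations each (illustrative data).
data = [[0.02, 0.01, 0.03, 0.00], [0.05, -0.01, 0.04, 0.02]]
box = box_uncertainty_set(data)
wc = worst_case_return([0.5, 0.5], box)
```

The worst-case value is, by construction, never larger than the plain empirical-mean estimate; richer hypothesis-test-based sets refine this box while keeping the inner problem tractable.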
A Comparison of Monte Carlo Tree Search and Mathematical Optimization for Large Scale Dynamic Resource Allocation
(with Dimitris Bertsimas, John D. Griffith, Mykel Kochenderfer, Velibor Misic, and Robert Moss.)
Submitted to Artificial Intelligence, under review.
[Abstract] [PDF] [arXiv : 1405.5498]
Dynamic resource allocation (DRA) problems constitute an important class of dynamic stochastic optimization problems that arise in a variety of important real-world applications. DRA problems are notoriously difficult to solve to optimality since they frequently combine stochastic elements with intractably large state and action spaces. Although the artificial intelligence and operations research communities have independently proposed two successful frameworks for solving dynamic stochastic optimization problems, namely Monte Carlo tree search (MCTS) and mathematical optimization (MO), respectively, the relative merits of these two approaches are not well understood. In this paper, we adapt both MCTS and MO to a problem inspired by tactical wildfire management and undertake an extensive computational study comparing the two methods on instances that are large scale in terms of both the state and the action spaces. We show that both methods are able to greatly improve on a baseline, problem-specific heuristic. On smaller instances, the MCTS and MO approaches perform comparably, but the MO approach outperforms MCTS as the size of the problem increases for a fixed computational budget.
Data-Driven Estimation in Equilibrium Using Inverse Optimization
(with Dimitris Bertsimas and Ioannis Ch. Paschalidis.)
Submitted to Mathematical Programming, second round review.
[Abstract] [PDF] [arXiv : 1308.3397]
Equilibrium modeling is common in a variety of fields such as game theory and transportation science. The inputs for these models, however, are often difficult to estimate, while their outputs, i.e., the equilibria they are meant to describe, are often directly observable. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the parameters of these models from observed equilibria. We use this technique to estimate the utility functions of players in a game from their strategies and to estimate the congestion function on a road network from traffic count data. A distinguishing feature of our approach is that it supports parametric and nonparametric estimation by leveraging ideas from statistical learning (kernel methods and regularization operators). In computational experiments involving Nash and Wardrop equilibria in a nonparametric setting, we find that 1) we effectively estimate the unknown demand or congestion function, respectively, and 2) our proposed regularization technique improves the quality of fit.
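As a toy instance of recovering model inputs from an observed equilibrium (far simpler than the variational-inequality machinery in the paper), consider two parallel road links with linear congestion cost c_i(x) = a_i + b*x and an observed Wardrop equilibrium flow split: equal costs on both used links identify the unknown slope b. All symbols and numbers below are illustrative assumptions.

```python
def infer_slope(a1, a2, x1, x2):
    """Two parallel links with cost c_i(x) = a_i + b * x.
    At a Wardrop equilibrium in which both links carry flow,
    a1 + b*x1 == a2 + b*x2, which identifies the slope b."""
    if x1 == x2:
        raise ValueError("equal flows: slope not identified")
    return (a2 - a1) / (x1 - x2)

# Observed equilibrium: free-flow costs a1=1.0, a2=2.0 and
# flows x1=6.0, x2=4.0 -> inferred slope b = 0.5,
# at which both links indeed have equal cost (4.0).
b = infer_slope(1.0, 2.0, 6.0, 4.0)
```

With many links, noisy counts, or nonparametric congestion functions, this closed-form step is replaced by the paper's inverse-optimization formulation with regularization.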
Inverse Optimization: A New Perspective on the Black-Litterman Model
(with Dimitris Bertsimas and Ioannis Ch. Paschalidis.)
Operations Research, November/December 2012, vol. 60.
[Abstract] [PDF] [doi: 10.1287/opre.1120.1115]
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.
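The inverse-optimization reading of BL starts from the classical reverse-optimization step: if the market weights w are optimal for the mean-variance problem max_w mu'w - (delta/2) w'Sigma w, the first-order condition gives implied equilibrium returns mu = delta * Sigma * w. A minimal sketch of that step (this is the textbook starting point, not the paper's generalized estimators; the numbers are illustrative):

```python
def implied_returns(delta, sigma, w):
    """Mean-variance first-order condition: if w solves
    max_w mu'w - (delta/2) w' Sigma w, then mu = delta * Sigma * w."""
    n = len(w)
    return [delta * sum(sigma[i][j] * w[j] for j in range(n))
            for i in range(n)]

# Illustrative 2-asset covariance matrix and market-cap weights.
sigma = [[0.04, 0.01],
         [0.01, 0.09]]
w = [0.6, 0.4]
mu = implied_returns(2.5, sigma, w)  # risk aversion delta = 2.5
```

The paper's contribution is to recast and generalize this inversion, replacing the mean-variance objective with, e.g., coherent risk measures.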

In Preparation

Robust SAA
Winner of the 2013 MIT Operations Research Center Best Student Paper Award.
(with Dimitris Bertsimas and Nathan Kallus.)
[Abstract]
Sample average approximation (SAA) is possibly the most popular approach to modeling decision making under uncertainty in data-driven settings. In SAA, one approximates the true, unknown probability distribution by the empirical distribution defined by the data. Under mild assumptions, as the amount of data grows, the solutions of SAA-based optimization problems converge asymptotically to the solutions that would be obtained if the underlying distribution were known. In this paper, we propose a general-purpose modification of SAA that retains this asymptotic convergence but also enjoys a strong finite-sample performance guarantee. The key idea is to solve a suitable robust optimization problem over a set of distributions close to the empirical distribution, constructed using tools from statistical hypothesis testing. The resulting optimization problem is computationally tractable and solvable using off-the-shelf solvers. We illustrate the approach by studying specific inventory models in data-driven settings. Computational evidence confirms that our approach significantly outperforms other data-driven approaches to such problems.
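For a concrete instance of plain SAA (the baseline the paper robustifies, not the robust variant itself), consider the newsvendor problem: under the empirical demand distribution, an optimal order quantity always lies at one of the observed demand values, so SAA reduces to a small search. Prices and demands below are illustrative assumptions.

```python
def saa_newsvendor(demands, price, cost):
    """Sample average approximation for the newsvendor problem:
    pick the order quantity maximizing average profit over the
    empirical demand distribution. An optimal SAA order always
    coincides with one of the observed demand values."""
    def avg_profit(q):
        return sum(price * min(q, d) - cost * q for d in demands) / len(demands)
    return max(demands, key=avg_profit)

# Sell at 5.0, buy at 1.5 -> critical ratio (5 - 1.5)/5 = 0.7,
# so SAA orders the smallest observed demand whose empirical
# CDF reaches 0.7.
demands = [10, 12, 15, 18, 20]
q = saa_newsvendor(demands, price=5.0, cost=1.5)
```

The robust variant proposed in the paper would instead maximize the worst-case expected profit over a hypothesis-test-based neighborhood of this empirical distribution.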
A Data-Driven Approach to Investor Risk-Preferences
(with Dimitris Bertsimas.)
Accurately specifying an investor's risk preferences is critical to many financial applications. Typical industry practice asks investors to self-describe as "conservative" or "risky," yet such self-descriptions are notoriously vague and uninformative. In this work we take a data-driven perspective. Using ideas from inverse and robust optimization, we construct risk preferences that are consistent with an investor's historical portfolio holdings. Specifically, our technique recovers a coherent risk measure approximately describing her behavior. This risk measure can then be used to suggest portfolio allocation strategies or cluster similar investors. Empirical evidence suggests the fitting procedure is robust to data error and computationally efficient.
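The coherent risk measures this procedure recovers include familiar examples such as conditional value-at-risk (CVaR). As background only (this is a standard empirical CVaR computation, not the paper's fitting procedure, and the loss data is made up):

```python
import math

def empirical_cvar(losses, alpha=0.9):
    """Empirical conditional value-at-risk at level alpha: the
    average of the worst (1 - alpha) fraction of observed losses.
    CVaR is a canonical example of a coherent risk measure."""
    srt = sorted(losses, reverse=True)
    k = max(1, math.ceil((1 - alpha) * len(srt)))
    return sum(srt[:k]) / k

# Illustrative historical portfolio losses.
losses = [1.0, -0.5, 2.0, 0.3, 4.0, -1.2, 0.8, 1.5, 3.2, 0.1]
risk = empirical_cvar(losses, alpha=0.8)  # mean of the worst 20%
```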
Data-Driven Approaches to Robust Unit Commitment: A Case-Study in NE ISO
(with Dimitris Bertsimas.)

Talks


  • Data-Driven Robust Optimization, INFORMS 2013, MSOM 2013, ICCMS 2013

  • Learning Investor Risk Preferences through Inverse Optimization, INFORMS 2013

  • Constructing Risk Preferences from Data, INFORMS 2012

  • Inverse Optimization and Estimation, ISMP 2012, INFORMS 2012 (Invited Session Chair)

  • Inverse Optimization and the Black-Litterman Model, INFORMS 2011