Workshop on the Intersections of Machine Learning and Parameter Estimation in Control

57th IEEE Conference on Decision and Control
Fontainebleau, Miami Beach, FL
December 16, 2018



The focus of this workshop is the analytical foundations of Machine Learning (ML) algorithms and their intersection with parameter estimation in control. By machine learning we mean the use of statistical techniques to enable computer systems to “learn” from data, typically via offline training methods. From a function approximation perspective, this learning corresponds to iteratively adjusting the parameters of the approximating function, frequently via gradient descent, so as to minimize an underlying cost function. Control systems, on the other hand, focus on methods for automated regulation and tracking in engineering systems, with the goal of providing guarantees of stability, robustness, and convergence in the presence of various uncertainties. Off-line and on-line learning of the underlying control parameters has been analyzed extensively over the past four to five decades in sub-disciplines of control such as system identification and adaptive control. It is in these sub-disciplines that parameter estimation occurs, either implicitly or explicitly, and an obvious intersection with ML thus arises. In fact, when one writes down the dynamics of Stochastic Gradient Descent (SGD), the control gain update law in adaptive control, or the MAP parameter estimate in regression (Bayesian or otherwise), the three sets of equations are essentially identical in structure. Interesting and rather subtle differences do exist, however. For instance, normalization has always been explicit in adaptive control and regression, but not in SGD. SGD, on the other hand, has been studied within the context of momentum, leading to a class of accelerated methods — ideas largely absent from the adaptive control literature.
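To make the structural parallel concrete, one hedged way to write the three update laws side by side is sketched below (the notation is illustrative rather than taken from any one speaker's talk: θ denotes the parameter estimate, φ the regressor/feature vector, e the prediction error, and γ a step size or adaptation gain):

```latex
% Stochastic gradient descent (discrete time):
\theta_{k+1} = \theta_k - \gamma \, \nabla_\theta \, \ell(\theta_k; x_k)

% Gradient-based adaptive law (continuous time, with explicit normalization):
\dot{\theta}(t) = -\gamma \, \frac{\phi(t)\, e(t)}{1 + \phi(t)^\top \phi(t)}

% Online regression update with squared-error loss (MAP-style point estimate):
\theta_{k+1} = \theta_k - \gamma \, \phi_k \left( \phi_k^\top \theta_k - y_k \right)
```

All three adjust θ along a negative gradient direction scaled by an error signal; the denominator in the adaptive law is an instance of the explicit normalization referred to above.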
Another area of intersection between the two fields arises when one treats SGD explicitly as a dynamical system and leverages the broad array of tools in control theory to study its robustness properties. The goal of this workshop is to understand the aforementioned intersections between ML and control systems. The three major thrusts of this workshop are:

  1. Methods for combining the analytical rigor in control systems with the representational power of ML algorithms.

  2. Parameter update laws combining features and analysis from control and accelerated methods.

  3. Characterizing convergence in deep learning.
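As a minimal illustration of the dynamical-systems viewpoint mentioned above, the sketch below (an assumption-laden toy example, not drawn from any speaker's material) treats gradient descent on a quadratic cost as a linear dynamical system, so that convergence reduces to a standard stability test on the spectral radius of the state-transition matrix:

```python
import numpy as np

# Toy setting: f(x) = 0.5 * x^T Q x with eigenvalues m = 1, L = 10.
# Gradient descent x_{k+1} = x_k - eta * Q x_k is the linear system
# x_{k+1} = A x_k with A = I - eta * Q.
Q = np.diag([1.0, 10.0])
m, L = 1.0, 10.0
eta = 2.0 / (m + L)            # classical step size choice for this eigenvalue range

A = np.eye(2) - eta * Q        # state-transition matrix of the "closed loop"
rho = max(abs(np.linalg.eigvals(A)))
print(rho)                     # spectral radius 9/11 ~= 0.818 < 1: linear convergence
```

The same recipe extends to momentum methods by augmenting the state with the previous iterate, which is one entry point into the robust-control analysis of algorithms featured in the afternoon talks.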

The speakers in this workshop were selected to present a cohesive picture of the state of the art in training machine learning models and in parameter convergence for dynamical systems.


9:00am – 9:30am Introduction

  • Joseph E. Gaudio

9:30am – 10:15am Andre Wibisono | Georgia Tech

  • A variational perspective on accelerated methods in optimization

10:15am – 10:30am Coffee Break

10:30am – 11:15am Ashia Wilson | Microsoft Research

  • Dynamical systems perspective toward gradient based optimization

11:15am – 12:00pm Anuradha Annaswamy | MIT

  • The Use of Neural Networks for Parameter Estimation and Control in Dynamic Systems

12:00pm – 1:30pm Lunch

1:30pm – 2:15pm Laurent Lessard | University of Wisconsin-Madison

  • Robust control approach to algorithm analysis and design

2:15pm – 3:00pm Cyril Zhang | Princeton

  • Spectral Filtering for General Linear Dynamical Systems

3:00pm – 3:15pm Coffee Break

3:15pm – 4:00pm Frank Lewis | University of Texas at Arlington

  • Unified Reinforcement Learning/Parameter Estimation Structures for Real-Time Optimal Control and Differential Graphical Games

4:00pm – 4:45pm Travis Gibson | Harvard Medical School

  • Topics on gradient descent for deep linear networks

4:45pm – 5:30pm Panel Discussion

  • Moderators: Travis E. Gibson | Joseph E. Gaudio