BACK TO INDEX

Publications about 'LQR'
Articles in journal or book chapters
  1. L. Cui, Z. P. Jiang, and E. D. Sontag. Small-disturbance input-to-state stability of perturbed gradient flows: Applications to LQR problem. Systems and Control Letters, 2024. Note: To appear. Preprint (2023): arXiv arxiv.org/abs/2310.02930. [PDF] Keyword(s): gradient systems, direct optimization, input-to-state stability, ISS.
    Abstract:
    This paper studies the effect of perturbations on the gradient flow of a general constrained nonlinear programming problem, where the perturbation may arise from inaccurate gradient estimation in the setting of data-driven optimization. Under suitable conditions on the objective function, the perturbed gradient flow is shown to be small-disturbance input-to-state stable (ISS), which implies that, in the presence of a small-enough perturbation, the trajectory of the perturbed gradient flow must eventually enter a small neighborhood of the optimum. This work was motivated by the question of robustness of direct methods for the linear quadratic regulator problem, and specifically the analysis of the effect of perturbations caused by gradient estimation or round-off errors in policy optimization. Interestingly, we show small-disturbance ISS for three of the most common optimization algorithms: standard gradient flow, natural gradient flow, and Newton gradient flow.
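The small-disturbance ISS property described above can be illustrated numerically in the simplest setting. The following is a minimal sketch, not code from the paper: gradient descent on the cost of a scalar LQR problem, with the exact gradient corrupted by small additive noise (standing in for inaccurate gradient estimation). The iterate settles into a small neighborhood of the optimal gain rather than drifting away. The plant, weights, step size, and noise level are all illustrative assumptions.

```python
import numpy as np

# Hypothetical scalar example (not from the paper): plant x' = a*x + b*u,
# feedback u = -k*x, quadratic cost with weights q, r, initial state x(0) = 1.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

def lqr_cost(k):
    # J(k) = integral of q*x^2 + r*u^2; closed loop x' = (a - b*k)*x
    # is stable for k > a/b, giving the closed-form cost below.
    return (q + r * k**2) / (2.0 * (b * k - a))

def lqr_grad(k):
    # exact derivative of lqr_cost with respect to the gain k
    return (2.0 * r * k * (b * k - a) - b * (q + r * k**2)) / (2.0 * (b * k - a)**2)

rng = np.random.default_rng(0)
k = 3.0        # stabilizing initial gain (k > a/b)
eta = 0.1      # step size
for _ in range(2000):
    disturbance = 0.01 * rng.standard_normal()   # small gradient-estimation error
    k -= eta * (lqr_grad(k) + disturbance)

# stationary point of J: k* = (a + sqrt(a^2 + b^2*q/r)) / b
k_star = (a + np.sqrt(a**2 + b**2 * q / r)) / b
print(f"final gain {k:.4f}, optimal gain {k_star:.4f}")
```

Despite the persistent disturbance, the gain remains trapped near the optimum (here k* = 1 + sqrt(2)), which is the qualitative behavior that small-disturbance ISS guarantees.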


Conference articles
  1. A. C. B. de Olivera, M. Siami, and E. D. Sontag. Remarks on the gradient flow for linear neural network based feedback for the LQR Problem. In Proc. 2024 63rd IEEE Conference on Decision and Control (CDC), 2024. Note: Submitted. Keyword(s): neural networks, overparametrization, gradient descent, input-to-state stability, gradient systems, feedback control, LQR.
    Abstract:
    Motivated by the current interest in using artificial intelligence (AI) tools in control design, this paper takes first steps towards bridging results on gradient methods for solving the LQR control problem with neural networks. More specifically, it considers the case where one wants to find a Linear Feed-Forward Neural Network (LFFNN) that minimizes the Linear Quadratic Regulator (LQR) cost. This work develops gradient formulas that can be used to implement the training of LFFNNs to solve the LQR problem, and derives an important conservation law of the system. This conservation law is then leveraged to prove global convergence of solutions and invariance of the set of stabilizing networks under the training dynamics. These theoretical results are followed by an extensive analysis of the simplest version of the problem (the "scalar case") and by numerical evidence that the training of general LFFNNs converges faster than traditional direct gradient methods. These results serve as an indication not only of the theoretical value of studying such a problem, but also of the practical value of LFFNNs as design tools for data-driven control applications.
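The overparametrized training described above can be sketched in the scalar case. The following is an illustrative example, not code from the paper: the gain is factored as a depth-2 linear network, k = w2*w1, and trained by gradient descent on the same scalar LQR cost. The conserved quantity checked here, w1^2 - w2^2, is the classic balancedness invariant of linear-network gradient flows, used as a stand-in for the paper's conservation law; along the exact gradient flow it is constant, and under discrete steps it drifts only at order eta^2. All constants are illustrative assumptions.

```python
import numpy as np

# Hypothetical scalar plant x' = a*x + b*u with feedback u = -(w2*w1)*x.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

def lqr_grad(k):
    # derivative of J(k) = (q + r*k^2) / (2*(b*k - a)), valid for k > a/b
    return (2.0 * r * k * (b * k - a) - b * (q + r * k**2)) / (2.0 * (b * k - a)**2)

w1, w2 = 2.0, 1.5        # k = w1*w2 = 3.0, a stabilizing initial gain
D0 = w1**2 - w2**2       # balancedness: conserved along the gradient flow
eta = 0.05
for _ in range(500):
    g = lqr_grad(w1 * w2)                 # chain rule: dJ/dw1 = w2*g, dJ/dw2 = w1*g
    w1, w2 = w1 - eta * w2 * g, w2 - eta * w1 * g

k_star = (a + np.sqrt(a**2 + b**2 * q / r)) / b
print(f"k = {w1 * w2:.4f} (k* = {k_star:.4f}), "
      f"drift of w1^2 - w2^2: {abs(w1**2 - w2**2 - D0):.2e}")
```

The product gain converges to the LQR optimum while w1^2 - w2^2 stays essentially constant, mirroring how a conservation law can constrain the training dynamics and help establish convergence.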







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.




Last modified: Wed Apr 17 19:59:03 2024
Author: sontag.


This document was translated from BibTeX by bibtex2html