
Publications by Eduardo D. Sontag in 2023
Articles in journals or book chapters
  1. M.A. Al-Radhawi, D. Del Vecchio, and E.D. Sontag. Identifying competition phenotypes in synthetic biochemical circuits. IEEE Control Systems Letters, 7:211-216, 2023. Note: Published online in 2022; in print in 2023. [PDF] Keyword(s): Resource competition, model discrimination, synthetic biology, system identification.
    Abstract:
    Synthetic gene circuits require cellular resources, which are often limited. This leads to competition for resources among different genes, which alters a synthetic genetic circuit's behavior. However, the manner in which competition impacts behavior depends on the identity of the "bottleneck" resource, which might be difficult to discern from input-output data. In this paper, we aim to classify the mathematical structures of resource competition in biochemical circuits. We find that some competition structures can be distinguished by their response to different competitors or resource levels. Specifically, we show that some response curves are always linear, convex, or concave. Furthermore, high levels of certain resources protect the behavior from low levels of competition, while others do not. We also show that competition phenotypes respond differently to various interventions. Such differences can be used to eliminate candidate competition mechanisms when constructing models from given data. On the other hand, we show that different networks can display mathematically equivalent competition phenotypes.
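
    A minimal sketch of the kind of phenotype classified here (a hypothetical quasi-steady-state resource-sharing model, not the paper's equations; all parameter values are made up): two genes with demands x1 and x2 draw on a shared pool R of a bottleneck resource, and the output of gene 1 is checked numerically for the decreasing, convex response to the competitor that such a model predicts.

      import numpy as np

      # Hypothetical model: the free fraction of the bottleneck resource is
      # R / (1 + x1/K1 + x2/K2); gene 1's output is proportional to its share.
      R, K1, K2, a1, x1 = 100.0, 5.0, 8.0, 0.3, 10.0

      def output_gene1(x2):
          """Expression level of gene 1 as a function of competitor demand x2."""
          return a1 * x1 * R / (1.0 + x1 / K1 + x2 / K2)

      x2 = np.linspace(0.0, 50.0, 201)
      y = output_gene1(x2)

      # First differences are negative and second differences are positive, so
      # in this toy model the response curve to the competitor is decreasing and
      # convex, one of the curve shapes usable to discriminate mechanisms.
      print("decreasing:", bool(np.all(np.diff(y) < 0)))
      print("convex:", bool(np.all(np.diff(y, 2) > 0)))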


  2. S. Wang, E.D. Sontag, and D.A. Lauffenburger. What cannot be seen correctly in 2D visualizations of single-cell 'omics data? Cell Systems, 14:723-731, 2023. [WWW] [PDF] Keyword(s): visualization, single-cell data, tSNE, UMAP.
    Abstract:
    Single-cell 'omics datasets are high-dimensional and difficult to visualize. A common strategy for exploring such data is to create and analyze 2D projections. Such projections may be highly nonlinear, and implementation algorithms are designed with the goal of preserving aspects of the original high-dimensional shape of the data, such as neighborhood relationships or metrics. However, important aspects of high-dimensional geometry are known from mathematical theory to have no equivalent representation in 2D, or to be subject to large distortions, and will therefore be misrepresented or even invisible in any possible 2D representation. We show that features such as quantitative distances, relative positioning, and qualitative neighborhoods of high-dimensional data points will always be misrepresented in 2D projections. Our results rely upon concepts from differential geometry, combinatorial geometry, and algebraic topology. As an illustrative example, we show that even a simple single-cell RNA sequencing dataset will always be distorted, no matter what 2D projection is employed. We also discuss how certain recently developed computational tools can help describe the high-dimensional geometric features that will necessarily be missing from any possible 2D projection.
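
    A hedged numerical illustration of why such distortion is unavoidable (a toy construction, not the paper's example): five mutually equidistant points form a regular 4-simplex, but at most three points in the plane can be mutually equidistant, so any 2D projection must distort the ten pairwise distances unevenly.

      import numpy as np

      # Five mutually equidistant points: the standard basis of R^5 (vertices of
      # a regular 4-simplex); every pairwise distance equals sqrt(2).
      X = np.eye(5)

      # A best-fit 2D linear projection (PCA via SVD on the centered data).
      Xc = X - X.mean(axis=0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      Y = Xc @ Vt[:2].T

      def pairwise(Z):
          d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
          return d[np.triu_indices(len(Z), k=1)]

      d_hi, d_lo = pairwise(X), pairwise(Y)
      print("high-dim distances all equal:", bool(np.allclose(d_hi, d_hi[0])))
      # The projected distances cannot all be equal, so the ratio exceeds 1.
      print("2D distance spread (max/min):", d_lo.max() / d_lo.min())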


Conference articles
  1. A. Duvall and E. D. Sontag. Global exponential stability or contraction of an unforced system do not imply entrainment to periodic inputs. In Proc. 2024 American Control Conference, 2023. Note: To appear. Preprint in arXiv:2310.03241.
    Abstract:
    It is often of interest to know which systems will approach a periodic trajectory when given a periodic input. Results are available for certain classes of systems, such as contracting systems, showing that they always entrain to periodic inputs. In contrast, we demonstrate that there exist systems which are globally exponentially stable yet do not entrain to a periodic input. This could be seen as surprising, since it is known that globally exponentially stable systems are in fact contracting with respect to some Riemannian metric. The paper also addresses the broader issue of entrainment when an input is added to a contractive system.
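
    As a hedged complement (a generic numerical entrainment check on a contracting example, not the paper's counterexample): for a contracting system driven by a T-periodic input, the solution converges to a unique T-periodic trajectory, which can be verified by comparing the state at times t and t + T after transients die out.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Contracting scalar system x' = -x + u(t) with a T-periodic input u.
      T = 2 * np.pi
      def f(t, x):
          return [-x[0] + np.sin(t)]

      sol = solve_ivp(f, (0.0, 50 * T), [5.0], dense_output=True,
                      rtol=1e-10, atol=1e-12)

      # Entrainment test: x(t + T) - x(t) should vanish after transients.
      t = 40 * T
      gap = abs(sol.sol(t + T)[0] - sol.sol(t)[0])
      print("periodicity gap after transients:", gap)

    The paper's point is that weakening "contracting" to "globally exponentially stable when unforced" is not enough: for the systems constructed there, this gap fails to vanish for a suitable periodic input.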


  2. Z. Liu, N. Ozay, and E. D. Sontag. On the non-existence of immersions for systems with multiple omega-limit sets. In 22nd IFAC World Congress, IFAC-PapersOnLine, volume 56, pages 60-64, 2023. Note: This is a preliminary version of the journal paper "Properties of immersions for systems with multiple limit sets with implications to learning Koopman embeddings". [PDF] [doi:10.1016/j.ifacol.2023.10.1408] Keyword(s): linear systems, nonlinear systems, observables, Koopman embedding, duality.
    Abstract:
    Linear immersions (or Koopman eigenmappings) of a nonlinear system have wide applications in prediction and control. In this work, we study the existence of one-to-one linear immersions for nonlinear systems with multiple omega-limit sets. For this class of systems, existing work shows that a discontinuous one-to-one linear immersion may exist, but it is unclear whether a continuous one-to-one linear immersion exists. Under mild conditions, we prove that systems with multiple omega-limit sets cannot admit a continuous one-to-one immersion into a class of systems that includes linear systems.
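
    For orientation, a classical case where a continuous one-to-one linear immersion does exist (a standard textbook construction with a single equilibrium, not taken from the paper): the system x1' = mu*x1, x2' = lambda*(x2 - x1^2) becomes linear in the lifted coordinates z = (x1, x2, x1^2). The sketch below verifies the lift numerically; the paper's result is that no continuous lift of this kind survives when the system has multiple omega-limit sets.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.linalg import expm

      mu, lam = -0.3, -1.0

      # Nonlinear system with a known polynomial Koopman immersion.
      def f(t, x):
          return [mu * x[0], lam * (x[1] - x[0] ** 2)]

      # Lifted linear dynamics z' = A z for z = (x1, x2, x1^2).
      A = np.array([[mu, 0.0, 0.0],
                    [0.0, lam, -lam],
                    [0.0, 0.0, 2 * mu]])

      x0 = [1.0, -2.0]
      ts = np.linspace(0.0, 5.0, 6)
      xs = solve_ivp(f, (0.0, 5.0), x0, t_eval=ts,
                     rtol=1e-10, atol=1e-12).y

      z0 = np.array([x0[0], x0[1], x0[0] ** 2])
      for k, t in enumerate(ts):
          z = expm(A * t) @ z0  # exact solution of the lifted linear system
          assert np.allclose(z[:2], xs[:, k], atol=1e-6)
      print("linear immersion verified along a sample trajectory")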


  3. A.C.B. de Oliveira, M. Siami, and E.D. Sontag. Dynamics and perturbations of overparameterized linear neural networks. In Proc. 2023 62nd IEEE Conference on Decision and Control (CDC), pages 7356-7361, 2023. Note: Extended version is "On the ISS property of the gradient flow for single hidden-layer neural networks with linear activations", arXiv https://arxiv.org/abs/2305.09904. [PDF] [doi:10.1109/CDC49753.2023.10383478] Keyword(s): neural networks, overparametrization, gradient descent, input to state stability, gradient systems.
    Abstract:
    Recent research in neural networks and machine learning suggests that using many more parameters than strictly required by the initial complexity of a regression problem can result in more accurate or faster-converging models, contrary to classical statistical belief. This phenomenon, sometimes known as "benign overfitting", raises the question of in what other ways overparameterization might affect the properties of a learning problem. In this work, we investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty in the gradient estimation. This uncertainty arises naturally if the gradient is estimated from noisy data or measured directly. Our object of study is a linear neural network with a single, arbitrarily wide, hidden layer and an arbitrary number of inputs and outputs. In this paper we solve the problem for the case where the input and output of the neural network are one-dimensional, deriving sufficient conditions for robustness of our system based on necessary and sufficient conditions for convergence in the undisturbed case. We then show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized, and discuss directions of future work that might extend our current results to more general formulations.
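
    A minimal sketch of the spurious-equilibrium phenomenon in the scalar case (hypothetical loss and parameter values, with hidden width 1 for brevity; the paper allows an arbitrarily wide hidden layer and a disturbed gradient flow): for a one-dimensional linear network y = b*a*u fit to a target gain c, the loss L(a, b) = (1/2)*(a*b - c)^2 is minimized on the hyperbola {a*b = c}, yet the gradient also vanishes at the origin, where the loss is not minimized.

      import numpy as np

      c = 2.0  # target input-output gain

      def loss(a, b):
          return 0.5 * (a * b - c) ** 2

      def grad(a, b):
          r = a * b - c
          return np.array([r * b, r * a])  # (dL/da, dL/db)

      # From a generic initialization, gradient descent reaches {a*b = c} ...
      w = np.array([0.1, -0.3])
      for _ in range(20000):
          w -= 1e-2 * grad(*w)
      print("generic init: a*b =", w[0] * w[1], " loss =", loss(*w))

      # ... but the origin is a spurious equilibrium: the gradient is zero there
      # while the loss remains at the unminimized value 0.5 * c**2.
      print("origin: grad =", grad(0.0, 0.0), " loss =", loss(0.0, 0.0))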







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.




Last modified: Wed Apr 17 19:59:02 2024
Author: sontag.


This document was translated from BibTeX by bibtex2html