This seminar features talks on interdisciplinary research applying algebra, geometry, topology, and combinatorics to fields such as statistics, optimization, computer science, electrical engineering, biology, physics, and other sciences. One of the main goals of the seminar is to connect people from pure mathematics with people from applied fields.

**Organizers:** Daniel Bernstein (dibernst@mit.edu), Diego Cifuentes (diegcif@mit.edu)

**Mailing list:** You can sign up for the mailing list here.

*The Spring 2020 seminar is cancelled due to the COVID-19 pandemic. We will resume the seminar in the Fall.*

This seminar will meet approximately every two weeks.
Abstracts and speaker information will appear below.
If you would like to give a talk on relevant work, please email one of the organizers.

**When:** February 26, 2020, 12 - 1pm

**Where:** E18-304 (IDSS)

**Speaker:** **Adit Radha**, MIT

**RSVP:** If MIT affiliated, use this link. Otherwise,
email one of the organizers.

**Title:** Over-parameterized Neural Networks Implement Associative Memory

**Abstract:** Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. Empirically, we show that: (1) overparameterized autoencoders store training samples as attractors, and thus, iterating the learned map leads to sample recovery; (2) the same mechanism allows for encoding sequences of examples, and serves as an even more efficient mechanism for memory than autoencoding. We mathematically prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
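The retrieval mechanism described in the abstract can be illustrated with a minimal sketch: train a small overparameterized autoencoder on a single example and then iterate the learned map starting from a corrupted input. This is not the paper's implementation; the architecture (one tanh hidden layer), dimensions, and training loop below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 64                 # input dim, hidden width (h >> d: overparameterized)
x = rng.normal(size=d)       # the single training example

# Illustrative two-layer autoencoder f(z) = W2 @ tanh(W1 @ z)
W1 = 0.1 * rng.normal(size=(h, d))
W2 = 0.1 * rng.normal(size=(d, h))

def f(z, W1, W2):
    return W2 @ np.tanh(W1 @ z)

# Train on the single example by gradient descent on 0.5 * ||f(x) - x||^2
lr = 0.05
for _ in range(5000):
    a = np.tanh(W1 @ x)
    err = W2 @ a - x                              # residual f(x) - x
    gW2 = np.outer(err, a)                        # gradient w.r.t. W2
    gW1 = np.outer((W2.T @ err) * (1 - a**2), x)  # gradient w.r.t. W1
    W2 -= lr * gW2
    W1 -= lr * gW1

# Retrieval: iterate the learned map from a corrupted version of x.
# If x is stored as an attractor, the iterates should approach x.
z = x + 0.5 * rng.normal(size=d)
for _ in range(100):
    z = f(z, W1, W2)

print("reconstruction error:", np.linalg.norm(f(x, W1, W2) - x))
print("distance after iteration:", np.linalg.norm(z - x))
```

Training drives the reconstruction error at `x` to near zero; whether iteration from the corrupted input actually contracts toward `x` is the attractor property the talk addresses (proved in the paper for the single-example case under their assumptions).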