Ferran Alet

I am a PhD student at MIT CSAIL, where I work on machine learning with Leslie Kaelbling, Tomás Lozano-Pérez, and Josh Tenenbaum. I am also the organizer of the MIT Embodied Intelligence Seminar.

Research: I aim to reformulate basic concepts in machine learning to radically increase its generalizability. To accomplish this, I leverage techniques from meta-learning, learning to search, program synthesis, and insights from mathematics and the physical sciences. I enjoy building collaborations to work across the entire theory-application spectrum.

Mentoring: I love mentoring students and working with them. I was recently honored with the MIT Outstanding Direct Mentor Award '21 (given to 2 PhD students across all of MIT). If you're an MIT or Harvard student interested in a UROP or an MEng, don't hesitate to reach out!

Twitter  /  Email  /  CV  /  Google Scholar  /  LinkedIn

profile photo
Invited talks
  • Princeton, April 2022: A flexible framework of machine learning
  • EPFL, March 2022: A flexible framework of machine learning
  • DeepMind continual & meta-learning seminar, March 2022: Tailoring: why adaptation is useful even when nothing changes
  • CMU Scientific ML Seminar, Jan 2022: Learning to encode and discover physics-based inductive biases
  • Caltech, Jan. 2022: Learning to encode and discover physics-based inductive biases
  • DLBCN 2021: Learning to encode and discover inductive biases (video here)
  • Meta-learning and multi-agent workshop 2020: Meta-learning and compositionality
  • ICML Graph Neural Network workshop 2020: Scaling from simple problems to complex problems using modularity
  • INRIA, June 2020: Meta-learning curiosity algorithms
  • MIT Machine Learning Tea 2019: Meta-learning and combinatorial generalization
  • UC Berkeley, Nov. 2019: Meta-learning structure (slides here)
  • KR2ML@IBM Workshop 2019: Graph Element Networks (slides here, video of very similar talk at ICML)
Papers
Noether Networks: meta-learning useful conserved quantities
Ferran Alet*, Dylan Doblar*, Allan Zhou, Joshua B. Tenenbaum, Kenji Kawaguchi, Chelsea Finn
NeurIPS 2021  

We propose to encode symmetries as conservation laws via tailoring losses and meta-learn them from raw inputs in sequential prediction problems.

website / code / Interview with Yannic Kilcher (100k-subscriber YouTube channel)
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Ferran Alet, Maria Bauza, Kenji Kawaguchi, Nurullah Giray Kuru, Tomás Lozano-Pérez, Leslie Pack Kaelbling
NeurIPS 2021; Workshop version was a Spotlight at the physical inductive biases workshop

We optimize unsupervised losses for the current input. By optimizing where we act, we bypass generalization gaps and can impose a wide variety of inductive biases.

15-minute talk
A large-scale benchmark for few-shot program induction and synthesis
Ferran Alet*, Javier Lopez-Contreras*, James Koppel, Maxwell Nye, Armando Solar-Lezama, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Joshua B. Tenenbaum
ICML 2021  
website

We generate a large quantity of diverse real programs by running code instruction-by-instruction, obtaining I/O pairs for 200k subprograms.

Meta-learning curiosity algorithms
Ferran Alet*, Martin Schneider*, Tomás Lozano-Pérez, Leslie Pack Kaelbling
ICLR 2020
code, press

By meta-learning programs instead of neural network weights, we can increase meta-learning generalization. We discover new algorithms in simple environments that generalize to complex ones.

Neural Relational Inference with Fast Modular Meta-learning
Ferran Alet, Erica Weng, Tomás Lozano-Pérez, Leslie Pack Kaelbling
NeurIPS, 2019  
code

We frame neural relational inference as a case of modular meta-learning and speed up the original modular meta-learning algorithms by two orders of magnitude, making them practical.

Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGB-D video
Maria Bauza, Ferran Alet, Yen-Chen Lin, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Phillip Isola, Alberto Rodriguez
IROS, 2019
project website / code / data / press

A diverse dataset of 250 objects pushed 250 times each, all with RGB-D video. It is the first probabilistic meta-learning benchmark.

Graph Element Networks: adaptive, structured computation and memory
Ferran Alet, Adarsh K. Jeewajee, Maria Bauza, Alberto Rodriguez, Tomás Lozano-Pérez, Leslie Pack Kaelbling
ICML, 2019   (Long talk)
talk / code

We learn to map functions to functions by combining graph networks and attention to build computational meshes, and show that this new framework can solve very diverse problems.

Modular meta-learning
Ferran Alet, Tomás Lozano-Pérez, Leslie Pack Kaelbling
CoRL, 2018  
video / code

We propose to do meta-learning by training a set of neural networks to be composable, adapting to new tasks by composing modules in novel ways, similar to how we compose known words to express novel ideas.

Finding Frequent Entities in Continuous Data
Ferran Alet, Rohan Chitnis, Tomás Lozano-Pérez, Leslie Pack Kaelbling
IJCAI, 2018  
video

People often find entities by clustering; we suggest that, instead, entities can be described as dense regions and propose a very simple algorithm for detecting them, with provable guarantees.

Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
Andy Zeng et al.
ICRA, 2018   (Best Systems Paper Award by Amazon Robotics)
talk / project website

A description of our system for the Amazon Robotics Challenge 2017 competition, in which we won the stowing task.

Mentees

Graduate students

Undergraduate students

  • Jan Olivetti
  • Javier Lopez-Contreras; moved to visiting student at UC Berkeley
  • Max Thomsen (with Maria Bauza); moved to MEng in MechE at MIT
  • Catherine Wu (with Yilun Du); continued undergrad at MIT
  • Nurullah Giray Kuru; continued undergrad at MIT
  • Margaret Wu; continued undergrad at MIT
  • Edgar Moreno; continued undergrad at UPC-CFIS
  • Shengtong Zhang; continued undergrad at MIT
  • Patrick John Chia; moved to a master's at Imperial College London
  • Catherine Zeng; continued undergrad at Harvard
  • Scott Perry; continued undergrad at MIT
