Matt Caren
mcaren [at] mit.edu
Research
At MIT, I develop computational models of how humans use their voices to communicate about sound. I'm advised by Josh Tenenbaum and Jonathan Ragan-Kelley, and mentored by Kartik Chandra and Karima Ma.
I also research real-time machine listening and music information retrieval (MIR) systems under Eran Egozy in the MIT Music Technology Lab.
At Apple, I spent several summers developing multimodal LLM systems and bioinformatics algorithms for Apple Intelligence and the Health app.
I was also a founding member of the MIT Voxel Lab, and led undergraduate committees on AI and interdisciplinary computing at the MIT Schwarzman College of Computing.
Publications
Sketching With Your Voice: "Non-Phonorealistic" Rendering of Sounds via Vocal Imitation (SIGGRAPH Asia 2024)
Matthew Caren, Kartik Chandra, Karima Ma, Joshua Tenenbaum, Jonathan Ragan-Kelley
Real-time In-browser Time Warping for Live Score Following (WAC 2024)
Matthew Caren, Eran Egozy
The KeyWI: An Expressive and Accessible Electronic Wind Instrument (NIME 2020)
Matthew Caren, Romain Michon, Matthew Wright
TRoco: A generative algorithm using jazz music theory (AIMC 2020)
Matthew Caren
Sound & Color
Melia
A bespoke digital instrument searching for expressiveness in the failures of AI audio-to-audio models.
> Performance
KeyWI
A next-generation electronic wind instrument.
Developed at Stanford's CCRMA, played by Grammy Award-winning artists on international stages.
Etude
Original music composed for a short film. Winner, Film/Media Scoring at the 2021 Marvin Hamlisch International Music Awards.
> Watch
Last updated March 2025