Assistant Professor and Director of the Computational Core
A.A. Martinos Center for Biomedical Imaging
Massachusetts General Hospital, Harvard Medical School
Research Scientist, Computer Science and Artificial Intelligence Lab
EECS, Massachusetts Institute of Technology
Contact: adalca at mit dot edu. 32 Vassar St, 32-G904, Cambridge, MA 02139
My research focuses on developing machine learning solutions for medical image analysis, with an emphasis on creating systems that enable new applications. This goal naturally introduces unique and exciting technical challenges. By bridging new AI methods with clinical and scientific needs, the work aims to open new possibilities for both clinical workflows and research studies of human disease. Our work is published in premier venues across AI, healthcare, and scientific domains.
I also consult on topics in AI and machine learning for healthcare, computer vision, and medical image analysis, drawing on our AI research and its integration with clinical and scientific workflows.
I was a postdoctoral fellow at CSAIL, MIT and MGH, Harvard Medical School, working with Mert Sabuncu and John Guttag. I completed my PhD in the Medical Vision Group, CSAIL, EECS, MIT, advised by Polina Golland.
My wife, Monica, completed her PhD at MIT in the Biology department doing exciting research in cancer biology.
Most existing imaging AI tools solve only the narrow task they were trained for, making it impossible to apply them off-the-shelf to new problems or complex end-to-end workflows. We build universal AI systems that adapt to varied imaging tasks from simple prompts, enabling segmentation, registration, measurement, and detection without task-specific retraining. Methods such as UniverSeg, Tyche, ScribblePrompt, MultiVerSeg, and most recently VoxelPrompt and Pancakes show how prompting and in-context learning can turn a single model into an adaptable assistant for real clinical and research needs.
Traditional models often fail outside the exact modality, contrast, or resolution they were trained on, limiting real-world deployment. We developed a framework for building procedural simulations of anatomy, pathology, contrast, and artifacts to train AI models that generalize across scanners, sites, and populations. This framework is now a central component of our workflow for training AI models. Tools like SynthSeg, SynthMorph, SynthStrip, and Anatomix demonstrate how synthetic diversity can produce robust medical imaging algorithms.
Existing image registration methods require manual tuning and substantial effort to get right, and are difficult to adapt to new imaging setups or clinical needs. We established core approaches in learning-based image registration, and continue to expand this infrastructure to design fast, accurate registration methods that blend learning with rigorous deformation modeling. Tools such as VoxelMorph, SynthMorph, HyperMorph, and MultiMorph enable modality-invariant alignment, large-deformation handling, and orders-of-magnitude acceleration for clinical and population-scale imaging studies.
Web design: Adrian Dalca. Based on: MiniFolio