'Have a cup of THINK' (every morning at my desk.)

Li Ding | 丁立

I'm currently at MIT, working on deep learning for perception and control of autonomous vehicles.

I will be TAing MIT 6.S094: Deep Learning for Self-Driving Cars in January 2018.

My current research involves:

  • driving scene perception
  • dense motion tracking with optical flow
  • label propagation in video sequences
  • human-computer interaction in video annotation
  • edge-case studies in image recognition

Prior to joining MIT, I worked on deep learning for human action recognition at the University of Rochester.

In addition, I'm a casual Kaggler and enjoy playing with all kinds of data. I earned a bronze medal (top 6%) in the Data Science Bowl 2017.

I'm from Shanghai, China. I like photography, electro-funk, all kinds of cuisine, and, at the moment, walking and traveling around with Pokémon Go.

Github | LinkedIn | Kaggle
liding [at] mit.edu


Driving Scene Perception

Pixel-level Tracking with Optical Flow

Edge Cases in Image Recognition

Weakly Supervised Action Localization


MIT Autonomous Vehicle Technology Study: Large-Scale Deep Learning Based Analysis of Driver Behavior and Interaction with Automation
[2017, under review] [arXiv:1711.06976]

TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation
[2017, under review] [arXiv:1705.07818]