The Learning Problem and Regularization
Tomaso Poggio
Description
We introduce the problem of learning from sparse examples, along with key terms and
concepts such as loss functions, empirical risk, true risk, generalization error, hypothesis
spaces, approximation error, and sample error. We discuss two key requirements on learning
algorithms: stability and consistency. We then describe Tikhonov regularization -- which
in our course is the algorithm with the magic.
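As a concrete illustration (a sketch, not taken from the lecture notes), Tikhonov regularization for least-squares regression minimizes the empirical risk plus a penalty on the norm of the solution, (1/n) Σᵢ (w·xᵢ − yᵢ)² + λ‖w‖², which admits a closed-form minimizer. The function and variable names below are illustrative:

```python
import numpy as np

def tikhonov_ls(X, y, lam):
    """Minimize (1/n)||Xw - y||^2 + lam * ||w||^2 in closed form.

    The minimizer solves the linear system (X^T X / n + lam * I) w = X^T y / n.
    """
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    b = X.T @ y / n
    return np.linalg.solve(A, b)

# Synthetic example: recover a linear rule from noisy samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

w_hat = tikhonov_ls(X, y, lam=0.1)
```

Larger values of λ shrink `w_hat` toward zero (more stability, larger approximation error); smaller values fit the sample more closely (smaller approximation error, less stability) -- the trade-off the lecture formalizes.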
Slides
Slides for this lecture: PDF.
Class Reference Material
Chapter 1 - Statistical Learning Theory,
Chapter 2 - Consistency, Learnability and Regularization,
L. Rosasco, T. Poggio, A Regularization Tour of Machine Learning, MIT-9.520 Lectures Notes (book draft), 2015.
Note: The course notes, in the form of the circulated book draft, are the reference material for this class.
Related and older material for this class can be accessed through the links to previous years' offerings of the course.
Further Reading