Lecture 20: Leave-one-out approximations
Sayan Mukherjee
Description
We introduce the idea of cross-validation and its extreme form,
leave-one-out. We show that the leave-one-out estimate of the
expected error is almost unbiased. We then present a series of
approximations and bounds on the leave-one-out error that are used
for computational efficiency, first for the least-squares loss and
then for the SVM loss function. We close by noting that, in a
worst-case analysis, the leave-one-out error is not a significantly
better estimate of the expected error than the training error.
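
For the least-squares case, the sketch below (hypothetical code, not
taken from the lecture; it assumes plain ridge regression with
regularization parameter lam) numerically checks the classical
closed-form identity: the leave-one-out residual equals the training
residual divided by 1 - H_ii, where H = X(X^T X + lam I)^{-1} X^T is
the hat matrix, so no refitting is required.

    # Minimal sketch (assumed setup, not from the lecture): verify the
    # closed-form leave-one-out residuals for ridge regression against
    # explicit refitting with each point held out.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, lam = 50, 5, 0.1

    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    # Full-data fit and hat matrix H = X (X^T X + lam I)^{-1} X^T.
    A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
    H = X @ A_inv @ X.T
    resid = y - H @ y                        # training residuals
    loo_closed = resid / (1.0 - np.diag(H))  # closed-form LOO residuals

    # Brute force: refit with each point left out.
    loo_brute = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d),
                            X[mask].T @ y[mask])
        loo_brute[i] = y[i] - X[i] @ w

    assert np.allclose(loo_closed, loo_brute)  # identity holds exactly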
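For the SVM case, a similar sketch (again hypothetical; the data,
parameters, and use of scikit-learn's SVC are assumptions) checks the
classical bound that the leave-one-out misclassification rate is at
most the fraction of support vectors, since removing a non-support
vector leaves the solution unchanged.

    # Minimal sketch (assumed setup): compare the brute-force
    # leave-one-out error of an SVM with the support-vector bound #SV/n.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 60
    X = rng.standard_normal((n, 2))
    y = np.sign(X[:, 0] + X[:, 1])  # linearly separable labels

    clf = SVC(kernel="linear", C=10.0).fit(X, y)
    sv_bound = len(clf.support_) / n  # fraction of support vectors

    errors = 0
    for i in range(n):
        mask = np.arange(n) != i
        clf_i = SVC(kernel="linear", C=10.0).fit(X[mask], y[mask])
        errors += clf_i.predict(X[i:i + 1])[0] != y[i]
    loo_error = errors / n

    print(f"LOO error {loo_error:.3f} <= #SV/n {sv_bound:.3f}")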
Slides
Slides for this lecture: PS, PDF
Suggested Reading