Leave-one-out (LOO) and leave-p-out (LPO) cross-validation
Leave-$p$-out (LPO) cross-validation, as its name suggests, involves reserving $p$ examples as test examples and using the remaining $n - p$ as training data. Because each model still trains on nearly the full dataset, this is extremely data-efficient for small $p$.
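As a concrete illustration, here is a minimal sketch of LPO split enumeration, assuming scikit-learn (the text does not name a library; `LeavePOut` and the toy data are illustrative):

```python
import numpy as np
from sklearn.model_selection import LeavePOut

# Toy dataset with n = 4 examples; LeavePOut(p=2) enumerates all
# C(4, 2) = 6 ways of reserving 2 examples for testing.
X = np.arange(8).reshape(4, 2)
y = np.array([0, 0, 1, 1])

for train_idx, test_idx in LeavePOut(p=2).split(X):
    print("train:", train_idx, "test:", test_idx)
```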
LPO also allows the practitioner to trade off computational cost against score variance by varying $p$. Shuffle-and-split is generally better for this purpose, since it offers a linear trade-off between cost and variance via the number of splits; it is not, however, exhaustive like LPO.
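For comparison, a shuffle-and-split sketch (again assuming scikit-learn, here its `ShuffleSplit`): the number of splits, and hence the cost, is set directly, but the sampled test sets need not cover every possibility.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(8).reshape(4, 2)

# n_splits caps the cost directly: 3 random splits, each reserving
# 2 examples for testing. Unlike LPO, the enumeration is not
# exhaustive, and test sets can repeat across splits.
for train_idx, test_idx in ShuffleSplit(n_splits=3, test_size=2,
                                        random_state=0).split(X):
    print("train:", train_idx, "test:", test_idx)
```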
A complete pass requires training $\binom{n}{p}$ models. Even for small $p$, this can be an impracticably large number. Hence this strategy is usually reserved for very small datasets where every example is precious.
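A quick back-of-the-envelope check with Python's `math.comb` shows how fast $\binom{n}{p}$ grows, even for a modest $n = 100$:

```python
from math import comb

# Models required for one complete LPO pass over n = 100 examples.
for p in (1, 2, 3, 5):
    print(f"p = {p}: {comb(100, p):,} models")
# p = 1:        100 models
# p = 2:      4,950 models
# p = 3:    161,700 models
# p = 5: 75,287,520 models
```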
When $p = 1$, we call this procedure leave-one-out (LOO) cross-validation. This is as data-efficient as it gets, and requires training only $n$ models.
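A minimal LOO sketch, again assuming scikit-learn; the estimator and synthetic data are placeholders. `LeaveOneOut` yields exactly $n$ splits, so `cross_val_score` fits $n$ models, each scored on its single held-out example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (X[:, 0] > 0).astype(int)  # synthetic labels for illustration

# One model per example: 20 fits, each tested on one held-out point.
scores = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut())
print(len(scores), scores.mean())
```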