Shuffle-and-split cross-validation involves first randomly shuffling the dataset and then taking the first (or last) k examples as test data, with the rest used for training. A model is then trained and evaluated on this train/test split. Repeating this procedure n times results in n models, each trained on an independently drawn random split of the data.
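
The procedure is short enough to sketch directly. The helper below (shuffle_and_split is a hypothetical name introduced here) shuffles the indices and slices off the test set on each repetition; scikit-learn's ShuffleSplit implements the same idea out of the box.

```python
import numpy as np

def shuffle_and_split(n_samples, n_splits, test_size, rng=None):
    """Yield (train_idx, test_idx) pairs: shuffle, then slice off the test set."""
    rng = np.random.default_rng(rng)
    n_test = int(round(n_samples * test_size))
    for _ in range(n_splits):
        perm = rng.permutation(n_samples)   # random shuffle of all indices
        test_idx = perm[:n_test]            # first examples become the test set
        train_idx = perm[n_test:]           # the rest are used for training
        yield train_idx, test_idx

# Example: 5 independent random splits of a 10-example dataset, 30% held out.
for train_idx, test_idx in shuffle_and_split(10, n_splits=5, test_size=0.3, rng=0):
    print("train:", train_idx, "test:", test_idx)
```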

The advantage of shuffle-and-split is that it provides a roughly linear tradeoff between evaluation metric variance and computational cost: the number of splits is chosen independently of the split size. On the other hand, it cannot be made exhaustive, since it samples random splits rather than enumerating every possible one. When both thoroughness and control are required, leave-p-out is preferred.
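
To see why an exhaustive scheme quickly becomes impractical, compare the number of models each approach requires; the figures below are purely illustrative.

```python
from math import comb

# Exhaustive leave-p-out must evaluate every possible test set of size p,
# so the number of models grows combinatorially with dataset size.
n, p = 100, 2
print(comb(n, p))   # 4950 splits for leave-2-out on 100 examples

# Shuffle-and-split lets you pick the number of splits directly,
# trading metric variance against compute roughly linearly.
n_splits = 20       # 20 models, regardless of dataset size
```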

Sort of like bootstrap, but not really

Shuffle-and-split is superficially similar to bootstrap sampling. The difference is that bootstrap resamples the data with replacement and uses the resampled datasets to refit and assess the same model, whereas shuffle-and-split draws each train/test split without replacement and fits a fresh model on every split.
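
A minimal sketch of the sampling difference, assuming NumPy: bootstrap draws indices with replacement, so the same example can appear several times, while shuffle-and-split permutes the indices once and partitions them into disjoint train and test sets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
indices = np.arange(n)

# Bootstrap: sample with replacement; duplicates are expected,
# and the resample is typically the same size as the original dataset.
bootstrap_sample = rng.choice(indices, size=n, replace=True)

# Shuffle-and-split: sample without replacement; train and test are disjoint.
perm = rng.permutation(indices)
test_idx, train_idx = perm[:3], perm[3:]

print("bootstrap:", np.sort(bootstrap_sample))
print("train:", np.sort(train_idx), "test:", np.sort(test_idx))
```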