Shuffle-and-split cross-validation involves first randomly shuffling the dataset and then taking the first (or last) fraction of the shuffled samples as the test set, with the remainder used for training; this shuffle-then-split step is repeated for a chosen number of iterations.
The advantage of shuffle-and-split is that the number of splits can be chosen independently of the train/test sizes, giving a roughly linear tradeoff between the variance of the evaluation metric and computational cost. On the other hand, it is not exhaustive: because splits are drawn at random, some samples may appear in several test sets while others never appear in any. When both thoroughness and fine-grained control are required, leave-p-out is preferred.
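The procedure above can be sketched with scikit-learn's `ShuffleSplit` (the text does not name a library, so this is one possible implementation; the dataset and split sizes are illustrative):

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(20).reshape(10, 2)  # toy dataset: 10 samples, 2 features

# 5 independent shuffle-then-split iterations, each holding out 30% for testing.
ss = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
for train_idx, test_idx in ss.split(X):
    # Within one iteration, train and test indices are disjoint,
    # but the same sample may land in the test set of several iterations.
    print("train:", train_idx, "test:", test_idx)
```

Note that `n_splits` is set independently of `test_size`, which is exactly the tradeoff knob described above: more splits lower the variance of the metric estimate at proportionally higher cost.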
Sort of like bootstrap, but not really
Shuffle-and-split is superficially similar to bootstrap sampling. The difference is that bootstrap resamples the data with replacement, so the same observation can appear multiple times in one resample, and the resampled data are used to train the same model; shuffle-and-split instead partitions a random permutation without replacement into disjoint training and test sets.
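To make the contrast concrete, here is a minimal NumPy sketch (array sizes and the 7/3 split are illustrative, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)

# Bootstrap: draw WITH replacement, same size as the original.
# Some observations repeat; others are left out ("out-of-bag").
boot = rng.choice(data, size=len(data), replace=True)

# Shuffle-and-split: permute, then partition WITHOUT replacement
# into disjoint train and test sets.
perm = rng.permutation(data)
train, test = perm[:7], perm[7:]
```

In the bootstrap sample, duplicates are expected; in the shuffle-and-split partition, every observation appears exactly once, on one side or the other.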