When classes are highly imbalanced, a naive cross-validation strategy can produce high variance in the evaluation metric: a purely random split may place very few (or even zero) minority-class samples in a given test set. To prevent this, it helps to stratify the train and test selections so that each set preserves the overall class ratios. The downside, of course, is that the splits are artificially constrained, so the cross-validation can provide misleading results.
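As a minimal sketch of the idea, the snippet below builds a toy 90/10 imbalanced label vector and uses scikit-learn's `train_test_split` with the `stratify` argument so that both resulting sets keep the same positive-class fraction:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy imbalanced labels: 90 negatives, 10 positives (10% positive rate).
y = np.array([0] * 90 + [1] * 10)
X = np.arange(len(y)).reshape(-1, 1)

# Passing the labels via `stratify` forces both the train and test
# sets to preserve the overall 9:1 class ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

print(y_tr.mean())  # fraction of positives in the train set: 0.1
print(y_te.mean())  # fraction of positives in the test set: 0.1
```

Without `stratify`, a test set of 20 samples could easily end up with 0, 1, or 4 positives, which is exactly the metric-variance problem described above.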

Stratification can be applied to most cross-validation methods, though not to those that select very small test sets: with leave-one-out, for example, each test set contains a single sample, so its class proportions cannot match the overall ratios.

Rather than exposing stratification as a flag, scikit-learn provides a separate stratified Python class for each compatible CV method: `StratifiedKFold` mirrors `KFold`, and `StratifiedShuffleSplit` mirrors `ShuffleSplit`.
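A short example of one of these classes: `StratifiedKFold` splits the same toy 90/10 data so that every test fold keeps the overall 10% positive rate (with 5 folds of 20 samples, each fold gets exactly 2 positives):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Same toy imbalanced data: 90 negatives, 10 positives.
y = np.array([0] * 90 + [1] * 10)
X = np.arange(len(y)).reshape(-1, 1)

# StratifiedKFold is the stratified counterpart of KFold; note that
# split() takes y as well, since the labels drive the stratification.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    print(y[test_idx].mean())  # positive fraction per fold: 0.1
```

A plain `KFold` called on the same data would ignore `y` when forming folds, so the per-fold positive fraction would fluctuate from fold to fold.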