Standard forms of cross-validation assume that examples are i.i.d. When this is not the case (for example, repeated measurements from the same patient), naive partitioning can leak information between the training and test sets. In these cases, a cross-validation protocol can be modified to keep each group entirely on one side of the split. This adjustment can be applied to any cross-validation protocol, so long as the test split is large enough to hold the largest group in its entirety.
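
As a minimal sketch of what this looks like in practice, here is scikit-learn's `GroupKFold` applied to made-up data (the sample values, labels, and group assignments below are illustrative assumptions, not from any real dataset):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Illustrative data: 12 samples from 4 groups (say, 4 patients with
# 3 measurements each). Values and group labels are made up.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))
y = rng.integers(0, 2, size=12)
groups = np.repeat([0, 1, 2, 3], 3)

# GroupKFold keeps every sample of a group in the same fold, so no
# group ever appears in both the training and the test set.
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups):
    assert not set(groups[train_idx]) & set(groups[test_idx])
    print("test groups:", sorted(set(groups[test_idx])))
```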

Note that, depending on the sizes of the groups, this strategy may still not yield satisfactory performance estimates. For example, in the limit where one group comprises the entire test set, the results are likely to be distorted: that fold's score reflects a single group rather than the underlying distribution.
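
This failure mode is easy to reproduce. In the sketch below (group sizes are hypothetical), one group holds most of the data, so one test fold ends up consisting of that group alone:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical imbalance: group 0 has 50 samples, groups 1 and 2
# have 5 each. With 3 splits, the big group fills one test fold.
groups = np.array([0] * 50 + [1] * 5 + [2] * 5)
X = np.zeros((len(groups), 1))

for train_idx, test_idx in GroupKFold(n_splits=3).split(X, groups=groups):
    print("test size:", len(test_idx),
          "test groups:", sorted(set(groups[test_idx])))
```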

For reasons I cannot fully fathom, scikit-learn chose to create a separate class for each supported combination of grouping, stratification, and cross-validation strategy.
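
For reference, these are the group-aware splitter classes as of recent scikit-learn versions. The upside of the design is that they all expose the same `split(X, y, groups)` interface, so they can be swapped into `cross_val_score` or `GridSearchCV` interchangeably:

```python
from sklearn.model_selection import (
    GroupKFold,            # k-fold with groups kept intact
    StratifiedGroupKFold,  # k-fold + approximate class stratification
    GroupShuffleSplit,     # repeated random group-level splits
    LeaveOneGroupOut,      # one group held out per split
    LeavePGroupsOut,       # every combination of p groups held out
)

# Any of these can serve as the cv argument of cross_val_score.
cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=0)
```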