The implementation of the decision tree checks that a node has at least 2 * min_samples_leaf samples before calling the splitter, which is all well and good.
Then, in the implementation of the splitter, after sorting by a chosen feature, we have this while loop, which runs over all possible split positions and picks the best one based on impurity:
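To make this concrete, here is a minimal Python sketch of what that loop does; the function name best_split, the Gini computation, and the label vector y are my own illustration, not the actual Cython in _splitter.pyx:

```python
import numpy as np

def best_split(Xf, y, min_samples_leaf):
    """Simplified stand-in for the splitter's while loop.

    Scans every split position in the sorted feature values and keeps
    the one with the lowest weighted Gini impurity. Positions that fall
    between equal feature values, or that would leave fewer than
    min_samples_leaf samples on either side, are skipped. If nothing
    survives, the sentinel 'end' (== len(Xf)) is returned.
    """
    order = np.argsort(Xf)
    Xf, y = np.asarray(Xf)[order], np.asarray(y)[order]
    end = len(Xf)

    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    best_pos, best_impurity = end, np.inf  # 'best position' defaults to 'end position'
    for pos in range(1, end):  # split puts samples [0:pos] left, [pos:end] right
        if Xf[pos] == Xf[pos - 1]:
            continue  # cannot split between two equal feature values
        if pos < min_samples_leaf or end - pos < min_samples_leaf:
            continue  # one child would end up too small
        impurity = (pos * gini(y[:pos]) + (end - pos) * gini(y[pos:])) / end
        if impurity < best_impurity:
            best_pos, best_impurity = pos, impurity
    return best_pos
```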
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_splitter.pyx#L401
It seems to me that it's entirely possible, due to the conditions in this while loop, that we don't find any appropriate splits at all. An example would be:
Xf = [0, 0, 0, 1]
min_samples_leaf = 2
In this case the only position where the feature value changes is between the third and fourth samples, but splitting there would leave just one sample on the right, violating min_samples_leaf, and every other position sits between equal feature values. So we don't find any appropriate splits, and 'best position' defaults to 'end position'. Am I missing something here?
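Running the sketch above on exactly this example (with a made-up label vector y) shows the sentinel coming back:

```python
# The only value boundary is at position 3, which would leave a single
# sample on the right; positions 1 and 2 sit between equal feature
# values. Every candidate is skipped, so the sentinel comes back.
pos = best_split([0, 0, 0, 1], y=[0, 0, 1, 1], min_samples_leaf=2)
print(pos)  # 4, i.e. 'end position': no valid split was found
```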
Found my own answer, ha.
If we don't find any appropriate splits, 'best position' defaults to 'end position'; but 'end position' is not a valid split position, and the caller of the splitter checks for this and marks the node as a leaf when it happens.
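In terms of the sketch above, the caller-side guard would look something like this (again a hypothetical simplification, not the real tree builder):

```python
def build_node(Xf, y, min_samples_leaf):
    # Pre-check from the tree builder: too few samples to split at all.
    if len(Xf) < 2 * min_samples_leaf:
        return "leaf"
    pos = best_split(Xf, y, min_samples_leaf)
    if pos >= len(Xf):  # sentinel 'end position': no valid split was found
        return "leaf"
    return f"split at position {pos}"
```

So on the example above, build_node([0, 0, 0, 1], [0, 0, 1, 1], 2) returns "leaf".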