Tags: python, machine-learning, cluster-analysis, scikit-learn

Is silhouette coefficient subsampling stratified in sklearn?


I'm again having trouble with the scikit-learn silhouette coefficient (my first question is here: silhouette coefficient in python with sklearn). My clustering can be very unbalanced, but it involves a lot of individuals, so I want to use the sample_size parameter of silhouette_score. I was wondering whether the subsampling is stratified, i.e. drawn with respect to the clusters. I take the iris dataset as an example, but my own dataset is far bigger (which is why I need sampling). My code is:

import pandas as pd
from sklearn import datasets
from sklearn.metrics import silhouette_score

iris = datasets.load_iris()
col = iris.feature_names
name = iris.target_names
X = pd.DataFrame(iris.data, columns=col)
y = iris.target
# subsample 50 of the 150 points before scoring
s = silhouette_score(X.values, y, metric='euclidean', sample_size=50)

which works. But now if I bias the labels so that almost all points fall into one cluster:

# 148 points in cluster 0, one point each in clusters 1 and 2
y[0:148] = 0
y[148] = 1
y[149] = 2
print y
s = silhouette_score(X.values, y, metric='euclidean', sample_size=50)

I get:

ValueError                                Traceback (most recent call last)
<ipython-input-12-68a7fba49c54> in <module>()
      4 y[149] =2
      5 print y
----> 6 s = silhouette_score(X.values, y, metric='euclidean',sample_size=50)

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_score(X, labels, metric, sample_size, random_state, **kwds)
     82         else:
     83             X, labels = X[indices], labels[indices]
---> 84     return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
     85 
     86 

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_samples(X, labels, metric, **kwds)
    146                   for i in range(n)])
    147     B = np.array([_nearest_cluster_distance(distances[i], labels, i)
--> 148                   for i in range(n)])
    149     sil_samples = (B - A) / np.maximum(A, B)
    150     # nan values are for clusters of size 1, and should be 0

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in _nearest_cluster_distance(distances_row, labels, i)
    200     label = labels[i]
    201     b = np.min([np.mean(distances_row[labels == cur_label])
--> 202                for cur_label in set(labels) if not cur_label == label])
    203     return b

/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.pyc in amin(a, axis, out, keepdims)
   1980         except AttributeError:
   1981             return _methods._amin(a, axis=axis,
-> 1982                                 out=out, keepdims=keepdims)
   1983         # NOTE: Dropping the keepdims parameter
   1984         return amin(axis=axis, out=out)

/usr/lib/python2.7/dist-packages/numpy/core/_methods.pyc in _amin(a, axis, out, keepdims)
     12 def _amin(a, axis=None, out=None, keepdims=False):
     13     return um.minimum.reduce(a, axis=axis,
---> 14                             out=out, keepdims=keepdims)
     15 
     16 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):

ValueError: zero-size array to reduction operation minimum which has no identity

an error which is due, I think, to the fact that the sampling is random rather than stratified, so the two tiny clusters were simply not drawn into the subsample.
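
To illustrate what I suspect is happening, here is a rough paraphrase of the subsampling step (the permutation call is my guess at the implementation; the traceback only shows the indexing on line 83):

import numpy as np

labels = np.zeros(150, dtype=int)
labels[148] = 1
labels[149] = 2

# silhouette_score appears to draw indices uniformly at random,
# ignoring cluster membership entirely
rng = np.random.RandomState(0)
indices = rng.permutation(len(labels))[:50]
print(np.unique(labels[indices]))

There is roughly a 44% chance that neither point 148 nor point 149 is drawn; the subsample then contains a single cluster, _nearest_cluster_distance has no "other" cluster to average over, and np.min reduces an empty array, which is exactly the error above.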

Am I correct ?


Solution

  • I think you are right: the current implementation draws the subsample uniformly at random, so it does not support stratified (balanced) resampling, and with very unbalanced clusters the sample can end up containing a single label.
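
    As a workaround, you can stratify the subsample yourself and then call silhouette_score without sample_size, so sklearn does no further resampling. A minimal sketch, assuming proportional allocation with at least one point per cluster (stratified_silhouette is my own helper, not part of sklearn):

    import numpy as np
    from sklearn import datasets
    from sklearn.metrics import silhouette_score

    def stratified_silhouette(X, labels, sample_size, random_state=None):
        # Draw from each cluster a number of points proportional to its
        # size, but always at least one, then score the subsample with
        # no further resampling by sklearn.
        rng = np.random.RandomState(random_state)
        labels = np.asarray(labels)
        n = len(labels)
        indices = []
        for lab in np.unique(labels):
            members = np.where(labels == lab)[0]
            k = max(1, int(round(sample_size * len(members) / float(n))))
            k = min(k, len(members))
            indices.extend(rng.choice(members, size=k, replace=False))
        indices = np.asarray(indices)
        return silhouette_score(X[indices], labels[indices], metric='euclidean')

    iris = datasets.load_iris()
    y = iris.target.copy()
    y[0:148] = 0
    y[148] = 1
    y[149] = 2
    print(stratified_silhouette(iris.data, y, sample_size=50, random_state=0))

    Note that the subsample can end up a point or two larger than sample_size, because every cluster contributes at least one point; singleton clusters in the subsample get a silhouette of 0, as the source above notes for clusters of size 1.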