Tags: python, k-means, unsupervised-learning

K-Means Result-Index differs in second run


I am running K-Means on some statistical data. My matrix size is [192x31634]. K-Means performs well and produces the 7 centroids that I want, so my result is [192x7].

As a self-check, I store the index values I obtain from the K-Means run in a dictionary.

    centroids,idx = runkMeans(X_train, initial_centroids, max_iters)
    resultDict.update({'centroid' : centroids})
    resultDict.update({'idx' : idx})
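
For reference, the dictionary is then pickled to disk along these lines (a minimal sketch; only the load is shown below, and the filename is taken from there):

    import pickle
    # Persist the K-Means results so the self-check below can reload them
    with open("MyDictionary.p", "wb") as f:
        pickle.dump(resultDict, f)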

Then I test my K-Means on the same data I used to find the centroids. Strangely, my result differs:

    resultDict = pickle.load(open("MyDictionary.p", "rb"))
    currentIdx = findClosestCentroids(X_train, resultDict['centroid'])
    print("idx Differs:", np.count_nonzero(currentIdx != resultDict['idx']))

Output:

    idx Differs: 189

Can someone explain this difference to me? I turned the algorithm's max iterations up to 50, which seemed like more than enough. @Joe Halliwell pointed out that K-Means is non-deterministic, but findClosestCentroids is called by runkMeans itself, so I do not see why the two idx results can differ. Thanks for any ideas.

Here is my code:

    def findClosestCentroids(X, centroids):
        K = centroids.shape[0]
        m = X.shape[0]
        dist = np.zeros((K,1))
        idx = np.zeros((m,1), dtype=int)
        #each of the m rows of X is one data point
        for i in range(m):
            x = X[i,:]
            #each of the K rows of centroids is one centroid
            for j in range(K):
                c = centroids[j,:]
                #Euclidean distance between centroid c and point x
                dist[j] = np.linalg.norm(c-x)
            #assign the index of the centroid with minimal distance
            idx[i] = np.argmin(dist)
        return idx
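
As an aside, the nested loops can be replaced by a single vectorized distance computation that should return the same idx. A sketch, assuming SciPy is available (findClosestCentroidsVectorized is a hypothetical name):

    from scipy.spatial.distance import cdist

    def findClosestCentroidsVectorized(X, centroids):
        # dists[i, j] is the Euclidean distance between point i and centroid j
        dists = cdist(X, centroids)
        # Index of the closest centroid per point, shaped (m,1) like the loop version
        return np.argmin(dists, axis=1).reshape(-1, 1)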

    def runkMeans(X, initial_centroids, max_iters):
        #Initialize values
        K = initial_centroids.shape[0]
        centroids = initial_centroids
        for i in range(max_iters):
            print("K_Means iteration:",i)
            #For each example in X, assign it to the closest centroid
            idx = findClosestCentroids(X, centroids)
            #Given the memberships, compute new centroids
            centroids = computeCentroids(X, idx, K)
        #Note: centroids are updated once more after idx was last computed, so the
        #returned idx only matches the returned centroids once the run has converged
        return centroids,idx
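
computeCentroids is not listed here; it takes the mean of all points currently assigned to each centroid, roughly like this (a sketch of the standard update step, not necessarily the exact code used):

    def computeCentroids(X, idx, K):
        n = X.shape[1]
        centroids = np.zeros((K, n))
        for j in range(K):
            # All points currently assigned to centroid j
            points = X[(idx == j).ravel(), :]
            if points.shape[0] > 0:
                # The new centroid is the mean of its assigned points
                centroids[j,:] = points.mean(axis=0)
        return centroids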

Edit: I turned my max_iters up to 60 and get

    idx Differs: 0

It seems the run simply had not converged yet; that was the problem.
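
A convergence check would avoid guessing max_iters. A minimal sketch (runkMeansUntilConverged is a hypothetical variant) that stops once the assignments no longer change, at which point the returned idx and centroids agree:

    def runkMeansUntilConverged(X, initial_centroids, max_iters):
        centroids = initial_centroids
        prev_idx = None
        for i in range(max_iters):
            idx = findClosestCentroids(X, centroids)
            # If assignments are unchanged, the centroid update is a no-op: stop
            if prev_idx is not None and np.array_equal(idx, prev_idx):
                break
            centroids = computeCentroids(X, idx, initial_centroids.shape[0])
            prev_idx = idx
        return centroids, idx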


Solution

  • K-means is a non-deterministic algorithm: the standard implementation picks its initial centroids at random, so two runs need not produce the same clustering. One typically controls for this by fixing the random seed. For example, scikit-learn's implementation provides the random_state argument for this purpose:

    from sklearn.cluster import KMeans
    import numpy as np
    X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
    kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
    

    See the documentation at https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
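
    The self-check from the question can be reproduced against the fitted model: once the fit has converged, the labels stored during training and a fresh assignment against the final centroids should agree (a small sketch using the kmeans object from above):

        # labels_ from fit vs. labels recomputed against the final cluster centers
        print("idx Differs:", np.count_nonzero(kmeans.labels_ != kmeans.predict(X)))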