The choice of the k initial cluster centers has a large influence on the clustering result, because the first step of the algorithm picks k objects at random to serve as the initial centers, each initially representing one cluster. In every iteration, the algorithm reassigns each remaining object in the data set to the nearest cluster, according to its distance from each cluster center. Once all data objects have been examined, one iteration is complete and the new cluster centers are computed. If the value of the cost J does not change from one iteration to the next, the algorithm has converged.
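The cost J referred to above is the within-cluster sum of squared distances. The text does not write it out; in standard notation (with μ_j denoting the center of cluster C_j, symbols chosen here for illustration) it is:

```latex
J = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert^2
```

Each assignment step can only decrease J (every point moves to its nearest center), and each update step can only decrease it further (the mean minimizes squared distance within a cluster), which is why an unchanged J signals convergence.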
Test data set
First, import a data set from sklearn. We use the well-known iris data set.
```python
from sklearn import datasets
from matplotlib import pyplot as plt

iris = datasets.load_iris()
X, y = iris.data, iris.target

data = X[:, [1, 3]]  # keep only two of the four features for easy visualization
plt.scatter(data[:, 0], data[:, 1])
```
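For reference, the iris data set contains 150 samples with 4 features each, so the two-column slice above has shape (150, 2). A quick check:

```python
from sklearn import datasets

iris = datasets.load_iris()
X, y = iris.data, iris.target
data = X[:, [1, 3]]
print(X.shape, data.shape)
```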
```python
import numpy as np

def rand_center(data, k):
    """Generate k centers within the range of the data set."""
    n = data.shape[1]  # number of features
    centroids = np.zeros((k, n))  # init with (0, 0), ...
    for i in range(n):
        dmin, dmax = np.min(data[:, i]), np.max(data[:, i])
        centroids[:, i] = dmin + (dmax - dmin) * np.random.rand(k)
    return centroids
```
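A quick sanity check (not part of the original text) confirms that the generated centers always land inside the bounding box of the data, since each coordinate is interpolated between that feature's minimum and maximum. The function is repeated here so the snippet runs on its own:

```python
import numpy as np

def rand_center(data, k):
    """Generate k centers within the range of the data set."""
    n = data.shape[1]
    centroids = np.zeros((k, n))
    for i in range(n):
        dmin, dmax = np.min(data[:, i]), np.max(data[:, i])
        centroids[:, i] = dmin + (dmax - dmin) * np.random.rand(k)
    return centroids

np.random.seed(0)
data = np.random.rand(100, 2) * 10  # toy data in [0, 10) x [0, 10)
centers = rand_center(data, k=3)
inside = (centers >= data.min(axis=0)).all() and (centers <= data.max(axis=0)).all()
print(centers.shape, inside)
```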
```python
def kmeans(data, k=2):
    def _distance(p1, p2):
        """Return the Euclidean distance between two points.

        p1 = np.array([0, 0]), p2 = np.array([1, 1]) => 1.414
        """
        tmp = np.sum((p1 - p2) ** 2)
        return np.sqrt(tmp)

    def _rand_center(data, k):
        """Generate k centers within the range of the data set."""
        n = data.shape[1]  # number of features
        centroids = np.zeros((k, n))  # init with (0, 0), ...
        for i in range(n):
            dmin, dmax = np.min(data[:, i]), np.max(data[:, i])
            centroids[:, i] = dmin + (dmax - dmin) * np.random.rand(k)
        return centroids

    def _converged(centroids1, centroids2):
        # if the centroids did not change, we say the algorithm 'converged'
        set1 = set(tuple(c) for c in centroids1)
        set2 = set(tuple(c) for c in centroids2)
        return set1 == set2

    n = data.shape[0]  # number of entries
    centroids = _rand_center(data, k)
    label = np.zeros(n, dtype=int)   # track the nearest centroid
    assessment = np.zeros(n)         # squared error of each assignment
    converged = False

    while not converged:
        old_centroids = np.copy(centroids)
        for i in range(n):
            # determine the nearest centroid and track it with label
            min_dist, min_index = np.inf, -1
            for j in range(k):
                dist = _distance(data[i], centroids[j])
                if dist < min_dist:
                    min_dist, min_index = dist, j
            label[i] = min_index
            assessment[i] = min_dist ** 2

        # update centroids
        for m in range(k):
            centroids[m] = np.mean(data[label == m], axis=0)
        converged = _converged(old_centroids, centroids)

    return centroids, label, np.sum(assessment)
```
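Because the initial centers are random, a single run can settle in a poor local minimum; a common remedy is to run the algorithm several times and keep the result with the smallest total squared error. A compact sketch of that restart pattern (a vectorized variant written for this note, which samples the initial centers from the data points to avoid empty clusters; the names `kmeans_once` and `best_of` are not from the original text):

```python
import numpy as np

def kmeans_once(data, k, rng):
    """One k-means run; centers are initialized from k random data points."""
    centroids = data[rng.choice(len(data), size=k, replace=False)].copy()
    while True:
        # squared distance from every point to every centroid, shape (n, k)
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        label = d2.argmin(axis=1)
        sse = d2[np.arange(len(data)), label].sum()
        new_centroids = np.array([data[label == m].mean(axis=0) for m in range(k)])
        if np.allclose(new_centroids, centroids):
            return centroids, label, sse
        centroids = new_centroids

def best_of(data, k, n_restarts=10, seed=0):
    """Run kmeans_once several times and keep the run with the smallest SSE."""
    rng = np.random.default_rng(seed)
    runs = [kmeans_once(data, k, rng) for _ in range(n_restarts)]
    return min(runs, key=lambda r: r[2])

# demo on two well-separated blobs
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
                  rng.normal(5, 0.3, size=(50, 2))])
centroids, label, sse = best_of(data, k=2)
```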