Can someone explain the difference between the two normalized versions of mutual information (NMI and AMI)? Both measure the agreement between two label assignments, ignoring permutations.
Consider this code:
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
# AMI score:
score_ami = metrics.adjusted_mutual_info_score(labels_true, labels_pred)
print(score_ami)
# NMI Score
score_nmi = metrics.normalized_mutual_info_score(labels_true, labels_pred)
print(score_nmi)
Adjusted Mutual Information is adjusted for chance: the expected mutual information between two random assignments is subtracted, so a random clustering scores close to 0 (and can even be slightly negative). NMI performs no such adjustment, so even randomly shuffled labels typically receive a clearly positive score.
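This asymmetry is easy to check empirically. A minimal sketch (the number of trials, the seed, and the choice of 3 random clusters are arbitrary):

```python
import numpy as np
from sklearn import metrics

rng = np.random.default_rng(0)
labels_true = [0, 0, 0, 1, 1, 1]

# Score many random label assignments against the fixed ground truth.
nmi_scores, ami_scores = [], []
for _ in range(200):
    labels_rand = rng.integers(0, 3, size=len(labels_true))
    nmi_scores.append(
        metrics.normalized_mutual_info_score(labels_true, labels_rand))
    ami_scores.append(
        metrics.adjusted_mutual_info_score(labels_true, labels_rand))

# NMI stays clearly positive on average for random labels,
# while AMI averages out near 0.
print(np.mean(nmi_scores))
print(np.mean(ami_scores))
```

On my understanding, this is exactly the behavior the scikit-learn docs describe: AMI's baseline under random labeling is 0, NMI's is not.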