Tags: python, machine-learning, scikit-learn, probability, anomaly-detection

Converting IsolationForest decision scores to probabilities


I am looking to create a generic function to convert the output of sklearn's IsolationForest decision_function into true probabilities in [0.0, 1.0].

I am aware of, and have read, the original paper, and I understand mathematically that the output of that function is not a probability; it is instead derived from the average path length the base estimators need to isolate a sample.
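
For context, a quick check with sklearn's documented API (on the same iris data as my example below) shows that the raw output is just a shifted, path-length-based score and is not confined to [0.0, 1.0]:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import IsolationForest

X, _ = load_iris(return_X_y=True)
model = IsolationForest(random_state=60).fit(X)

# score_samples returns the (negated) path-length-based anomaly score from the paper;
# decision_function merely shifts it by the fitted offset_ -- neither is a probability
raw = model.score_samples(X)
dec = model.decision_function(X)
print(np.allclose(dec, raw - model.offset_))  # True
print(dec.min(), dec.max())                   # roughly within [-0.5, 0.5], not [0, 1]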

Problem

I want to convert that output to a probability in the form of a tuple (x,y) where x=P(anomaly) and y=1-x.

Current Approach

def convert_probabilities(predictions, scores):
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    new_scores = [(1, 1) for _ in range(len(scores))]

    # Split the indices by predicted label (-1 = anomaly, 1 = normal)
    anomalous_idxs = [i for i in range(len(predictions)) if predictions[i] == -1]
    regular_idxs = [i for i in range(len(predictions)) if predictions[i] == 1]

    anomalous_scores = np.abs(np.asarray([scores[i] for i in anomalous_idxs]))
    regular_scores = np.abs(np.asarray([scores[i] for i in regular_idxs]))

    # Scale each group to [0, 1] independently of the other
    scaler = MinMaxScaler()
    anomalous_scores_scaled = scaler.fit_transform(anomalous_scores.reshape(-1, 1))
    regular_scores_scaled = scaler.fit_transform(regular_scores.reshape(-1, 1))

    # Each tuple is (P(anomaly), P(normal))
    for i, j in zip(anomalous_idxs, range(len(anomalous_scores_scaled))):
        new_scores[i] = (anomalous_scores_scaled[j][0], 1 - anomalous_scores_scaled[j][0])

    for i, j in zip(regular_idxs, range(len(regular_scores_scaled))):
        new_scores[i] = (1 - regular_scores_scaled[j][0], regular_scores_scaled[j][0])

    return new_scores

modified_scores = convert_probabilities(model_predictions, model_decisions)

Minimal, Reproducible Example

import pandas as pd
from sklearn.datasets import make_classification, load_iris
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

# Get data
X, y = load_iris(return_X_y=True, as_frame=True)
anomalies, anomalies_classes = make_classification(n_samples=int(X.shape[0]*0.05), n_features=X.shape[1], hypercube=False, random_state=60, shuffle=True)
anomalies_df = pd.DataFrame(data=anomalies, columns=X.columns)

# Split into train/test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=60)

# Combine testing data
X_test['anomaly'] = 1
anomalies_df['anomaly'] = -1
X_test = pd.concat([X_test, anomalies_df], ignore_index=True)
y_test = X_test['anomaly']
X_test.drop('anomaly', inplace=True, axis=1)

# Build a model
model = IsolationForest(n_jobs=1, bootstrap=False, random_state=60)

# Fit it
model.fit(X_train)

# Test it
model_predictions = model.predict(X_test)
model_decisions = model.decision_function(X_test)

# Print results
for a,b,c in zip(y_test, model_predictions, model_decisions):
    print_str = """
    Class: {} | Model Prediction: {} | Model Decision Score: {}
    """.format(a,b,c)

    print(print_str)

Problem

modified_scores = convert_probabilities(model_predictions, model_decisions)

# Print results
for a,b in zip(model_predictions, modified_scores):
    ans = False
    if a==-1:
        if b[0] > b[1]:
            ans = True
        else:
            ans = False
    elif a==1:
        if b[1] > b[0]:
            ans=True
        else:
            ans=False
    print_str = """
    Model Prediction: {} | Model Decision Score: {} | Correct: {}
    """.format(a,b, str(ans))

    print(print_str)

This shows some odd results, such as:

Model Prediction: 1 | Model Decision Score: (0.17604259932311161, 0.8239574006768884) | Correct: True
Model Prediction: 1 | Model Decision Score: (0.7120367886017022, 0.28796321139829784) | Correct: False
Model Prediction: 1 | Model Decision Score: (0.7251531538304419, 0.27484684616955807) | Correct: False
Model Prediction: -1 | Model Decision Score: (0.16776449326185877, 0.8322355067381413) | Correct: False
Model Prediction: 1 | Model Decision Score: (0.8395087028516501, 0.1604912971483499) | Correct: False

Model Prediction: 1 | Model Decision Score: (0.0, 1.0) | Correct: True

How could it be possible for the prediction to be -1 (anomaly) but the probability of an anomaly to be only 37%? Or for the prediction to be 1 (normal) but the probability of being normal to be only 26%?

Note that the toy dataset is labeled, but an unsupervised anomaly detection algorithm obviously assumes no labels.


Solution

  • Though it comes months later, there is an answer to this question.

    A paper published in 2011 addresses exactly this topic: unifying anomaly scores into probabilities.

    In fact, the pyod library has a common predict_proba method, which offers an option (method='unify') to use this unification approach; a short usage sketch is shown at the end of this answer.

    Here is a code implementation of that (influenced by their source):

    import numpy as np
    from scipy.special import erf

    def convert_probabilities(data, model):
        decision_scores = model.decision_function(data)
        # Two columns, one per class (inlier / outlier)
        probs = np.zeros([data.shape[0], 2])
        # Standardize the scores, then squash them through the Gaussian error function
        pre_erf_score = ( decision_scores - np.mean(decision_scores) ) / ( np.std(decision_scores) * np.sqrt(2) )
        erf_score = erf(pre_erf_score)
        # pyod's detectors assign larger scores to outliers, sklearn's IsolationForest to inliers,
        # so interpret the two columns with the corresponding convention in mind
        probs[:, 1] = erf_score.clip(0, 1).ravel()
        probs[:, 0] = 1 - probs[:, 1]
        return probs
    

    (For reference, pyod does have an Isolation Forest implementation)
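
    If you would rather stay within pyod than adapt the snippet above, here is a minimal usage sketch of the same idea via its IForest wrapper and predict_proba with method='unify' (it assumes the X_train / X_test frames built in the example above):

        from pyod.models.iforest import IForest

        # pyod's wrapper around an Isolation Forest; larger decision scores mean "more anomalous"
        clf = IForest(random_state=60)
        clf.fit(X_train)

        # Column 0 is the inlier probability, column 1 the outlier probability
        probs = clf.predict_proba(X_test, method='unify')
        print(probs[:5])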