Tags: python-2.7, scikit-learn, libsvm, softmax

How can I convert SVM class probabilities to logits?


I would like to convert the class probabilities output by an SVM into logits.

In order to get the probability of each class:

from sklearn import svm

model = svm.SVC(probability=True)
model.fit(X, Y)
# probabilities for the first test sample, ordered as model.classes_
results = model.predict_proba(test_data)[0]
# gets a dictionary of {'class_name': probability}
prob_per_class_dictionary = dict(zip(model.classes_, results))
# gets a list of ['most_probable_class', 'second_most_probable_class', ..., 'least_probable_class']
results_ordered_by_probability = map(lambda x: x[0], sorted(zip(model.classes_, results), key=lambda x: x[1], reverse=True))

What do I want to do with these probabilities?

Convert the probabilities to logits.

Why ?

I would like to merge the results of the SVM with those of a neural network whose loss is computed on output logits. Consequently, I'm looking for a way to transform the probabilities output by the SVM into logits, then merge the neural network logits with the SVM logits using equal weights:

SVM logits + neural network logits = overall_logits

overall_probabilities = softmax(overall_logits)
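In plain NumPy, the intended fusion could be sketched as follows (the helper names, the clipping of probabilities to avoid `log(0)`, and the example probability vectors are my own assumptions, not part of the original question):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

def probs_to_logits(p, eps=1e-12):
    # log-probabilities serve as logits (defined up to an additive constant);
    # clip to avoid log(0)
    return np.log(np.clip(p, eps, 1.0))

svm_probs = np.array([0.6, 0.3, 0.1])   # example SVM predict_proba output
nn_probs  = np.array([0.7, 0.2, 0.1])   # example NN softmax output

overall_logits = probs_to_logits(svm_probs) + probs_to_logits(nn_probs)
overall_probs  = softmax(overall_logits)
```

Because the logits here are log-probabilities, summing them and applying softmax amounts to normalizing the element-wise product of the two probability vectors.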

EDIT :

Is summing the logits and then taking probabilities equivalent to summing the probabilities directly and dividing by 2?

proba_nn_class_1=[0.8,0.002,0.1,...,0.00002]

proba_SVM_class_1=[0.6,0.1,0.21,...,0.000003]

overall_proba=[(0.8+0.6)/2,(0.002+0.1)/2,(0.1+0.21)/2,..., (0.00002+0.000003)/2 ]

Is this process numerically equivalent to summing the logits of the SVM and the NN and then getting the probabilities via softmax?
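A quick numeric check (the probability vectors are illustrative values I chose, not from the question) suggests the two routes generally disagree: summing log-probabilities and applying softmax normalizes the element-wise product of the vectors, while the direct average is an arithmetic mean:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

p_nn  = np.array([0.7, 0.2, 0.1])
p_svm = np.array([0.6, 0.3, 0.1])

# route 1: sum logits (log-probabilities), then softmax
via_logits = softmax(np.log(p_nn) + np.log(p_svm))

# route 2: average the probabilities directly
via_average = (p_nn + p_svm) / 2

print(via_logits)   # ≈ [0.857, 0.122, 0.020]
print(via_average)  # [0.65, 0.25, 0.10]
```

The first route sharpens the distribution (classes must score well under both models), whereas averaging is more forgiving of disagreement.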

Thank you


Solution

  • import torch
    
    def probs_to_logits(probs, is_binary=False):
        r"""
        Converts a tensor of probabilities into logits. For the binary case,
        this denotes the probability of occurrence of the event indexed by `1`.
        For the multi-dimensional case, the values along the last dimension
        denote the probabilities of occurrence of each of the events.
        """
        ps_clamped = clamp_probs(probs)
        if is_binary:
            return torch.log(ps_clamped) - torch.log1p(-ps_clamped)
        return torch.log(ps_clamped)
    
    def clamp_probs(probs):
        # keep probabilities strictly inside (0, 1) so log() stays finite;
        # the original source uses an internal _finfo helper, replaced here
        # with the public torch.finfo so the snippet runs as-is
        eps = torch.finfo(probs.dtype).eps
        return probs.clamp(min=eps, max=1 - eps)
    
    

    From https://github.com/pytorch/pytorch/blob/master/torch/distributions/utils.py#L107
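As a round-trip sanity check, feeding a probability vector through these helpers and applying softmax recovers the original distribution (the helpers are restated below so the snippet is self-contained; the example tensor values are made up):

```python
import torch

def clamp_probs(probs):
    # keep probabilities strictly inside (0, 1) so log() stays finite
    eps = torch.finfo(probs.dtype).eps
    return probs.clamp(min=eps, max=1 - eps)

def probs_to_logits(probs, is_binary=False):
    # log-probabilities act as logits for the multi-class case
    ps_clamped = clamp_probs(probs)
    if is_binary:
        return torch.log(ps_clamped) - torch.log1p(-ps_clamped)
    return torch.log(ps_clamped)

svm_probs = torch.tensor([0.6, 0.3, 0.1])  # example predict_proba output
logits = probs_to_logits(svm_probs)
recovered = torch.softmax(logits, dim=-1)  # softmax undoes the log, since probs sum to 1
```

These logits can then be summed with the neural network's logits before a final softmax, as described in the question.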