I want to create a version of scikit-learn's predict_proba from a list of predictions.
I currently have a list that looks like this:
[[0,1,0,0,0,1,1,0,0,0],[0,1,0,1,0,1,1,1,0,0],[0,0,0,0,0,1,1,0,0,0]]
I want to find the probability of the first value of each list being a 0 or a 1, and then the same for each subsequent value.
I.e. the output would look like this:
[[0.33,0.66],[0,1],[0.66,0.33]] ... etc.
I've written the code below and it works fine, but it seems clunky and I'm sure there is a better way to achieve my goal.
Any suggestions?
import numpy as np

#create np array from list
ar = np.array([[0,1,0,0,0,1,1,0,0,0],[0,1,0,1,0,1,1,1,0,0],[0,0,0,0,0,1,1,0,0,0]])
#calculate unique values and sort in order
uni = np.unique(ar)
uni.sort()
#create new pred list
new_pred = []
#transpose and iterate
for row in ar.transpose():
    #create dict with keys as unique values
    val_dic = {k: 0 for k in uni}
    #create list for row probabilities
    row_pred = []
    #iterate row and increment dict count if value found
    for val in row:
        if val in val_dic.keys():
            val_dic[val] = val_dic.get(val, 0) + 1
    #calc row total
    total = sum(val_dic.values())
    #append row list with probabilities
    for val in val_dic.values():
        row_pred.append(val/total)
    #append final output list
    new_pred.append(row_pred)
print(new_pred)
output:
[[1.0, 0.0], [0.3333333333333333, 0.6666666666666666], [1.0, 0.0], [0.6666666666666666, 0.3333333333333333], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.6666666666666666, 0.3333333333333333], [1.0, 0.0], [1.0, 0.0]]
If your ar consists only of 0 and 1, as in your question, you can simplify your code like this:
import numpy as np
ar = np.array([[0,1,0,0,0,1,1,0,0,0],[0,1,0,1,0,1,1,1,0,0],[0,0,0,0,0,1,1,0,0,0]])
prob_1 = ar.T.sum(axis=1) / len(ar) # <-- max sum of row is len(ar) == 3
prob_0 = 1.0 - prob_1
print(np.column_stack((prob_0, prob_1)))
Prints:
[[1.         0.        ]
 [0.33333333 0.66666667]
 [1.         0.        ]
 [0.66666667 0.33333333]
 [1.         0.        ]
 [0.         1.        ]
 [0.         1.        ]
 [0.66666667 0.33333333]
 [1.         0.        ]
 [1.         0.        ]]
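If the array can contain more than two distinct labels, the prob_1 / prob_0 trick no longer applies, but the same idea generalizes. Here is a minimal sketch (the three-class ar below is made up for illustration) that compares each column against the sorted unique labels via broadcasting and averages over the rows:

```python
import numpy as np

# Hypothetical three-class example (labels 0, 1, 2)
ar = np.array([[0, 1, 2, 0],
               [0, 2, 2, 1],
               [0, 1, 0, 1]])

classes = np.unique(ar)  # sorted unique labels, e.g. [0 1 2]

# ar.T[:, :, None] has shape (n_columns, n_rows, 1); comparing it to
# classes (shape (n_classes,)) broadcasts to (n_columns, n_rows, n_classes).
# The mean over the rows axis gives per-column class probabilities.
probs = (ar.T[:, :, None] == classes).mean(axis=1)
print(probs)  # first column is [0, 0, 0] -> probabilities [1, 0, 0]
```

Each row of probs sums to 1, and the columns line up with classes, which mirrors how scikit-learn orders the columns of predict_proba by classes_.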