I'm trying to predict several labels for a given text. It works well for a single label, but I don't know how to implement a confidence score for multi-label prediction.
I have data in the following denormalized format:
┌────┬──────────┬────────┐
│ id │ Topic    │ Text   │
├────┼──────────┼────────┤
│ 1  │ Apples   │ FooBar │
│ 1  │ Oranges  │ FooBar │
│ 1  │ Kiwis    │ FooBar │
│ 2  │ Potatoes │ BazBak │
│ 3  │ Carrot   │ BalBan │
└────┴──────────┴────────┘
Each text can have one or more topics assigned. So far, this is what I've come up with. First, I prepare my data: tokenize, stem, etc.
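(My tokenize_and_stem helper is roughly the following; an NLTK-based sketch, the exact details don't matter for the question:)

import nltk
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")

def tokenize_and_stem(row, stemmer):
    # row[2] is the Text column; lowercase, tokenize, and stem each word
    tokens = nltk.word_tokenize(row[2].lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha()]

With that in place, the preparation looks like this: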
import random

import numpy as np
import tensorflow as tf
import tflearn

df = ...  # read data from CSV into a pandas DataFrame
categories = ["Apples", "Oranges", "Kiwis", "Potatoes", "Carrot"]
words = []
docs = []

for index, row in df.iterrows():
    stems = tokenize_and_stem(row, stemmer)
    words.extend(stems)
    # keep the stems together with the topic (row[1] is the Topic column)
    docs.append((stems, row[1]))

# remove duplicates
words = sorted(list(set(words)))
# create training data
training = []
# create an empty array for our output
output_empty = [0] * len(categories)

for doc in docs:
    # initialize our bag of words (bow) for each document in the list
    bow = []
    # list of tokenized words for the pattern
    token_words = doc[0]
    # create our bag-of-words array
    for w in words:
        bow.append(1 if w in token_words else 0)
    # one-hot output row that tells which category this bow belongs to
    output_row = list(output_empty)
    output_row[categories.index(doc[1])] = 1
    # each training example is the bag-of-words vector plus its output row
    training.append([bow, output_row])

# shuffle our features and turn them into an np.array, as TensorFlow takes numpy arrays
random.shuffle(training)
training = np.array(training)

# train_x contains the bag of words and train_y contains the label/category
train_x = list(training[:, 0])
train_y = list(training[:, 1])
Next, I create my training model:
# reset underlying graph data
tf.reset_default_graph()
# Build neural network
net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
net = tflearn.regression(net)
# Define model and setup tensorboard
model = tflearn.DNN(net, tensorboard_dir='tflearn_logs')
# Start training (apply gradient descent algorithm)
model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)
model.save('model.tflearn')
After that, I try to predict my topics:
df = ...  # read data from Excel into a pandas DataFrame

for index, row in df.iterrows():
    # row[2] is the Text column
    prediction = model.predict([get_bag_of_words(row[2])])
    print(categories[np.argmax(prediction)])
As you can see, I pick the maximum of the prediction, which works well for a single topic. In order to pick multiple topics, I need some kind of confidence score that tells me when to stop, because I can't blindly apply an arbitrary threshold.
Any suggestions?
Instead of using a softmax activation on your output layer, you should use a sigmoid activation. Your loss function should still be cross entropy. This is the key change you need for multi-label classification.
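Concretely, only the output activation and the loss change; everything else in your network definition stays the same. A minimal sketch against your code (note the assumption: for true multi-label training, each train_y row should be multi-hot, i.e. contain a 1 for every topic of that text, rather than one one-hot row per (text, topic) pair):

# reset underlying graph data
tf.reset_default_graph()
# Build neural network; assumes train_x / train_y are prepared as above,
# but with multi-hot output rows for texts that have several topics
net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
# sigmoid scores each topic independently in [0, 1]
net = tflearn.fully_connected(net, len(train_y[0]), activation='sigmoid')
# binary cross-entropy treats each output as its own yes/no decision
net = tflearn.regression(net, loss='binary_crossentropy')
model = tflearn.DNN(net, tensorboard_dir='tflearn_logs')
model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)

Each output is then an independent per-topic probability, so at prediction time you can keep every topic whose score clears a cutoff; with sigmoid outputs, 0.5 is the natural default rather than an arbitrary choice:

# reusing your get_bag_of_words helper; `text` stands for one input string
prediction = model.predict([get_bag_of_words(text)])[0]
topics = [categories[i] for i, score in enumerate(prediction) if score > 0.5]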
The problem with softmax is that it creates a probability distribution over your outputs. So if classes A and B are both strongly represented, softmax over three classes might give you a result like [0.49, 0.49, 0.02], when you would prefer something more like [0.99, 0.99, 0.01].
The sigmoid activation does exactly that: it squashes the real-valued logits (the values of the last layer before any output transformation is applied) into the [0, 1] range, which is necessary for the cross-entropy loss function, and it does so for each output independently.
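To see the difference, here is a quick numeric check with made-up logits (hypothetical values, purely for illustration):

import numpy as np

logits = np.array([4.0, 4.0, -1.0])  # hypothetical logits: A and B strong, C weak

softmax = np.exp(logits) / np.exp(logits).sum()
sigmoid = 1.0 / (1.0 + np.exp(-logits))

print(softmax)  # ~[0.498, 0.498, 0.003] -- forced to sum to 1
print(sigmoid)  # ~[0.982, 0.982, 0.269] -- each output scored on its own

Softmax has to trade the classes off against each other, while sigmoid happily assigns high scores to both A and B at once.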