I am making a network using the Keras library.
Let's suppose that I have a 2D matrix:
[ 0 0 1 2
  0 1 2 5
  1 0 0 1 ]
What I want to do is obtain the following matrix:
[ 0.00 0.00 0.02 0.10
  0.00 0.02 0.10 0.99
  0.02 0.00 0.00 0.02 ]
As shown, I want the layer to emphasize only the largest element of the 2D array.
How can I achieve this?
Can this be achieved simply by applying softmax twice?
If I understand correctly, you want to take the softmax over the entire 2D array. If so, note that applying the softmax directly to a 2D array will compute it along the last axis, i.e. over each row separately. E.g.:
import numpy as np
import tensorflow as tf

X = np.log([[1, 1, 2], [3, 3, 3]])
Y = tf.keras.layers.Activation('softmax')(X)  # softmax along the last axis
assert np.allclose(Y, [[0.25, 0.25, 0.5], [0.3333, 0.3333, 0.3333]], atol=1e-4)  # each row sums to 1
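For reference, here's a minimal self-contained sketch (assuming TensorFlow 2.x with eager execution) showing the same row-wise behaviour with tf.nn.softmax along the last axis:

import numpy as np
import tensorflow as tf

X = np.log([[1.0, 1.0, 2.0], [3.0, 3.0, 3.0]])
Y = tf.nn.softmax(X, axis=-1).numpy()  # softmax along the last axis

print(Y)               # [[0.25 0.25 0.5 ], [0.333 0.333 0.333]]
print(Y.sum(axis=-1))  # [1. 1.] -- each row sums to 1, the columns do not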
If you want the softmax over all elements of the 2D array, this should do:
X = np.log([[1, 1, 1], [1, 2, 4]])
X = np.expand_dims(X, axis=0)  # add batch dim -> shape (1, 2, 3)
X = tf.keras.layers.Reshape((-1,))(X)  # flatten to (1, 6); the batch dim is preserved (the target shape in Reshape doesn't include it)
# Equivalent to X = X.reshape(m, -1), where m is the batch dim, but a plain
# NumPy reshape will not keep track of the gradients for backprop.
# That's why it's better to use a Reshape layer.
Y = tf.keras.layers.Activation('softmax')(X)  # softmax over all 6 flattened elements
assert np.allclose(Y, [[0.1, 0.1, 0.1, 0.1, 0.2, 0.4]])
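If you also want the output to have the same shape as the input matrix (as in your example at the top), you can add a second Reshape after the softmax. A minimal sketch, assuming TensorFlow 2.x with eager execution and the 3x4 matrix from the question (the hard-coded shape is just for illustration):

import numpy as np
import tensorflow as tf

A = np.array([[0., 0., 1., 2.],
              [0., 1., 2., 5.],
              [1., 0., 0., 1.]])

X = np.expand_dims(A, axis=0)                 # add batch dim -> (1, 3, 4)
X = tf.keras.layers.Reshape((-1,))(X)         # flatten to (1, 12); batch dim preserved
Y = tf.keras.layers.Activation('softmax')(X)  # softmax over all 12 elements
Y = tf.keras.layers.Reshape((3, 4))(Y)        # back to (1, 3, 4)

print(Y[0].numpy())  # the largest entry (5) gets by far the largest weight; all 12 values sum to 1

In a real model you would replace the hard-coded (3, 4) with your known input shape, or restore it dynamically with tf.reshape and tf.shape of the original tensor.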