Tags: tensorflow, machine-learning, keras, recurrent-neural-network

Practical meaning of output in simple recurrent neural network


I am trying to learn the RNN model. Here is the model I built:

import numpy as np
import tensorflow as tf

N = 3   # number of samples
T = 10  # length of a single sample
D = 3   # number of features
K = 2   # number of output units
X = np.random.randn(N, T, D)

# Make an RNN
M = 5   # number of hidden units

i = tf.keras.layers.Input(shape=(T, D))
x = tf.keras.layers.SimpleRNN(M)(i)
x = tf.keras.layers.Dense(K)(x)

model = tf.keras.Model(i, x)

Yhat = model.predict(X[0].reshape(1, -1, D))
# output: array([[-0.67114466, -0.65754676]], dtype=float32)

I don't understand the meaning of Yhat. Here I consider X as sequential data:

[data_point_0 ... data_point_{T-1}], [data_point_0 ... data_point_{T-1}], [data_point_0 ... data_point_{T-1}]

Each data point has D = 3 features.

Here Yhat.shape==(1, 2).

2 doesn't equal D, which is the number of features. I guess model.predict() doesn't make a prediction for the next data point. If model.predict() did predict the next data point, the shape of the result would be (1, D).

Then what's the practical meaning of Yhat?


Solution

  • You should pay attention to the model architecture.

    After the RNN layer you used a Dense layer whose output dimension is K = 2. So the shape of the output you get from model.predict() is exactly as expected: one row per input sequence and one column per unit of the final Dense layer.

    If you want the output to have a different dimension, change the number of units in the Dense layer, e.g. from x = tf.keras.layers.Dense(K)(x) to x = tf.keras.layers.Dense(D)(x).

    Whether the model does or doesn't predict well is a separate question that depends on multiple factors, such as the training data, the hyperparameters, etc. Note that as written the model is never trained, so Yhat is just the output of randomly initialized weights.
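To see why the output width is set by the Dense layer and not by D, the forward pass of SimpleRNN followed by Dense can be sketched in plain NumPy (the weights below are random stand-ins, not the trained Keras parameters; variable names mirror the question's T, D, M, K):

```python
import numpy as np

T, D, M, K = 10, 3, 5, 2              # timesteps, features, hidden units, Dense units
rng = np.random.default_rng(0)

x = rng.standard_normal((T, D))       # one input sequence
Wx = rng.standard_normal((D, M))      # input -> hidden weights
Wh = rng.standard_normal((M, M))      # hidden -> hidden weights
bh = np.zeros(M)
Wo = rng.standard_normal((M, K))      # hidden -> output (Dense) weights
bo = np.zeros(K)

h = np.zeros(M)
for t in range(T):                    # SimpleRNN: h_t = tanh(x_t Wx + h_{t-1} Wh + b)
    h = np.tanh(x[t] @ Wx + h @ Wh + bh)

yhat = h @ Wo + bo                    # Dense applied to the final hidden state
print(yhat.shape)                     # (2,): the width K comes from the Dense layer, not D
```

The RNN compresses the whole sequence into a single hidden state of size M, and the Dense layer maps that state to K values, which is why Yhat has 2 columns regardless of D.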