I read an example of using an LSTM with a Conv1D layer (taken from: CNN LSTM):

Conv1D(filters=64, kernel_size=1, activation='relu')

What does filters=64 mean? Does the relu activation function work on the output of the convolution? (From what I read it seems like it does, but I'm not sure.) And what does kernel_size=1 mean, as used here?

Answer:
filters=64 means that 64 separate filters are used. Each filter outputs one channel, i.e. here 64 filters operate on the input to produce 64 different channels (or vectors). Hence the filters parameter determines the number of output channels.
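A minimal sketch of this (assuming TensorFlow/Keras, which this argument style matches; the input shape and random data are made up for illustration):

import numpy as np
from tensorflow.keras.layers import Conv1D

x = np.random.rand(1, 10, 100).astype("float32")          # (batch, steps, channels)
layer = Conv1D(filters=64, kernel_size=1, activation='relu')
print(layer(x).shape)                                     # (1, 10, 64): one output channel per filter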
kernel_size determines the size of the convolution window. With kernel_size=1, each kernel spans a single time step, so each kernel's weight is an in_channels x 1 tensor.
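You can check this by inspecting the layer's weights. Keras stores the Conv1D kernel as (kernel_size, in_channels, filters), so each of the 64 filters is an in_channels x 1 slice (the 100-channel input shape is assumed, as above):

from tensorflow.keras.layers import Conv1D

layer = Conv1D(filters=64, kernel_size=1)
layer.build(input_shape=(None, 10, 100))   # 100 input channels
kernel, bias = layer.get_weights()
print(kernel.shape)                        # (1, 100, 64): kernel_size x in_channels x filters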
And yes, the relu activation is applied to the output of the convolution operation.
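In other words, activation='relu' is shorthand for a linear convolution followed by a separate ReLU; a sketch of the equivalence (both models compute the same function):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, Activation

fused = Sequential([Conv1D(64, 1, activation='relu')])
split = Sequential([Conv1D(64, 1), Activation('relu')])   # same computation as fused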
A kernel_size=1 convolution is typically used to reduce the number of depth channels while applying a non-linearity. It computes something like a weighted average across the channels while leaving the receptive field unchanged.
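Conceptually, a kernel_size=1 convolution is just a per-timestep weighted sum across channels, i.e. a dense layer applied independently at every step. A small NumPy sketch (random weights, purely illustrative):

import numpy as np

x = np.random.rand(10, 100)        # 10 time steps, 100 channels
w = np.random.rand(100, 64)        # channel-mixing weights: 100 -> 64
b = np.zeros(64)
y = np.maximum(x @ w + b, 0.0)     # relu(weighted sum across channels)
print(y.shape)                     # (10, 64): each step still sees only itself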
In your example (filters=64, kernel_size=1, activation=relu): suppose the input feature map has size 100 x 10 (100 channels, 10 steps). Then the layer weight will have dimension 64 x 100 x 1, and the output size will be 64 x 10.
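A sketch verifying those shapes. Note that Keras Conv1D is channels-last by default, so the 100 x 10 (channels x steps) feature map is passed as (steps=10, channels=100), and the stored kernel is the 64 x 100 x 1 weight tensor with its axes transposed:

import numpy as np
from tensorflow.keras.layers import Conv1D

x = np.random.rand(1, 10, 100).astype("float32")   # 100 channels x 10 steps
layer = Conv1D(filters=64, kernel_size=1, activation='relu')
y = layer(x)
print(y.shape)                        # (1, 10, 64) -> 64 channels x 10 steps
print(layer.get_weights()[0].shape)   # (1, 100, 64) -> the 64 x 100 x 1 weights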