
Conv1D on 2D input


Can someone explain to me what happens when a keras Conv1D layer is fed 2D input? Such as:

from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
model.add(Conv1D(input_shape=(9000, 2), kernel_size=200, strides=1, filters=20))

Varying the input shape between (9000, 1) and (9000, 2) and calling model.summary(), I see that the output shape stays the same, but the number of parameters changes. So, does that mean that a different filter is trained for each channel, and the outputs are summed/averaged across the 2nd dimension before being returned? Or what?


Solution

  • In the docs you can read that the input must be 2D (the batch dimension is excluded from input_shape).

    Conv1D can be seen as a time window sliding over a sequence of vectors. The kernel is a 2D window: as wide as the vector length (the 2nd dimension of your input, i.e. the number of channels) and as long as your kernel_size.

    So indeed it is perfectly normal that your two networks have the same output shape. The number of parameters is higher in the second case because the kernels are twice as big, due to the second input dimension.

    I hope this helps :-)
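
The parameter arithmetic above can be checked without building the model. A minimal sketch, assuming Keras's default use_bias=True (one bias per filter); the helper conv1d_params is a hypothetical name, not a Keras function:

def conv1d_params(filters, kernel_size, in_channels):
    # Each filter spans the full channel depth: kernel_size * in_channels
    # weights, plus one bias term per filter.
    return filters * (kernel_size * in_channels + 1)

# Matches model.summary() for the two input shapes in the question:
print(conv1d_params(20, 200, 1))  # 4020 for input_shape=(9000, 1)
print(conv1d_params(20, 200, 2))  # 8020 for input_shape=(9000, 2)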