I defined a stack of three convolution blocks (self.convs); the input tensor has shape [100, 10, 24]:
x_convs = self.convs(Variable(torch.from_numpy(X).type(torch.FloatTensor)))
>>Variable(torch.from_numpy(X).type(torch.FloatTensor)).shape
torch.Size([100, 10, 24])
>>self.convs
ModuleList(
(0): ConvBlock(
(conv): Conv1d(24, 8, kernel_size=(5,), stride=(1,), padding=(2,))
(relu): ReLU()
(maxpool): AdaptiveMaxPool1d(output_size=10)
(zp): ConstantPad1d(padding=(1, 0), value=0)
)
(1): ConvBlock(
(conv): Conv1d(8, 8, kernel_size=(5,), stride=(1,), padding=(2,))
(relu): ReLU()
(maxpool): AdaptiveMaxPool1d(output_size=10)
(zp): ConstantPad1d(padding=(1, 0), value=0)
)
(2): ConvBlock(
(conv): Conv1d(8, 8, kernel_size=(5,), stride=(1,), padding=(2,))
(relu): ReLU()
(maxpool): AdaptiveMaxPool1d(output_size=10)
(zp): ConstantPad1d(padding=(1, 0), value=0)
)
)
When I execute x_convs = self.convs(Variable(torch.from_numpy(X).type(torch.FloatTensor))), it gives me the error
     94         registered hooks while the latter silently ignores them.
     95         """
---> 96         raise NotImplementedError
ConvBlock is defined as follows:
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, T, in_channels, out_channels, filter_size):
        super(ConvBlock, self).__init__()
        padding = self._calc_padding(T, filter_size)
        self.conv = nn.Conv1d(in_channels, out_channels, filter_size, padding=padding)
        self.relu = nn.ReLU()
        self.maxpool = nn.AdaptiveMaxPool1d(T)
        self.zp = nn.ConstantPad1d((1, 0), 0)

    def _calc_padding(self, Lin, kernel_size, stride=1, dilation=1):
        # "same" padding: keeps the sequence length Lin unchanged for stride=1
        p = int(((Lin - 1) * stride + 1 + dilation * (kernel_size - 1) - Lin) / 2)
        return p

    def forward(self, x):
        x = x.permute(0, 2, 1)   # (N, L, C) -> (N, C, L), as Conv1d expects
        x = self.conv(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = x.permute(0, 2, 1)   # back to (N, L, C)
        return x
The forward function has the correct indentation, so I cannot figure out what is going on.
You are trying to call a ModuleList, which is a list (i.e. a plain Python list object, slightly modified for use with PyTorch). It registers its members as submodules, but it does not define a forward() method of its own, which is exactly why PyTorch raises NotImplementedError when you call it.
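You can reproduce the error in isolation; a minimal sketch (the layer here is illustrative, not your exact block):

import torch
import torch.nn as nn

convs = nn.ModuleList([nn.Conv1d(24, 8, 5, padding=2)])
x = torch.randn(100, 24, 10)
convs(x)  # raises NotImplementedError: ModuleList defines no forward()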
A quick fix would be to apply the modules in self.convs one at a time:
x_convs = self.convs[0](Variable(torch.from_numpy(X).type(torch.FloatTensor)))
if len(self.convs) > 1:
    for conv in self.convs[1:]:
        x_convs = conv(x_convs)
That is, although self.convs is a list, each member of it is a Module, and each member can be called directly via its index, e.g. self.convs[an_index].
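The usual pattern is to do this iteration inside the parent module's forward(). A sketch, where the ConvNet wrapper is hypothetical and only the self.convs attribute mirrors your code:

import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self, blocks):
        super().__init__()
        self.convs = nn.ModuleList(blocks)  # registers each block's parameters

    def forward(self, x):
        for conv in self.convs:  # each member is a callable Module
            x = conv(x)
        return x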
Or, you can do it with the help of the functools module:
from functools import reduce

def apply_layer(layer_input, layer):
    return layer(layer_input)

output_of_self_convs = reduce(apply_layer, self.convs, Variable(torch.from_numpy(X).type(torch.FloatTensor)))
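Alternatively, if the blocks are only ever applied in order, nn.Sequential does implement forward() by chaining its children, so it can be called directly, unlike nn.ModuleList. A sketch with illustrative layers:

import torch
import torch.nn as nn

convs = nn.Sequential(nn.Conv1d(24, 8, 5, padding=2), nn.ReLU())
out = convs(torch.randn(100, 24, 10))  # works: Sequential chains its children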
P.S. Note that the Variable wrapper is deprecated (since PyTorch 0.4) and no longer needed; plain tensors support autograd directly.
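So the conversion can be shortened to the following (X here stands in for your NumPy array):

import numpy as np
import torch

X = np.random.rand(100, 10, 24)  # stand-in for your input
x = torch.from_numpy(X).float()  # replaces Variable(torch.from_numpy(X).type(torch.FloatTensor))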