
How to access intermediate layers' outputs using nngraph?


I need to apply a loss function to an intermediate layer (L2) representation in a network that has many layers after the L2 layer. I know how to access the output of a network in nngraph as follows:

require 'nn'
require 'nngraph'

input = nn.Identity()()
net = nn.Sequential()
net:add(nn.Linear(100, 20)):add(nn.ReLU(true)) -- L1
net:add(nn.Linear(20, 10)):add(nn.ReLU(true)) -- L2
net:add(nn.Linear(10, 2)) -- L3
output = net(input)

gmod = nn.gModule({input}, {output})

However, I don't know how to access the output of the second layer, apply a loss function (criterion) to it, and backpropagate through it in a neat way. Can anyone help me with this?


Solution

  • You should declare the intermediate layer as a separate output of the gModule; then you can access its value after any forward pass:

    input = nn.Identity()()
    L1 = nn.ReLU(true)(nn.Linear(100, 20)(input))
    L2 = nn.ReLU(true)(nn.Linear(20, 10)(L1))
    L3 = nn.Linear(10, 2)(L2)
    
    gmod = nn.gModule({input}, {L3, L2})
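
    With two outputs, `gmod:forward` returns a table `{L3out, L2out}`, and `gmod:backward` expects one gradient tensor per output in the same order. A minimal sketch of attaching a criterion to the L2 output (the criteria and targets here are illustrative choices, not part of the question):

    ```lua
    require 'nn'
    require 'nngraph'

    input = nn.Identity()()
    L1 = nn.ReLU(true)(nn.Linear(100, 20)(input))
    L2 = nn.ReLU(true)(nn.Linear(20, 10)(L1))
    L3 = nn.Linear(10, 2)(L2)
    gmod = nn.gModule({input}, {L3, L2})

    -- forward pass: outputs come back as a table, in declaration order
    local x = torch.randn(100)
    local out = gmod:forward(x)
    local l3out, l2out = out[1], out[2]

    -- one criterion per output (MSE chosen here only for illustration)
    local mainCrit = nn.MSECriterion()
    local auxCrit  = nn.MSECriterion()
    local mainTarget = torch.randn(2)   -- hypothetical targets
    local auxTarget  = torch.randn(10)

    local loss = mainCrit:forward(l3out, mainTarget)
               + auxCrit:forward(l2out, auxTarget)

    -- backward pass: supply a table of gradients, one per output
    local gradL3 = mainCrit:backward(l3out, mainTarget)
    local gradL2 = auxCrit:backward(l2out, auxTarget)
    gmod:zeroGradParameters()
    gmod:backward(x, {gradL3, gradL2})
    ```

    nngraph accumulates both gradient signals through the shared layers automatically, so the L2 loss influences the L1 and L2 parameters while the L3 loss influences all three.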