machine-learning, neural-network, deep-learning, torch

Transfer function of neural network


I used Torch7 to build a neural network with 3 hidden layers to handle a classification problem (3 classes). But I am confused about which transfer functions to use and how to use them. Below is the structure of my network:

net:add(nn.Linear(inputs, hid_1))
net:add(nn.Tanh())
net:add(nn.Linear(hid_1, hid_2))
net:add(nn.Tanh())
net:add(nn.Linear(hid_2, hid_3))
net:add(nn.Tanh())
net:add(nn.Linear(hid_3, outputs))
net:add(nn.LogSoftMax())

criterion = nn.ClassNLLCriterion()

As shown above, I used Tanh() as every transfer function. Is that correct? Can I use another transfer function (like Sigmoid())? And do I have to insert a transfer function between each layer?

Many thanks in advance.


Solution

  • As above, I used all Tanh() transfer functions, is that correct?

    It is correct, and using any other transfer function would be correct as well.

    Can I use other transfer functions (like Sigmoid().. )?

    Yes, you can use any transfer function. Each has its own properties, which are far too long to cover in an SO answer; however, nowadays you will find that ReLU is one of the most commonly used, especially in deeper networks.
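
    For example, the network from the question could use ReLU in place of Tanh like this (a sketch only; it assumes the same `inputs`, `hid_1`, `hid_2`, `hid_3` and `outputs` sizes are already defined, and that `net` is an `nn.Sequential` container as in the question):

    ```lua
    require 'nn'

    -- Same architecture as in the question, with ReLU as the transfer function
    net = nn.Sequential()
    net:add(nn.Linear(inputs, hid_1))
    net:add(nn.ReLU())
    net:add(nn.Linear(hid_1, hid_2))
    net:add(nn.ReLU())
    net:add(nn.Linear(hid_2, hid_3))
    net:add(nn.ReLU())
    net:add(nn.Linear(hid_3, outputs))
    net:add(nn.LogSoftMax())  -- keep LogSoftMax to match ClassNLLCriterion

    criterion = nn.ClassNLLCriterion()
    ```

    Note that the output layer is unchanged: LogSoftMax plus ClassNLLCriterion is the classification head; the choice of transfer function only affects the hidden layers.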

    And do I have to insert transfer function between each layer?

    Yes. If you do not, then mathematically speaking your layers collapse: consecutive linear layers behave as a single linear layer, at least in the sense of the final behaviour (training might be different).
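
    To see the collapse concretely, here is a minimal plain-Lua sketch (no Torch needed; biases omitted for brevity): applying two linear maps in sequence gives exactly the same output as applying their single product matrix.

    ```lua
    -- Without a nonlinearity between them, two linear layers are just one:
    -- W2 * (W1 * x) == (W2 * W1) * x   (biases omitted for brevity)

    local function matvec(W, v)        -- W: matrix as table of rows, v: vector
      local out = {}
      for i = 1, #W do
        local s = 0
        for j = 1, #v do s = s + W[i][j] * v[j] end
        out[i] = s
      end
      return out
    end

    local function matmul(A, B)        -- plain matrix product
      local C = {}
      for i = 1, #A do
        C[i] = {}
        for j = 1, #B[1] do
          local s = 0
          for k = 1, #B do s = s + A[i][k] * B[k][j] end
          C[i][j] = s
        end
      end
      return C
    end

    local W1 = {{1, 2}, {3, 4}}        -- "first linear layer"
    local W2 = {{0, 1}, {1, 0}}        -- "second linear layer"
    local x  = {5, 6}

    local two_layers = matvec(W2, matvec(W1, x))  -- layer 1, then layer 2
    local one_layer  = matvec(matmul(W2, W1), x)  -- single collapsed layer

    -- both give the same vector, so the two layers added no extra capacity
    assert(two_layers[1] == one_layer[1] and two_layers[2] == one_layer[2])
    print(two_layers[1], two_layers[2])
    ```

    Inserting a nonlinearity such as Tanh or ReLU between the two maps breaks this identity, which is exactly why the transfer functions are needed.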