neural-network, backpropagation

Can a Neural Network using SGD change only one output of many with backprop?


Let's say I have a Neural Network with this structure: network([256, 100, 4]), where there are 256 input neurons, 100 hidden, and 4 outputs. The network uses the sigmoid function as its activation function, and the output neurons return a value in the range [0, 1].

With every epoch, I know whether one of the four outputs is right or wrong. For instance, the network might give me [1, 0, 1, 0] when I know the first output should be 0, but I know nothing about the other three outputs.

Is there a way I can train the network so that only the first output is affected?

My intuition tells me that using backprop with the target set to [0, 0, 1, 0] will solve my problem, but I'm also curious whether [0, .5, .5, .5] makes more sense.


Solution

  • What you should do is set the gradient of the unknown outputs to zero during the backpropagation stage. You should not set the labels themselves to any value, because if the number of samples with unknown labels is large, you will bias the network output toward that value. For example, if you set the target to [0, .5, .5, .5] and the ratio of unknown to known labels is, say, 20:1, it's likely the network will simply output a constant [.5, .5, .5, .5].
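
As a concrete illustration, here is a minimal NumPy sketch (not from the original post) of one SGD step with this gradient masking: it assumes a squared-error loss, a plain 256-100-4 sigmoid network, and a `known` mask that zeroes the output deltas for the three unlabeled outputs. The variable names, initialization, and learning rate are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (100, 256)), np.zeros(100)  # hidden layer
W2, b2 = rng.normal(0, 0.1, (4, 100)), np.zeros(4)      # output layer

x = rng.random(256)                       # one input sample (hypothetical)
target = np.array([0.0, 0.0, 0.0, 0.0])   # only index 0 is meaningful here
known = np.array([1.0, 0.0, 0.0, 0.0])    # mask: 1 = label known, 0 = unknown

# Forward pass
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# Output-layer delta for squared loss with sigmoid, then zero out the
# gradient of every output whose label is unknown.
delta_out = (y - target) * y * (1 - y)
delta_out *= known   # the key step: unknown outputs contribute no gradient

# Backprop into the hidden layer and take one SGD step.
delta_hid = (W2.T @ delta_out) * h * (1 - h)
lr = 0.5
W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
```

Because the masked deltas are zero, the weight updates for the three unknown output units vanish, and their contribution to the hidden-layer gradient vanishes with them, so only the error at the first output drives learning on this sample.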