I have a network that contains a U-Net-like structure as part of it. I would like the convolutional layers to be shared between two inputs. An example of my code:
conv_layer = Conv(parameters)
out1 = conv_layer(input1)
out2 = conv_layer(input2)
Does this create two outputs, each of which depends only on the corresponding input and the shared weights? Or does it concatenate the inputs and pass them through the convolution? Are the weights the same in the two calls of this layer? Also, a question about learning: when it comes to backpropagation, does the loss propagate only once through the shared layers? Is there any change in learning?
First of all, U-Net doesn't exactly have shared layers; it uses skip connections and concatenation to reuse features.
A shared layer looks something like this:
x --> F(x)
==> G(F(x),F(y))
y --> F(y)
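As a concrete sketch of this pattern (assuming PyTorch, since the question doesn't name a framework; the shapes below are made up for illustration):

```python
import torch
import torch.nn as nn

# F: one convolution object, i.e. one set of weights
shared_conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 64, 64)   # input1
y = torch.randn(1, 3, 64, 64)   # input2

fx = shared_conv(x)             # F(x): depends only on x and the shared weights
fy = shared_conv(y)             # F(y): depends only on y and the same weights

# G: combining the two branches is a separate, explicit step,
# here done by concatenation along the channel dimension
g = torch.cat([fx, fy], dim=1)  # G(F(x), F(y))
```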
Does this create two outputs, each of which depends only on the corresponding input and the shared weights?

Yes. The two calls are two independent forward passes: out1 depends only on input1 and the layer's weights, and out2 depends only on input2 and those same weights.
Does it concatenate the inputs and pass them through the convolution?

No. Nothing is concatenated implicitly; if you want to combine the two outputs, you have to do it explicitly (as G does in the sketch above).
Are the weights the same?

Yes. There is only one set of parameters; both calls use exactly the same weight and bias tensors.
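A quick way to convince yourself of this (again assuming PyTorch): however many times the layer is called, it owns exactly one weight tensor and one bias tensor.

```python
import torch.nn as nn

conv_layer = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print([p.shape for p in conv_layer.parameters()])
# [torch.Size([16, 3, 3, 3]), torch.Size([16])]  -- one shared set of parameters
```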
When it comes to backpropagation, does the loss propagate once from the shared layers? Is there any change in learning?

The loss propagates back through both calls, and the gradient contributions from the two branches are summed on the shared weights before the optimizer step. Apart from that accumulation, learning works exactly as usual.
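A minimal sketch, again assuming PyTorch, that checks the gradient on the shared weights is the sum of the two branches' contributions:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x1 = torch.randn(2, 3, 32, 32)
x2 = torch.randn(2, 3, 32, 32)

# One backward pass through both branches of the shared layer
loss = conv(x1).sum() + conv(x2).sum()
loss.backward()
grad_shared = conv.weight.grad.clone()

# The same gradient, obtained by backpropagating each branch separately and summing
conv.weight.grad = None
conv(x1).sum().backward()
g1 = conv.weight.grad.clone()
conv.weight.grad = None
conv(x2).sum().backward()
g2 = conv.weight.grad.clone()

print(torch.allclose(grad_shared, g1 + g2))  # True: contributions are summed
```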
Some useful reading: http://neural.vision/blog/deep-learning/backpropagation-with-shared-weights/
https://datascience.stackexchange.com/questions/27506/back-propagation-in-cnn