I want to implement deep dreaming on some image data I have. I decided to give CNTK a go and started out developing against the MNIST handwriting data.
Typically, when training a neural network, you have a fixed set of input images and variable weights to learn. Deep dreaming flips this around: you have a variable image to learn, given fixed weights in the network.
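To make that flip concrete, here is roughly what I have in mind, as a toy gradient-ascent sketch in plain numpy (W stands in for a single fixed, pretrained layer; only the image x gets updated):

import numpy as np

np.random.seed(0)
W = np.random.randn(10, 784).astype(np.float32)  # stand-in for fixed, pretrained weights
x = np.random.rand(784).astype(np.float32)       # the "image" to dream into shape

for step in range(100):
    a = W @ x                            # forward pass through the fixed layer
    grad_a = (a > 0).astype(np.float32)  # d/da of sum(relu(a))
    grad_x = W.T @ grad_a                # gradient of the objective w.r.t. the image
    x += 0.01 * grad_x                   # gradient ascent on the image; W never changes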
So I have trained a convolutional network that recognizes the images quite consistently. Now I want it to dream up some images, so I need to do a convolution with a variable input image. Apparently CNTK does not allow this. The following code demonstrates the error I am getting.
import numpy as np
import cntk as C

def doConstantConv(inputX, W, b, redRank):
    # Wrap the trained weights as constants so only the input can vary
    kernel = C.constant(value=W)
    bias = C.constant(value=b)
    conv = C.convolution(kernel, inputX, strides=(2, 2), reduction_rank=redRank) + bias
    return C.relu(conv)

W1 = np.reshape(np.arange(200.0, dtype=np.float32), (8, 1, 5, 5))
b1 = np.reshape(np.arange(8.0, dtype=np.float32), (8, 1, 1))

someInput = C.parameter((1, 28, 28))
layer1 = doConstantConv(someInput, W1, b1, 1)
The error I am getting is "Convolution currently requires the main operand to have dynamic axes". But as far as I can tell, learnable parameters cannot have dynamic axes. That wouldn't make sense, would it?
So is it fair to conclude that CNTK cannot be used for deep dreaming? Is there a way to hack it?
Take a look at https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_205_Artistic_Style_Transfer.ipynb
You can create a learnable input variable by specifying needs_gradient=True when you declare it.
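For example, something along these lines should work (a rough sketch, untested; the reduce_sum objective, learning rate, and iteration count are just placeholders for illustration):

import numpy as np
import cntk as C

# The image is now an input_variable that is allowed to receive gradients;
# the trained weights stay fixed as constants.
img = C.input_variable((1, 28, 28), needs_gradient=True)
kernel = C.constant(value=np.reshape(np.arange(200.0, dtype=np.float32), (8, 1, 5, 5)))
layer = C.relu(C.convolution(kernel, img, strides=(2, 2), reduction_rank=1))
objective = C.reduce_sum(layer)              # maximize this layer's activations

x = np.random.rand(1, 1, 28, 28).astype(np.float32)   # batch of one image
for step in range(50):
    g = objective.grad({img: x}, wrt=[img])  # d(objective)/d(image), weights untouched
    x += 0.1 * g                             # gradient ascent on the image

The style transfer tutorial linked above uses essentially the same trick with a full pretrained network.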
Thanks,
Emad