Tags: function, machine-learning, gpgpu, theano

Is there a way to change a function's update list without re-compiling it in Theano?


Specifically, I want to change the learning rate at different stages of training. Something like this:

for i in range(iter_num):
    learn_rate = i*alpha
    do_training(learn_rate,...)

Recompiling a new function on every iteration would clearly be too slow, so I was wondering: is there a better way to do this in Theano? Thanks!


Solution

  • You can make the learning rate a symbolic variable and pass it into the training function like this:

    import numpy
    import theano
    import theano.tensor as tt
    
    
    def compile(input_size, hidden_size, output_size):
        # Parameters for one tanh hidden layer and a softmax output layer.
        W_h = theano.shared(numpy.random.standard_normal(size=(input_size, hidden_size)).astype(theano.config.floatX))
        b_h = theano.shared(numpy.zeros((hidden_size,), dtype=theano.config.floatX))
        W_y = theano.shared(numpy.random.standard_normal(size=(hidden_size, output_size)).astype(theano.config.floatX))
        b_y = theano.shared(numpy.zeros((output_size,), dtype=theano.config.floatX))
    
        x = tt.matrix('x')     # batch of input rows
        z = tt.ivector('z')    # integer class labels
        learning_rate = tt.scalar('learning_rate')  # symbolic, supplied per call
        h = tt.tanh(theano.dot(x, W_h) + b_h)
        y = tt.nnet.softmax(theano.dot(h, W_y) + b_y)
        cost = tt.nnet.categorical_crossentropy(y, z).mean()
        # Plain SGD; because learning_rate is a symbolic input, changing it
        # never requires recompiling the function.
        updates = [(p, p - learning_rate * tt.grad(cost, p)) for p in (W_h, b_h, W_y, b_y)]
        return theano.function([x, z, learning_rate], outputs=cost, updates=updates)
    
    
    def main():
        input_size = 5
        hidden_size = 4
        output_size = 3
        train = compile(input_size, hidden_size, output_size)
        # The third argument is the learning rate for this call.
        print(train([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], [1, 2], 0.1))
    
    
    if __name__ == '__main__':
        main()
    

    Note that the compiled training function now takes three arguments; the third is the learning rate.
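
    Since the learning rate is now an ordinary input, the loop from the question needs no recompilation at all: compute the new rate and pass it in on each call. Here is a minimal sketch reusing the toy batch from main(); the schedule and iteration count are illustrative, mirroring the question's i*alpha:

    train = compile(5, 4, 3)
    x_batch = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]  # same toy inputs as main()
    z_batch = [1, 2]
    alpha = 0.01
    for i in range(100):
        # The function is compiled once; only the value passed in changes.
        cost = train(x_batch, z_batch, i * alpha)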
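
    If you would rather not thread an extra argument through every call, another option Theano supports is to store the learning rate in a theano.shared variable and adjust it with set_value between calls, which likewise avoids recompiling. A sketch using the same symbols as compile() above; this update list and function would replace the ones shown there:

    learning_rate = theano.shared(numpy.asarray(0.1, dtype=theano.config.floatX))
    updates = [(p, p - learning_rate * tt.grad(cost, p)) for p in (W_h, b_h, W_y, b_y)]
    train = theano.function([x, z], outputs=cost, updates=updates)
    
    for i in range(100):
        # Change the shared value in place; the compiled graph reads it on each call.
        learning_rate.set_value(numpy.asarray(i * 0.01, dtype=theano.config.floatX))
        train([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], [1, 2])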