Tags: python, machine-learning, theano, conv-neural-network, lasagne

Convolutional neural network: how to train it? (unsupervised)


I'm trying to implement a CNN to play a game, using Python with Theano/Lasagne. I've built the network and am now figuring out how to train it.

So now I have a batch of 32 states and, for each state in that batch, the action taken and the expected reward for that action.

How can I train the network so that it learns that these actions in these states lead to these rewards?
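
For concreteness, one training batch looks roughly like this (a minimal sketch; the frame size and channel count are assumptions of mine, not taken from the actual game code):

import numpy as np

BATCH_SIZE = 32

# 32 game states, each a stack of screen frames: (batch, channels, height, width)
states = np.zeros((BATCH_SIZE, 4, 80, 80), dtype=np.float32)
# the action taken in each of those states, as an integer index
actions = np.zeros(BATCH_SIZE, dtype=np.int32)
# the expected (discounted) reward observed for taking that action
expected_rewards = np.zeros(BATCH_SIZE, dtype=np.float32)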

EDIT: Clarifying my problem.

Here is my full code: http://pastebin.com/zY8w98Ng The snake import: http://pastebin.com/fgGCabzR

I'm having trouble with this bit:

def _train(self):
    # Prepare Theano variables for inputs and targets
    input_var = T.tensor4('inputs')
    target_var = T.ivector('targets')
    states = T.tensor4('states')
    print "sampling mini batch..."
    # sample a mini_batch to train on
    mini_batch = random.sample(self._observations, self.MINI_BATCH_SIZE)
    # get the batch variables
    previous_states = [d[self.OBS_LAST_STATE_INDEX] for d in mini_batch]
    actions = [d[self.OBS_ACTION_INDEX] for d in mini_batch]
    rewards = [d[self.OBS_REWARD_INDEX] for d in mini_batch]
    current_states = np.array([d[self.OBS_CURRENT_STATE_INDEX] for d in mini_batch])
    agents_expected_reward = []
    # print np.rollaxis(current_states, 3, 1).shape
    print "compiling current states..."
    current_states = np.rollaxis(current_states, 3, 1)
    current_states = theano.compile.sharedvalue.shared(current_states)

    print "getting network output from current states..."
    agents_reward_per_action = lasagne.layers.get_output(self._output_layer, current_states)


    print "rewards adding..."
    for i in range(len(mini_batch)):
        if mini_batch[i][self.OBS_TERMINAL_INDEX]:
            # this was a terminal frame, so there is no future reward to discount
            agents_expected_reward.append(rewards[i])
        else:
            agents_expected_reward.append(
                rewards[i] + self.FUTURE_REWARD_DISCOUNT * np.max(agents_reward_per_action[i].eval()))

    # figure out how to train the model (self._output_layer) with previous_states,
    # actions and agent_expected_rewards

I want to update the model using previous_states, actions and agent_expected_rewards so that it learns that those actions in those states lead to those rewards.

I expect it might look something like this:

train_model = theano.function(inputs=[input_var],
    outputs=self._output_layer,
    givens={
        states: previous_states,
        rewards: agents_expected_reward,
        expected_rewards: agents_expected_reward})

I just don't get how the givens would affect the model, because I don't specify them when building the network. I can't find this in the Theano or Lasagne documentation either.
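
From what I can tell (this is a minimal, made-up example with my own variable names, not from my game code), givens just substitutes a symbolic variable with another expression or shared variable when the function is compiled; the network definition itself never mentions them:

import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')   # symbolic input
y = x * 2           # some expression built from x

data = theano.shared(np.arange(4, dtype=theano.config.floatX))

# givens replaces x with the shared variable when the function is compiled,
# so the compiled function takes no arguments at all
f = theano.function(inputs=[], outputs=y, givens={x: data})
print f()  # -> [ 0.  2.  4.  6.]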

So how can I update the model/network so that it 'learns'?

If it's still not clear, comment on what information is still needed. I've been trying to figure this out for a few days now.


Solution

  • After going through the documentation I've finally found the answer. I was looking in the wrong places before.

        # symbolic expression for the network's output (predicted reward per action)
        network = self._output_layer
        prediction = lasagne.layers.get_output(network)
        loss = lasagne.objectives.categorical_crossentropy(prediction, target_var)
        loss = loss.mean()
    
        # gradient-descent updates for every trainable parameter in the network
        params = lasagne.layers.get_all_params(network, trainable=True)
        updates = lasagne.updates.sgd(loss, params, self.LEARN_RATE)
        # substitute the symbolic placeholders with the minibatch data
        givens = {
            states: current_states,
            expected: agents_expected_reward,
            real_rewards: rewards
        }
        # every call to this compiled function applies the updates,
        # i.e. performs one training step on the network's parameters
        train_fn = theano.function([input_var, target_var], loss,
                                   updates=updates, on_unused_input='warn',
                                   givens=givens,
                                   allow_input_downcast=True)
        train_fn(current_states, agents_expected_reward)
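
One note on the objective: categorical_crossentropy treats target_var as class labels, while the targets in the question are real-valued expected rewards. A squared-error loss on the reward predicted for the action actually taken is usually the better fit for that; a rough sketch of the variant I mean (action_var and reward_var are names I made up, everything else follows the code above):

import theano.tensor as T
import lasagne

action_var = T.ivector('actions')          # index of the action taken in each state
reward_var = T.vector('expected_rewards')  # target expected reward for that action

prediction = lasagne.layers.get_output(network)
# pick out the predicted reward for the action that was actually taken
chosen_reward = prediction[T.arange(prediction.shape[0]), action_var]
loss = lasagne.objectives.squared_error(chosen_reward, reward_var).mean()

# the update mechanics are unchanged
params = lasagne.layers.get_all_params(network, trainable=True)
updates = lasagne.updates.sgd(loss, params, self.LEARN_RATE)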