Tags: tensorflow, reinforcement-learning, q-learning, openai-gym

Feeding a tensorflow placeholder from an array


I'm trying to train CartPole-v0 using Q-learning. When updating the replay buffer with new experience, I get the following error:

ValueError: Cannot feed value of shape (128,) for Tensor 'Placeholder_1:0', which has shape '(?, 2)'

The related code snippet is:

def update_replay_buffer(replay_buffer, state, action, reward, next_state, done, action_dim):
    # append to buffer
    experience = (state, action, reward, next_state, done)
    replay_buffer.append(experience)
    # Ensure replay_buffer doesn't grow larger than REPLAY_SIZE
    if len(replay_buffer) > REPLAY_SIZE:
        replay_buffer.pop(0)
    return None

The placeholder to be fed is

action_in = tf.placeholder("float", [None, action_dim])

Can someone clarify how action_dim should be used to resolve this error?


Solution

  • Let's start with action_in:

    action_in = tf.placeholder("float", [None, action_dim])
    

    This means that action_in must always have shape (None, action_dim): any number of rows, but exactly action_dim columns. Compare that with the error:

    ValueError: Cannot feed value of shape (128,) for Tensor 'Placeholder_1:0', which has shape '(?, 2)'
    

    The error tells you that your action_dim is 2: you are feeding an object of shape (128,) into a tensor that expects shape (?, 2), i.e. (None, 2).
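    To illustrate the mismatch (with hypothetical sample data), this is the kind of array that triggers the error: a flat batch of integer action indices, one per transition, rather than a 2-D batch.

    ```python
    import numpy as np

    # Hypothetical batch: one integer action index per transition.
    actions = np.array([0, 1, 1, 0])

    # A flat vector of shape (4,) -- feeding this for a placeholder
    # of shape (?, 2) raises the ValueError from the question.
    print(actions.shape)  # (4,)
    ```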

    So check your feed_dict; that is where the mismatch happens. The array you pass for action_in must match the placeholder's dimensions, (None, 2).

    "Can someone clarify how action_dim should be used to resolve this error?"

    Judging from action_dim, each action you feed should have two components, but you are providing only one per transition; that is what the (128,) in the error indicates. Feed a batch of shape (batch_size, 2) instead. Hope this helps.
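    A common way to get that (batch_size, 2) shape in CartPole-style Q-learning is to one-hot encode the integer actions before feeding them. This is a sketch under the assumption that the stored actions are integer indices in [0, action_dim); one_hot_actions is a name made up for illustration:

    ```python
    import numpy as np

    def one_hot_actions(actions, action_dim):
        """Turn a batch of integer action indices, shape (batch,),
        into one-hot vectors of shape (batch, action_dim)."""
        batch = np.zeros((len(actions), action_dim), dtype=np.float32)
        batch[np.arange(len(actions)), actions] = 1.0
        return batch

    actions = np.array([0, 1, 1, 0])          # sampled from the replay buffer
    onehot = one_hot_actions(actions, 2)      # shape (4, 2), matches (?, 2)
    print(onehot.shape)  # (4, 2)
    ```

    The resulting array can then be passed in feed_dict for action_in. (In TF 1.x you could also do the conversion in-graph with tf.one_hot and keep feeding the raw indices.)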