Tags: python, tensorflow, machine-learning, keras, chess

Encountering error for keras when implementing neural network with multiple outputs


I am implementing a chess AI in which the outputs are the position and the piece to be moved. However, when I follow the multi-output tutorial in the Keras API documentation, it returns the error

Failed to find data adapter that can handle input:(<class 'list'> containing values of types {'(<class \'list\'> containing values of types {\'(<class \\\'list\\\'> containing values of types {\\\'(<class \\\\\\\'list\\\\\\\'> containing values of types {"<class \\\\\\\'int\\\\\\\'>"})\\\'})\'})'}), (<class 'dict'> containing {"<class 'str'>"} keys and {"<class 'numpy.ndarray'>"} values)

Sorry if the copied section is too long; I just wanted to make sure it would be easier to find what went wrong.

Reproducible section of code below:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

board_inputs = keras.Input(shape=(8, 8, 12))


conv1 = layers.Conv2D(10, 3, activation='relu')
conv2 = layers.Conv2D(10, 3, activation='relu')
pooling1 = layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid", data_format=None,)
pooling2 = layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid", data_format=None,)
flatten = keras.layers.Flatten(data_format=None)


x = conv1(board_inputs)
x = pooling1(x)
x = conv2(x)
x = flatten(x)
piece_output = layers.Dense(12, name='piece')(x)
alpha_output = layers.Dense(7, name='alpha')(x)
numbers_output = layers.Dense(7, name='number')(x)


model = keras.Model(inputs=board_inputs, outputs=[piece_output, alpha_output, numbers_output], name="chess_ai_v3")
model.compile(
    loss=keras.losses.mse,
    optimizer=keras.optimizers.Adam(),
    metrics=None,
)

keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
history = model.fit(
    trans_data[:len(trans_data)],
    {"piece": pieces[:len(trans_data)], "alpha": alphas[:len(trans_data)],"number": numbers[:len(trans_data)]},
    epochs=2,
    batch_size=32,
)
# history = model.fit(trans_data[:len(trans_data)],batch_size=64, epochs=1000,verbosity = 2)

Update: I am still having problems with the network. I tested each of the arrays of target values individually, and they all work normally when used on their own. Does anyone know of a problem that could cause this?
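Based on the error message above, the data adapter is receiving a deeply nested Python list rather than a numpy array. A quick sanity check is to print the type, shape, and dtype of every array passed to fit; Keras accepts ndarrays (or tensors) but not lists of lists of lists of ints. The arrays below are placeholders with assumed shapes, standing in for the real data, which isn't shown in the question:

```python
import numpy as np

# Placeholder arrays with the shapes implied by the question (assumed)
trans_data = np.zeros((10, 8, 8, 12))
pieces = np.zeros((10, 12))
alphas = np.zeros((10, 7))
numbers = np.zeros((10, 7))

# Print type, shape, and dtype of each training array; anything that
# shows up as a plain list here needs converting with np.asarray first
for name, arr in {"trans_data": trans_data, "pieces": pieces,
                  "alphas": alphas, "numbers": numbers}.items():
    print(name, type(arr).__name__, arr.shape, arr.dtype)
```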


Solution

  • It seems that your data are in a strange format. Look at this:

    I have no problem running this example

    import numpy as np
    from tensorflow.keras import layers, models
    
    
    board_inputs = layers.Input(shape=(8, 8, 12))
    conv1 = layers.Conv2D(10, 3, activation='relu')
    conv2 = layers.Conv2D(10, 3, activation='relu')
    pooling1 = layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid", data_format=None,)
    pooling2 = layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid", data_format=None,)
    flatten = layers.Flatten(data_format=None)
    
    
    x = conv1(board_inputs)
    x = pooling1(x)
    x = conv2(x)
    x = flatten(x)
    piece_output = layers.Dense(12, name='piece')(x)
    alpha_output = layers.Dense(7, name='alpha')(x)
    numbers_output = layers.Dense(7, name='number')(x)
    
    
    model = models.Model(inputs=board_inputs, outputs=[piece_output, alpha_output, numbers_output], name="chess_ai_v3")
    model.compile(loss='mse', optimizer='adam')
    model.summary()
    
    X = np.random.uniform(0,1, (100,8,8,12))
    y = {"piece": np.random.uniform(0,1,(100,12)), 
         "alpha": np.random.uniform(0,1,(100,7)),
         "number": np.random.uniform(0,1,(100,7))}
    
    history = model.fit(X,y, epochs=2, batch_size=32)
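Since the error in the question says the input is a list containing lists of lists of lists of ints, the likely fix on the asker's side is to convert those nested Python lists into numpy arrays before calling fit. A minimal sketch, assuming trans_data is a 4-level nested list representing boards of shape (8, 8, 12):

```python
import numpy as np

# Hypothetical nested-list data standing in for trans_data
# (4 boards, each an 8x8 grid of 12-element piece encodings)
trans_data = [[[[0] * 12 for _ in range(8)] for _ in range(8)] for _ in range(4)]

# Convert the nested lists into a single float32 ndarray so that
# Keras's data adapter can handle it
X = np.asarray(trans_data, dtype=np.float32)
print(X.shape)  # (4, 8, 8, 12)
```

The same conversion should be applied to the pieces, alphas, and numbers target lists before building the dictionary passed to fit.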