Tags: python-3.x, tensorflow, tensorflow-federated

TensorFlow Federated - Adapting existing keras model


I'm having trouble adapting an existing Keras model to work with TensorFlow Federated.

The existing model is a 1D convolutional autoencoder (details shown below).

Existing Model:

from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D
from tensorflow.keras.models import Model

window_length = 48  # each window holds one day of data: 48 values (see the data shape below)

input_window = Input(shape=(window_length, 1))

x = Conv1D(16, 3, activation="relu", padding="same")(input_window)
x = MaxPooling1D(2, padding="same")(x)
x = Conv1D(1, 3, activation="relu", padding="same")(x)

encoded = MaxPooling1D(2, padding="same")(x)
encoder = Model(input_window, encoded)

x = Conv1D(1, 3, activation="relu", padding="same")(encoded)
x = UpSampling1D(2)(x)
x = Conv1D(16, 1, activation='relu')(x)
x = UpSampling1D(2)(x)

decoded = Conv1D(1, 3, activation='sigmoid', padding='same')(x)

autoencoder = Model(input_window, decoded)

Training data is passed as a numpy.ndarray of shape (102, 48, 1).

Conceptually, this represents 102 days' worth of data, each containing 48 values. I can provide an example of this if it would assist in answering.
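For illustration, a placeholder array with the same shape could be mocked up like this (synthetic values, not the real data):

import numpy as np

# Synthetic stand-in for the real training data: 102 days x 48 values x 1 channel
train = np.random.rand(102, 48, 1).astype(np.float32)
print(train.shape)  # (102, 48, 1)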

My attempt to convert the model is shown below.

Converted Model:

import tensorflow as tf
import tensorflow_federated as tff


def create_compiled_keras_model():

    input_window = tf.keras.layers.Input(shape=(window_length,1))

    x = tf.keras.layers.Conv1D(16, 3, activation="relu", padding="same")(input_window)
    x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
    x = tf.keras.layers.Conv1D(1, 3, activation="relu", padding="same")(x)

    encoded = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
    encoder = tf.keras.Model(input_window, encoded)

    x = tf.keras.layers.Conv1D(1, 3, activation="relu", padding="same")(encoded)
    x = tf.keras.layers.UpSampling1D(2)(x)
    x = tf.keras.layers.Conv1D(16, 1, activation='relu')(x)
    x = tf.keras.layers.UpSampling1D(2)(x)

    decoded = tf.keras.layers.Conv1D(1, 3, activation='sigmoid', padding='same')(x)

    autoencoder = tf.keras.Model(input_window, decoded)
    autoencoder.compile(optimizer='adam', loss='MSE')
    return autoencoder



sample_batch = train  # numpy.ndarray of shape (102, 48, 1)


def model_fn():
    keras_model = create_compiled_keras_model()
    return tff.learning.from_compiled_keras_model(keras_model, train)

This produces the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-397-9bed171c79be> in <module>
----> 1 model = model_fn()

<ipython-input-396-13bc1955a7f2> in model_fn()
      1 def model_fn():
      2     keras_model = create_compiled_keras_model()
----> 3     return tff.learning.from_compiled_keras_model(keras_model, train)

~/miniconda3/lib/python3.6/site-packages/tensorflow_federated/python/learning/model_utils.py in from_compiled_keras_model(keras_model, dummy_batch)
    190     raise ValueError('`keras_model` must be compiled. Use from_keras_model() '
    191                      'instead.')
--> 192   return enhance(_TrainableKerasModel(keras_model, dummy_batch))
    193 
    194 

~/miniconda3/lib/python3.6/site-packages/tensorflow_federated/python/learning/model_utils.py in __init__(self, inner_model, dummy_batch)
    434     # until the model has been called on input. The work-around is to call
    435     # Model.test_on_batch() once before asking for metrics.
--> 436     inner_model.test_on_batch(**dummy_batch)
    437     # This must occur after test_on_batch()
    438     if len(inner_model.loss_functions) != 1:

TypeError: test_on_batch() argument after ** must be a mapping, not numpy.ndarray
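
For reference, the final frame fails because Python's ** unpacking only accepts a mapping, so passing the raw ndarray is rejected while a dict would not be:

import numpy as np

def f(**kwargs):
    return kwargs

f(**{'x': np.zeros((102, 48, 1))})   # fine: a dict is a mapping
# f(**np.zeros((102, 48, 1)))        # raises the same TypeError: argument after ** must be a mapping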

So far I have been unable to resolve this. Is this an issue relating to my model not being compiled correctly, or due to the way I'm passing data?

Any help in resolving this would be greatly appreciated, thanks!


Solution

  • The sample batch should be something that can be passed to the batch_input argument of tff.learning.Model.forward_pass.

    For wrapped Keras models, this must be a dict with keys matching the arguments to tf.keras.models.Model.test_on_batch.

    For this case, I think you may be able to simply wrap the sample batch in a dict with a single key of x:

    numpy_sample_batch = train  # numpy.ndarray
    sample_batch = {'x': numpy_sample_batch}
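
Putting it together, a minimal sketch of the corrected wiring might look like this (assuming train is the (102, 48, 1) array from the question and create_compiled_keras_model is defined as above; depending on the TFF/Keras version, test_on_batch may also want a 'y' entry, which for an autoencoder would just be train again):

import tensorflow_federated as tff

sample_batch = {'x': train}  # wrap the ndarray so ** unpacking receives a mapping
# sample_batch = {'x': train, 'y': train}  # if a reconstruction target is also required

def model_fn():
    keras_model = create_compiled_keras_model()
    return tff.learning.from_compiled_keras_model(keras_model, sample_batch)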