Tags: tensorflow, machine-learning, keras, neural-network, tensorflow2.0

How to improve my neural network accuracy function such that keras lazy evaluation is possible


I currently have my neural network accuracy function (and neural network) as follows:

import numpy as np
import tensorflow as tf
from tensorflow import keras

def predoneacc(y_true, y_pred):
    # .numpy() pulls the values out of the tensors, which only works eagerly
    y_nptrue = y_true.numpy()
    y_nppredinit = y_pred.numpy()

    y_nppred = np.round(y_nppredinit)
    return arr1eq(y_nptrue, y_nppred)

def arr1eq(y_true, y_pred):
    k = 0.0       # correct 1-predictions
    count1 = 0.0  # total 1-predictions
    for i in range(len(y_true)):
        for j in range(len(y_true[0])):
            if y_pred[i][j] == 1.0:
                count1 = count1 + 1.0
                if y_true[i][j] == 1.0:
                    k = k + 1.0
    if count1 == 0.0: return 0
    else: return k / count1

tap=5

model = keras.Sequential([
    keras.layers.LSTM(300, input_shape=(tap,45)),
    keras.layers.Dense(45, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=[predoneacc], run_eagerly=True)
model.fit(x,y,epochs=150) #x,y data properly given input/output shape required by the network

But this accuracy function cannot be used by model.compile() unless run_eagerly=True is set, because it calls .numpy(). The code runs up to model.fit(), which then complains that I have to turn eager execution on. So I would like to re-code my accuracy function so that compile() accepts it without run_eagerly=True.

What would be the way out? My accuracy function simply returns the number of correct 1-predictions divided by the total number of 1-predictions in a binary classification problem, with a sigmoid activation on the output layer.

I am using TensorFlow 2.0.


Solution

  • I think something like this could work for you:

    def arr1eq(y_true, y_pred):
      y_pred = tf.round(y_pred)
      # mask of the positions predicted as 1
      pred_ones = tf.cast(tf.equal(y_pred, 1.0), tf.float32)
      # total number of 1-predictions
      count = tf.reduce_sum(pred_ones)
      # how many of those positions also hold a 1 in the labels
      k = tf.reduce_sum(pred_ones * tf.cast(tf.equal(tf.cast(y_true, tf.float32), 1.0), tf.float32))
      # divide_no_nan returns 0 when there are no 1-predictions
      return tf.math.divide_no_nan(k, count)


    y_pred is first rounded to the nearest integer, and a mask marks the positions where it equals 1.0. The number of such positions is summed up in the variable count, while k counts how many of them also carry a 1.0 in y_true; tf.math.divide_no_nan then covers the case of zero 1-predictions.
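
    As a side note on the ops involved, here is what tf.where and tf.gather_nd return on a tiny made-up example (the values are mine, chosen only for illustration):

```python
import tensorflow as tf

y_pred = tf.constant([[1., 0.], [1., 1.]])
y_true = tf.constant([[1., 1.], [0., 1.]])

# tf.where with a single argument returns the indices of the True entries
idx = tf.where(tf.equal(y_pred, 1.0))
print(idx.numpy().tolist())                        # [[0, 0], [1, 0], [1, 1]]

# tf.gather_nd picks the y_true values at exactly those indices
print(tf.gather_nd(y_true, idx).numpy().tolist())  # [1.0, 0.0, 1.0]
```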

    Here is a simplified version (sorry for any confusion):

    def arr1eq2(y_true, y_pred):
      y_pred = tf.round(y_pred)
      # indices of every position predicted as 1
      y_pred_indices = tf.where(tf.equal(y_pred, 1.0))
      count = tf.cast(tf.shape(y_pred_indices)[0], tf.float32)
      # pick the labels at those positions and count the 1s among them
      k = tf.reduce_sum(tf.cast(tf.equal(tf.gather_nd(y_true, y_pred_indices), 1.0), tf.float32))
      return tf.math.divide_no_nan(k, count)
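
    Once the metric is written purely in TensorFlow ops, compile() no longer needs run_eagerly=True. A standalone sanity check (the function name and sample values below are mine; the @tf.function decorator forces graph tracing, which would fail if anything inside still called .numpy()):

```python
import tensorflow as tf

@tf.function  # traced into a graph, so no .numpy() may be called inside
def precision_of_ones(y_true, y_pred):
    y_pred = tf.round(y_pred)
    idx = tf.where(tf.equal(y_pred, 1.0))
    count = tf.cast(tf.shape(idx)[0], tf.float32)
    k = tf.reduce_sum(tf.cast(tf.equal(tf.gather_nd(y_true, idx), 1.0), tf.float32))
    return tf.math.divide_no_nan(k, count)

y_true = tf.constant([[1., 0., 1.], [0., 1., 0.]])
y_pred = tf.constant([[0.9, 0.2, 0.4], [0.6, 0.8, 0.1]])
print(float(precision_of_ones(y_true, y_pred)))  # 2 correct of 3 predicted ones -> 0.666...
```

    As a footnote, the quantity being computed is precision, so adding the built-in tf.keras.metrics.Precision(thresholds=0.5) to the metrics list should also work without any custom code.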