Tags: python, android, tensorflow, keras, tensorflow-lite

What structure should the accelerometer data have for model prediction?


I created a Keras model, which I convert to TensorFlow and then to TensorFlow Lite. I want to use my TFLite model to predict human activity from the accelerometer signal of a mobile phone. Here is my model:

from keras.models import Sequential
from keras.layers import Reshape, Conv1D, MaxPooling1D, Flatten, Dropout, Dense

N_FEATURES = 3
PERIOD = 80


model = Sequential()
model.add(Reshape((PERIOD, N_FEATURES), input_shape=(PERIOD * N_FEATURES,)))
model.add(Conv1D(100, 10, activation='relu'))
model.add(Conv1D(100, 10, activation='relu'))
model.add(MaxPooling1D(N_FEATURES))
model.add(Conv1D(160, 10, activation='relu'))
model.add(Conv1D(160, 10, activation='relu'))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(7, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


x_test.shape = (12369, 240)
y_test.shape = (12369, 7)
x_train.shape = (49476, 240)
y_train.shape = (49476, 7)
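For context on what those 240-element rows mean: the model's Reshape layer unflattens each row back into (80, 3), so each row must be the per-timestep readings laid out in row-major order. A minimal numpy sketch (the `window` data here is synthetic, just to illustrate the ordering):

```python
import numpy as np

PERIOD = 80      # time steps per window
N_FEATURES = 3   # accelerometer axes: x, y, z

# One window of readings: row i holds (x_i, y_i, z_i).
window = np.arange(PERIOD * N_FEATURES, dtype=np.float32).reshape(PERIOD, N_FEATURES)

# The model's input is the flattened window, shape (240,):
flat = window.reshape(-1)   # row-major order: x0, y0, z0, x1, y1, z1, ...

# The Reshape layer inverts exactly this flattening inside the model:
restored = flat.reshape(PERIOD, N_FEATURES)
assert np.array_equal(restored, window)

print(flat.shape)   # (240,)
```

So each training row of `x_train` is one 80-sample window with the three axes interleaved per time step.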

I want to ask about the shape of the data I need to pass to the model in the Android app to predict the activity. The function I use is Interpreter.runForMultipleInputsOutputs. Should I use an array of three lists, each holding the data from one accelerometer axis (x, y and z), or do I need to create something else?

It is my first model, so any other tips are welcome.

Edit:

        List<Sample> samples = collector.getSamples();
        float[][] floatInputBuffer = new float[200][3];

        for(int i = 0; i < 200; i++) {
            floatInputBuffer[i][0] = samples.get(i).getX();
            floatInputBuffer[i][1] = samples.get(i).getY();
            floatInputBuffer[i][2] = samples.get(i).getZ();
        }

        Object[] inputArray = {floatInputBuffer, new int[]{5000}};
        Map<Integer, Object> outputMap = new HashMap<>();
        outputMap.put(0, new float[1][labels.size()]);
        Interpreter interpreter = null;
        try {
            interpreter = new Interpreter(loadModel(getAssets(), MODEL_PATH.split("file:///android_asset/", -1)[1]));
        } catch (IOException e) {
            e.printStackTrace();
        }

        interpreter.runForMultipleInputsOutputs(inputArray, outputMap);

Solution

  • Whatever input size was used while training your model, you have to send the same-sized input (with the exception of the batch dimension) when performing inference with the Interpreter. If you were using the run method on your interpreter, you could pass only one input, but since you are using runForMultipleInputsOutputs, you can batch your data together and send it in one call.

    So, your model has an input shape of (PERIOD * N_FEATURES,). This product is 80 * 3 = 240, so your input to the TFLite model has to be of shape (<<No. of inputs>>, 240).
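    To make the required layout concrete: you should not feed three separate per-axis lists; instead, interleave the axes per time step and flatten. A sketch in numpy terms (the constant axis values here are placeholders, and `model_input` is what you would hand to the interpreter):

    ```python
    import numpy as np

    PERIOD = 80
    N_FEATURES = 3

    # Suppose the phone gives you three separate axis lists:
    xs = [0.1] * PERIOD
    ys = [0.2] * PERIOD
    zs = [0.3] * PERIOD

    # Interleave them per time step so the element order matches the
    # model's Reshape layer: x0, y0, z0, x1, y1, z1, ...
    window = np.stack([xs, ys, zs], axis=1).astype(np.float32)  # (80, 3)
    model_input = window.reshape(1, PERIOD * N_FEATURES)        # (1, 240)

    print(model_input.shape)  # (1, 240)
    ```

    The same ordering applies on the Android side: build a float[n][240] buffer where each row is one window with x, y, z interleaved per sample.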