Tags: python, tensorflow, raspberry-pi, real-time, tensorflow-lite

TensorFlow Lite inference for real-time recognition


I have a project to recognize activities of daily living and falls using a three-axis accelerometer. I've trained an LSTM model and achieved high accuracy. I'm close to finishing, but at the moment I'm stuck on the inference phase.

I decided to convert the model to TensorFlow Lite so that I can load it on my Raspberry Pi, and I've done that successfully.

Now I want to run real-time recognition with the trained model on the accelerometer sensor that is already connected to my Raspberry Pi.

I know that an RNN/LSTM takes input of shape (time_steps, features), which is (200, 3) in my case, and I understand that I therefore have to collect the accelerometer readings into 200-sample windows as well (right?).
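
To make sure I understand the expected input, this is the shape I think I need to feed the interpreter (a minimal sketch, with random data standing in for real sensor readings):

    import numpy as np

    # One window of 200 time steps x 3 accelerometer axes,
    # plus a leading batch dimension -> shape (1, 200, 3).
    window = np.random.rand(1, 200, 3).astype(np.float32)
    print(window.shape)  # (1, 200, 3)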

I've written this code so far to accomplish that:

import time

from adxl345 import ADXL345  # accelerometer driver; adjust the import to match your ADXL345 library


def get_sample():
    sample_list = []
    sample_count = 0
    total = [0, 0, 0]
    last_time = time.time_ns() / 1000000000
    while sample_count < 200 / 3:
        # Take a reading roughly every 10 ms (~100 Hz)
        if time.time() - last_time >= 0.01:
            adxl345 = ADXL345()
            axes = adxl345.getAxes(True)
            total[0] += axes['x']
            total[1] += axes['y']
            total[2] += axes['z']
            sample = list(total)
            # Map from [-9.80665, +9.80665] into [0, 1]
            sample_list += [
                (sample[0] + 9.80665) / (9.80665 * 2),
                (sample[1] + 9.80665) / (9.80665 * 2),
                (sample[2] + 9.80665) / (9.80665 * 2)]
            last_time = time.time_ns() / 1000000000
            sample_count += 1
        time.sleep(0.001)
    sample_list.pop()
    return sample_list

Then, when I run the following code:

    import numpy as np
    import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

    interpreter = tflite.Interpreter(model_path=model_file)
    interpreter.allocate_tensors()

    # Get input and output tensors.
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    sample_list = np.array(np.asarray(np.asarray(sample_list, dtype=np.float32)))

    interpreter.set_tensor(input_details[0]['index'], [[sample_list]])

    interpreter.invoke()

I got this error: [screenshot of the error message]

I know there's something wrong, maybe in the way I understand inference or LSTMs. Could you please help me?

This problem has been taking a lot of my time and I still haven't figured it out. Thanks in advance.


Solution

  • You can resize the input tensor by doing something like this (a fuller end-to-end sketch follows after this list):

    input_details = interpreter.get_input_details()
    # Resize the model's input tensor to match the shape of your input array,
    # then re-allocate before setting the tensor.
    interpreter.resize_tensor_input(input_details[0]["index"], input.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]["index"], input)
    

    A heads up that I think you may run into other problems:

    • Without seeing the code that generates the LSTM model, it's hard to predict whether it will work.
    • The Raspberry Pi is a very resource-constrained environment, so it might require further optimizations for this to run in real time (again, depending on how you build the model and use it).
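
For concreteness, here is a minimal end-to-end sketch of how the pieces could fit together, assuming one window of data shaped (1, 200, 3) with dtype float32; the model path and the random window contents are placeholders for your own file and sensor readings:

    import numpy as np
    import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

    # One window: 200 time steps x 3 axes, with a batch dimension of 1.
    window = np.random.rand(1, 200, 3).astype(np.float32)

    interpreter = tflite.Interpreter(model_path="model.tflite")
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Make the input tensor match the window's shape, then run inference.
    interpreter.resize_tensor_input(input_details[0]["index"], window.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]["index"], window)
    interpreter.invoke()

    prediction = interpreter.get_tensor(output_details[0]["index"])
    print(prediction)  # model output for this window

In a real loop you would replace the random window with your normalized accelerometer readings reshaped to (1, 200, 3).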