The Picamera documentation provides examples of how one would go about implementing motion detection, but without the actual motion detection algorithm itself.
Although I am sure there are many more, I have thought of three ways of doing the motion detection:

1. comparing consecutive captures as PIL Images,
2. comparing consecutive captures as NumPy arrays,
3. recording with motion_output and analysing the motion vectors as NumPy arrays.

As you can see, the examples for #1 and #2 are part of the section in the documentation that provides common recipes, while the example for #3 is part of the actual API documentation.
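For reference, here is a minimal sketch of what I mean by #2 (comparing consecutive NumPy captures). The resolution, the per-pixel threshold and the 1% "changed pixels" cut-off are arbitrary choices of mine, not anything taken from the documentation:

```python
# A rough sketch of approach #2: compare consecutive frames captured as
# NumPy arrays. All thresholds below are arbitrary and would need tuning.
import time
import numpy as np
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    time.sleep(2)  # let the sensor's gain/white balance settle

    with picamera.array.PiRGBArray(camera) as stream:
        camera.capture(stream, format='rgb', use_video_port=True)
        prev = stream.array.astype(np.int16)
        stream.seek(0)
        stream.truncate()

        while True:
            camera.capture(stream, format='rgb', use_video_port=True)
            current = stream.array.astype(np.int16)
            stream.seek(0)
            stream.truncate()

            # Per-pixel change, taking the largest difference over the
            # three colour channels.
            diff = np.abs(current - prev).max(axis=2)
            changed = np.count_nonzero(diff > 25)
            if changed > 0.01 * diff.size:
                print('Motion detected (%d pixels changed)' % changed)

            prev = current
```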
If they don't even use their built-in motion_output feature in the recipe section for their motion detection, and use PIL images instead, does that mean their PiMotionAnalysis class (used with motion_output) isn't really optimized?
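For context, approach #3 looks roughly like this. It is a sketch along the lines of the PiMotionAnalysis usage described in the API documentation; the class name and the thresholds (magnitude > 60, more than 10 macroblocks) are arbitrary values of mine:

```python
# A sketch of approach #3: subclass picamera.array.PiMotionAnalysis and
# pass it to start_recording() via motion_output.
import numpy as np
import picamera
import picamera.array

class MotionDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        # 'a' is a structured array with one entry per macroblock,
        # containing the fields 'x', 'y' (vector components) and 'sad'.
        magnitude = np.sqrt(
            np.square(a['x'].astype(np.float64)) +
            np.square(a['y'].astype(np.float64)))
        # Arbitrary rule: more than 10 macroblocks moving by more than
        # 60 units counts as motion.
        if (magnitude > 60).sum() > 10:
            print('Motion detected!')

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    # The H.264 stream itself is thrown away; only the motion vectors
    # produced by the encoder are analysed.
    camera.start_recording(
        '/dev/null', format='h264', motion_output=MotionDetector(camera))
    camera.wait_recording(30)
    camera.stop_recording()
```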
What is the best way to implement motion detection? And as a bonus, feel free to include an actual motion detection algorithm if you want :).
Cheers!
From the man himself (Dave Jones aka @waveform80):
So, that's the trade-off, basically: speed (motion estimation vectors) vs accuracy and control (capture comparisons). But remember you can run multiple things at once over the splitter, so you may even want to try combining the approaches.
In my question, #1 and #2 stand for the capture comparisons, and #3 stands for the motion estimation vectors.
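To illustrate Dave's point about the splitter, here is a sketch that keeps real footage from one splitter port while a second, throwaway H.264 stream exists only to feed motion vectors to an analysis class. The class name, filename, port numbers and threshold are all arbitrary choices of mine:

```python
# A sketch of combining the approaches over the splitter: record actual
# footage on one port while running motion-vector analysis on another.
import numpy as np
import picamera
import picamera.array

class VectorWatcher(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        magnitude = np.sqrt(
            np.square(a['x'].astype(np.float64)) +
            np.square(a['y'].astype(np.float64)))
        # Arbitrary rule: any macroblock moving by more than 60 units
        # counts as motion.
        if (magnitude > 60).any():
            print('Motion detected while recording')

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    # Splitter port 1: the footage you actually want to keep.
    camera.start_recording('footage.h264', splitter_port=1)
    # Splitter port 2: a throwaway stream whose only purpose is to make
    # the encoder emit motion vectors for the analysis class.
    camera.start_recording(
        '/dev/null', format='h264', splitter_port=2,
        motion_output=VectorWatcher(camera))
    camera.wait_recording(60, splitter_port=1)
    camera.stop_recording(splitter_port=2)
    camera.stop_recording(splitter_port=1)
```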
For a more thorough explanation, see the GitHub issue I opened, where Dave was kind enough to go into much more detail.