What I'm trying to do is make a live video capture (a VideoCapture cap) that only grabs a frame every set amount of time X, where X is typically somewhere between 500 ms and 1 s. If the difference between two captures, currentIMG and previousIMG, is high enough, it should run some kind of processing task and then set previousIMG to the frame it just took, so that all later captures are compared against that frame.
However, when attempting that, I noticed that the capture didn't actually return what the camera was seeing at the moment of waking up. As far as I can tell, it just returned whatever was "next in the buffer" of the VideoCapture object. So the program pauses and then reads the next buffered frame, rather than skipping everything the camera saw during the sleep.
Is there a time-efficient way to make sure I get whatever the camera currently "sees" on waking up, and not just the next frame in the buffer?
#include <opencv2/opencv.hpp>
#include <time.h>  // nanosleep

using namespace cv;

int main() {
    // open the default camera and capture a reference image
    Mat currentIMG, previousIMG;
    VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;
    cap >> previousIMG;

    struct timespec ts = {1, 0};  // 1 s between captures
    for (;;) {
        // wait & then capture an image for 'currentIMG'
        nanosleep(&ts, NULL);
        cap >> currentIMG;

        // check whether the two images differ enough; note PSNR is
        // in dB, so a LOWER value means a BIGGER difference
        if (PSNR(currentIMG, previousIMG) < 20.0) {
            // some kind of task that takes a bit to complete
            imwrite("capturedIMG.png", currentIMG);
            previousIMG = currentIMG.clone();
        }
    }
    return 0;
}
I've tried setting the buffer size of cap to 1 with cap.set(cv::CAP_PROP_BUFFERSIZE, 1), so I could limit how many images sit in the buffer. However, despite getting a 1 back when printing that call, which makes it look like the property is supported, it doesn't actually do anything to solve the issue.
I've also tried running a grab() loop in another thread, with the main thread calling retrieve(), as suggested in other Stack Overflow answers. Unfortunately that doesn't seem to work either: sometimes the spun-off thread appears to consume everything in the buffer, so the main thread has to wait before it can get anything at all.
You have to start a thread that grabs AND retrieves (or simply reads) the frames, and nothing else is allowed to read from the camera or access the VideoCapture object. That thread stores the latest frame somewhere, so the rest of your program can get that frame whenever it needs to. Use synchronization primitives such as condition variables to signal consumers when a fresh frame arrives, so they can wait for the next fresh frame instead of potentially getting the same frame twice. Here's a Python recipe for the problem, probably translatable into C++ with little effort.