objective-c · ios · performance · image-processing · video-processing

How to synchronise video (image) input with an outside source?


Let's imagine there is an LED. It toggles on and off every 0.5 seconds (a full on/off period of 1 second). So from second 0 => 0.5 it is on, from second 0.5 => 1 it is off, from second 1 => 1.5 it is on again.

Let's imagine I want to read that input with an outside camera (say, an iPhone camera). What I do is: take the input stream, make an image out of it, and scan the image for the presence of a certain number of white pixels. If they are present, the LED is on and I write "1" to my file; if not, I write "0" (a sketch of this detection step appears at the end of the question). I read the input stream twice a second. So generally speaking, if everything goes well and my processing does not lag anywhere, I get good results. But imagine this:

0.0 => 0.5  LED is ON    sampled at 0.49 => my camera reads "1"

0.5 => 1.0  LED is OFF   sampled at 0.99 => my camera reads "0"

1.0 => 1.5  LED is ON    sampled at 1.51 (camera lagged) => reads "0"

So we have data corruption here. The question is: how do I synchronise the reading so that it preferably falls in the middle of each window, for a larger margin of error? Also imagine I'm trying to do that 10 times per second; the window becomes even smaller.
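One illustrative approach to the mid-window question (a sketch of the idea, not something from the question itself): briefly sample faster than the bit rate, detect the moments the LED flips, and schedule the real reads half a bit period after the last detected edge. Only `bitPeriod` (0.5 s) comes from the question; all other names are hypothetical.

    import Foundation

    struct Sample {
        let time: TimeInterval
        let on: Bool
    }

    /// Times at which the LED appears to have changed state, taken as
    /// the midpoint between two consecutive samples that disagree.
    func edgeTimes(_ samples: [Sample]) -> [TimeInterval] {
        zip(samples, samples.dropFirst())
            .filter { pair in pair.0.on != pair.1.on }
            .map { pair in (pair.0.time + pair.1.time) / 2 }
    }

    /// The next read time that falls in the middle of a bit window:
    /// edge + bitPeriod/2, edge + 3*bitPeriod/2, ...
    func nextMidWindowRead(afterEdge edge: TimeInterval,
                           now: TimeInterval,
                           bitPeriod: TimeInterval = 0.5) -> TimeInterval {
        let k = max(0.0, ceil((now - edge - bitPeriod / 2) / bitPeriod))
        return edge + bitPeriod / 2 + k * bitPeriod
    }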

What can I read on the topic? What can I do to make it more reliable?

One possible solution seems to be reading the input 4 times a second and deciding each bit from a group of 2 samples (sketches of the detection step and of this pairing idea follow below).
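As referenced above, here is a minimal sketch of the detection step, assuming each frame has already been reduced to an 8-bit grayscale buffer; `whiteThreshold` and `minWhitePixels` are illustrative values, not from the question.

    // Decide whether the LED is on by counting sufficiently bright pixels.
    func ledIsOn(pixels: [UInt8],
                 whiteThreshold: UInt8 = 200,
                 minWhitePixels: Int = 50) -> Bool {
        let whiteCount = pixels.filter { $0 >= whiteThreshold }.count
        return whiteCount >= minWhitePixels
    }

    // One bit per sample, as described in the question:
    // output.append(ledIsOn(pixels: frame) ? "1" : "0")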
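And a sketch of the pairing idea: at 4 samples per second there are 2 samples per 0.5 s bit window, and a bit is accepted only when both samples in its window agree, so a read that lands on a transition is flagged rather than silently misread. This assumes the pairs are already roughly aligned with the bit windows.

    // Decode bits from oversampled readings (2 samples per bit window).
    // `nil` marks a window whose two samples disagree, i.e. a read that
    // straddled a transition and should be treated as suspect.
    func decodeBits(fromSamples samples: [Bool]) -> [Bool?] {
        stride(from: 0, to: samples.count - 1, by: 2).map { i in
            samples[i] == samples[i + 1] ? samples[i] : nil
        }
    }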


Solution

  • Sounds like you might want to read about ways of encoding timecode. http://en.wikipedia.org/wiki/Timecode

    Formats such as SMPTE linear timecode (LTC) transmit 80 bits of data per frame at the chosen frame rate (say, 30 fps). I’m not sure how you’d apply it directly in your case.

    Clearly, having a smaller window is more accurate. With LTC, since the signal is carried as audio commonly sampled at 44.1 kHz, it’s possible to get the timing almost exactly spot on.

    If the iPhone camera can only take 2 photos per second, I wonder if you could try taking photos at a different interval (say, every 0.7 seconds) and do the maths to work out whether the LED should be on or off at each instant (since it is still alternating every 0.5 s). Over a period of a few seconds, that might be equivalent to sampling it every 0.1 seconds. (I’m just pulling numbers out of the sky, but I imagine you could work out something like that; see the first sketch after this answer.)

    Another thought: can you use video from the camera instead of a sequence of photos? You might be able to get 30 fps that way (I’m not sure; I haven’t looked into it), and there might be improvements around this in iOS 6.0 too, which is worth checking if you’re a developer. A capture sketch follows below.
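    As promised above, a rough sketch of the 0.7 s idea. The LED waveform repeats every 1.0 s (0.5 s on, 0.5 s off), so a sample taken at time t observes the waveform at phase t mod 1.0. Sampling every 0.7 s advances that phase by 0.7 each step, and because gcd(0.7, 1.0) = 0.1, ten samples visit every multiple of 0.1 s across the cycle, i.e. an effective 0.1 s resolution. Note this equivalent-time trick only works while the pattern genuinely repeats; data that changes from bit to bit breaks the assumption.

        // Phases visited when sampling a 1.0 s periodic signal every 0.7 s.
        let period = 1.0          // full LED cycle from the question
        let interval = 0.7        // sampling interval suggested above

        for n in 0..<10 {
            let t = Double(n) * interval
            let phase = t.truncatingRemainder(dividingBy: period)
            // Visits (up to floating-point noise) 0.0, 0.7, 0.4, 0.1, 0.8,
            // 0.5, 0.2, 0.9, 0.6, 0.3: every multiple of 0.1 s in the cycle.
            print("sample \(n): t = \(t), phase = \(phase)")
        }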
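    And a minimal sketch of the video route, using the modern AVFoundation capture API (which postdates the iOS 6 era of this answer); whether a given device format actually supports a fixed 30 fps is an assumption to verify on the device.

        import AVFoundation

        // Sketch: continuous 30 fps frames instead of discrete photos.
        final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
            let session = AVCaptureSession()
            private let queue = DispatchQueue(label: "led.frames")

            func start() throws {
                guard let device = AVCaptureDevice.default(for: .video) else { return }
                session.addInput(try AVCaptureDeviceInput(device: device))

                let output = AVCaptureVideoDataOutput()
                output.setSampleBufferDelegate(self, queue: queue)
                session.addOutput(output)

                // Pin the frame rate so sample spacing is predictable
                // (only valid if the active format supports 30 fps).
                try device.lockForConfiguration()
                device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
                device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)
                device.unlockForConfiguration()

                session.startRunning()
            }

            func captureOutput(_ output: AVCaptureOutput,
                               didOutput sampleBuffer: CMSampleBuffer,
                               from connection: AVCaptureConnection) {
                // Each frame carries a presentation timestamp, so readings
                // can be placed on the LED's timeline rather than trusting
                // wall-clock timers.
                let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                _ = timestamp // threshold the pixels here, as in the question
            }
        }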