amazon-kinesis-video-streams

What is the correct way to organize multiple recordings from a single device: AWS Kinesis video streams


What is the correct way to create a searchable archive of videos from a single Raspberry PI type device?

Should I create a single stream per device, so that whenever that device begins a broadcast it adds to that stream? I would then create a client that lists the timestamps of the separate recordings on the stream. I have been trying to do this, but I have only gotten as far as ListFragments and GetClip, neither of which seems to do the job. What is the use case for working with fragments? I'd like to get portions of the stream separated by distinct timestamps. For example, if I have a recording from 2pm to 2:10pm, that would be a separate list item from a recording taken between 3pm and 3:10pm.
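In case it helps, here's roughly what I've been attempting: grouping the fragments returned by ListFragments into distinct recordings by looking for gaps between producer timestamps. This is just a sketch; the 10-second gap threshold is my own guess, and the fragment dicts below stand in for entries from the `Fragments` list in the ListFragments response (which carry `ProducerTimestamp` and `FragmentLengthInMillis`):

```python
from datetime import datetime, timedelta

def group_into_sessions(fragments, gap=timedelta(seconds=10)):
    """Group ListFragments results into recording sessions.

    Fragments whose producer timestamps are separated by more than
    `gap` are treated as belonging to different recordings.
    """
    sessions = []
    for f in sorted(fragments, key=lambda f: f["ProducerTimestamp"]):
        start = f["ProducerTimestamp"]
        end = start + timedelta(milliseconds=f["FragmentLengthInMillis"])
        if sessions and start - sessions[-1]["end"] <= gap:
            # Close enough to the previous fragment: same recording.
            sessions[-1]["end"] = max(sessions[-1]["end"], end)
        else:
            # Large gap: start a new recording session.
            sessions.append({"start": start, "end": end})
    return sessions

# Stand-in data: two contiguous fragments at 2pm, one fragment at 3pm.
frags = [
    {"ProducerTimestamp": datetime(2023, 1, 1, 14, 0, 0), "FragmentLengthInMillis": 5000},
    {"ProducerTimestamp": datetime(2023, 1, 1, 14, 0, 5), "FragmentLengthInMillis": 5000},
    {"ProducerTimestamp": datetime(2023, 1, 1, 15, 0, 0), "FragmentLengthInMillis": 5000},
]
sessions = group_into_sessions(frags)
```

This gives me session start/end pairs I could show in a list, but it feels like I'm reimplementing something the service should already provide.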

Or should I create a single stream per broadcast? I would create a client to list the streams and allow users to select between streams to view each video. This seems like an inefficient use of the platform: if I have five 10-second recordings made by the same device over a few days, it creates five separate archived streams.

I realize there are data-retention implications here, but I'm also not sure how retention behaves if part of a stream expires while another part does not.

I've been digging through the documentation to try to infer what best practices are related to this but haven't found anything directly answering it.

Thanks!


Solution

  • Hard to tell what your scenario really is. Some applications use sparsely populated streams per device and use the ListFragments API and other means to understand the sessions within the stream.

    This doesn't work well if you have very sparse streams and a large number of devices. In that case, some customers implement a "stream leasing" mechanism, by which their backend service or some centralized entity keeps track of a pool of streams and leases them out to requestors, adding new streams to the pool as needed. The lease times are then stored in a database somewhere so that the consumer-side application can do its business logic. The producer application can also "embed" certain information in the stream using the FragmentMetadata concept, which really evaluates to outputting MKV tags into the stream.
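    A minimal sketch of the leasing idea (the class and storage here are illustrative only; a real implementation would persist leases to a database, handle lease expiry, and call CreateStream to grow the pool when it runs dry):

```python
import threading
from datetime import datetime, timezone

class StreamPool:
    """Hypothetical in-memory lease tracker for a pool of KVS stream names."""

    def __init__(self, stream_names):
        self._lock = threading.Lock()
        self._free = list(stream_names)
        self._leases = {}  # stream name -> (device id, lease start time)

    def lease(self, device_id):
        """Hand a free stream to a device that wants to broadcast."""
        with self._lock:
            if not self._free:
                # A real service would create a new KVS stream here instead.
                raise RuntimeError("pool exhausted")
            stream = self._free.pop()
            self._leases[stream] = (device_id, datetime.now(timezone.utc))
            return stream

    def release(self, stream):
        """Return a stream to the pool when the broadcast ends.

        The returned (device_id, leased_at) record is what you would
        persist so consumers can map a recording back to its device
        and time window.
        """
        with self._lock:
            device_id, leased_at = self._leases.pop(stream)
            self._free.append(stream)
            return device_id, leased_at
```

    The lease/release records stored per broadcast are effectively the "index" the consumer-side application queries, instead of scanning fragments in every stream.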

    If you have any further scoped-down questions regarding the implementation, etc., don't hesitate to cut GitHub issues against the particular KVS assets in question, which is the fastest way to get answers.