I have a 128 x 1048576 matrix of bytes that several clients need to read and write bytes to quickly. The matrix can be thought of as representing the pixels of an image.
Additionally, clients need to be able to read entire 128 x 128 sectors of this matrix as they scan parts of the entire dataset.
I have a couple of potential solutions that use Redis:
1. Give every pixel of the image its own key and require that clients make 128 x 128 reads to fetch each sector.
2. Create 8,192 hashes, one per sector of the image, with the pixels of each sector stored as fields, so every hash holds 128 x 128 = 16,384 fields (a rough addressing sketch for this option follows below).
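For concreteness, here is how I imagine the addressing would work for option 2; this is just an illustrative Python sketch, and the key/field naming scheme is an arbitrary choice on my part:

```python
SECTOR = 128  # sectors are 128 x 128 pixels

def sector_key_and_field(row, col):
    # Which sector the pixel falls in, and its position within that sector.
    sector_row, sector_col = row // SECTOR, col // SECTOR
    local_row, local_col = row % SECTOR, col % SECTOR
    # Hypothetical key/field naming, just to make the layout concrete.
    return f"sector:{sector_row}:{sector_col}", f"{local_row}:{local_col}"

# Pixel (130, 70000) -> key "sector:1:546", field "2:112".
# A full sector read is then a single HGETALL on one key, at the cost of
# each hash carrying 16,384 fields.
print(sector_key_and_field(130, 70000))
```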
Of these two solutions, which would better fit my requirements?
Also, would there be any advantage to storing more than one pixel in each key/field? If so, how do I determine the optimal number of bytes to store at each data point? (This would give me less precision on my reads/writes, but would also reduce the size of my keyspace.)
If you can think of a better solution that uses Redis clustering, or does not use Redis at all, please do not hesitate to mention it.
Thanks in advance, Dom
Sounds like a perfect use case for Redis' BITFIELD command, which operates on the String type - it is fully documented on the Redis website: https://redis.io/commands/bitfield.
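For example, if the whole matrix were kept in a single Redis string (the key name "image" and the row-major, one-unsigned-byte-per-pixel layout below are just assumptions for illustration), per-pixel reads and writes could look roughly like this with redis-py. Note that 128 x 1048576 bytes is 128 MiB, which fits comfortably under Redis' 512 MB string limit:

```python
import redis

WIDTH = 1048576   # columns
HEIGHT = 128      # rows

r = redis.Redis()

def set_pixel(row, col, value):
    offset = (row * WIDTH + col) * 8  # BITFIELD offsets without '#' are in bits
    r.execute_command("BITFIELD", "image", "SET", "u8", offset, value)

def get_pixel(row, col):
    offset = (row * WIDTH + col) * 8
    # BITFIELD returns a list of results, one per sub-command.
    return r.execute_command("BITFIELD", "image", "GET", "u8", offset)[0]
```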
In fact, Reddit recently (April 1st) ran an amazing project called /r/Place built on BITFIELD, which sounds very much like what you're trying to do - here are the details: https://redditblog.com/2017/04/13/how-we-built-rplace
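Since a 128 x 128 sector is not contiguous in that row-major string, one way to read a whole sector (my own sketch, not necessarily how /r/Place did it) is to pipeline one GETRANGE per row of the sector:

```python
def get_sector(r, sector_row, sector_col, width=1048576, size=128):
    # Each of the sector's 128 rows is a contiguous 128-byte slice of the string,
    # so batch 128 GETRANGE calls into a single pipeline round trip.
    pipe = r.pipeline()
    for i in range(size):
        start = (sector_row * size + i) * width + sector_col * size
        pipe.getrange("image", start, start + size - 1)  # inclusive byte range
    rows = pipe.execute()
    return b"".join(rows)  # 16,384 bytes, row-major within the sector
```

The pipeline keeps the sector read down to one round trip, which matters if clients are scanning many sectors.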