I've been doing some thinking about data redundancy, and I wanted to put it all in writing before going any further (and to double-check whether this idea has already been put into practice).
Alright, so here goes.
The internet is filled with redundant data: text, images, videos, and so on. As a result, a lot of effort has gone into on-the-fly gzip and bzip2 compression and decompression over HTTP, and large sites like Google and Facebook have entire teams devoted to making their pages load more quickly.
My 'question' relates to the fact that compression is done solely on a per-file basis (gzip file.txt yields file.txt.gz). Without a doubt there are many commonalities between seemingly unrelated data scattered around the Internet. What if you could store these common chunks and combine them, either client-side or server-side, to dynamically generate content?
To be able to do this, one would have to find the most common 'chunks' of data on the Internet. These chunks could be any size (there's probably an optimal choice here) and, in combination, would need to be capable of expressing any data imaginable.
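To make that step concrete, here is a rough sketch in Python of how you might mine common chunks from a small local corpus. The chunk size, the file names, and the use of fixed-size chunks are all my own assumptions; picking them well is exactly the open part of the problem.

import hashlib
from collections import Counter

CHUNK_SIZE = 2048  # assumed; finding the "optimal choice" is the open question

def iter_chunks(path, size=CHUNK_SIZE):
    # Yield fixed-size chunks of a file; the last chunk may be shorter.
    with open(path, "rb") as f:
        while True:
            piece = f.read(size)
            if not piece:
                break
            yield piece

def common_chunks(paths, top_n=100):
    # Count how often each chunk occurs across the corpus and return the
    # most frequent ones; these would be candidates for the shared dictionary.
    counts = Counter()
    samples = {}
    for path in paths:
        for piece in iter_chunks(path):
            digest = hashlib.sha256(piece).digest()
            counts[digest] += 1
            samples.setdefault(digest, piece)
    return [(samples[d], n) for d, n in counts.most_common(top_n)]

for piece, n in common_chunks(["gettysburg.txt", "test.txt"], top_n=5):
    print(n, piece[:32])

Doing this at Internet scale is obviously a much bigger job, but the shape of the computation would be the same: split, hash, count, keep the most frequent.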
For illustrative purposes, let's say we have the following 5 chunks of common data: a, b, c, d, and e. We have two files that only contain these chunks, and two programs called chunk and combine. chunk takes data, compresses it through bzip2, gzip, or some other compression algorithm, and outputs the chunks that comprise said data (after compression). combine expands the chunks and decompresses the concatenated result. Here's how they might be used:
$ cat gettysburg.txt
"Four score and seven years ago...cont'd"
$ cat test.txt
"This is a test"
$ chunk gettysburg.txt test.txt
$ cat gettysburg.txt.ck
abdbdeabcbdbe
$ cat test.txt.ck
abdeacccde
$ combine gettysburg.txt.ck test.txt.ck
$ cat gettysburg.txt
"Four score and seven years ago...cont'd"
$ cat test.txt
"This is a test"
When sending a file through HTTP, for instance, the server could chunk the data and send it to the client, who can then combine the chunked data and render it.
Has anyone attempted this before? If not, I would like to know why; if so, please post how you might make this work. A nice first step would be to detail how you might figure out what these chunks are. Once we've figured out how to get the chunks, we can work out how these two programs, chunk and combine, might work.
I'll probably put a bounty on this (depending upon reception) because I think this is a very interesting problem with real-world implications.
You asked whether someone has done something similar before and what the chunk size ought to be, so I thought I'd point you to the two papers that came to mind:
(A team at) Google is trying to speed up web requests by exploiting data that is shared between documents. The server communicates a pre-computed dictionary to the client, which contains data that is common between documents and is referenced on later requests. This only works for a single domain at a time, and -- currently -- only with Google Chrome: Shared Dictionary Compression Over HTTP
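The effect of compressing against a dictionary that both sides already hold can be seen in miniature with zlib's preset-dictionary support. This is just an illustration of the idea, not how SDCH itself works (SDCH is based on VCDIFF deltas rather than zlib), and the dictionary and page below are invented:

import zlib

# Toy "shared dictionary": boilerplate the server and client are assumed to
# have exchanged ahead of time (in SDCH it is fetched once and cached).
shared_dict = b'<html><head><title></title></head><body><div class="post">'

page = b'<html><head><title>Hello</title></head><body><div class="post">Hi</div></body></html>'

# Server side: compress against the shared dictionary.
compressor = zlib.compressobj(zdict=shared_dict)
payload = compressor.compress(page) + compressor.flush()

# Client side: decompress using the same dictionary.
decompressor = zlib.decompressobj(zdict=shared_dict)
assert decompressor.decompress(payload) == page

# Compare with plain compression to see the dictionary's effect.
print(len(payload), "bytes with dictionary vs", len(zlib.compress(page)), "without")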
(A team at) Microsoft determined in their work Optimizing File Replication over Limited-Bandwidth Networks using Remote Differential Compression that for their case of filesystem synchronization a chunk size of about 2KiB works well. They use a level of indirection, so that the list of chunks needed to recreate a file is itself split into chunks -- the paper is fascinating to read, and might give you new ideas about how things might be done.
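The appealing part of that approach is that chunk boundaries are derived from the content itself via a rolling hash, so a small insertion only disturbs nearby chunks instead of shifting every boundary after it. Here is a toy sketch of the idea; the additive checksum and the parameters are my own simplifications, much weaker than what the paper actually uses:

def content_defined_chunks(data, window=48, mask=0xFF):
    # Declare a chunk boundary whenever a rolling checksum of the last
    # `window` bytes hits a magic value, so boundaries depend on content,
    # not absolute offsets. This toy additive checksum gives chunks of
    # roughly mask+1 (~256) bytes on average; the paper uses a stronger
    # rolling hash and targets chunks of about 2 KiB.
    chunks, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= window:
            rolling -= data[i - window]
        if i - start + 1 >= window and (rolling & mask) == mask:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

with open("gettysburg.txt", "rb") as f:
    data = f.read()
pieces = content_defined_chunks(data)
print(len(pieces), "chunks, average size", len(data) // max(len(pieces), 1), "bytes")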
Not sure if it helps you, but here it is in case it does. :-)