Tags: hash, data-integrity, deduplication

What are some of the best hashing algorithms to use for data integrity and deduplication?


I'm trying to hash a large number of files with binary data inside of them in order to: (1) check for corruption in the future, and (2) eliminate duplicate files (which might have completely different names and other metadata).

I know about md5 and sha1 and their relatives, but my understanding is that these are designed for security and therefore are deliberately slow in order to reduce the efficacy of brute force attacks. In contrast, I want algorithms that run as fast as possible, while reducing collisions as much as possible.

Any suggestions?


Solution

  • You are right. If your system does not face any adversary, using cryptographic hash functions is overkill given their security properties.


    The collision probability depends on the number of bits, b, of your hash function and on the number of hash values, N, you expect to compute. The academic literature argues that this collision probability must be below the hardware error probability, so that a collision under the hash function is less likely than an error while comparing the data byte by byte [ref1,ref2,ref3,ref4,ref5]. The hardware error probability is in the range of 2^-12 to 2^-15 [ref6]. If you expect to generate N = 2^q hash values, then your collision probability is given by the following expression, which already takes the birthday paradox into account:

    P ≈ N^2 / 2^(b-1) = 2^(2q-b+1)

    The number of bits of your hash function is directly proportional to its computational cost. So you are interested in finding a hash function with as few bits as possible, while still keeping the collision probability at an acceptable value.
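    As a quick sketch (not part of the original answer), the bound above can be inverted to find the smallest hash size for a given workload: requiring 2^(2q-b+1) ≤ 2^(-t) gives b ≥ 2q + 1 + t.

```python
def min_hash_bits(q: int, t: int) -> int:
    """Smallest hash size b (bits) so that hashing N = 2^q values
    keeps the collision probability 2^(2q - b + 1) at or below 2^(-t)."""
    return 2 * q + 1 + t

# Example: q = 25 (2^25 hashed objects), target probability 2^-12:
print(min_hash_bits(25, 12))  # -> 63, so a 64-bit hash just suffices
```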


    Here's an example on how to make that analysis:

    • Let's say you have f=2^15 files;
    • The average size of each file lf is 2^20 bytes;
    • You plan to divide each file into chunks of average size lc equal to 2^10 bytes;
    • Each file will be divided into c=lf/lc=2^10 chunks;

    • You will then hash N = f*c = 2^25 objects, i.e. q = 25.

    From that equation the collision probability for several hash sizes is the following:

    • P(hash=64 bits) = 2^(2*25-64+1) = 2^-13 (less than 2^-12)
    • P(hash=128 bits) = 2^(2*25-128+1) = 2^-77 (far below 2^-12)
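    These two values can be checked numerically with a few lines of Python (a sketch of the arithmetic above, using exact rationals to avoid floating-point underflow):

```python
from fractions import Fraction

def collision_probability(q: int, b: int) -> Fraction:
    """Birthday-bound collision probability 2^(2q - b + 1)
    for N = 2^q hashed objects and a b-bit hash."""
    return Fraction(2) ** (2 * q - b + 1)

print(collision_probability(25, 64) == Fraction(1, 2**13))    # True
print(collision_probability(25, 128) == Fraction(1, 2**77))   # True
```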

    Now you just need to decide which non-cryptographic hash function of 64 or 128 bits to use, knowing that 64 bits is pretty close to the hardware error probability (but will be faster) and that 128 bits is a much safer option (though slower).


    Below you can find a small list, taken from Wikipedia, of non-cryptographic hash functions. I know MurmurHash3, and it is much faster than any cryptographic hash function:

    1. Fowler–Noll–Vo : 32, 64, 128, 256, 512 and 1024 bits
    2. Jenkins : 64 and 128 bits
    3. MurmurHash : 32, 64, 128, and 160 bits
    4. CityHash : 64, 128 and 256 bits
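    To illustrate how simple these functions are, here is a sketch of 64-bit FNV-1a in pure Python (the constants are the standard published FNV offset basis and prime; for MurmurHash or CityHash you would normally call an optimized binding rather than write it yourself):

```python
FNV64_OFFSET_BASIS = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3

def fnv1a_64(data: bytes) -> int:
    """64-bit FNV-1a: XOR each byte into the state, then multiply
    by the FNV prime, keeping only the low 64 bits."""
    h = FNV64_OFFSET_BASIS
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

# Known FNV-1a test vector:
print(hex(fnv1a_64(b"a")))  # 0xaf63dc4c8601ec8c
```

    The pure-Python loop is only meant to show the structure of the algorithm; for hashing large amounts of binary data you would use a compiled implementation.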