hadoop, hdfs, hortonworks-data-platform, ambari

Why is the default HDFS block size set to 134.2 MB (approx.)?


I see in Ambari that the default block size is set to 134217728. Is there any specific reason why it is set to such a value rather than a round 128 or 256?


Solution

  • Let me tell you one thing first: the HDFS block size of 134217728 is not Ambari-specific. It is the HDFS default block size. Check out the link below and search for the dfs.blocksize property. Most size conventions in HDFS are binary, i.e. powers of 1024.

    https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

    134217728 bytes = 128 * 1024 * 1024 = 128 MB
    
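    If you want to confirm which value your cluster is actually using, here is a minimal sketch using the Hadoop Java API (it assumes a hadoop-client dependency on the classpath and a reachable HDFS; the class name is just for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeCheck {
        public static void main(String[] args) throws Exception {
            // Configuration picks up hdfs-default.xml / hdfs-site.xml from the classpath
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Effective default block size for new files, in bytes
            long blockSize = fs.getDefaultBlockSize(new Path("/"));
            System.out.println(blockSize);                        // 134217728 on a stock install
            System.out.println(blockSize == 128L * 1024 * 1024);  // true: exactly 128 MiB
        }
    }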

    HDFS supports suffixes in the block size value - k (kilo), m (mega), g (giga), t (tera), p (peta), e (exa), e.g. 128k, 128m, 128g. However, Ambari doesn't support any suffix; the block size has to be given in bytes if you push the configuration through Ambari (see the sketch below).
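    As a rough sketch of why the suffixed form works outside Ambari: to my understanding, Hadoop reads dfs.blocksize through Configuration.getLongBytes, which parses those binary suffixes. The property name dfs.blocksize is the standard one; the class name and values here are only illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class BlockSizeSuffix {
        public static void main(String[] args) {
            // Start from an empty Configuration so no site files interfere
            Configuration conf = new Configuration(false);

            // Suffixed value, as HDFS accepts it directly
            conf.set("dfs.blocksize", "128m");
            long withSuffix = conf.getLongBytes("dfs.blocksize", 0);

            // Plain byte count, the only form Ambari accepts
            conf.set("dfs.blocksize", "134217728");
            long plainBytes = conf.getLongBytes("dfs.blocksize", 0);

            System.out.println(withSuffix);                // 134217728
            System.out.println(withSuffix == plainBytes);  // true
        }
    }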