According to Wikipedia, the binary32 format has from 6 to 9 significant decimal digits of precision, and the binary64 format has from 15 to 17.
I found that these significant decimal digit counts are calculated from the mantissa, but I don't understand how. How does one calculate them? Any ideas?
Mantissa of the 32-bit format = 24 bits; mantissa of the 64-bit format = 53 bits.
First, for this question it is better to use the total significand sizes 24 and 53. The fact that the leading bit is not represented is just an aspect of the encoding.
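To see the effective 24-bit significand at work, here is a small Python sketch (an illustration of my own, not part of the question; it uses struct.pack("<f", …) to round a value through binary32): 2^24 fits in 24 bits exactly, while 2^24 + 1 would need 25 bits and gets rounded away.

```python
import struct

def f32(x: float) -> float:
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# 16777216 = 2**24 fits the 24-bit significand exactly, but
# 2**24 + 1 would need 25 bits, so it rounds (ties-to-even) to 2**24.
assert f32(16777216.0) == 16777216.0
assert f32(16777217.0) == 16777216.0
```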
If you are interested only in a rough explanation: each decimal digit carries log2(10) (about 3.32) bits of information. Encoding a single digit on its own takes 4 bits, but here we are talking about encoding several consecutive decimal digits efficiently, so the figure of 3.32 bits per digit is the one to use.
53 bits / log2(10) -> 15.95 (16ish decimal digits)
24 bits / log2(10) -> 7.22 (7ish decimal digits)
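If you want to verify the arithmetic, a minimal Python check (using only the standard math module) reproduces both figures:

```python
import math

# Equivalent decimal digits = significand bits / log2(10)
for name, bits in (("binary32", 24), ("binary64", 53)):
    print(f"{name}: {bits} bits / log2(10) = {bits / math.log2(10):.2f}")
# binary32: 24 bits / log2(10) = 7.22
# binary64: 53 bits / log2(10) = 15.95
```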
If you want to do this properly, you need to account for the fact that the sets of numbers representable in binary and in decimal are not the same. People who ask about the decimal precision of a binary floating-point format are usually assumed to mean one of two things: the number of decimal digits that can round-trip through the binary format and come back unchanged, or the number of decimal digits needed to round-trip a binary floating-point value through decimal and back to the same value. These are the interpretations that make the most sense, and they give the ranges “6 … 9” and “15 … 17”: 6 is the number of decimal digits guaranteed to round-trip through binary32, 9 is the number of decimal digits one must retain to round-trip a binary32 value through decimal, and likewise 15 and 17 for binary64.
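Here is a minimal Python sketch of both directions (an illustration, not a proof; it rounds through binary32 with struct, and parsing decimal text into binary64 before rounding to binary32 is technically a double rounding, which is harmless for this demonstration):

```python
import struct

def f32(x: float) -> float:
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Direction 1: 9 significant decimal digits always round-trip binary32.
x = f32(0.1)                       # nearest binary32 to 0.1
s = f"{x:.9g}"                     # "0.100000001"
assert f32(float(s)) == x          # the original value comes back

# Direction 2: 8 digits are not always enough. Near 10.0 the binary32
# spacing (about 9.5e-7) is finer than the 8-digit decimal spacing
# (1e-6), so two distinct binary32 values can print as the same
# 8-digit string, and one of them must fail to round-trip.
start = struct.unpack("<I", struct.pack("<f", 10.0))[0]
for i in range(start, start + 100):
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    if f32(float(f"{y:.8g}")) != y:
        print(f"8-digit round-trip fails for {y!r} (prints as {y:.8g})")
        break
```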
The author of the blog Exploring Binary is currently writing a series on round-trips. That series is what you should read next if you are not satisfied with the log2(10) ≈ 3.32 explanation.