I am reading a C book that discusses the ranges of floating-point types, and the author gives this table:
Type     Smallest Positive Value    Largest Value        Precision
====     =======================    =============        =========
float    1.17549 x 10^-38           3.40282 x 10^38       6 digits
double   2.22507 x 10^-308          1.79769 x 10^308     15 digits
I don't know where the numbers in the Smallest Positive Value and Largest Value columns come from.
These numbers come from the IEEE-754 standard, which defines how floating-point numbers are represented. The Wikipedia article on IEEE 754 explains how to derive these ranges from the number of bits used for the sign, the mantissa, and the exponent. For example, a float has 1 sign bit, 8 exponent bits, and 23 mantissa bits, so its largest finite value is (2 - 2^-23) x 2^127 ≈ 3.40282 x 10^38 and its smallest positive normal value is 2^-126 ≈ 1.17549 x 10^-38; the same formulas with 11 exponent bits and 52 mantissa bits give the double limits.
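As a quick check, the C standard header <float.h> exposes exactly these limits, and you can recompute them from the bit layout yourself. Here is a minimal sketch, assuming your platform's float and double follow the IEEE-754 single and double formats (which virtually all modern platforms do):

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* The limits as reported by the standard library. */
    printf("FLT_MIN = %e, FLT_MAX = %e, FLT_DIG = %d\n",
           FLT_MIN, FLT_MAX, FLT_DIG);
    printf("DBL_MIN = %e, DBL_MAX = %e, DBL_DIG = %d\n",
           DBL_MIN, DBL_MAX, DBL_DIG);

    /* Recomputed from the IEEE-754 single-precision layout:
       1 sign bit, 8 exponent bits, 23 mantissa bits.
       Smallest positive normal value: 2^-126
       Largest finite value:          (2 - 2^-23) * 2^127 */
    double flt_min = pow(2.0, -126);
    double flt_max = (2.0 - pow(2.0, -23)) * pow(2.0, 127);
    printf("computed float min = %e, max = %e\n", flt_min, flt_max);

    return 0;
}
```

The printed values match the table: FLT_MIN/FLT_MAX give the float row, DBL_MIN/DBL_MAX the double row, and FLT_DIG/DBL_DIG are the 6 and 15 decimal digits of precision. (Values below FLT_MIN can still be stored as subnormals, but the table lists the smallest *normal* value, which is what FLT_MIN means.)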