My professor went over a practice final exam question on IEEE-style floating point. The format is a 5-bit representation: 1 sign bit, 3 exponent bits, and 1 fraction bit. One of the cases we worked through was minus zero.
He gave it the binary representation 1 0000, which I understand. The significand M is 0, which I also understand: denormalized values have significand M = f, the value of the fraction field, and the fraction field here is 0.
However, he put the exponent value E as -3.
This I do not understand. I thought minus zero was denormalized! My book says:
"When the exponent field is all zeros, the represented number is in denormalized form. In this case, the exponent value is E = 1 − Bias, and the significand value is M = f , that is, the value of the fraction field without an implied leading 1."
Since minus zero is denormalized, and the bias for 3 exponent bits is 2^(3-1) - 1 = 3, E should equal 1 - 3 = -2, right?
In my book, positive zero in an IEEE-style 8-bit floating point format with 4 exponent bits, 3 fraction bits, and a bias of 7 (2^(4-1) - 1) has an exponent value E of -6, which is correct since E = 1 - 7 (the bias) because the value is denormalized.
Why is it different in this case? Or did my professor make a mistake?
Zero uses the same exponent as denormals, and like them does not have a leading 1 bit when its representation is decoded. You can call it “denormal” if you want, just be aware that others may use the word in a stricter sense (i.e. consider 0 and denormals disjoint classes of floating-point numbers).
As for the -3, it is probably a slip in the value of the bias when converting between the biased and unbiased exponents; don't read too much into it. The important things to remember are that the exponent field for zero and the denormals is all zero bits, and that the decoded exponent E = 1 - Bias is the smallest exponent value the format can represent.
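To make the arithmetic concrete, here is a small sketch that decodes the 5-bit format from the question (1 sign bit, 3 exponent bits, 1 fraction bit, bias 3). The function name `decode5` and the string-of-bits input are my own illustration, not anything from the course:

```python
def decode5(bits):
    """Decode a 5-bit float given as a string like '10000'.

    Layout: 1 sign bit, 3 exponent bits, 1 fraction bit; bias = 2^(3-1) - 1 = 3.
    """
    sign = -1.0 if bits[0] == '1' else 1.0
    exp_field = int(bits[1:4], 2)    # biased exponent, 0..7
    frac_field = int(bits[4], 2)     # fraction field, 0 or 1
    bias = 2 ** (3 - 1) - 1          # = 3

    if exp_field == 0:               # denormalized case, which includes +/-0
        E = 1 - bias                 # = -2, not -3
        M = frac_field / 2           # M = f, no implied leading 1
    elif exp_field == 0b111:         # all ones: infinity or NaN
        return sign * (float('inf') if frac_field == 0 else float('nan'))
    else:                            # normalized case
        E = exp_field - bias
        M = 1 + frac_field / 2       # implied leading 1

    return sign * M * 2 ** E

decode5('10000')   # minus zero: sign = -1, E = 1 - 3 = -2, M = 0, value -0.0
```

Running `decode5('10000')` produces -0.0: the exponent used in the decoding is -2, exactly as the book's denormalized rule predicts, which supports the answer's point that the -3 was a bias slip rather than a different rule for zero.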