What makes this
long l = 1;
char c = static_cast<char>(l);
float f = 1.0f;
int i = static_cast<int>(f);
better than this
long l = 1;
char c = (char)l;
float f = 1.0f;
int i = (int)f;
when casting one primitive data type to another?
I've got a lot of legacy code that uses the second style for type casting in similar situations, so this is also a question about whether I should (or should not) undertake a full-scale revision of that code.
Future-proofing.
Let's say in the future I do this:
float blah = 1.0f;
float* f = &blah;
Now, int i = static_cast<int>(f); stops compiling, but int i = (int)f; silently does a reinterpret_cast.
static_cast<int> means "this is exactly the conversion I want you to do." (int) means "do whatever you can to get me an int." With the latter, the compiler will go out of its way to get you an int value, and that's rarely (never?) desirable.