I'm trying to implement a function with an interface like this: uint64_t foo(uint32_t, uint32_t). This is a simple implementation:
#include <iostream>
#include <cstdint>

uint64_t foo(const uint32_t &a, const uint32_t &b) {
    return ((reinterpret_cast<const uint64_t &>(a)) +
            (reinterpret_cast<const uint64_t &>(b)));
}

int main() {
    uint32_t k1 = ~0;
    uint32_t k2 = 1;
    std::cout << foo(k1, k2) << "\n";
    return (0);
}
Now my focus is on the reinterpret_cast and the + operator. The + operator should be fine where it is, because it's being applied to two uint64_t values; so the problem must be the reinterpret_cast? I don't get why.

My speculation is that the cast pulls in the chunk of memory next to a or b, so the result of the reinterpret_cast is 50% the original a or b and the other 50% a random chunk of adjacent memory. Is this how the cast really works?

I have already tried several versions of a reinterpret_cast, even with pointers, with no luck.
reinterpret_cast essentially tells the compiler to ignore all its type-safety checks and just accept what you are doing.

You are saying that your reference is not a reference to a 32-bit number but to a 64-bit number. On a system with 8-bit bytes, that means 4 bytes that could contain any data are read as part of your integer. Formally this is undefined behaviour: a uint32_t object is being read through a glvalue of type uint64_t, which violates the strict-aliasing rule. You also have a portability issue on big-endian systems in particular, where the more significant bytes appear first, so you will get a different number even if the extra bytes do happen to be zero.
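To illustrate, here is a minimal sketch that mimics what the 64-bit reference read does, but via memcpy so the behaviour is well defined; the 0xAB filler bytes stand in for whatever happens to sit next to a in memory:

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    uint32_t a = 0xFFFFFFFFu;             // the 32-bit value we actually have

    unsigned char buf[8];
    std::memset(buf, 0xAB, sizeof buf);   // stand-in for the "random" bytes next to a
    std::memcpy(buf, &a, sizeof a);       // the 4 bytes of a fill only half of the buffer

    uint64_t as64;
    std::memcpy(&as64, buf, sizeof as64); // read 8 bytes, as the 64-bit reference would
    std::cout << std::hex << as64 << "\n"; // on a little-endian machine: ababababffffffff
}

Only half of the printed value comes from a; the rest is whatever the neighbouring bytes held, and which half is which depends on endianness.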
The correct way to perform this is static_cast, and to pass by value rather than by reference.
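For example, a minimal sketch of that version, keeping the original uint64_t foo(uint32_t, uint32_t) interface:

#include <cstdint>

uint64_t foo(uint32_t a, uint32_t b) {
    // Widen each operand to 64 bits first, so the addition is done in uint64_t
    // and cannot wrap around at 32 bits.
    return static_cast<uint64_t>(a) + static_cast<uint64_t>(b);
}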
As it is, you can write foo without any casting at all:

uint64_t foo( uint64_t a, uint64_t b ) { return a + b; }

You can call it with 32-bit numbers, which are implicitly widened to uint64_t, and not worry about overflow. (Try it with multiplying them.)
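A quick sketch of the multiplication case, assuming the same cast-free pass-by-value approach (the name mul is just for illustration):

#include <cstdint>
#include <iostream>

uint64_t mul(uint64_t a, uint64_t b) { return a * b; }

int main() {
    uint32_t k1 = ~0;   // 4294967295
    uint32_t k2 = ~0;
    // Each argument is widened to uint64_t before the multiply, so the
    // full 64-bit product is kept instead of wrapping at 32 bits.
    std::cout << mul(k1, k2) << "\n";   // 18446744065119617025
    return 0;
}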