
How could people have used ints to store C pointers historically?


I am currently reading Computer Systems: A Programmer's Perspective by Bryant and O'Hallaron. They remark that

For example, many programmers historically assumed that an object declared as type int could be used to store a pointer. This works fine for most 32-bit programs, but it leads to problems for 64-bit programs.

I am trying to understand how programmers could have done such a thing in the first place. An int is generally signed, so wouldn't storing a pointer of value greater than 2^31 have caused type-casting and errors? This is mostly a historical curiosity I guess, but I figured I'd ask nevertheless.


Solution

  • Firstly, it wouldn't cause many issues unless people did something funny with these ints. E.g. suppose I have the pointers 0xFFFF0000 and 0xFFFF0010 and I cast them to ints: I get -65536 and -65520. If I cast them back to pointers, I get the same pointers. If I subtract the first from the second, I get 16 both for the pointers and for the ints. If I compare them, the first is less than the second both as pointers and as ints; however, this comparison is fragile: 0x7FFF0000 and 0x81000000 would compare the wrong way around, because the latter becomes negative as a signed int. In short, at least round-tripping from pointer to int and back works just fine when pointers and ints are the same width.
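    A small sketch of the arithmetic above, using `uint32_t` bit patterns to stand in for 32-bit pointers (the pointer-to-int conversion itself is implementation-defined, so this just shows what the bits do on a typical two's-complement machine):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Two 32-bit "pointer" bit patterns. */
        uint32_t p1 = 0xFFFF0000u, p2 = 0xFFFF0010u;

        /* Stored as signed 32-bit ints they come out negative
           (implementation-defined conversion, but wraps on common
           two's-complement platforms). */
        int32_t i1 = (int32_t)p1, i2 = (int32_t)p2;
        printf("%d %d\n", i1, i2);              /* -65536 -65520 */

        /* Round trip: the original bit patterns survive. */
        printf("roundtrip: %d\n",
               (uint32_t)i1 == p1 && (uint32_t)i2 == p2);

        /* Differences match too. */
        printf("diff: %d\n", i2 - i1);          /* 16 */

        /* But ordering breaks across the sign boundary: */
        uint32_t q1 = 0x7FFF0000u, q2 = 0x81000000u;
        printf("unsigned q1<q2: %d, signed q1<q2: %d\n",
               q1 < q2, (int32_t)q1 < (int32_t)q2);
        return 0;
    }
    ```

    The unsigned comparison says q1 < q2, while the signed one says the opposite, because 0x81000000 has its top bit set and becomes a negative int.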

    Secondly, I'm pretty sure they meant both int and unsigned int (width is the defining characteristic, not signedness), and pointers were often stored as unsigned ints, which worked flawlessly on 32-bit systems.
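    The portable modern equivalent of that habit is `uintptr_t` from `<stdint.h>` (optional in the standard, but present on common platforms): an unsigned integer type wide enough to round-trip any object pointer on both 32-bit and 64-bit targets.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int x = 42;
        int *p = &x;

        /* Store the pointer in an integer wide enough to hold it,
           instead of a plain int. */
        uintptr_t bits = (uintptr_t)p;
        int *q = (int *)bits;

        /* The round trip preserves the pointer. */
        printf("roundtrip: %d\n", q == p && *q == 42);
        return 0;
    }
    ```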