I was reading up on the difference between 32-bit and 64-bit systems, and came across this blog in the process: https://www.zdnet.com/article/clearing-up-the-3264-bit-memory-limit-confusion/
Now I'm confused, because the blog includes the following note:
Note: Wondering how we arrive at that 4GB limit? Here's the math for 32-bit systems:
2^32 = 4,294,967,296 bytes
4,294,967,296 / (1,024 x 1,024) = 4,096 MB = 4 GB
It's different for 64-bit:
2^64 = 18,446,744,073,709,551,616 bytes
18,446,744,073,709,551,616 / (1,024 x 1,024) = 17,592,186,044,416 MB = 16 EB (exabytes)
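The division itself checks out if you assume each address names one byte (which is exactly the assumption I'm asking about). A quick sanity check in C reproduces the blog's numbers:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A 32-bit pointer can name 2^32 distinct addresses; each address
       is assumed to refer to one byte. */
    uint64_t bytes32 = 1ULL << 32;
    printf("32-bit: %llu bytes = %llu MB = %llu GB\n",
           (unsigned long long)bytes32,
           (unsigned long long)(bytes32 / (1024 * 1024)),
           (unsigned long long)(bytes32 / (1024ULL * 1024 * 1024)));

    /* 2^64 itself doesn't fit in a uint64_t, but
       2^64 bytes = 2^4 * 2^60 bytes = 16 EB. */
    printf("64-bit: 2^64 bytes = 16 EB\n");
    return 0;
}
```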
They state that whether a program is 32-bit or 64-bit changes the memory limit it can use.
What I don't understand is how the bits turn into bytes. If you work out 2 to the power of 32, surely the result is 4,294,967,296 bits and not bytes? And if that were so, the memory limit on a 32-bit system would be 4 gigabits, not 4 gigabytes?
Can someone explain how this works out? Maybe I'm missing something?
Each separately-addressable memory location is a byte. Memory is not bit-addressable; it can only be accessed in byte-sized chunks or larger. That's why setting a single bit in a bitmap requires a read-modify-write of the containing byte or word.
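Here's a minimal sketch in C of what that read-modify-write looks like (illustrative only, not tied to any particular library):

```c
#include <stdint.h>
#include <stddef.h>

/* Set bit `n` in a bitmap. The CPU cannot write a lone bit, so this
   compiles down to: load the containing byte, OR the bit in, and store
   the whole byte back -- the read-modify-write described above. */
static void bitmap_set(uint8_t *bitmap, size_t n) {
    bitmap[n / 8] |= (uint8_t)(1u << (n % 8));
}

/* Test bit `n`: again, the entire containing byte is read and the
   single bit is masked out of it. */
static int bitmap_test(const uint8_t *bitmap, size_t n) {
    return (bitmap[n / 8] >> (n % 8)) & 1u;
}
```

`bitmap[n / 8]` picks out the containing byte, and the shift selects the bit within it; both the load and the store operate on that whole byte because the hardware has no narrower access.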