Tags: 64-bit, 32-bit, 32bit-64bit, processor

Processor, OS: 32-bit, 64-bit


I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms using C#.

I am confused about 32-bit and 64-bit... I mean, I have heard about 32-bit OSes and 32-bit processors, and that they determine the maximum memory a program can have and affect the speed of a program. A lot more questions keep coming to mind.

I tried to go through some computer organization and architecture books, but either I am too dumb to understand what is written in there, or the writers assume that the reader has some CS background.

Can someone explain these things to me in plain, simple English, or point me to something that does?

EDIT: I have read things like "In 32-bit mode, they can access up to 4 GB of memory; in 64-bit mode, they can access much, much more." I want to know the WHY behind all such statements.

BOUNTY: The answers below are really good... especially the one by Martin. But I am looking for a thorough explanation, in plain, simple English.


Solution

  • It really all comes down to wires.

    In digital circuits, only 0's and 1's (usually low voltage and high voltage) can be transmitted from one element (CPU) to another element (memory chip). If I have only 1 wire, I can only send either a 1 or a 0 over the wire per clock cycle. This means I can only address 2 bytes (assuming byte addressing, and that entire addresses are transmitted in just 1 cycle for speed!).

    If I have 2 wires, I can address 4 bytes, because I can send (0, 0), (0, 1), (1, 0), or (1, 1) over the two wires. So basically it's 2 to the power of the number of wires.

    So if I have 32 wires, I can address 4 GB, and if I have 64 wires, I can address a lot more.
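
    To see those powers of two concretely, here is a small C# sketch (just arithmetic for illustration, nothing hardware-specific) that computes how many bytes a given number of address wires can reach:

        using System;
        using System.Numerics;

        class AddressWires
        {
            static void Main()
            {
                // With n wires, each carrying a 0 or a 1, there are 2^n distinct
                // combinations, i.e. 2^n addressable bytes (assuming byte addressing).
                foreach (int wires in new[] { 1, 2, 32, 64 })
                {
                    BigInteger addressable = BigInteger.Pow(2, wires);
                    Console.WriteLine(wires + " wires -> " + addressable + " bytes");
                }
                // 32 wires -> 4294967296 bytes (4 GB)
                // 64 wires -> 18446744073709551616 bytes (a lot more)
            }
        }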

    There are other tricks that engineers can do to address a larger address space than the wires allow for, e.g. splitting the address into two parts and sending one half in the first cycle and the second half in the next cycle. But that means your memory interface will be half as fast.
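
    A toy sketch of that split (the names and the 16-wire width here are made up purely for illustration): a 32-bit address sent over 16 wires would be chopped into a high half and a low half and reassembled on the memory side:

        using System;

        class SplitAddress
        {
            static void Main()
            {
                // Toy illustration: send a 32-bit address over only 16 wires
                // by splitting it into two 16-bit halves, one per cycle.
                uint address = 0xDEADBEEF;

                ushort highHalf = (ushort)(address >> 16);      // sent in cycle 1
                ushort lowHalf  = (ushort)(address & 0xFFFFu);  // sent in cycle 2

                // The memory side glues the two halves back together:
                uint reassembled = ((uint)highHalf << 16) | lowHalf;
                Console.WriteLine(reassembled == address);      // True
            }
        }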

    Edited my comments in here (unedited) ;) and I'm making it a wiki in case anyone has anything interesting to add as well.

    Like other comments have mentioned, 2^32 (2 to the power of 32) = 4,294,967,296, which is 4 GB, and 2^64 is 18,446,744,073,709,551,616. To dig in further (and you probably read this in Hennessy & Patterson): a processor contains registers that it uses as "scratch space" for storing the results of its computations. A CPU only knows how to do simple arithmetic and how to move data around. Naturally, these registers are the same width in bits as the "#-bits" of the architecture, so a 32-bit CPU's registers are 32 bits wide, and a 64-bit CPU's registers are 64 bits wide.

    There will be exceptions to this when it comes to floating point (to handle double precision) or SIMD instructions (single-instruction, multiple-data commands). The CPU loads and saves data to and from the main memory (the RAM). Since the CPU also uses these registers to compute memory addresses (physical and virtual), the amount of memory it can address is also the same as the width of its registers. There are some CPUs that handle address computation with special extended registers, but those I would call "afterthoughts", added after engineers realized they needed them.
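
    Since you program in C#, you can actually see that width from inside a program: the size of a pointer (a memory address) follows the bitness of the process. A small sketch (note it reports how the current process is running, which depends on how the program was built and launched, not only on the CPU):

        using System;

        class Bitness
        {
            static void Main()
            {
                // 4 bytes (32 bits) in a 32-bit process, 8 bytes (64 bits) in a 64-bit one.
                Console.WriteLine("Pointer size: " + IntPtr.Size + " bytes");

                // These two properties exist from .NET 4.0 onwards.
                Console.WriteLine("64-bit process? " + Environment.Is64BitProcess);
                Console.WriteLine("64-bit OS?      " + Environment.Is64BitOperatingSystem);
            }
        }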

    At the moment, 64 bits is quite a lot for addressing real physical memory. Most 64-bit CPUs will omit quite a few wires when it comes to wiring up the CPU to the memory, simply out of practicality: it wouldn't make sense to use up precious motherboard real estate running wires that will always carry 0's. Not to mention that reaching the maximum amount of RAM with today's DIMM density would require something like 4 billion DIMM slots :)

    Other than the increased amount of memory, 64-bit processors offer faster computation for integer numbers larger than 2^32. Previously, programmers (or compilers, which are also programmed by programmers ;) would have to simulate having a 64-bit register by taking up two 32-bit registers and handling any overflow situations. On 64-bit CPUs it is handled by the CPU itself.
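
    A rough sketch of what that simulation amounts to, written here in C# with uint halves purely for illustration (a real compiler does this with registers and an add-with-carry instruction):

        using System;

        class WideAdd
        {
            static void Main()
            {
                ulong a = 0x1FFFFFFFFUL;   // a value larger than 2^32
                ulong b = 3;

                // Split each 64-bit value into two 32-bit halves.
                uint aLow = (uint)a, aHigh = (uint)(a >> 32);
                uint bLow = (uint)b, bHigh = (uint)(b >> 32);

                // Add the low halves first and keep the overflow (the carry).
                ulong lowSum = (ulong)aLow + bLow;
                uint carry = (uint)(lowSum >> 32);

                // Then add the high halves plus the carry.
                uint resultHigh = aHigh + bHigh + carry;
                uint resultLow = (uint)lowSum;

                ulong simulated = ((ulong)resultHigh << 32) | resultLow;
                Console.WriteLine(simulated == a + b);   // True: same answer, more steps
            }
        }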

    The drawback is that a 64-bit CPU (with everything else equal) would consume more power than a 32-bit CPU, simply due to (roughly) twice the amount of circuitry needed. In reality, though, you will never get an equal comparison, because newer CPUs are manufactured on newer silicon processes that leak less power, let you cram more circuitry into the same die size, and so on. 64-bit code also tends to consume more memory, since pointers (and, on some architectures, instructions) are twice as wide. What was once considered "ugly" about x86's variable instruction length is actually an advantage now compared to architectures that use a fixed instruction size.