By default, what is the standard size of a variable that can be read with a single instruction on a 32-bit and on a 64-bit processor?
For example, when I have a double (8 bytes) in C, is it read in one instruction on a 64-bit processor and in two operations on a 32-bit processor?
I think this comes down to the difference between what we call the "memory word" (which equals 1 byte, i.e. the smallest addressable unit) and the "processor word" (I don't know its size, but I suppose it depends on whether the processor is 32-bit or 64-bit, right?).
Thanks
32-bit processors work on 32 bits of data at a time. 64-bit processors work on 64 bits of data at a time.
is it read in one instruction (on a 64-bit processor) and in two operations (on a 32-bit processor)?
A "generic" 32-bit processor would have to fetch two 32-bit chunks of data separately and combine them. The exact number of instructions depends on the processor.
Having said that, many processors have specialized floating-point units (FPUs) to handle floating-point data more efficiently:
http://en.wikipedia.org/wiki/Floating-point_unit
While most concrete 32-bit processors will need two fetch operations to load a 64-bit integer, the same may not be true for floating-point types on the same processor.
Additionally, some processors can process whole vectors of data in a few instructions. Most modern C compilers are able to emit such instructions when appropriate. See for example: