Tags: rust, memory-layout

Why is the size of a tuple or struct not the sum of the members?


use std::mem;

assert_eq!(12, mem::size_of::<(i32, f64)>()); // fails
assert_eq!(16, mem::size_of::<(i32, f64)>()); // succeeds
assert_eq!(16, mem::size_of::<(i32, f64, i32)>()); // succeeds

Why is it not 12 (4 + 8)? Does Rust have special treatment for tuples?


Solution

  • Why is it not 12 (4 + 8)? Does Rust have special treatment for tuples?

    No. A regular struct can (and does) have the same "problem".

    The answer is padding: on a 64-bit system, an f64 should be aligned to 8 bytes (that is, its starting address should be a multiple of 8). A structure normally has the alignment of its most constraining (largest-aligned) member, so the tuple has an alignment of 8.

    This means your tuple must start at an address that's a multiple of 8. The i32 then occupies the first 4 bytes, which would leave the f64 at offset 4, so the compiler inserts 4 bytes of padding to push the f64 to a properly aligned offset 8:

    0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
    [ i32 ] padding [     f64     ]
    
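    You can check both the size and the driving alignment with std::mem (a quick sketch; the values assume a typical 64-bit target where f64 is 8-byte aligned):

    ```rust
    use std::mem::{align_of, size_of};

    fn main() {
        // The tuple's alignment is that of its most-aligned member: f64 (8).
        assert_eq!(8, align_of::<f64>());
        assert_eq!(8, align_of::<(i32, f64)>());
        // 4 bytes of i32 + 4 bytes of padding + 8 bytes of f64 = 16.
        assert_eq!(16, size_of::<(i32, f64)>());

        // A regular struct has the same "problem".
        #[allow(dead_code)]
        struct S { a: i32, b: f64 }
        assert_eq!(8, align_of::<S>());
        assert_eq!(16, size_of::<S>());
    }
    ```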

    "But wait", you shout, "if I reverse the fields of my tuple the size doesn't change!".

    That's true: the diagram above is not quite accurate, because by default rustc is free to reorder your fields to compact structures, so it will actually lay the tuple out like this:

    0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
    [     f64     ] [ i32 ] padding 
    

    which is why your third attempt is 16 bytes:

    0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
    [     f64     ] [ i32 ] [ i32 ]
    

    rather than 24:

    0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
    [ i32 ] padding [     f64     ] [ i32 ] padding 
    
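    The reordering is easy to observe: with the default repr, declaration order doesn't affect the size (again on a 64-bit target; the default layout is unspecified, so this reflects current rustc behaviour rather than a guarantee):

    ```rust
    use std::mem::size_of;

    fn main() {
        // Same members, opposite declaration order: same size either way,
        // because rustc may reorder fields under the default repr.
        assert_eq!(size_of::<(i32, f64)>(), size_of::<(f64, i32)>()); // both 16
        // The "interleaved" order is compacted too: 16 bytes, not 24.
        assert_eq!(16, size_of::<(i32, f64, i32)>());
    }
    ```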

    "Hold your horses" you say, keen eyed that you are, "I can see the alignment for the f64, but then why is there padding at the end? There's no f64 there!"

    Well, that's so the computer has an easier time with sequences: a struct with a given alignment should also have a size that's a multiple of that alignment, so that when you have multiple of them:

    0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
    [     f64     ] [ i32 ] padding [     f64     ] [ i32 ] padding 
    

    they're all properly aligned, and computing where the next one goes is simple (just offset by the size of the struct); it also avoids storing that information everywhere. Basically, an array / Vec is never itself padded; instead, the padding lives inside the struct it stores. This keeps packing a property of the struct, without infecting arrays as well.
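    This property is easy to verify: an array's size is exactly the element count times the element size, with the trailing padding counted inside each element (sizes again assume a 64-bit target):

    ```rust
    use std::mem::size_of;

    fn main() {
        // Each (f64, i32) element is 16 bytes: 8 + 4 + 4 of trailing padding.
        assert_eq!(16, size_of::<(f64, i32)>());
        // The array adds no padding of its own: 4 × 16 = 64 bytes.
        assert_eq!(64, size_of::<[(f64, i32); 4]>());
    }
    ```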


    Using the repr(C) attribute, you can tell Rust to lay out your structures in exactly the order you declared them (it's not an option for tuples, FWIW).

    That is safe, and while it is rarely necessary, there are some edge cases where it matters; those I know of (there are probably others) are:

    • Interfacing with foreign (FFI) code, which expects a very specific layout, that is in fact the origin of the flag's name (it makes Rust behave like C).
    • Avoiding false sharing in high-performance code.
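    A sketch of the difference repr(C) makes (the struct names are mine; sizes assume a 64-bit target, and the 16-byte figure for the default repr reflects current rustc behaviour rather than a guarantee):

    ```rust
    use std::mem::size_of;

    // Laid out exactly in declaration order, as C would do it.
    #[repr(C)]
    #[allow(dead_code)]
    struct CLayout { a: i32, b: f64, c: i32 }

    // Default repr: rustc may (and currently does) reorder to compact.
    #[allow(dead_code)]
    struct RustLayout { a: i32, b: f64, c: i32 }

    fn main() {
        assert_eq!(24, size_of::<CLayout>());    // i32 + pad + f64 + i32 + pad
        assert_eq!(16, size_of::<RustLayout>()); // f64 + i32 + i32
    }
    ```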

    You can also tell rustc to not pad the structure using repr(packed).
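    For example (a sketch; the struct name is mine, and the sizes assume i32 = 4 and f64 = 8 bytes):

    ```rust
    use std::mem::{align_of, size_of};

    // packed removes all padding: the size becomes the plain sum of the
    // members' sizes, and the alignment drops to 1.
    #[repr(C, packed)]
    #[allow(dead_code)]
    struct Packed { a: i32, b: f64 }

    fn main() {
        assert_eq!(12, size_of::<Packed>()); // 4 + 8, no padding
        assert_eq!(1, align_of::<Packed>());
        // Note: taking a reference to a field of a packed struct is an
        // error in recent Rust; copy the field out, or use
        // std::ptr::addr_of! with read_unaligned instead.
    }
    ```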

    That is much riskier: it will generally degrade performance (most CPUs are rather cross with unaligned data) and might crash the program or return the wrong data entirely on some architectures. That is highly dependent on the CPU architecture and on the system (OS) running on it; per the kernel's Unaligned Memory Accesses document:

    1. Some architectures are able to perform unaligned memory accesses transparently, but there is usually a significant performance cost.
    2. Some architectures raise processor exceptions when unaligned accesses happen. The exception handler is able to correct the unaligned access, at significant cost to performance.
    3. Some architectures raise processor exceptions when unaligned accesses happen, but the exceptions do not contain enough information for the unaligned access to be corrected.
    4. Some architectures are not capable of unaligned memory access, but will silently perform a different memory access to the one that was requested, resulting in a subtle code bug that is hard to detect!

    So "Class 1" architectures will perform the correct accesses, possibly at a performance cost.

    "Class 2" architectures will perform the correct accesses, at a high performance cost (the CPU needs to call into the OS, and the unaligned access is converted into an aligned access in software), assuming the OS handles that case (it doesn't always, in which case this degrades to a class 3 architecture).

    "Class 3" architectures will kill the program on unaligned accesses (since the system has no way to fix them up).

    "Class 4" will perform nonsense operations on unaligned accesses and are by far the worst.

    Another common pitfall of unaligned accesses is that they tend to be non-atomic (since they must expand into a sequence of aligned memory operations plus manipulations of those), so you can get "torn" reads or writes even for otherwise atomic accesses.