Here's a function:
void func(char *ptr)
{
    *ptr = 42;
}
Here's the (trimmed) output of gcc -S function.c:
func:
.LFB0:
        .cfi_startproc
        pushq %rbp
        .cfi_def_cfa_offset 16
        .cfi_offset 6, -16
        movq %rsp, %rbp
        .cfi_def_cfa_register 6
        movq %rdi, -8(%rbp)
        movq -8(%rbp), %rax
        movb $42, (%rax)
        nop
        popq %rbp
        .cfi_def_cfa 7, 8
        ret
        .cfi_endproc
I can use that function as:
func(malloc(1));
or as:
char local_var;
func(&local_var);
The question is: how does the processor determine which segment register to use when translating the effective address into a linear (virtual) address in this instruction? (It could be DS as well as SS.)
movb $42, (%rax)
I have an x86_64 processor.
The default segment is DS; SS is the default only when the base register of the addressing mode is rSP or rBP. Since this instruction addresses memory through %rax, the processor uses DS. (An instruction can also carry a segment-override prefix to select a different segment explicitly.)
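To make that concrete, here is a minimal sketch (GNU C inline assembly, AT&T syntax; assumes gcc on 64-bit Linux, where the GS base of an ordinary user thread is 0, so the override is harmless):

#include <stdio.h>

int main(void)
{
    char c1 = 0, c2 = 0;
    char *p1 = &c1, *p2 = &c2;

    /* base register other than %rsp/%rbp -> default segment DS */
    __asm__ volatile ("movb $42, (%0)" : : "r"(p1) : "memory");

    /* 0x65 prefix: explicit GS override; with a GS base of 0 this
       resolves exactly as a plain DS-default store would */
    __asm__ volatile ("movb $42, %%gs:(%0)" : : "r"(p2) : "memory");

    printf("c1=%d c2=%d\n", c1, c2);  /* expected: c1=42 c2=42 */
    return 0;
}

The two stores differ only in encoding; with a zero segment base, the override changes the instruction bytes but not the resulting linear address.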
In 64-bit mode, it doesn't matter which segment is used, because the segment base is always treated as 0 and segment permissions are ignored. (The exceptions are FS and GS, whose bases can still be set to nonzero values; that is why operating systems keep them around as pointers for thread-local storage.)
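You can observe the FS exception on a live system. A hedged sketch (Linux/x86-64 only: SYS_arch_prctl and ARCH_GET_FS are Linux-specific interfaces) that reads the current thread's FS base, which glibc points at the thread-local-storage block:

#define _GNU_SOURCE
#include <asm/prctl.h>    /* ARCH_GET_FS */
#include <stdio.h>
#include <sys/syscall.h>  /* SYS_arch_prctl */
#include <unistd.h>       /* syscall() */

int main(void)
{
    unsigned long fs_base = 0;

    /* ask the kernel for this thread's FS segment base */
    if (syscall(SYS_arch_prctl, ARCH_GET_FS, &fs_base) != 0) {
        perror("arch_prctl");
        return 1;
    }
    printf("FS base = %#lx\n", fs_base);  /* nonzero: points at TLS */
    return 0;
}

DS, ES, and SS have no such escape hatch; their bases are forced to 0 in 64-bit mode.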
In 32-bit mode, most OSes set the base of all segments to 0 and give them the same permissions (the "flat" memory model), so again it doesn't matter.
In code where it does matter (especially 16-bit code that needs more than 64 KB of memory), the code must use far pointers, which include the segment selector as part of the pointer value. The software must load that selector into a segment register in order to perform the memory access.
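For flavor, a sketch in the old Borland/Turbo C dialect for 16-bit DOS (far, MK_FP, and dos.h are compiler extensions of that era, not standard C):

#include <dos.h>   /* MK_FP: build a segment:offset far pointer */

int main(void)
{
    /* 0xB800:0000 is color text-mode video memory on a PC */
    char far *video = (char far *) MK_FP(0xB800, 0x0000);

    /* the compiler loads 0xB800 into a segment register (e.g. ES)
       before the store, roughly: les bx, video / mov es:[bx], 'A' */
    *video = 'A';   /* an 'A' appears in the top-left screen cell */
    return 0;
}

Here the segment really is part of the pointer's value, which is exactly what the flat models above let you stop thinking about.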