In C, a typical way to bind a socket looks like this:
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

int main(void)
{
    int server_socket_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    int port_number = 55555;

    memset(&addr, 0, sizeof(addr)); /* zero the struct, including sin_zero */
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port_number);

    /* bind() returns 0 on success and -1 on error */
    int result = bind(server_socket_fd, (struct sockaddr *)&addr, sizeof(addr));
    if (result == 0)
    {
        // Stuff
    }
}
I am wondering why the cast from sockaddr_in to sockaddr works, since I can't find any documentation explaining it. It just seems like everyone does it. Why does the typecast work here? I am not asking why we cast it, which has already been answered here; I am asking why it works.
The sockaddr struct essentially has only one meaningful field, the address family. Code that receives this structure can use that field to determine the actual type of the structure. All the structures that are actually used have this field as their first member, so it sits at the same offset and its value can always be read reliably. The implementations also pad the structures to the same size, so the memory layout is equally predictable. This is what makes the cast work.
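As an illustration, here is a minimal sketch (my own, not from any particular implementation) of how receiving code can branch on that shared first field before casting:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Sketch only: inspect the common first field, then cast accordingly. */
void print_port(const struct sockaddr *sa)
{
    switch (sa->sa_family)
    {
    case AF_INET:
        printf("IPv4 port: %u\n",
               ntohs(((const struct sockaddr_in *)sa)->sin_port));
        break;
    case AF_INET6:
        printf("IPv6 port: %u\n",
               ntohs(((const struct sockaddr_in6 *)sa)->sin6_port));
        break;
    default:
        printf("unhandled address family %d\n", sa->sa_family);
        break;
    }
}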
For example, Microsoft defines the sockaddr structure in Visual Studio 2017 as
struct sockaddr {
    unsigned short sa_family;
    char sa_data[14];
};
Here sa_data is sized to the maximum of all the different socket address structures, so any "child" struct that may be passed through these APIs must carry its data in exactly those 14 bytes, no more and no less.
Whereas sockaddr_in is
struct sockaddr_in {
    short sin_family;
    unsigned short sin_port;
    struct in_addr sin_addr;
    char sin_zero[8];
};
Here sin_port and sin_addr require six bytes in total, so eight bytes of padding (sin_zero) are used to keep the size the same as that of sockaddr.
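If you want to verify this on your own platform, a quick check like the following (my own sketch, not part of the original answer) prints both sizes; on common platforms both come out as 16 bytes, though the exact value is implementation-defined:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* Both are typically 16 bytes; the padding keeps them equal. */
    printf("sizeof(struct sockaddr)    = %zu\n", sizeof(struct sockaddr));
    printf("sizeof(struct sockaddr_in) = %zu\n", sizeof(struct sockaddr_in));
    return 0;
}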
Of course, it would be possible to create, for example, a sockaddr_un, set its address family to claim it is a sockaddr_in, and any code receiving the structure would then cast it to the wrong type and read completely wrong values.
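That is why defensive code only trusts the cast after checking the tag. A hypothetical helper (my own naming, purely illustrative) might look like this:

#include <stddef.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Hypothetical checked cast: only convert when the tag says AF_INET. */
const struct sockaddr_in *as_sockaddr_in(const struct sockaddr *sa)
{
    if (sa == NULL || sa->sa_family != AF_INET)
        return NULL; /* wrong (or lying) family tag: refuse to cast */
    return (const struct sockaddr_in *)sa;
}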