When using select(), I understand that the process is:
What I don't understand, however, is what exactly it means for a file descriptor to be "set". In this documentation, it says that it means the file descriptor is part of the specified fd_set. But if FD_ISSET() checks whether something happened with the file descriptor, why are you "setting" each file descriptor at the beginning of each iteration, before select() is even called? Aren't they only supposed to be "set" when something changes? Can they be "unset" at some point before select() returns?
The select bits for read, write and exception are set for the file handles that you want the kernel to look at.
The kernel will loop over the bits up to (but not including) the limit you provide in the first argument to select, which should be the highest file descriptor plus one. It overwrites the bits you sent with the results of the select.
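That overwrite is exactly why the sets get rebuilt before every call. A minimal sketch of one round trip, using a hypothetical helper name and a plain fd rather than a socket:

```c
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical helper: block until fd is readable, then report whether
   its bit survived in the result set.  The fd_set must be rebuilt from
   scratch on every call, because select() overwrites it with results. */
int wait_readable(int fd)
{
    fd_set readfds;
    FD_ZERO(&readfds);      /* clear every bit */
    FD_SET(fd, &readfds);   /* ask the kernel to watch this fd */

    /* First argument is the highest-numbered fd plus one. */
    int ready = select(fd + 1, &readfds, NULL, NULL, NULL);
    if (ready < 0)
        return -1;
    return FD_ISSET(fd, &readfds) ? 1 : 0;
}
```

If you skipped the FD_ZERO/FD_SET on the second iteration, you would be passing last round's *results* back in as this round's *request*, which is the confusion in the question.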
select returns the number of bits it set. That matters because the kernel isn't required to examine all of the handles; it might return after just one. So you can count the ready bits as you find them and stop once the count matches select's return value, instead of scanning the entire bitmask. (If you have a lot of file handles, you should be using epoll instead anyway.)
(Although I think most current Unixish kernels scan all the bits because there were bugs where a low-numbered file handle could starve the high numbers by always reporting ready.)
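The early-exit scan can be sketched like this, with a hypothetical helper that walks a result set already filled in by select() (strictly, select's return value counts ready bits across all three sets; this sketch assumes only the read set was passed):

```c
#include <sys/select.h>

/* Hypothetical helper: collect the ready descriptors from a result
   set, stopping as soon as we've seen as many as select() reported.
   'nready' is select()'s return value; returns the count found. */
int collect_ready(fd_set *result, int nfds, int nready,
                  int *out, int out_cap)
{
    int found = 0;
    for (int fd = 0; fd < nfds && found < nready; fd++) {
        if (FD_ISSET(fd, result)) {     /* this fd's bit survived */
            if (found < out_cap)
                out[found] = fd;
            found++;                    /* one fewer left to find */
        }
    }
    return found;
}
```

With two bits set and nready == 2, the loop stops at fd 7 instead of walking all the way to nfds.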
You would set the bitmasks to send into select based on what you need. If you expect to read from the socket, set its read bit. If you have data waiting to write to the socket, set the write bit. Etc.
You probably want to always set the read bit, because that's how you find out the socket closed: select reports it readable and read returns 0 bytes. Otherwise you only discover the close by writing to the dead socket and getting an EPIPE error.
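Detecting the close through the read bit looks roughly like this (demonstrated with a pipe here; the pattern is the same for a socket, and the helper name is made up):

```c
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical helper: wait until fd is readable, then distinguish
   real data from EOF.  Returns 1 if the peer closed (readable but
   read() gives 0 bytes), 0 if data arrived, -1 on error. */
int check_for_close(int fd)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    if (select(fd + 1, &readfds, NULL, NULL, NULL) < 0)
        return -1;

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n < 0)
        return -1;
    return n == 0 ? 1 : 0;   /* 0 bytes on a "readable" fd means EOF */
}
```

This is the subtle part: a closed connection shows up as *readable*, and only the zero-byte read tells you it was actually a close.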
It is a major mistake to assume the kernel will buffer all the writes you do to a socket, which is why there is a write set for select: the write bit is set when there is buffer space to write more data, probably at least a page (4 KB).