Typically, the seeding of srand() is done by:
srand(time(NULL));
In my case, I use random numbers to generate an identifier for my client process at runtime on the network. The process sometimes restarts and generates a new identifier. As the number of clients grows, there's a good chance that two clients call srand(time(NULL))
within the same second, which produces two identical identifiers, i.e. a collision as seen from the server side. Some people suggested a finer resolution:
srand((time.tv_sec * 1000) + (time.tv_usec / 1000));
The trouble here is that the seed repeats every 24 days or so (the millisecond count wraps around), and when the number of machines is large enough, there's still a chance of collision. There's another solution:
srand(time.tv_usec * time.tv_sec);
This seems problematic to me too, because the product modulo the seed's width (the higher bits overflow and are discarded) is not evenly distributed within the range of the unsigned int
seed value. For example, whenever time.tv_usec == 0
the product is zero, so every exact second boundary leads to the same seed.
So is there a way to seed srand() in my case?
Edit: the client runs on Linux, Windows, Android and iOS, so /dev/random
or /dev/urandom
isn't always available.
P.S. I'm aware of the GUID/UUID approach, but I'd like to know if it's possible to just seed srand() properly in this case.
You have two domains: clients and processes, so you need a unique identifier for each. Processes can be distinguished by the process ID. For clients, I suggest using the MAC address, which is supposed to be unique for each network interface. I believe all the platforms you list support sockets, so the SIOCGIFHWADDR ioctl may be supported.
The only problem is that MAC addresses are 48 bits and PIDs are typically 32 bits, so you have to pick the highest-entropy bits of the two values for your srand() seed. I suggest the lower 16 bits of the PID and the lower 16 bits of the MAC address.