Some people run a firewall on their laptops that not only blocks incoming traffic on local ports (except those their application needs) but also drops messages unless they originate from a specific source port. We're talking about a local UDP server listening for UDP broadcasts. The problem is that the remote client uses a random source port, say 1024, which gets blocked unless they explicitly tell the firewall to accept it.
What puzzles me is that, as far as I know from using sockets in my own programs, the client usually gets its port number from the OS, and only a server binds its socket to a specific port. Right?
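To illustrate what I mean, here is a minimal sketch of the usual pattern as I understand it (port numbers are made up for the example): the server binds to a fixed, well-known port, while the client never calls bind() and the OS assigns it an ephemeral source port on the first send.

```python
import socket

SERVER_PORT = 9876  # hypothetical application port, chosen for the example

# Server side: bind explicitly to a fixed port so clients know where to reach it.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("", SERVER_PORT))

# Client side: no bind() call; the OS picks an ephemeral source port
# implicitly when the first datagram is sent.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", SERVER_PORT))
print("OS-assigned client port:", client.getsockname()[1])  # e.g. 54231
```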
In my literature, and in tutorials and code snippets on the web, I haven't found any hint that clients should be using fixed port numbers at all.
So how is this handled in reality? Am I missing something? Are there client applications out there that use fixed ports? Is it actually useful to filter traffic by remote source port with a firewall? And if so, how much security does that actually add?
Thanks in advance for any enlightenment...
Although the default APIs let the network stack select a local port for client connections, clients may specify a fixed port for various reasons.
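For example, a client can pin its own source port by calling bind() before sending, which is exactly the pattern a source-port-filtering firewall implicitly assumes. A minimal sketch (all port numbers and addresses here are illustrative, not from your setup):

```python
import socket

FIXED_CLIENT_PORT = 5000              # hypothetical port agreed with the firewall rule
SERVER_ADDR = ("192.168.1.10", 9876)  # hypothetical server address

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Allow quick restarts while the old binding lingers in the stack.
client.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Pin the source port instead of letting the OS choose an ephemeral one.
client.bind(("", FIXED_CLIENT_PORT))
client.sendto(b"hello", SERVER_ADDR)
```

The trade-off is that only one client instance per host can hold the fixed port at a time, which is one reason most client code leaves port selection to the OS.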