I can't manage to get ServerSocket to use IPv4 instead of IPv6, which seems to be the default behaviour on my system.
Running
new ServerSocket(11000, queueLimit, InetAddress.getByName("0.0.0.0"));
will result in
➜ ~ netstat -an | grep 11000
tcp46 0 0 *.11000 *.* LISTEN
➜ ~ lsof -i :11000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 2845 myuser 383u IPv6 0x5ba3bfaea6c7372d 0t0 TCP *:irisa (LISTEN)
As you can see, the socket listening on port 11000 is an IPv6 socket, even though I specified the IPv4 wildcard address "0.0.0.0" when creating the ServerSocket.
On the other hand, if I pass -Djava.net.preferIPv4Stack=true
as a VM option, I get the following:
➜ ~ netstat -an | grep 11000
tcp4 0 0 *.11000 *.* LISTEN
➜ ~ lsof -i :11000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 3628 myuser 384u IPv4 0x5ba3bfaeaafaa08d 0t0 TCP *:irisa (LISTEN)
As you can see, the process now correctly listens on port 11000 on an IPv4 address, which is what I want, but I can only achieve this by passing a specific VM option.
How can I reliably decide which version of the IP stack to listen on when opening a ServerSocket?
IPv6 sockets can also listen for incoming IPv4 connections, as you can see from the tcp46 socket type. There is nothing wrong with that. IPv6 is growing really fast, and making sure your software can work with both IPv4 and IPv6 is good practice that will prevent many issues in the future (and today).
Forcing a socket to listen only to IPv4 is strongly discouraged.
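To see the dual-stack behaviour in action, here is a minimal sketch: a ServerSocket bound to the wildcard address (an IPv6 socket by default on a dual-stack system) accepting a connection made to the IPv4 loopback address. The class name and the use of port 0 (any free port) are illustrative choices, not from the original post.

```java
import java.net.ServerSocket;
import java.net.Socket;

public class DualStackDemo {
    public static void main(String[] args) throws Exception {
        // Binding with no explicit address uses the wildcard address; on a
        // dual-stack system (without -Djava.net.preferIPv4Stack=true) this
        // creates an IPv6 socket that also accepts IPv4 connections.
        // Port 0 asks the OS for any free port.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // Connect a client over the IPv4 loopback address. The IPv6
            // server socket accepts it as an IPv4-mapped connection.
            try (Socket client = new Socket("127.0.0.1", port);
                 Socket accepted = server.accept()) {
                System.out.println("accepted from "
                        + accepted.getRemoteSocketAddress());
            }
        }
    }
}
```

Running this without any VM options shows the server accepting the IPv4 client even though the listening socket itself is IPv6, which is exactly why the tcp46 entry in netstat is nothing to worry about.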