Tags: ssh, docker, iptables

Port-based routing in a Docker container


I have a Docker application container (node:latest image) which has two network interfaces:

  • eth1: The default interface; the bridge network between all containers of a service. (Managed by pipework, but I can't change anything at that level.)
  • eth0: A regular docker0 interface, which has access to everything except the hosts on eth1.

And here is the default routing table:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.25.88.254    0.0.0.0         UG    0      0        0 eth1
10.25.88.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
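
For reference, the table above is plain route -n output; the same state can be inspected with the iproute2 tools, assuming they are available in the image:

ip addr show     # lists eth0/eth1 with their addresses
ip route show    # the same routing table, in iproute2 format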

As you can see, it is necessary to keep eth1 as the default interface, but this causes a problem: my application needs SSH access to some remote servers that are not reachable through eth1, while eth0 does have access to them. So I need to change the route for these requests. My first solution was this:

route add -net 10.20.33.0/24 gw 172.17.42.1

This approach works, and I get access to the addresses in the range 10.20.33.0/24. But it blocks access from those hosts to the application itself: the application serves on port 80, and after adding this route, all requests to it from hosts in 10.20.33.0/24 fail, presumably because the reply packets now leave through eth0 and never make it back along the original path.
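
If you want to undo this experiment, the route can be removed again with the matching delete command (a sketch mirroring the add above):

route del -net 10.20.33.0/24 gw 172.17.42.1
# or, equivalently, with iproute2:
ip route del 10.20.33.0/24 via 172.17.42.1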

So I suppose that route is global and affects all inbound/outbound traffic for that range of IPs. I searched all over Stack Overflow for a way to route only the outgoing SSH requests, and so far this is what I have:

# Initialize route table
echo 1 p.ssh > /etc/iproute2/rt_tables
ip route add table p.ssh default via 172.17.42.1

# Mark Packet with matching D.Port
iptables -A PREROUTING  -t mangle -p tcp --dport 22 -j MARK --set-mark 1
iptables -A POSTROUTING -t nat -o eth0 -p tcp --dport 22 -j SNAT --to 172.17.42.1


#IP Route
ip rule add fwmark 1 table p.ssh
ip route flush cache

#IP Stack
#This is the missing part from the guide
echo 1 > /proc/sys/net/ipv4/ip_forward
for f in /proc/sys/net/ipv4/conf/*/rp_filter ; do echo 0 > $f ; done
echo 0 > /proc/sys/net/ipv4/route/flush

But it is not working. I tried to log the marked packets with iptables so that I could spot the issue in the process, but no syslog daemon was running inside the container, so I installed one (apt-get install rsyslog) and added this rule:

iptables -A PREROUTING  -t mangle -p tcp --dport 22 -j LOG --log-level 4 --log-prefix "fwmark 1: "

But it's not logging anything either.
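
A note for readers stuck at the same point: the iptables LOG target writes to the kernel log, which a container's own rsyslog typically cannot see. The per-rule packet counters are a more reliable check here:

iptables -t mangle -L PREROUTING -v -n   # counters stuck at zero mean the rule never matches
ip rule show                             # confirms the fwmark rule is installed
ip route show table p.ssh                # confirms the table holds the default route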


Solution

  • After a couple of days I was able to solve this problem. I used tcpdump as follows to find out whether the traffic was being routed through the eth0 interface or not:

    tcpdump -i eth0 port ssh
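
    (To double-check, the same capture can be run against eth1; if the SSH packets still show up there, the policy routing is not taking effect:)

    tcpdump -i eth1 port ssh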
    

    And it turned out that the first problem was with the iptables marking command: packets generated by the container itself never traverse the PREROUTING chain; they enter the stack through the OUTPUT chain. So instead of marking the requests on PREROUTING, I marked them on OUTPUT, as follows:

    iptables -t mangle -A OUTPUT -p tcp --dport 22 -j MARK --set-mark 1
    

    Now I was able to see the SSH requests on eth0, but I still couldn't connect. It turned out that these requests need to be masqueraded to work properly, since otherwise they leave eth0 with a source address the remote hosts cannot route replies back to:

    iptables --table nat --append POSTROUTING -o eth0 -p tcp --dport 22 -j MASQUERADE
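
    (Whether the masquerade rule actually matches can be confirmed from its packet counters:)

    iptables -t nat -L POSTROUTING -v -n   # counters increase once per new SSH connection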
    

    The final script now looks like this:

    REMOTE_HOSTS=10.20.33.0/24
    HOSTS_ADDR=172.17.42.1
    
    # Add table
    echo 1 p.ssh >> /etc/iproute2/rt_tables
    # Add route
    ip rule add fwmark 1 table p.ssh
    ip route add $REMOTE_HOSTS via $HOSTS_ADDR dev eth0 table p.ssh
    
    # Sets mark correctly
    iptables -t mangle -A OUTPUT -p tcp --dport 22 -j MARK --set-mark 1
    iptables --table nat --append POSTROUTING -o eth0 -p tcp --dport 22 -j MASQUERADE
    
    #IP Stack
    echo 1 > /proc/sys/net/ipv4/ip_forward     # Enable IP forwarding
    for f in /proc/sys/net/ipv4/conf/*/rp_filter ; do echo 0 > $f ; done
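
    To verify that the policy routing behaves as intended, a quick sanity check (a sketch; 10.20.33.5 stands in for any host in the remote range):

    ip route get 10.20.33.5 mark 1   # should resolve via 172.17.42.1 dev eth0
    ip route get 10.20.33.5          # unmarked traffic still follows the main table (eth1)
    ssh -v user@10.20.33.5           # hypothetical user/host; -v prints the connection steps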