
CoreOS security


I'm playing with CoreOS and DigitalOcean, and I'd like to start allowing internal communication between my containers.

I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.

I've explored a lot of options for this, but none of them seem satisfactory:

  • Using the '-p' flag, I can ensure Docker binds to the local interface, but this has two downsides:
    • I can't easily test services by SSHing in, because that traffic originates from localhost
    • I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
  • I tried using flannel, but it doesn't make the traffic private (or I didn't set it up right)
  • I considered using iptables on the containers to prevent external access, but that doesn't seem as secure
  • I tried using iptables on the CoreOS hosts, but it's tricky, and I couldn't get it working.

So what's the best approach? Whatever it is, I'll invest the time to make it work.

Overall, I guess I need to find something that:

  • I can roll out to all the hosts reliably
  • Is reasonably flexible going forward
  • Allows for 'edge machines' which are accessible from the wider internet.

Solution

I'll go into how I ended up solving this. Thanks to larsks for their help; in the end, their approach was the correct one. It's tricky on CoreOS, though, because there aren't really stable addresses like larsks assumes. The whole point of CoreOS is to be able to forget about IP addresses.

I solved this by finding a not-too-bad way to inject the IP address into the command in the service file. The tricky thing is that the service file doesn't really support a lot of the shell features I expected. What I wanted to do was assign the machine's IP address to a variable and then inject it into the command:

ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');
/usr/bin/docker run -p $ip:7000:7000 ...

But, as mentioned, that doesn't work. So what to do? Get the shell!

ExecStart=/usr/bin/sh -c "\
     export ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
     echo $ip;\
     /usr/bin/docker run -p $ip:7000:7000"

I hit a few problems along the way.

  1. I'm pretty sure the backslash line continuations mean there aren't actually newlines in that command, so I had to add the ';' characters to separate the statements.
  2. When you test the above sh -c command in a shell, it behaves very differently from when systemd runs it: in the shell you need to escape the '$' characters, while in systemd unit files you don't (see the example after this list).
  3. I included the echo so that I could see what the command thought the IP was.
  4. When I was doing all this, I actually inserted a small webserver into the Docker image, so that I could just test using curl.
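
For example, here is the same command in a form you can paste into an interactive shell for testing. This is only a sketch of the testing step (not part of the unit file), with the '$' characters escaped as item 2 describes:

sh -c "\
     export ip=\$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
     echo \$ip"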

The downsides of this approach are that it's tied to the way ifconfig formats its output, and to IPv4. In fact, this approach doesn't work on my Linux Mint laptop, where ifconfig produces differently formatted output. The important lesson here is to prefer tools that output YAML or JSON, so that shell JSON tools can pull the values out reliably.
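
As a sketch of that idea, assuming a reasonably recent iproute2 (with JSON output support) and jq are available on the host, and noting that the exact JSON field layout can vary between versions, the extraction could look something like this:

# Ask for the IPv4 addresses on eth1 as structured JSON, then pull the value out with jq
ip=$(ip -4 -json addr show eth1 | jq -r '.[0].addr_info[0].local')
echo $ip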


Solution

  • I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.

    This is exactly the behavior that you get with the -p option when you specify an IP address. Let's say I have a host with two external interfaces, eth0 (with address 10.0.0.10) and eth1 (with address 192.168.0.10), and the docker0 bridge at 172.17.42.1/16.

    If I start a container like this:

    docker run -p 192.168.0.10:80:80 -d larsks/mini-httpd
    

    This will start a container that is accessible over the eth1 interface at 192.168.0.10, port 80. This service is also accessible -- from the host on which the container is located -- at the address assigned to the container on the docker0 network. This would be something like 172.17.0.39, port 80.

    This seems to meet your goals:

    • The container port is exposed over the "private" eth1 interface.
    • The container port is accessible from the host.
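
    For example, both of those could be checked roughly like this (the container ID, and the 172.17.0.39 address from above, are placeholders; this assumes curl is available):

    # From another machine on the private network, hit the eth1 address
    curl http://192.168.0.10:80/

    # From the host itself, look up the container's docker0 address and hit that
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container-id>
    curl http://172.17.0.39:80/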

    I can't easily test services by SSHing in, because that traffic originates from localhost.

    If you were running ssh inside a container, you would ssh to it at the "internal" address assigned by Docker. That said, you may want to consider not running ssh inside your containers at all, and relying on tools like docker exec instead.
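
    For instance, rather than keeping sshd in the image, you can usually get a shell with something like this (the container name "web" is just a placeholder):

    docker exec -it web /bin/sh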

    I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on

    With this solution, there is no need to inject the machine's IP address into the container.
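
    As a rough sketch of how the same mechanism also covers the "edge machines" goal from the question (the image names here are placeholders): bind private services to the internal interface, and bind edge services to the public one or to all interfaces.

    # Private service: only reachable over the internal eth1 network
    docker run -p 192.168.0.10:8080:8080 -d internal-api

    # Edge machine: bind to the public eth0 address (or drop the address to bind on all interfaces)
    docker run -p 10.0.0.10:80:80 -d public-frontend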