I am currently building an Ubuntu Docker container and trying to allow users to execute bash functions from outside the running container. It would look something like the following:
atlas_cli.sh
#!/bin/bash
# Shell Scripts to execute commands against various Atlas CLI Commands
test_func()
{
echo "This is working"
}
From outside the container they could then run the following to have the bash function executed inside the container:
atlas_build git:(wh/atlas_image) ✗ docker exec atlas_build-atlas-1 test_func
This is working
What I am finding is that while I am able to append the atlas_cli.sh file to .bashrc and have the functions be executable in an interactive terminal inside the container, I cannot figure out how to make these functions executable from outside the container.
The Dockerfile for the Atlas image I am building:
FROM ubuntu:latest
RUN mkdir /atlas \
&& apt-get update \
&& apt-get install -y curl \
&& curl -sSf https://atlasgo.sh | sh
WORKDIR /atlas
COPY atlas.hcl .
COPY atlas_cli.sh .
RUN cat atlas_cli.sh >> /root/.bashrc
ENTRYPOINT ["tail", "-f", "/dev/null"]
Ideally I would be able to allow users at my company to execute commands either from within the container or from outside it. I've sorted the former with the RUN cat atlas_cli.sh >> /root/.bashrc line in the Dockerfile above.
You can't do this as you've described.
The normal operation of docker exec is to take the administrator-provided command and run it directly as a process. It does not run a shell unless you explicitly ask it to, and it does not go through paths like a Dockerfile ENTRYPOINT that could potentially rewrite the command.
So for this to work, when an administrator runs docker exec, they'd have to explicitly specify that they want to run a shell, and that they need it to read shell dotfiles, sort of like
docker exec the-container \
bash -l -c 'test_func'
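The underlying constraint is that shell functions exist only inside a shell that has sourced the file defining them; they are never on $PATH. A local sketch of the same mechanic (the /tmp path here is illustrative; inside the container the file to source would be /atlas/atlas_cli.sh):

```shell
# Write a file defining a shell function, analogous to atlas_cli.sh.
cat > /tmp/atlas_cli_demo.sh <<'EOF'
test_func()
{
echo "This is working"
}
EOF

# Running the function name as a bare command fails: it is not a program
# on $PATH, which is why "docker exec container test_func" cannot work.
bash -c 'test_func' 2>/dev/null || echo "bare command: not found"

# Explicitly sourcing the file first makes the function available; this is
# the effect you need the exec'd shell to achieve, whether via -l reading
# dotfiles or via sourcing the file by hand.
bash -c '. /tmp/atlas_cli_demo.sh && test_func'
```

Note that bash -l works here only because Ubuntu's default /root/.profile sources /root/.bashrc; sourcing the function file explicitly is less fragile.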
A comment suggests changing these from shell functions to standalone scripts. For example, if you have
#!/bin/sh
echo "This is working"
and in your Dockerfile you COPY that script to a location that's normally in $PATH
COPY test_script /usr/local/bin
then you could directly run it via normal means, without specifically needing a shell
docker run --rm your-image test_script
docker exec existing-container test_script
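The same conversion can be sketched locally (the /tmp directory stands in for /usr/local/bin, which is already on $PATH in the image; the script name comes from the example above):

```shell
# Create the standalone script, equivalent to the file you would COPY.
mkdir -p /tmp/demo-bin
cat > /tmp/demo-bin/test_script <<'EOF'
#!/bin/sh
echo "This is working"
EOF

# COPY preserves file permissions, so the source file must carry the
# executable bit (or add a RUN chmod +x step in the Dockerfile).
chmod +x /tmp/demo-bin/test_script

# With the directory on $PATH, the script runs as an ordinary command,
# with no shell dotfiles involved.
PATH="/tmp/demo-bin:$PATH" test_script
```

Because the script is now a regular executable, docker run and docker exec can invoke it directly, exactly as they would any other program in the image.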
(I'd prefer docker run --rm over docker exec here; in particular, the ENTRYPOINT tail creates a container that's not doing anything, there's no reason to keep it around, and that specific ENTRYPOINT construct makes it tricky to do anything else. Also remember that anyone who can run any docker command can very easily take over the entire system; I might prefer some sort of HTTP or message-oriented API if that's an option.)