
Inconsistent environment variable behavior in Docker


Consider the following Dockerfile.

FROM phusion/baseimage:jammy-1.0.4

#Create user and add to sudo, then add home to path
RUN useradd u1
RUN adduser u1 sudo
RUN mkdir /home/u1
RUN chown -R u1:u1 /home/u1
RUN apt-get update
RUN apt-get install -y sudo
ENV PATH=$PATH:/home/u1

#Create a little script and add it to path
RUN echo "echo Hello!" > /home/u1/t1.sh
RUN chown u1:u1 /home/u1/t1.sh
RUN chmod 700 /home/u1/t1.sh

#Check if script in path then run script - works fine
RUN echo $PATH
RUN t1.sh

#Same. This time script shows in path but does NOT work
RUN sudo -H -u u1 echo $PATH
RUN sudo -H -u u1 t1.sh

If you try to docker build this, the script works just fine the first time. The second time, however, when I print $PATH to the screen as user u1, I see the folder containing the script in the path (this $PATH is the same as when I echo it as root), but the script behaves as if it were not in PATH. This seems inconsistent. The output looks something like this:

#14 [11/14] RUN echo /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/u1
#14 0.393 /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/u1
#14 DONE 0.4s

#15 [12/14] RUN t1.sh
#15 0.478 Hello!
#15 DONE 0.5s

#16 [13/14] RUN sudo -H -u u1 echo /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/u1
#16 0.330 /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/u1
#16 DONE 0.3s

#17 [14/14] RUN sudo -H -u u1 t1.sh
#17 0.364 sudo: t1.sh: command not found
#17 ERROR: process "/bin/sh -c sudo -H -u u1 t1.sh" did not complete successfully: exit code: 1
------
 > [14/14] RUN sudo -H -u u1 t1.sh:
0.364 sudo: t1.sh: command not found

Solution

  • You should remove sudo from this setup. You never need it in Docker. The Dockerfile you show doesn't switch users at all, but if it did, you could switch back using the USER directive.

    USER u1
    RUN t1.sh
    
    USER root
    RUN t1.sh
    

    Also consider putting the script in a directory that's normally on $PATH, if that's an option.

    COPY t1.sh /usr/local/bin
    
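Putting those pieces together, a sudo-free variant of the Dockerfile might look like this (a sketch, assuming t1.sh sits next to the Dockerfile in the build context):

```dockerfile
FROM phusion/baseimage:jammy-1.0.4

# Create an unprivileged user with a home directory
RUN useradd --create-home u1

# Install the script somewhere that's already on $PATH
COPY t1.sh /usr/local/bin/t1.sh
RUN chmod 755 /usr/local/bin/t1.sh

# Run build steps as that user where needed; no sudo required
USER u1
RUN t1.sh

# Switch back for any remaining root-level steps
USER root
```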

    (Most Dockerfiles I've seen that install sudo, including yours, do so in a not especially secure way: since you've added the user to the sudo group, the user can do absolutely anything they want, with no restrictions beyond the container boundary and Linux capability constraints, so long as they ask politely.)


    For security reasons, sudo resets the environment to a known state before it runs a command as another user. That includes resetting $PATH: on Ubuntu-based images, /etc/sudoers sets a fixed secure_path, so whatever you appended with ENV is discarded. So sudo -u u1 t1.sh doesn't work because the script isn't in that default $PATH.

    The echo line just before it is a little trickier to explain. Docker wraps RUN commands in a shell invocation, so the second-to-last line of the Dockerfile is equivalent to

    RUN ["/bin/sh", "-c", "sudo -H -u u1 echo $PATH"]
    

    So first the shell expands $PATH, and then it passes it (without any environment variable references) as an argument to sudo. Staying in Dockeresque JSON-array syntax, the command that gets run after shell expansion and splitting is something like

    RUN ["sudo", "-H", "-u", "u1", "echo", "/usr/local/bin:/usr/bin:/bin:/home/u1"]
    

    So even though sudo resets the environment, the variable expansion has already happened by the time sudo runs.

    Also compare

    sudo echo "$PATH"
    sudo sh -c 'echo "$PATH"'
    

    where the latter command keeps $PATH as a string before invoking sudo, and then a shell (running as a different user) reëxpands it.


    As the last example hints, none of this is specific to Docker, and you can demonstrate all of this same behavior on your host system.
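For instance, you can reproduce the same expansion-ordering behavior without sudo at all, using env -i (which clears the child's environment, standing in here for sudo's environment reset); MYVAR is a throwaway variable for illustration:

```shell
#!/bin/sh
export MYVAR=outer

# Double quotes: the *current* shell expands $MYVAR before env -i runs,
# so the child (whose environment has been cleared) still prints "outer".
env -i sh -c "echo $MYVAR"

# Single quotes: the literal string reaches the child shell, which
# expands $MYVAR itself; with a cleared environment it prints nothing.
env -i sh -c 'echo "$MYVAR"'
```

The same quoting split is why sudo sh -c '…' is the usual pattern when you really do want expansion to happen on the target user's side.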

    The sudo -E option preserves the outer process's environment, and that would probably help this particular case, but you shouldn't need sudo at all here.