Tags: linux, docker, ulimit

Docker nproc limit seemingly has to be set far too high for a container to run


I'm trying to debug weird behavior of an image I don't own: GitHub repo with the image.

Running

docker run -it --ulimit nproc=100 --ulimit nofile=90:100 --network none --tmpfs /tmp:rw,noexec,nosuid,size=65536k --tmpfs /home/glot:rw,exec,nosuid,size=131072k --user=glot --read-only glot/python:latest /bin/bash

results in exec /bin/bash: resource temporarily unavailable.

However, if we bump nproc to 10000, it suddenly starts working (for me, even bumping it to 1000 results in the same error).
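
Once the container does start, I can check which limit actually got applied from inside; a quick sanity check, assuming the image's /bin/bash is usable at that nproc value:

# prints the max-user-processes soft limit, then the full kernel-side line
docker run --rm --ulimit nproc=10000 glot/python:latest /bin/bash -c 'ulimit -u; grep processes /proc/self/limits'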

This image has no ps, but from what I can see in /proc, there are never more than 2 processes running.
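
For reference, since there is no ps, the processes can be counted by listing the numeric entries in /proc (the container has its own PID namespace, so this only counts processes inside the container):

# each numeric directory in /proc is one process
docker run --rm --ulimit nproc=10000 glot/python:latest /bin/bash -c 'ls -d /proc/[0-9]* | wc -l'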

I'm not experienced with Linux and container limits, so any insights and comments are welcome.

P.S. A bit of background: this image serves as a sandbox for executing fleeting snippets of code, and the nproc limit mitigates the fork-bomb problem.


Solution

  • As the comment from @Philippe says, ulimit metrics are counted per user (UID) system-wide, not per container.

    The problem was that the user created for the image shared the same UID as the main user on the host, albeit with a different username. When the nproc limit was enforced in the container, the kernel counted the total number of processes owned by that UID, including all the processes of the local host user. Since this was run in a desktop environment with many running processes, it is no surprise that the hard limit of 100 (or even 1000) was exceeded.

    Be careful with ulimits and UIDs: they are not encapsulated per container but are shared system-wide, and a user with a different username but the same UID in the container and on the host is treated as the same user when ulimits are enforced inside the container. A sketch of how to confirm and work around such a collision follows below.
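
A minimal sketch of the diagnosis and a workaround, assuming the image's glot user has UID 1000 (the default first-user UID on most desktop distributions) and that the image ships the id utility (neither assumption is verified against the image):

# On the host: your UID and how many processes it currently owns
id -u                                # e.g. 1000 on a typical single-user desktop
ps -o pid= -u "$(id -u)" | wc -l     # easily exceeds 100 in a desktop session

# In the container: the UID assigned to the glot user
docker run --rm glot/python:latest id -u glot

# Workaround sketch (flags abbreviated): run under an arbitrary UID that no host
# user owns, so the kernel's per-UID process count starts near zero
docker run -it --ulimit nproc=100 --user 12345:12345 --network none --read-only glot/python:latest /bin/bash

Overriding --user like this may break assumptions the image makes about its glot user (home directory, file ownership), so the more systemic fix is Docker's user namespace remapping (userns-remap in /etc/docker/daemon.json), which maps container UIDs onto a dedicated host range so they can no longer collide with real host users.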