I have a small web application (a Rails app called sofia) that I'm deploying locally with minikube.
When I create the k8s resources and run my deployment, the containers do not contain any of the files that were supposed to be copied over during the image build process.
Here's what I'm doing:
As part of the Dockerfile build, I copy the contents of my local cloned repository into the image's working directory:
RUN mkdir -p /app
WORKDIR /app
COPY . ./
docker-compose setup
Historically I've used a docker-compose file to run this application and all its services. I map my local directory to the container's working directory (see volumes: below). This is a nice convenience when working locally, since all changes are reflected "live" inside the container:
# docker-compose.yml
sofia:
  build:
    context: .
    args:
      RAILS_ENV: development
  environment:
    DATABASE_URL: postgres://postgres:sekrit@postgres/
  image: sofia/sofia:local
  ports:
    - # ...
  volumes:
    - .:/app # <---- HERE
kompose
In order to run this on minikube, I use the kompose tool, provided by the Kubernetes project, to transform my docker-compose file into a k8s resource file that can be consumed:
$ kompose convert --file docker-compose.yml --out k8s.yml --with-kompose-annotation=false
WARN Volume mount on the host "/Users/jeeves/git/jeeves/sofia" isn't supported - ignoring path on the host
INFO Kubernetes file "k8s.yml" created
As you can see, it warns that my local volume cannot be mounted into the cluster. This makes sense, since a k8s deployment runs "remotely", so I just ignore the warning.
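Peeking into the generated k8s.yml shows what "ignoring" means in practice: the bind mount still becomes a volume mounted at /app, just without my host path. The excerpt below is approximate (sofia-claim0 is the kind of name kompose generates; treat the details as a reconstruction, not the literal output):

```yaml
# k8s.yml -- approximate excerpt of the sofia Deployment's pod spec
    spec:
      containers:
        - name: sofia
          image: sofia/sofia:local
          volumeMounts:
            - name: sofia-claim0   # kompose-generated name (approximate)
              mountPath: /app      # still mounted over the image's /app
      volumes:
        - name: sofia-claim0
          persistentVolumeClaim:
            claimName: sofia-claim0  # a fresh, empty claim
```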
Finally I run the above resources with k8s / minikube
minikube start
kubectl apply -f k8s.yml
I notice the sofia container keeps crashing and restarting, so I check the logs:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
pod/sofia-6668945bc8-x9267 0/1 CrashLoopBackOff 1 10s
pod/postgres-fc84cbd4b-dqbrh 1/1 Running 0 10s
pod/redis-cbff75fbb-znv88 1/1 Running 0 10s
$ kubectl logs pod/sofia-6668945bc8-x9267
Could not locate Gemfile or .bundle/ directory
That error is Ruby/Rails specific, but the underlying cause is that there are no files in the container! I can confirm this by entering the container and listing files with ls: it is indeed empty.
If the sofia/sofia:latest image is correctly built with the COPY-ied file contents, why do those files disappear when running the container on minikube? Thanks!
The issue is that volumes do not behave the same way in docker-compose and k8s, and kompose can't translate them perfectly. In docker-compose, a bind-mounted volume exposes the existing files of the host directory to the container, while in k8s the converted volume is created empty and shadows whatever the image already had at the mount point.
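To make that concrete: if kompose is pointed at a copy of the compose file that simply omits the bind mount, the shadowing volume is never generated and the image's COPY-ied files survive. This is only a sketch, and docker-compose.k8s.yml is a filename I made up for the copy:

```yaml
# docker-compose.k8s.yml -- identical to docker-compose.yml except the
# `volumes:` entry is dropped, so kompose has nothing to (mis)translate
sofia:
  build:
    context: .
    args:
      RAILS_ENV: development
  environment:
    DATABASE_URL: postgres://postgres:sekrit@postgres/
  image: sofia/sofia:local
  ports:
    - # ...
  # no `volumes:` -> no empty volume mounted over /app
```

Converting this copy instead (kompose convert --file docker-compose.k8s.yml --out k8s.yml) should yield a Deployment that runs against the files baked into the image.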
There is no direct k8s equivalent of a docker-compose bind mount that keeps the existing files, so you will have to work around it with one of the following options, depending on what makes sense in your use case: