
Multiple services with different dockerfiles on GAE Flexible


I'm using Google App Engine Flexible with the Python environment. Right now I have two services, default and worker, that share the same codebase and are configured by app.yaml and worker.yaml. Now I need to install a native C++ library, so I had to switch to a custom runtime and add a Dockerfile.

Here is the Dockerfile generated by the gcloud beta app gen-config --custom command:

FROM gcr.io/google-appengine/python
LABEL python_version=python3.6

RUN virtualenv --no-download /env -p python3.6

# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi

Previously, my app.yaml and worker.yaml each had its own entrypoint: config that specified the command to run to start the service.
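
For illustration, the two configs might have looked something like this (the worker's entrypoint and the worker.py script are placeholders; the default service's command matches the generated CMD above):

# app.yaml (default service)
runtime: python
env: flex
entrypoint: gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi

# worker.yaml (worker service)
runtime: python
env: flex
service: worker
entrypoint: python worker.py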

So, my question is how can I use two different commands to start the services?

EDIT 1

So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deployment of each service. However, I'm not quite satisfied with this solution.

The gcloud app deploy command has an --image-url flag that allows setting the image URL from GCR. I haven't researched that yet, but it seems that I could just upload the images to GCR and use their URLs, since they don't change that often.
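
For example, deploying from a prebuilt image might look like this (project and image names are placeholders, and Docker is assumed to be authenticated to GCR, e.g. via gcloud auth configure-docker):

# build and push the worker image once
docker build -t gcr.io/my-project/worker .
docker push gcr.io/my-project/worker

# deploy the service from the prebuilt image instead of rebuilding it
gcloud app deploy worker.yaml --image-url=gcr.io/my-project/worker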


Solution

  • Since the Dockerfile name cannot be changed, the only way to avoid modifying the Dockerfile is to store each service in its own, separate directory. Clean separation: each service has its own Dockerfile and/or startup configuration (a layout sketch follows the list below).

    But this raises a question: how do you deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for flexible env services, see Sharing code between flexible environment modules in a GAE project.

    I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:

    • hard-link each and every shared source code file (since hard-linking directories is not possible). A bit tedious and error-prone, but you only have to do it once per file (see the shell sketch after this list)
    • package and publish your shared code as an external library, added to the requirements.txt file of each service that uses it
    • split the shared code into a separate repository and keep a copy of that repository in each service that uses it (maybe as a git submodule, if you're using git; see the sketch after this list). You just need to ensure at service deployment time that the shared repository is pulled at the proper version, which can be done quite reliably through automation. It's a bit more complicated if you have uncommitted changes in this repo: you'd have to patch the same changes in all services.
    • keep multiple copies of the Dockerfile under different names and simply copy the right one over instead of always editing the same file (see the deploy sketch after this list). Symlinking instead of copying might work as well, since the symlink doesn't need to be followed outside of the service directory; if it's just replicated as a symlink, it'll work.
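
    As a sketch of the directory-per-service layout from the first point (file names are illustrative):

      myproject/
      ├── default/
      │   ├── app.yaml
      │   ├── Dockerfile      # CMD exec gunicorn ... aces.wsgi
      │   └── ...
      └── worker/
          ├── worker.yaml
          ├── Dockerfile      # CMD for the worker process
          └── ...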
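
    For the hard-linking approach, a minimal shell sketch, assuming a shared/ directory sitting next to the service directories on the same filesystem (hard links can't cross filesystems):

      # recreate the shared/ tree inside the service dir with hard links
      cd default
      find ../shared -type f | while read f; do
        dest="${f#../}"                  # ../shared/util.py -> shared/util.py
        mkdir -p "$(dirname "$dest")"    # mirror the directory structure
        ln "$f" "$dest"                  # directories can't be hard-linked, files can
      done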
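
    For the separate-repository approach, the git submodule variant could look like this (the repository URL is a placeholder):

      # inside each service directory, pull in the shared code
      git submodule add https://github.com/you/shared-code.git shared

      # at deployment time, ensure the submodule is at the proper version
      git submodule update --init --recursive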
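
    And for the multiple-Dockerfiles approach, each deployment becomes a copy followed by a deploy (the Dockerfile.* names are illustrative):

      cp Dockerfile.default Dockerfile && gcloud app deploy app.yaml
      cp Dockerfile.worker Dockerfile && gcloud app deploy worker.yaml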