I have a Dockerfile that does a pip install
of a package from an AWS CodeArtifact repository. The install requires an auth token, so my current approach is to generate the dynamic, secret repo URL in a build script and pass it into Docker as a build arg, which leads to lines like this in my Dockerfile:
ARG CORE_REPO_URL
ARG CORE_VERSION
RUN pip install -i $CORE_REPO_URL mylib_core==$CORE_VERSION
Using ARGs in a RUN
command invalidates that layer's cache whenever the ARG value changes, and since the auth token changes on every build, this part gets rebuilt every time even if the library version did not change.
Is there a better way to do this such that the layer cache would be used unless the CORE_VERSION
changed?
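For reference, the build-script side of the current approach looks roughly like this (the domain, account, region, and repo names are placeholders, not my real values):

```shell
# Hypothetical build script; domain/account/region/repo are placeholders.
# Fetch a short-lived CodeArtifact auth token.
TOKEN=$(aws codeartifact get-authorization-token \
  --domain my-domain --domain-owner 123456789012 \
  --query authorizationToken --output text)

# Embed the token in the index URL. This value changes on every build,
# which is exactly what keeps invalidating the RUN layer's cache.
CORE_REPO_URL="https://aws:${TOKEN}@my-domain-123456789012.d.codeartifact.us-east-1.amazonaws.com/pypi/my-repo/simple/"

docker build \
  --build-arg CORE_REPO_URL="$CORE_REPO_URL" \
  --build-arg CORE_VERSION=1.2.3 \
  .
```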
Maybe I should install the AWS
toolchain in the image so the dynamic repo URL can be generated there in an earlier step (using the same command every time, so it wouldn't require an ARG and would hopefully keep the layer cached)? One downside of this is having to put AWS credentials in the image. I could maybe involve Docker secrets
to avoid that, if that's the only solution.
Figured out a solution for my use case.
Hopefully this will help someone else who finds this.
Docker Build using an Assumed Role Profile
Remember to build with BuildKit enabled (e.g. DOCKER_BUILDKIT=1).
# syntax = docker/dockerfile:experimental
# The syntax line above needs to be the first line of the file or it will break
# Installing requirements. NB: this image is quite small, so you can probably just use it; depending on latest isn't best practice, but this is only a proof of concept.
FROM amazon/aws-cli:latest AS dependencies
ARG PYTHONLIBS
ARG PROFILE
ENV AWS_DEFAULT_PROFILE=$PROFILE
COPY ./requirements.txt .
# NB: I had this here because I was assuming a role; you may not need it.
COPY ./config /root/.aws/config
RUN yum install -y pip
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws codeartifact login --tool pip --repository //rest of command
RUN pip install -r requirements.txt --target ${PYTHONLIBS}
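The stage above can then be built with something like the following (the secret id must match the `--mount=type=secret,id=aws` in the Dockerfile; the image name, profile name, and target path are assumptions):

```shell
# BuildKit must be enabled for --mount=type=secret to work.
DOCKER_BUILDKIT=1 docker build \
  --secret id=aws,src=$HOME/.aws/credentials \
  --build-arg PYTHONLIBS=/opt/python/libs \
  --build-arg PROFILE=my-assumed-role-profile \
  -t my-poc-image \
  .
```

Because the credentials file is mounted only for the duration of that single RUN step, it never ends up in an image layer, and the `pip install` layer caches normally as long as requirements.txt is unchanged.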