amazon-web-services, docker, docker-compose, amazon-ecs, aws-fargate

Docker-compose issue with AWS Fargate


I'm having a long-running problem building a new web app. A while back I asked about some docker-compose problems and about trying to reduce the size of the images:

Decrease docker build size, share conda environment between two images

In short, after many iterations of the docker-compose file, Dockerfile and buildspec.yml, I have got to a stage where I can spin the images up during an AWS CodeBuild run. However, when the images are pushed and deployed to AWS Fargate, the images in the two containers appear to be the same.

File directory structure:

-worker_app
---service
-----worker.py
-----server.py
-----other_files.py
---other_folders
---Dockerfile
---environment.yml
-buildspec.yml
-docker-compose.yml 

Buildspec:


version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --region $AWS_DEFAULT_REGION ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com

  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - pwd
      - ls -la
      - echo checking config
      - docker-compose -f docker-compose.yml config
      - echo building images
      - docker-compose -f docker-compose.yml up --build -d

      # Tag the built docker image using the appropriate Amazon ECR endpoint and relevant
      # repository for our service container. This ensures that when the docker push
      # command is executed later, it will be pushed to the appropriate repository.
      - docker tag co2gasp/worker:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
      - docker tag co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image..
      # Push the image to ECR.
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
      - echo Completed pushing Docker image. Deploying Docker image to AWS Fargate on `date`
      # Create a artifacts file that contains the name and location of the image
      # pushed to ECR. This will be used by AWS CodePipeline to automate
      # deployment of this specific container to Amazon ECS.
      - printf '[{"name":"CO2GASP-Service","imageUri":"%s"},{"name":"CO2GASP-Worker","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest > imagedefinitions.json

artifacts:
  # Indicate that the created imagedefinitions.json file created on the previous
  # line is to be referenceable as an artifact of the build execution job.
  files: imagedefinitions.json

Docker-compose

version: '3.8'
services:
  web:
    # builds ./worker_app/Dockerfile
    image: co2gasp/service:latest
    build: ./worker_app
    command: ["python", "server.py"]
  worker:
    # also builds ./worker_app/Dockerfile
    image: co2gasp/worker:latest
    build: ./worker_app
    command: ["python", "worker.py"]

Dockerfile

FROM continuumio/miniconda3

RUN apt-get update -y
RUN apt-get install zip -y
RUN apt-get install awscli -y
#RUN aws route53 list-hosted-zones
WORKDIR /app
## Create the environment:
COPY environment.yml .
#Make RUN commands use the new environment:
RUN conda env create -f environment.yml

COPY ./PHREEQC /PHREEQC
COPY ./service /service
COPY ./temp_files /temp_files
COPY ./INPUT_DATA /INPUT_DATA
COPY ./PHREEQC/phreeqc_files/database/pitzer.dat /bin/pitzer.dat
COPY ./PHREEQC/phreeqc_files/bin/phreeqc /bin/phreeqc
#ENV PATH=${PATH}:/app/bin
ENV PATH=${PATH}:/bin/phreeqc
ENV PATH=${PATH}:/bin/pitzer.dat
ENV PATH=${PATH}:/bin
RUN echo 'Adding new'
#RUN phreeqc
RUN echo "conda activate myenv" >> ~/.bashrc
#RUN echo "export PATH=/PHREEQC/phreeqc_files/bin/phreeqc:${PATH}" >> ~/.bashrc
#RUN echo "export PATH=/PHREEQC/phreeqc_files/bin/phreeqc:$PATH" >> ~/.bashrc

#RUN echo "$(cat ~/.bashrc)"

SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]

# Demonstrate the environment is activated:
RUN echo "Make sure flask is installed:"
RUN python -c "import flask"

RUN echo Copy service directory



WORKDIR /service
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"]
CMD ["python","server.py"]

CodeBuild output

[Container] 2023/01/22 20:22:23 Waiting for agent ping
[Container] 2023/01/22 20:22:24 Waiting for DOWNLOAD_SOURCE
[Container] 2023/01/22 20:22:37 Phase is DOWNLOAD_SOURCE
[Container] 2023/01/22 20:22:37 CODEBUILD_SRC_DIR=/codebuild/output/src693461010/src
[Container] 2023/01/22 20:22:37 YAML location is /codebuild/output/src693461010/src/buildspec.yml
[Container] 2023/01/22 20:22:37 Setting HTTP client timeout to higher timeout for S3 source
[Container] 2023/01/22 20:22:37 Processing environment variables
[Container] 2023/01/22 20:22:37 No runtime version selected in buildspec.
[Container] 2023/01/22 20:22:39 Moving to directory /codebuild/output/src693461010/src
[Container] 2023/01/22 20:22:39 Configuring ssm agent with target id: codebuild:7b0e2985-8075-4ac9-ad81-61c7e146093e
[Container] 2023/01/22 20:22:39 Successfully updated ssm agent configuration
[Container] 2023/01/22 20:22:39 Registering with agent
[Container] 2023/01/22 20:22:39 Phases found in YAML: 3
[Container] 2023/01/22 20:22:39  BUILD: 10 commands
[Container] 2023/01/22 20:22:39  POST_BUILD: 6 commands
[Container] 2023/01/22 20:22:39  PRE_BUILD: 2 commands
[Container] 2023/01/22 20:22:39 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
[Container] 2023/01/22 20:22:39 Phase context status code:  Message: 
[Container] 2023/01/22 20:22:40 Entering phase INSTALL
[Container] 2023/01/22 20:22:40 Phase complete: INSTALL State: SUCCEEDED
[Container] 2023/01/22 20:22:40 Phase context status code:  Message: 
[Container] 2023/01/22 20:22:40 Entering phase PRE_BUILD
[Container] 2023/01/22 20:22:40 Running command echo Logging in to Amazon ECR...
Logging in to Amazon ECR...

[Container] 2023/01/22 20:22:40 Running command aws --region $AWS_DEFAULT_REGION ecr get-login-password | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[Container] 2023/01/22 20:22:49 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2023/01/22 20:22:49 Phase context status code:  Message: 
[Container] 2023/01/22 20:22:49 Entering phase BUILD
[Container] 2023/01/22 20:22:49 Running command echo Build started on `date`
Build started on Sun Jan 22 20:22:49 UTC 2023

[Container] 2023/01/22 20:22:49 Running command echo Building the Docker image...
Building the Docker image...

[Container] 2023/01/22 20:22:49 Running command pwd
/codebuild/output/src693461010/src

[Container] 2023/01/22 20:22:49 Running command ls -la
total 12
drwxr-xr-x 4 root root  139 Jan 22 20:22 .
drwxr-xr-x 3 root root   17 Jan 22 20:22 ..
-rw-r--r-- 1 root root 2338 Jan 22 20:22 buildspec.yml
-rw-r--r-- 1 root root 2888 Jan 22 20:22 buildspec_old.yml
-rw-r--r-- 1 root root  312 Jan 22 20:22 docker-compose.yml
drwxr-xr-x 6 root root  113 Jan 22 20:22 server_app
-rw-r--r-- 1 root root    0 Jan 22 20:22 website_build.txt
drwxr-xr-x 6 root root  135 Jan 22 20:22 worker_app

[Container] 2023/01/22 20:22:49 Running command echo checking config
checking config

[Container] 2023/01/22 20:22:49 Running command docker-compose -f docker-compose.yml config
services:
  web:
    build:
      context: /codebuild/output/src693461010/src/worker_app
    command:
    - python
    - server.py
    image: co2gasp/service:latest
  worker:
    build:
      context: /codebuild/output/src693461010/src/worker_app
    command:
    - python
    - worker.py
    image: co2gasp/worker:latest
version: '3.8'


[Container] 2023/01/22 20:22:50 Running command echo building images
building images

[Container] 2023/01/22 20:22:50 Running command docker-compose -f docker-compose.yml up --build -d
Creating network "src_default" with the default driver
Building web
Step 1/26 : FROM continuumio/miniconda3
latest: Pulling from continuumio/miniconda3
Digest: sha256:10b38c9a8a51692838ce4517e8c74515499b68d58c8a2000d8a9df7f0f08fc5e
Status: Downloaded newer image for continuumio/miniconda3:latest
 ---> 45461d36cbf1
Step 2/26 : RUN apt-get update -y
 ---> Running in dd74833eb6a6
Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 Packages [8183 kB]
Get:5 http://deb.debian.org/debian-security bullseye-security/main amd64 Packages [214 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [14.6 kB]
Fetched 8620 kB in 1s (6800 kB/s)
Reading package lists...
Removing intermediate container dd74833eb6a6
 ---> d025f5361af7
Step 3/26 : RUN apt-get install zip -y
 ---> Running in 93e55c431c12
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  unzip
The following NEW packages will be installed:
  unzip zip
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 404 kB of archives.
After this operation, 1031 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 unzip amd64 6.0-26+deb11u1 [172 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 zip amd64 3.0-12 [232 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 404 kB in 0s (2258 kB/s)
Selecting previously unselected package unzip.
(Reading database ... 12440 files and directories currently installed.)
Preparing to unpack .../unzip_6.0-26+deb11u1_amd64.deb ...
Unpacking unzip (6.0-26+deb11u1) ...
Selecting previously unselected package zip.
Preparing to unpack .../archives/zip_3.0-12_amd64.deb ...
Unpacking zip (3.0-12) ...
Setting up unzip (6.0-26+deb11u1) ...
Setting up zip (3.0-12) ...
Removing intermediate container 93e55c431c12
 ---> e3c960679ed3
Step 4/26 : RUN apt-get install awscli -y
 ---> Running in 5664acef1c09
Reading package lists...
Building dependency tree...
Reading state information...
(removed for shortness)
Removing intermediate container 5c4e38ee01c5
 ---> 10ae3f85a5dc
Step 8/26 : COPY ./PHREEQC /PHREEQC
 ---> e90d9f82e4be
Step 9/26 : COPY ./service /service
 ---> 9adc70933fcd
Step 10/26 : COPY ./temp_files /temp_files
 ---> 0009a6b30e37
Step 11/26 : COPY ./INPUT_DATA /INPUT_DATA
 ---> c6fefb1177d2
Step 12/26 : COPY ./PHREEQC/phreeqc_files/database/pitzer.dat /bin/pitzer.dat
 ---> 6c607db80b5c
Step 13/26 : COPY ./PHREEQC/phreeqc_files/bin/phreeqc /bin/phreeqc
 ---> 9929ca929c36
Step 14/26 : ENV PATH=${PATH}:/bin/phreeqc
 ---> Running in 3584df0a38a3
Removing intermediate container 3584df0a38a3
 ---> bc1fbc3ab44a
Step 15/26 : ENV PATH=${PATH}:/bin/pitzer.dat
 ---> Running in df6567e946bb
Removing intermediate container df6567e946bb
 ---> 7884bbf9c81a
Step 16/26 : ENV PATH=${PATH}:/bin
 ---> Running in e5844cc5a89c
Removing intermediate container e5844cc5a89c
 ---> 863c92f66cfe
Step 17/26 : RUN echo 'Adding new'
 ---> Running in d983f0139087
Adding new
Removing intermediate container d983f0139087
 ---> 165061bdbb1a
Step 18/26 : RUN echo "conda activate myenv" >> ~/.bashrc
 ---> Running in 10480f5953e0
Removing intermediate container 10480f5953e0
 ---> 73b398920e88
Step 19/26 : SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
 ---> Running in 7825c13f4d82
Removing intermediate container 7825c13f4d82
 ---> 28d64beaf762
Step 20/26 : RUN echo "Make sure flask is installed:"
 ---> Running in 6464253fb0f7
Make sure flask is installed:

Removing intermediate container 6464253fb0f7
 ---> 8f24b186dbcb
Step 21/26 : RUN python -c "import flask"
 ---> Running in 35baf159fe93
Removing intermediate container 35baf159fe93
 ---> 02cef1cee9d9
Step 22/26 : RUN echo "Please work v14 new"
 ---> Running in 66d087cd8df8
Please work v14 new

Removing intermediate container 66d087cd8df8
 ---> c601c52eaeb0
Step 23/26 : RUN echo Copy service directory
 ---> Running in e82660354cd5
Copy service directory

Removing intermediate container e82660354cd5
 ---> aa3f75d5851f
Step 24/26 : WORKDIR /service
 ---> Running in 717fcc72d06d
Removing intermediate container 717fcc72d06d
 ---> ef5fdef9d4f4
Step 25/26 : ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"]
 ---> Running in e0560cc2107d
Removing intermediate container e0560cc2107d
 ---> bd7571eca5cc
Step 26/26 : CMD ["python","server.py"]
 ---> Running in 0c20ad9202c1
Removing intermediate container 0c20ad9202c1
 ---> 45b528b9fc92

Successfully built 45b528b9fc92
Successfully tagged co2gasp/service:latest
Building worker
Step 1/26 : FROM continuumio/miniconda3
 ---> 45461d36cbf1
Step 2/26 : RUN apt-get update -y
 ---> Using cache
 ---> d025f5361af7
Step 3/26 : RUN apt-get install zip -y
 ---> Using cache
 ---> e3c960679ed3
Step 4/26 : RUN apt-get install awscli -y
 ---> Using cache
 ---> 80aedd834d9d
Step 5/26 : WORKDIR /app
 ---> Using cache
 ---> 441c997e0184
Step 6/26 : COPY environment.yml .
 ---> Using cache
 ---> c7d0ab20c3fd
Step 7/26 : RUN conda env create -f environment.yml
 ---> Using cache
 ---> 10ae3f85a5dc
Step 8/26 : COPY ./PHREEQC /PHREEQC
 ---> Using cache
 ---> e90d9f82e4be
Step 9/26 : COPY ./service /service
 ---> Using cache
 ---> 9adc70933fcd
Step 10/26 : COPY ./temp_files /temp_files
 ---> Using cache
 ---> 0009a6b30e37
Step 11/26 : COPY ./INPUT_DATA /INPUT_DATA
 ---> Using cache
 ---> c6fefb1177d2
Step 12/26 : COPY ./PHREEQC/phreeqc_files/database/pitzer.dat /bin/pitzer.dat
 ---> Using cache
 ---> 6c607db80b5c
Step 13/26 : COPY ./PHREEQC/phreeqc_files/bin/phreeqc /bin/phreeqc
 ---> Using cache
 ---> 9929ca929c36
Step 14/26 : ENV PATH=${PATH}:/bin/phreeqc
 ---> Using cache
 ---> bc1fbc3ab44a
Step 15/26 : ENV PATH=${PATH}:/bin/pitzer.dat
 ---> Using cache
 ---> 7884bbf9c81a
Step 16/26 : ENV PATH=${PATH}:/bin
 ---> Using cache
 ---> 863c92f66cfe
Step 17/26 : RUN echo 'Adding new'
 ---> Using cache
 ---> 165061bdbb1a
Step 18/26 : RUN echo "conda activate myenv" >> ~/.bashrc
 ---> Using cache
 ---> 73b398920e88
Step 19/26 : SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
 ---> Using cache
 ---> 28d64beaf762
Step 20/26 : RUN echo "Make sure flask is installed:"
 ---> Using cache
 ---> 8f24b186dbcb
Step 21/26 : RUN python -c "import flask"
 ---> Using cache
 ---> 02cef1cee9d9
Step 22/26 : RUN echo "Please work v14 new"
 ---> Using cache
 ---> c601c52eaeb0
Step 23/26 : RUN echo Copy service directory
 ---> Using cache
 ---> aa3f75d5851f
Step 24/26 : WORKDIR /service
 ---> Using cache
 ---> ef5fdef9d4f4
Step 25/26 : ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv"]
 ---> Using cache
 ---> bd7571eca5cc
Step 26/26 : CMD ["python","server.py"]
 ---> Using cache
 ---> 45b528b9fc92

Successfully built 45b528b9fc92
Successfully tagged co2gasp/worker:latest
Creating src_worker_1 ... done
Creating src_web_1    ... done
[Container] 2023/01/22 20:50:09 Running command docker tag co2gasp/worker:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest

[Container] 2023/01/22 20:50:09 Running command docker tag co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest

[Container] 2023/01/22 20:50:09 Phase complete: BUILD State: SUCCEEDED
[Container] 2023/01/22 20:50:09 Phase context status code:  Message: 
[Container] 2023/01/22 20:50:09 Entering phase POST_BUILD
[Container] 2023/01/22 20:50:09 Running command echo Build completed on `date`
Build completed on Sun Jan 22 20:50:09 UTC 2023

[Container] 2023/01/22 20:50:09 Running command echo Pushing the Docker image..
Pushing the Docker image..

[Container] 2023/01/22 20:50:09 Running command docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest
The push refers to repository [769126297153.dkr.ecr.us-east-2.amazonaws.com/co2gasp/worker]
72e0458bf59f: Preparing
3ed9cb7ff5e4: Preparing
33810354d9da: Preparing
58f71f4114eb: Preparing
edcb85c7c85a: Preparing
89bfec2a6ec0: Preparing
9809700b743d: Preparing
d4ea492f859c: Preparing
aaa1fcd61920: Preparing
edc2c622596c: Preparing
107838da2ee5: Preparing
999b746901d1: Preparing
e7ecfc83aef3: Preparing
b9a946f70034: Preparing
b16bba17811d: Preparing
d8f00b2dd1ec: Preparing
7bd72d2b5d13: Preparing
92d9617bd3c6: Preparing
32a72a3896c6: Preparing
8a70d251b653: Preparing
9809700b743d: Waiting
d4ea492f859c: Waiting
aaa1fcd61920: Waiting
edc2c622596c: Waiting
107838da2ee5: Waiting
89bfec2a6ec0: Waiting
999b746901d1: Waiting
7bd72d2b5d13: Waiting
e7ecfc83aef3: Waiting
92d9617bd3c6: Waiting
b9a946f70034: Waiting
32a72a3896c6: Waiting
b16bba17811d: Waiting
8a70d251b653: Waiting
3ed9cb7ff5e4: Pushed
72e0458bf59f: Pushed
58f71f4114eb: Pushed
edcb85c7c85a: Pushed
33810354d9da: Pushed
9809700b743d: Pushed
aaa1fcd61920: Pushed
edc2c622596c: Pushed
e7ecfc83aef3: Pushed
b9a946f70034: Pushed
89bfec2a6ec0: Pushed
d8f00b2dd1ec: Pushed
7bd72d2b5d13: Pushed
92d9617bd3c6: Layer already exists
32a72a3896c6: Layer already exists
8a70d251b653: Layer already exists
107838da2ee5: Pushed
b16bba17811d: Pushed
d4ea492f859c: Pushed
999b746901d1: Pushed
latest: digest: sha256:ffff1b4491a2e00c440570264e7f1f3d2accb2b704d3be7f09ae6cfef544ed62 size: 4516

[Container] 2023/01/22 20:52:13 Running command docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest
The push refers to repository [769126297153.dkr.ecr.us-east-2.amazonaws.com/co2gasp/service]
72e0458bf59f: Preparing
3ed9cb7ff5e4: Preparing
33810354d9da: Preparing
58f71f4114eb: Preparing
edcb85c7c85a: Preparing
89bfec2a6ec0: Preparing
9809700b743d: Preparing
d4ea492f859c: Preparing
aaa1fcd61920: Preparing
edc2c622596c: Preparing
107838da2ee5: Preparing
999b746901d1: Preparing
e7ecfc83aef3: Preparing
b9a946f70034: Preparing
b16bba17811d: Preparing
d8f00b2dd1ec: Preparing
7bd72d2b5d13: Preparing
92d9617bd3c6: Preparing
32a72a3896c6: Preparing
89bfec2a6ec0: Waiting
8a70d251b653: Preparing
aaa1fcd61920: Waiting
d4ea492f859c: Waiting
b16bba17811d: Waiting
d8f00b2dd1ec: Waiting
edc2c622596c: Waiting
9809700b743d: Waiting
107838da2ee5: Waiting
7bd72d2b5d13: Waiting
b9a946f70034: Waiting
92d9617bd3c6: Waiting
999b746901d1: Waiting
32a72a3896c6: Waiting
e7ecfc83aef3: Waiting
33810354d9da: Pushed
58f71f4114eb: Pushed
72e0458bf59f: Pushed
edcb85c7c85a: Pushed
3ed9cb7ff5e4: Pushed
9809700b743d: Pushed
aaa1fcd61920: Pushed
edc2c622596c: Pushed
e7ecfc83aef3: Pushed
b9a946f70034: Pushed
89bfec2a6ec0: Pushed
d8f00b2dd1ec: Pushed
7bd72d2b5d13: Pushed
92d9617bd3c6: Layer already exists
32a72a3896c6: Layer already exists
8a70d251b653: Layer already exists
b16bba17811d: Pushed
107838da2ee5: Pushed
d4ea492f859c: Pushed
999b746901d1: Pushed
latest: digest: sha256:ffff1b4491a2e00c440570264e7f1f3d2accb2b704d3be7f09ae6cfef544ed62 size: 4516

[Container] 2023/01/22 20:54:18 Running command echo Completed pushing Docker image. Deploying Docker image to AWS Fargate on `date`
Completed pushing Docker image. Deploying Docker image to AWS Fargate on Sun Jan 22 20:54:18 UTC 2023

[Container] 2023/01/22 20:54:18 Running command printf '[{"name":"CO2GASP-Service","imageUri":"%s"},{"name":"CO2GASP-Worker","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/co2gasp/worker:latest > imagedefinitions.json

[Container] 2023/01/22 20:54:18 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2023/01/22 20:54:18 Phase context status code:  Message: 
[Container] 2023/01/22 20:54:18 Expanding base directory path: .
[Container] 2023/01/22 20:54:18 Assembling file list
[Container] 2023/01/22 20:54:18 Expanding .
[Container] 2023/01/22 20:54:18 Expanding file paths for base directory .
[Container] 2023/01/22 20:54:18 Assembling file list
[Container] 2023/01/22 20:54:18 Expanding imagedefinitions.json
[Container] 2023/01/22 20:54:18 Found 1 file(s)
[Container] 2023/01/22 20:54:18 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2023/01/22 20:54:18 Phase context status code:  Message: 

When I ran the CodeBuild without the -d option, i.e. docker-compose -f docker-compose.yml up --build instead of docker-compose -f docker-compose.yml up --build -d, I got this relevant output:

Step 26/26 : CMD ["python","server.py"]
 ---> Using cache
 ---> c367c9c15b42

Successfully built c367c9c15b42
Successfully tagged co2gasp/worker:latest
Creating src_web_1    ... done
Creating src_worker_1 ... done
Attaching to src_worker_1, src_web_1
worker_1  | 18:42:44 Worker rq:worker:0171503f40bb44cfb4cc18b7d60844cc: started, version 1.9.0
worker_1  | 18:42:44 Subscribing to channel rq:pubsub:0171503f40bb44cfb4cc18b7d60844cc
worker_1  | 18:42:44 *** Listening on default...
worker_1  | 18:42:44 Cleaning registries for queue: default
web_1     |  * Serving Flask app 'server'
web_1     |  * Debug mode: on
web_1     | /service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
web_1     |   rawusgs,geo =read_in_data()
web_1     | /service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
web_1     |   medusgs=medusgs_data_import(rawusgs,grad,sur)
web_1     | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
web_1     |  * Running on all addresses (0.0.0.0)
web_1     |  * Running on http://127.0.0.1:8080
web_1     |  * Running on http://172.18.0.3:8080
web_1     | Press CTRL+C to quit
web_1     |  * Restarting with stat
web_1     | /service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
web_1     |   rawusgs,geo =read_in_data()
web_1     | /service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
web_1     |   medusgs=medusgs_data_import(rawusgs,grad,sur)
web_1     |  * Debugger is active!
web_1     |  * Debugger PIN: 145-314-329

However, the build then hangs and the images are never pushed; adding the -d flag detaches from the containers so the build can continue.

When I then go to Fargate, the logs for both containers show that the CMD ["python","server.py"] line in the Dockerfile has been executed for both images.

e.g. my service log

 * Serving Flask app 'server'
 * Debug mode: on
/service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
  rawusgs,geo =read_in_data()
/service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
  medusgs=medusgs_data_import(rawusgs,grad,sur)
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://10.0.3.102:8080

and the worker log

 * Serving Flask app 'server'
 * Debug mode: on
/service/data_import.py:85: DtypeWarning: Columns (1,3,4,7,8,9,15,16,17,18,20,22,24,25,26,29,31,32,33,34,35,36,37,38,39,40,41,42,47,48,49,172,174,175) have mixed types.Specify dtype option on import or set low_memory=False.
  rawusgs,geo =read_in_data()
/service/data_import.py:89: DtypeWarning: Columns (4,7,10,17,21,25,26,27,32,35,37,39,48,49,175,176) have mixed types.Specify dtype option on import or set low_memory=False.
  medusgs=medusgs_data_import(rawusgs,grad,sur)
Address already in use
Port 8080 is in use by another program. Either identify and stop that program, or start the server with a different port.
ERROR conda.cli.main_run:execute(47): `conda run python server.py` failed. (See above for error)

Solution

  • You shouldn't be running docker-compose up inside CodeBuild. That actually runs your containers inside the CodeBuild environment, which is pointless there. Change the command to only build the images, not run them:

    docker-compose -f docker-compose.yml build
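
    In the buildspec, that means replacing the up --build -d line in the build phase with the build command. A sketch of the relevant lines only (the tag and push commands further down can stay exactly as they are):

    build:
      commands:
        - echo Build started on `date`
        - echo Building the Docker image...
        - docker-compose -f docker-compose.yml config
        - docker-compose -f docker-compose.yml build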
    

    Also, both of your containers use exactly the same Dockerfile to build exactly the same images. So you only really need to create one docker image, and configure both containers to use that image. There is no reason to create both a service image and a worker image, when both images are exactly the same, the only difference being the command you select at run time.
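
    A sketch of what that could look like in the compose file, building one image and pointing both services at it (the name co2gasp/app is just an example; any single repository name works):

    version: '3.8'
    services:
      web:
        image: co2gasp/app:latest   # example shared image name
        build: ./worker_app
        command: ["python", "server.py"]
      worker:
        # no build section - reuse the image built for the web service
        image: co2gasp/app:latest
        command: ["python", "worker.py"]

    The buildspec would then only need to tag and push that single image to one ECR repository.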

    The runtime issue with ECS/Fargate is almost certainly down to how you are creating the ECS Task Definition and deploying the ECS Task (which you haven't shown in your question). You need to make sure the task definition specifies a different command for each of the two containers.

    The default command is server.py because that's what you configured as the last line in your Dockerfile:

    CMD ["python","server.py"]
    

    You are overriding that in your docker-compose file, but command there is a run-time setting: it isn't baked into the images when you build them, it's only applied when docker-compose runs the containers. The different command settings in your docker-compose file would only carry over to your ECS deployment if you were using the docker-compose ECS integration to perform the deployment, and it sounds like that isn't how you are deploying. So however you are deploying to ECS, you need to override the command setting in the container definitions inside your ECS Task Definition, just as you do in your docker-compose file.
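
    For illustration only (your actual task definition isn't shown, so the container names and image URIs below are taken from your imagedefinitions.json and are assumptions), the containerDefinitions would carry something along these lines, mirroring the command values from your docker-compose file:

    "containerDefinitions": [
      {
        "name": "CO2GASP-Service",
        "image": "<account-id>.dkr.ecr.us-east-2.amazonaws.com/co2gasp/service:latest",
        "command": ["python", "server.py"]
      },
      {
        "name": "CO2GASP-Worker",
        "image": "<account-id>.dkr.ecr.us-east-2.amazonaws.com/co2gasp/worker:latest",
        "command": ["python", "worker.py"]
      }
    ]

    Only the fields relevant to this point are shown; your real container definitions keep whatever port mappings, log configuration and resource settings they already have.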