amazon-web-services, docker, amazon-elastic-beanstalk

Accessing environment variables within a running container host in Elastic Beanstalk


I have a Red Hat Enterprise Linux Docker container designed to run and host a Tomcat application, deployed on the Docker platform with Amazon Linux 2023. The Tomcat application requires a set of database connection credentials at install time. I want to configure the container to accept environment configuration parameters specified in the Elastic Beanstalk environment, either through the UI or through the install-server.config file shown below.

Typically I'd supply these credentials to the image at build time, but the RDS connection parameters aren't defined until the user creates the Beanstalk environment, so I can't bake them into the image.

In Elastic Beanstalk, the environment configuration parameters don't exist until the Beanstalk environment is built and the EC2 instance is running Docker. I've created a Docker image that installs Java and Tomcat but does NOT deploy my Tomcat application. The install script, which requires the RDS connection parameter variables, will need to be executed after the container is deployed, once the environment configuration parameters are available to the container.

I've created an install-server.config under .ebextensions that looks like this:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_run_installer.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      
      # Function to check if Docker is running
      function wait_for_docker {
        echo "Waiting for Docker service to start..."
        until sudo docker info >/dev/null 2>&1; do
          sleep 5
        done
        echo "Docker is up and running."
      }
      
      # Function to check if any containers are running
      function check_running_containers {
        local attempts=100
        local count=0
        
        while [ $count -lt $attempts ]; do
          CONTAINER_LIST=$(sudo docker ps -q)
          if [ -n "$CONTAINER_LIST" ]; then
            echo "Running containers:"
            sudo docker ps
            return 0
          fi
          echo "No running containers found. Retrying in 5 seconds..."
          sleep 5
          count=$((count + 1))
        done
        
        echo "No running containers found after $attempts attempts."
        return 1
      }
      
      # Wait for Docker service to be ready
      wait_for_docker

      # Check for running containers
      if ! check_running_containers; then
        echo "No running containers detected. Exiting script."
        exit 1
      fi

      # Get the first container ID
      CONTAINER_ID=$(sudo docker ps -q | head -n 1)
      if [ -z "$CONTAINER_ID" ]; then
        echo "No containers found."
        exit 1
      else
        echo "Container ID: $CONTAINER_ID"
        sudo docker exec $CONTAINER_ID sh /opt/install_server.sh &
      fi

option_settings:
  aws:elasticbeanstalk:application:environment:
    DB_USER: "postgres"
    DB_PASSWORD: "<omitted>"
    DB_TYPE: "postgres"
    DB_URL: "some-rds-db.chs4y4g6ulgr.us-east-1.rds.amazonaws.com"
    DB_PORT: "5432"

container_commands:
  01_make_script_executable:
    command: "chmod +x /opt/elasticbeanstalk/hooks/appdeploy/post/10_run_installer.sh"
    leader_only: true
  02run_installer:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/10_run_installer.sh"
    leader_only: true

No matter what I try, the script can never seem to find the running Docker container. When I attempt to get the containers by running sudo docker ps -q | head -n 1, it returns no containers. I assumed it might be due to Docker taking a few minutes to spin up, so I added delay logic to account for this, but it still fails to find any running containers.

The logs from cfn-init-cmd.log:

2024-07-19 23:54:31,073 P3012 [INFO] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2024-07-19 23:54:31,073 P3012 [INFO] Config postbuild_0_evolven_server_beanstalk_deployment
2024-07-19 23:54:31,082 P3012 [INFO] ============================================================
2024-07-19 23:54:31,082 P3012 [INFO] Test for Command 01_make_script_executable
2024-07-19 23:54:31,087 P3012 [INFO] Completed successfully.
2024-07-19 23:54:31,087 P3012 [INFO] ============================================================
2024-07-19 23:54:31,087 P3012 [INFO] Command 01_make_script_executable
2024-07-19 23:54:31,090 P3012 [INFO] Completed successfully.
2024-07-19 23:54:31,099 P3012 [INFO] ============================================================
2024-07-19 23:54:31,099 P3012 [INFO] Test for Command 02run_installer
2024-07-19 23:54:31,103 P3012 [INFO] Completed successfully.
2024-07-19 23:54:31,104 P3012 [INFO] ============================================================
2024-07-19 23:54:31,104 P3012 [INFO] Command 02run_installer
2024-07-20 00:03:05,873 P3012 [INFO] -----------------------Command Output-----------------------
2024-07-20 00:03:05,873 P3012 [INFO]    Waiting for Docker service to start...
2024-07-20 00:03:05,873 P3012 [INFO]    Docker is up and running.
2024-07-20 00:03:05,873 P3012 [INFO]    No running containers found. Retrying in 5 seconds...
2024-07-20 00:03:05,873 P3012 [INFO]    No running containers found. Retrying in 5 seconds...
<repeats 90+ more times>
    No running containers found after 100 attempts.
2024-07-20 00:03:05,880 P3012 [INFO]    No running containers detected. Exiting script.

Things I have tried:

  1. If I remove the container_commands section from my .config file, deploy the environment, SSH into the host, and execute 10_run_installer.sh manually as the ec2-user, it works as expected: it runs my deployment script within the container and launches Tomcat.
  2. I also ran the commands one by one manually on the container host as the ec2-user, and every command worked as expected.

Why can't my 10_run_installer.sh script find my running containers when launched from container_commands? Is there a better approach?


Solution

  • I've found a solution that might help anyone who's been struggling with environment variables in Docker containers on Elastic Beanstalk. The key is to use Elastic Container Service (ECS): when setting up your environment, instead of choosing 'Docker on Amazon Linux', select 'ECS Docker running on Amazon Linux 2023' as your platform. The advantage of ECS is that it has built-in support for supplying environment variables to containers, which greatly simplifies managing them within your Docker containers on Elastic Beanstalk. Even if I had gotten the docker exec command in my previous approach to work, it wouldn't have been good practice from a security perspective.

    Here is a sample Dockerrun.aws.json.

    {
        "AWSEBDockerrunVersion": "2",
        "containerDefinitions": [
            {
                "name": "server",
                "image": "myrepo/myserver:latest",
                "memory": 1024,
                "update": true,
                "portMappings": [
                    {
                        "hostPort": 80,
                        "containerPort": 8080
                    }
                ],
                "environment": [
                    {
                        "name": "DB_URL",
                        "value": "my-rds-test-instance-1.ehs4y4g6ulgr.us-east-1.rds.amazonaws.com"
                    },
                    {
                        "name": "DB_TYPE",
                        "value": "postgres"
                    },
                    {
                        "name": "DB_USER",
                        "value": "postgres"
                    },
                    {
                        "name": "PASSWORD",
                        "value": "<omitted>"
                    },
                    {
                        "name": "DB_PORT",
                        "value": "5432"
                    }
                ]
            }
        ]
    }
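
    As a quick local sanity check before pushing the image, you can feed the same variables with docker run -e and confirm the container reads them at startup (a sketch; the image tag and values mirror the sample above):

    docker run --rm -p 8080:8080 \
      -e DB_URL="my-rds-test-instance-1.ehs4y4g6ulgr.us-east-1.rds.amazonaws.com" \
      -e DB_TYPE="postgres" \
      -e DB_USER="postgres" \
      -e DB_PASSWORD="<omitted>" \
      -e DB_PORT="5432" \
      myrepo/myserver:latest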
    

    I also recently refined my Dockerfile and learned something valuable about Docker's behavior. It's basic Docker 101 stuff, but my original setup ran an install.sh script during the image build that depended on environment variables for database credentials. The unexpected issue: values consumed during the image build are frozen into the image. That contrasted with my initial assumption that these variables would dynamically adapt to user-supplied values at runtime. Docker does offer build-time arguments (ARG), but my use case required more flexibility to accommodate varying user inputs for these variables after the image was built.
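
    A minimal demo of the pitfall (hypothetical image name and paths, just for illustration): work done by RUN during docker build is frozen into the image, while only commands executed at container start see the values Elastic Beanstalk injects.

    docker build -t env-demo - <<'EOF'
    FROM alpine:3
    ARG DB_USER=build-time-user
    # Runs during the build, so its result is frozen into the image:
    RUN echo "installed for: $DB_USER" > /baked.txt
    # Runs at container start, so it sees whatever is set then:
    CMD ["sh", "-c", "cat /baked.txt; echo runtime sees: $DB_USER"]
    EOF

    docker run --rm -e DB_USER=eb-supplied env-demo
    # installed for: build-time-user   <- frozen at build time
    # runtime sees: eb-supplied        <- picked up at startup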

    To solve this, I realized that environment variables can be read by bash scripts or code once the container is running. So I removed the install.sh command from my Dockerfile and created a separate bash script, entrypoint.sh, containing the same install.sh command. My Dockerfile's last step is ENTRYPOINT ["/entrypoint.sh"], which executes the installation at container startup, pulling in the environment variables the user has set. This provides the flexibility I needed: user-supplied values are incorporated at runtime rather than being fixed during the build. I rebuilt the image and uploaded it to my repo.
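
    The tail of the Dockerfile then looks something like this (a sketch; the absolute path is an assumption, adjust to your layout):

    # Copy the script in, make it executable, and reference it by an absolute
    # path so the exec-form ENTRYPOINT can find it:
    COPY entrypoint.sh /entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]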

    You can even take it a step further by integrating AWS Secrets Manager, provided the instance has the right permissions.

    Here is a sample of my entrypoint.sh:

    #!/bin/bash
    
    echo "Using DB_URL: $DB_URL"
    echo "Using DB_PORT: $DB_PORT"
    echo "Using DB_USER: $DB_USER"
    echo "Using DB_TYPE: $DB_TYPE"
    echo "Using DB_PASSWORD: $DB_PASSWORD"
    
    # Optionally fetch the DB password from AWS Secrets Manager instead of a
    # plain environment variable (the instance role needs
    # secretsmanager:GetSecretValue):
    SECRET_NAME="mysecret"
    REGION="us-east-1"
    DB_PASSWORD=$(aws secretsmanager get-secret-value --secret-id "$SECRET_NAME" --region "$REGION" --query SecretString --output text | jq -r .DB_PASSWORD)
    export DB_PASSWORD
    
    
    # Run the installer
    su -c "/opt/Installer/install.sh -t /opt/tomcat -d $DB_URL -p $DB_PORT -u $DB_USER -P $DB_PASSWORD -c $DB_TYPE"
    
    # Check if the installation was successful
    if [ $? -eq 0 ]; then
        echo "Installation successful"
    else
        echo "Installation failed"
        exit 1
    fi
    
    # Start Tomcat in the foreground; it becomes the container's main process
    # and keeps the container alive, so no trailing tail -f is needed.
    /opt/tomcat/bin/catalina.sh run
    

    To deploy, you can then add an .ebextensions folder with more configuration data (there are many more settings, of course). Make sure the "memory" value you set within your Dockerrun.aws.json is compatible with the sizing of the InstanceType; you can check an instance type's memory with the AWS CLI, as shown after this sample. I kept provisioning t3.micro, which broke my deployment because my container required at least 1 GB.

    option_settings:
      aws:autoscaling:launchconfiguration:
        InstanceType: t2.medium
      aws:autoscaling:asg:
        MinSize: 1
        MaxSize: 2
      aws:elasticbeanstalk:environment:process:default:
        HealthCheckPath: /health
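
    If you're unsure how much memory an instance type actually provides, the AWS CLI can tell you before you pick a value (the instance types here are just examples):

    # Print the memory (in MiB) for candidate instance types:
    aws ec2 describe-instance-types \
      --instance-types t3.micro t2.medium \
      --query "InstanceTypes[].[InstanceType, MemoryInfo.SizeInMiB]" \
      --output table

    t3.micro reports 1024 MiB, which leaves no headroom for the host once a container reserves 1024 MiB, which is exactly why my deployments kept breaking.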
    

    I created a project folder (e.g., my-app), added the Dockerrun.aws.json at the root, and put the .ebextensions folder underneath it with the environment-properties.config:

    my-app/
    ├── Dockerrun.aws.json
    ├── .ebextensions/
    │   └── environment-properties.config
    

    You can zip it (navigate into the root project directory and run "zip -r my-app.zip Dockerrun.aws.json .ebextensions/") and/or use the EB CLI to deploy, as sketched below.
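
    For example, with the EB CLI installed and configured, a deploy from the project root might look like this (my-app and my-app-env are placeholder names):

    # Initialize the EB application, create the environment, and deploy:
    eb init my-app --region us-east-1   # pick the ECS platform when prompted
    eb create my-app-env                # first deploy creates the environment
    eb deploy my-app-env                # subsequent deploys push new versions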

    This is a basic example, but I hope it helps.