Tags: amazon-web-services, docker, spring-boot, spring-cloud-config, jhipster-registry

Deploying JHipster Registry on Amazon ECS


I am developing a microservice-based app with JHipster (but the question applies to Spring Cloud Config in general). For development I was using docker-compose, and now I'm setting up a staging environment on Amazon Elastic Container Service (ECS).

I'm facing a problem connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume containing the SSH key required to access Bitbucket:

services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v3.2.3
    volumes:
      - /home/ubuntu/bb-key:/root/.ssh

How can I pass this key to a container running in ECS?

I can't put it directly on an EC2 instance, because I don't know which instance in the cluster the registry will start on. Maybe I should put it on S3 and change the registry image to download it from there? But that doesn't feel right.


Solution

  • I know this is a bit late, but you can add environment variables to your containers. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html

    Much like export commands in Linux, ECS can pass variables to your Docker containers the same way you would with the -e switch on docker run. This lets you pass secrets. There might be a better way, but since you can restrict access to those variables, this may be an acceptable workaround. You just need to adapt any scripts inside the Docker image to use those environment variables: since the variables can change over time but the image does not, I normally make my scripts accept/look for environment variables and document them.
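    For example, the container definition in an ECS task can carry an environment section, which ECS injects much like docker run -e. A minimal sketch using the AWS CLI (the family, names, and value are placeholders, and in practice you would not want a real private key sitting in your shell history):

        aws ecs register-task-definition \
          --family jhipster-registry \
          --container-definitions '[{
            "name": "jhipster-registry",
            "image": "jhipster/jhipster-registry:v3.2.3",
            "memory": 512,
            "environment": [
              { "name": "SSH_KEY", "value": "<base64-encoded private key>" }
            ]
          }]'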

    In your case, you can put the contents of the RSA key file into an environment variable as a single string, and have a script write that variable out to a file in the .ssh directory.

    echo "$SSH_KEY" > ~/.ssh/some_key

    Just have this line of code in an entry.sh script or something similar and you should be good. Whenever the container starts, it will write the key into the .ssh directory.
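    A slightly fuller sketch of such an entry script (the names are hypothetical; it assumes SSH_KEY holds the key base64-encoded so it survives as a single-line variable):

        #!/bin/sh
        # entry.sh -- recreate the SSH key from the environment, then start the app
        mkdir -p /root/.ssh
        echo "$SSH_KEY" | base64 -d > /root/.ssh/id_rsa
        chmod 600 /root/.ssh/id_rsa   # ssh refuses keys with loose permissions
        exec "$@"                     # hand off to the image's original command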

    The other way is, as you described, to use an S3 bucket and keep the key/value pairs there, or in this case an SSH key, and have ECS load those through the task scripts, or through AWS CLI commands inside the Docker container. However, the latter means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
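    If you go the S3 route, the startup step might look like this (the bucket and path are placeholders; it assumes the task's IAM role grants read access to the object and that the image has the AWS CLI installed):

        #!/bin/sh
        # fetch the key from S3 at container startup, then launch as usual
        aws s3 cp s3://my-config-bucket/bb-key /root/.ssh/id_rsa
        chmod 600 /root/.ssh/id_rsa
        exec "$@"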

    If this doesn't solve your issue, let me know and I'll rework this answer to better suit the problem you're having. But from what I read, this should get you in the ballpark of what you need.

    One more way is to create an API key that allows access to the Bitbucket repo (or another repo, depending on changing needs), feed that key in the same way you were thinking of doing with the SSH key, and use the variable in the git URL so you can pull the repo over HTTP(S), if that's an option for your setup.
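    A sketch of that last option (all values are placeholders; it relies on Spring Boot's relaxed binding of environment variables to the spring.cloud.config.server.git.* properties, and on a Bitbucket app password or similar token):

        # point the registry's config server at Bitbucket over HTTPS instead of SSH
        export SPRING_CLOUD_CONFIG_SERVER_GIT_URI="https://bitbucket.org/your-org/central-config.git"
        export SPRING_CLOUD_CONFIG_SERVER_GIT_USERNAME="ci-user"
        export SPRING_CLOUD_CONFIG_SERVER_GIT_PASSWORD="$BITBUCKET_APP_PASSWORD"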