I am trying to run an app with Docker in a GitLab CI/CD pipeline. The job succeeds, but boto3 (the AWS SDK for Python) is unable to locate the credentials.
I have placed the .aws folder in /root/, and the config and credentials files inside it contain the aws_access_key_id and aws_secret_access_key. The Dockerfile copies them as well:
COPY ./.aws/ /root/
Once the app is running, boto3 is still unable to find them.
Running aws configure list shows a properly set profile. One thing I am not doing is setting a profile name; I am using [default].
According to the docs, boto3 should search the .aws/credentials and .aws/config files.
Searching various sources, I find conflicting advice: some say you need an IAM task role, some say you do not, and others say you need to set a profile name to make it work. I cannot find the solution.
Has anyone faced the same issue and knows the resolution?
RESOLUTION:
So, I actually resolved this back then, and the following seems to be a safe and simple way to achieve it.
Using a password-keeper package, I fetch AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from it, hardcode AWS_REGION in the main script that runs the app, and set all three as environment variables once the container is running.
This takes just a few lines of code and requires no changes to the instance, etc.:
import os  # set these before the first boto3 session/client is created

os.environ["AWS_ACCESS_KEY_ID"] = AWS_ACCESS_KEY_ID
os.environ["AWS_SECRET_ACCESS_KEY"] = AWS_SECRET_ACCESS_KEY
os.environ["AWS_REGION"] = AWS_REGION
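For completeness, a minimal self-contained sketch of the whole flow. The key values below are hypothetical placeholders; in the real pipeline the first two come from the password-keeper package at runtime, and the region is hardcoded as described above:

```python
import os

# Hypothetical placeholders: in practice, fetch these two from your
# password-keeper package instead of hardcoding them.
AWS_ACCESS_KEY_ID = "AKIAEXAMPLE"
AWS_SECRET_ACCESS_KEY = "example-secret-key"
AWS_REGION = "eu-west-1"  # hardcoded in the main script

# Export the credentials BEFORE the first boto3 session/client is created;
# boto3 reads the environment when it resolves its credential chain.
os.environ["AWS_ACCESS_KEY_ID"] = AWS_ACCESS_KEY_ID
os.environ["AWS_SECRET_ACCESS_KEY"] = AWS_SECRET_ACCESS_KEY
os.environ["AWS_REGION"] = AWS_REGION
# Older botocore releases only read AWS_DEFAULT_REGION, so setting both
# is a cheap safeguard:
os.environ["AWS_DEFAULT_REGION"] = AWS_REGION

# import boto3
# s3 = boto3.client("s3")  # now picks the credentials up from the environment
```

Setting the variables in-process like this avoids baking credentials into the image or relying on the .aws directory being in exactly the right place inside the container.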