amazon-web-services, amazon-ecs, aws-fargate

ECS Fargate task not applying role


I have an ECS Fargate task running that has a role attached to it. This role has the S3FullAccess policy, and its trust relationship allows the ECS service to assume it via AssumeRole.
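
(For context, the role is attached to the task through the taskRoleArn field of the task definition; the family name and ARN below are placeholders:)

    {
      "family": "my-task",
      "taskRoleArn": "arn:aws:iam::123456789012:role/my-task-role",
      ...
    }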

However, when trying to put an object into a bucket, I get Access Denied errors. I have tried booting an EC2 instance and attaching the same role, and from it I can put to the bucket without issue.

To me it seems like the role is not being attached to the task. Is there an important step I'm missing? I can't SSH into the instance as it's Fargate.

UPDATE: I extracted the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that are set in the task and used them on my local machine. I get the Access Denied errors there too, implying (to me) that none of the policies I have set for that role are being applied to the task.
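
A quick way to see which principal those extracted keys actually map to (assuming the AWS CLI is installed on the local machine):

    # Export the values copied from the task, then ask STS whose keys they are
    export AWS_ACCESS_KEY_ID=AKIA...        # value copied from the task
    export AWS_SECRET_ACCESS_KEY=...        # value copied from the task
    aws sts get-caller-identity

If the Arn in the output is not the task role, the keys aren't coming from the role at all.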

Any help is appreciated!

WORKAROUND: A simple workaround is to create an IAM User with programmatic access and set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables in your task definition.

This works, but does not explain the underlying issue.
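
(Roughly, the relevant fragment of a container definition for this workaround; the values are placeholders:)

    "environment": [
      { "name": "AWS_ACCESS_KEY_ID", "value": "AKIA..." },
      { "name": "AWS_SECRET_ACCESS_KEY", "value": "..." }
    ]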


Solution

  • I've just had a similar issue, and I think it's probably due to your program being unable to access the role's credentials that are exposed by the ECS container credentials endpoint (the task-role equivalent of the EC2 instance metadata service).

    Specifically, there's an environment variable called AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, and its value is what the AWS SDKs need in order to use the task role. The ECS Container Agent sets it when your task starts, and it is exposed to the container's main process, the one with process ID 1. If your program isn't running as that process, it may not see the env var, which would explain the Access Denied error.
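
    A quick way to verify this from inside the container (169.254.170.2 is the fixed address of the ECS credentials endpoint; this assumes curl is available in the image):

    echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
    curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"

    If the variable is set and the curl call returns a JSON document containing AccessKeyId, SecretAccessKey and Token, the task role itself is fine, and the problem is that your process doesn't see the variable.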

    Depending on how your program is running, there'll be different ways to share the env var.

    I had the issue inside SSH login shells (by the way, you can SSH into Fargate tasks by running sshd), so I inserted this somewhere in my Docker entrypoint script:

    # To share the env var with login shells
    echo "export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" >> /root/.profile
    

    In other cases it might be enough to add this to your Docker entrypoint script:

    # To export the env var for use by child processes
    export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
    
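
    For example, a minimal entrypoint along these lines (the script name and its use as the image's ENTRYPOINT are assumptions) keeps the variable visible and still hands PID 1 to your main command:

    #!/bin/sh
    # Hypothetical minimal entrypoint.sh
    # Make sure the credentials URI is exported to child processes...
    export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
    # ...then replace this shell with the container's main command so it
    # runs as PID 1 with the variable in its environment.
    exec "$@"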

    References: