Context: I have an EC2 instance and an S3 bucket. The EC2 instance runs a web server that returns a pre-signed URL in response to a GET request. The client that requests the pre-signed URL from the web server then uploads a file to the S3 bucket. Based on tutorials online, I created an IAM role with full S3 access and attached it to my EC2 instance. With the AWS CLI I am able to list the bucket.
Problem: The external client can reach this REST endpoint and receives the pre-signed URL and the fields associated with it, but it is not able to upload the file.
Here is the server side code:
import json
import datetime

import boto3
import pydantic
import requests
from botocore.exceptions import ClientError
from botocore.client import Config

URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/my-iam-role-s3"
access_key = ""
secret_key = ""
bucket_name = "my-bucket-name-s3"

def get_aws_creds():
    global access_key
    global secret_key
    try:
        response = requests.get(URL)
        response.raise_for_status()
        json_response = json.loads(response.text)
        access_key = json_response["AccessKeyId"]
        secret_key = json_response["SecretAccessKey"]
    except requests.exceptions.HTTPError as err:
        print(err)
        raise SystemExit(err)
    return access_key, secret_key
def get_pre_signed_upload_URL(uuid: str):
    global access_key
    global secret_key
    if access_key == "" or secret_key == "":
        access_key, secret_key = get_aws_creds()
    s3_client = boto3.client(
        's3',
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name="us-west-1",
        config=Config(signature_version='s3v4'))
    try:
        response = s3_client.generate_presigned_post(
            Bucket=bucket_name,
            Key=uuid,
            ExpiresIn=50
        )
    except ClientError as e:
        return None
    return response
The client calling the REST endpoint on EC2 is able to receive the following (information redacted):
{
    'url': 'https://bucket-name.s3.amazonaws.com/',
    'fields': {
        'key': 'some-uuid',
        'x-amz-algorithm': 'AWS4-HMAC-SHA256',
        'x-amz-credential': 'ABCDEFGH/20230628/us-west-1/s3/aws4_request',
        'x-amz-date': '20230628T202210Z',
        'policy': 'POLICY-BLOB',
        'x-amz-signature': 'signature-hash'
    }
}
I use the following to build and run a curl command on my computer; I have already received the URL and fields in upload_object:
import subprocess

curl_upload_command = f'curl -X POST {upload_object["url"]}'
for key, value in upload_object['fields'].items():
    curl_upload_command += f' -F "{key}={value}"'
# The file part must come last, after all of the signed fields
curl_upload_command += f' -F "file=@{post_body["filename"]}"'

# Upload the file using curl
print(curl_upload_command)
upload_result = subprocess.run(curl_upload_command, shell=True, capture_output=True, text=True)
print(upload_result.stdout)
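For comparison, the same multipart POST can be issued directly from Python with requests (a sketch; upload_object and the filename are the values already in hand). S3 requires every signed field to appear before the file part, and requests encodes the data= fields ahead of the files= part, so the ordering requirement is satisfied:

```python
import requests

def upload_via_presigned_post(upload_object, filename):
    """POST a local file to S3 using the presigned 'url' and 'fields'.

    All signed fields must precede the file part in the multipart body;
    requests encodes the data= fields before the files= part.
    """
    with open(filename, "rb") as f:
        resp = requests.post(
            upload_object["url"],
            data=upload_object["fields"],
            files={"file": (filename, f)},
        )
    resp.raise_for_status()  # S3 returns 204 No Content on success
    return resp
```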
The response I get is as follows:
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>ASIA****************</AWSAccessKeyId><RequestId>W8GW0J0PE0BWK5RZ</RequestId><HostId>HOST-ID******</HostId></Error>
Any guidance here will be greatly appreciated.
Note: EC2 with IAM role is generating the pre-signed URL and a public client is trying to POST a file with it.
An IAM Role does not have a permanent Access Key + Secret Key, so whatever you are passing to that function is invalid.

From GetAccessKeyInfo - AWS Security Token Service: "Access key IDs beginning with ASIA are temporary credentials that are created using AWS STS operations."

Instead, your program should call AssumeRole() while passing the IAM Role ARN. This will return a temporary Access Key, Secret Key, and Session Token. All three of these values can then be used to generate the pre-signed URL.

Note that the pre-signed URL will only be valid for the duration that the assumed role is valid (which defaults to 60 minutes).

Also, the program will need a set of AWS credentials to call AssumeRole(). These can be the automatic credentials generated by the IAM Role assigned to the EC2 instance.