Tags: ruby-on-rails, amazon-web-services, amazon-s3, fog

Getting Negative Expiry when Using Fog AWS S3 & Rails


I'm attempting to generate a signed URL for a file in S3 using Fog; however, the URL that gets returned always has a negative expiry, which causes requests to it to fail with a 400.

# Establish a connection to S3 via Fog
connection = Fog::Storage.new(
  region: 'us-west-1',
  provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY'],
  aws_secret_access_key: ENV['AWS_SECRET_KEY']
)
bucket = connection.directories.get(BUCKET)
file = 'test.jpg'
# Intended: a signed URL valid for 300 seconds
p file_url = bucket.files.get_https_url("uploads/#{file}", 300)

Generated URL:

https://account.s3-us-west-x.amazonaws.com/files/test.doc?X-Amz-Expires=-1443648781&X-Amz-Date=20150930T213801Z&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AAAVA***FA/20150930/us-west-x/s3/aws4_request&X-Amz-SignedHeaders=host&X-Amz-Signature=e31663f9b2470e***215825d585b14c37e

Am I missing something here? Why is the generated URL giving me a negative expiration (X-Amz-Expires)?


Solution

  • The expires argument appears to expect the absolute expiration time as a unix epoch timestamp... not the number of seconds from now.

    If so, then "300" would be an expiration time of "1970-01-01 00:05:00 UTC" — which was exactly 1443648781 seconds in the past at the moment (20150930T213801Z) you generated that signed URL.

    The signature you're generating is AWS Signature V4, where the URL expresses the expiration as a number of seconds from the signing time. The older AWS Signature V2, by contrast, expected an absolute epoch time, so based on that legacy behavior it would make sense for the library to still expect an epoch time as the argument regardless of the signature version in use, and to do the subtraction when building the URL... but it seems a little silly for such a glaringly invalid value to be blindly accepted by the library.
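If the library really does treat the argument as an absolute epoch time and derives X-Amz-Expires by subtracting the signing time, the negative value in your URL follows directly from the arithmetic. A minimal sketch of that subtraction, using the signing timestamp taken from the X-Amz-Date parameter in your URL (the subtraction itself is an assumption about the library's internals, not confirmed from its source):

```ruby
require 'time'

# Signing time shown in the URL's X-Amz-Date parameter.
signing_time = Time.parse('2015-09-30T21:38:01Z')

# The argument passed to get_https_url, interpreted as an
# absolute epoch timestamp: 1970-01-01 00:05:00 UTC.
expires_arg = 300

# Hypothesis: X-Amz-Expires = expiration_epoch - signing_epoch.
# A tiny absolute value therefore yields a large negative duration.
x_amz_expires = expires_arg - signing_time.to_i
puts x_amz_expires  # => -1443648781, matching the URL in the question
```

Under that assumption, passing an absolute time instead — e.g. `bucket.files.get_https_url("uploads/#{file}", Time.now + 300)` — would produce a positive X-Amz-Expires roughly 300 seconds long.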