amazon-web-services, amazon-s3, go, cache-control, filepicker.io

Given an S3 path and a valid key and secret, how do I update an object's Cache-Control header?


I need to update the headers on files after they are uploaded to S3. I don't have control over the upload process (I'm using the FilePicker.io API, which doesn't provide a way to specify the Cache-Control header as far as I know); the files just magically appear in a bucket. I have the full S3 path to the objects and the key and secret for the bucket.

Using Go, what is the easiest way to add new headers to these objects? It seems like you need to do a PUT copy request, but that requires request signing and overwrites all of the existing headers. All I want to do is add a Cache-Control header; there has to be an easier way, right?


Solution

  • The small program below adds a Cache-Control header to the given bucket/key combination. The important bit is the s3.CopyOptions struct; its MetadataDirective can also be COPY (see http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html for details). Also note that the source must be given as bucket/key, since the source can of course be in another bucket.

    package main
    
    import (
        "fmt"
        "os"
    
        "github.com/goamz/goamz/aws"
        "github.com/goamz/goamz/s3"
        //// should work as well
        //"github.com/crowdmob/goamz/aws"
        //"github.com/crowdmob/goamz/s3"
    )
    
    func main() {
        // use as
        //  $ go run s3meta.go bucket key
        // will add a 1 hour Cache-Control header to
        // key in bucket
        auth := aws.Auth{
            AccessKey: os.Getenv("AWS_ACCESS_KEY_ID"),
            SecretKey: os.Getenv("AWS_SECRET_ACCESS_KEY"),
        }
    
        bucketName, keyName := os.Args[1], os.Args[2]
    
        bucket := s3.New(auth, aws.USEast).Bucket(bucketName)
        opts := s3.CopyOptions{}
        opts.CacheControl = "max-age=3600"
        opts.MetadataDirective = "REPLACE"
    
        _, err := bucket.PutCopy(keyName, s3.PublicRead, opts, bucketName+"/"+keyName)
        if err != nil {
            panic(err)
        }
    }
    

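One wrinkle with passing the source as bucket/key: keys that contain spaces or other reserved characters need to be percent-escaped before being used as a copy source. A minimal stdlib-only sketch (copySource is a hypothetical helper, not part of goamz):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// copySource builds the "bucket/key" source string for a PUT copy request,
// percent-escaping each segment of the key while keeping "/" separators intact.
func copySource(bucket, key string) string {
	parts := strings.Split(key, "/")
	for i, p := range parts {
		parts[i] = url.PathEscape(p)
	}
	return bucket + "/" + strings.Join(parts, "/")
}

func main() {
	fmt.Println(copySource("my-bucket", "images/my file.png"))
	// my-bucket/images/my%20file.png
}
```

For keys made of plain ASCII letters, digits, dashes and slashes this is a no-op, so it is safe to use unconditionally.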
    Trial run (bucket has since been deleted):

    ╭─brs at stengaard in ~/ using
    ╰─○ curl  -I https://s3.amazonaws.com/cf-templates-1r14by1vl75o0-us-east-1/success.png
    HTTP/1.1 200 OK
    x-amz-id-2: 49oTuRARMHlx32nqv34CMOjdTMBUCZIVzP8YKBS2Wz5h1w5KBG62u8nFru1UkIbJ
    x-amz-request-id: C92E9952BFF31D77
    Date: Mon, 30 Jun 2014 08:57:15 GMT
    Last-Modified: Mon, 30 Jun 2014 08:50:45 GMT
    ETag: "41b9951893f1bbff89e2b9c8a5b7ea18"
    Accept-Ranges: bytes
    Content-Type: image/png
    Content-Length: 61585
    Server: AmazonS3
    
    ╭─brs at stengaard in ~/ using
    ╰─○ go run s3meta.go cf-templates-1r14by1vl75o0-us-east-1 success.png
    ╭─brs at stengaard in ~/ using
    ╰─○ curl  -I https://s3.amazonaws.com/cf-templates-1r14by1vl75o0-us-east-1/success.png
    HTTP/1.1 200 OK
    x-amz-id-2: oiDeXjO1V4kquWo8UlNWBi/HAHoqfvlOSHVeXFZXv2yA4o0+Njcdshhu15PIiw7J
    x-amz-request-id: 0BB1A397DE7EBE75
    Date: Mon, 30 Jun 2014 09:00:17 GMT
    Cache-Control: max-age=3600
    Last-Modified: Mon, 30 Jun 2014 09:00:12 GMT
    ETag: "41b9951893f1bbff89e2b9c8a5b7ea18"
    Accept-Ranges: bytes
    Content-Type: binary/octet-stream
    Content-Length: 61585
    Server: AmazonS3
    

    Note that the Content-Type changes as well, since opts.MetadataDirective = "REPLACE" replaces all metadata with what is in the copy request, and we did not set a Content-Type there. Whether this little thing is worth the hassle of updating headers out-of-band is really domain-specific: how important is it that clients cache the uploaded files? Is it too expensive to do a HEAD request to S3 first?
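    If you run this over many objects, one way to keep the copy traffic down is to HEAD each object first and only issue the copy when the existing Cache-Control lacks a max-age directive. A small stdlib-only sketch of that check (needsUpdate is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// needsUpdate reports whether a Cache-Control header value (as returned by a
// HEAD request) is missing a max-age directive and should therefore be rewritten.
func needsUpdate(cacheControl string) bool {
	for _, d := range strings.Split(cacheControl, ",") {
		if strings.HasPrefix(strings.TrimSpace(d), "max-age=") {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(needsUpdate(""))                    // true: no header at all
	fmt.Println(needsUpdate("max-age=3600"))        // false: already set
	fmt.Println(needsUpdate("no-cache, max-age=0")) // false: max-age present
}
```

    The empty string covers the common case where the HEAD response has no Cache-Control header at all.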