node.js, amazon-web-services, amazon-s3, browser, upload

How can I safely upload a large file from the browser to a bucket on Amazon S3?


I need to allow a user to upload a large file (over 2 GB) from the web application, and I don't want to expose the company's access keys.

One of my teammates found that, to upload without exposing the keys, we need to use pre-signed URLs: the frontend requests a pre-signed URL from the backend and, after receiving it, uploads the file via an HTTP PUT request. The following code does this:

// Angular HttpClient request that PUTs the file to the pre-signed URL
import { HttpRequest, HttpHeaders } from '@angular/common/http';

const presignedUrl = '...';

// `file` is the File selected by the user; `contentType` is its MIME type.
// S3 expects the raw file bytes as the PUT body, not a JSON wrapper.
const req = new HttpRequest('PUT', presignedUrl, file, {
    headers: new HttpHeaders({
        'Content-Type': contentType,
        'x-amz-acl': 'public-read',
    }),
    reportProgress: true,
});

I also found that, to upload large files via the AWS SDK for Node.js, we just need to pass an options object as the second argument to the upload method, as follows:

import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

const options: AWS.S3.ManagedUpload.ManagedUploadOptions = {
    partSize: 10 * 1024 * 1024, // each part is 10 MB
    queueSize: 2, // 2 parts are uploaded concurrently
};

const params: AWS.S3.PutObjectRequest = {
    Bucket: 'teste-multipart-upload',
    Key: key,
    Body: body,
};

// upload() performs a managed multipart upload for large bodies
return s3.upload(params, options, (err, data) => {
    if (err) throw err;
    console.log(data);
});

How can I do both of these things at once? Either upload via the SDK to a pre-signed URL, or upload a large file via the REST API as shown above (if there is another option, I'd like to know about it).


Solution

  • This can be accomplished with a series of steps (a back-end sketch follows below):

    1. The back-end initiates the multipart upload.
    2. The back-end creates a series of pre-signed URLs, one per part, and sends them to the client.
    3. The client uploads the various parts to those pre-signed part URLs (in any order).
    4. The client tells the back-end that it's done, and the back-end completes the multipart upload.

    See Multipart uploads with S3 pre-signed URLs for an example.
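
    A minimal back-end sketch of steps 1, 2, and 4, assuming the aws-sdk v2 client and the bucket name from the question (route wiring and the client-side part uploads of step 3 are omitted):

    import * as AWS from 'aws-sdk';

    const s3 = new AWS.S3();
    const Bucket = 'teste-multipart-upload'; // bucket name from the question

    // Step 1: initiate the multipart upload and return its UploadId.
    async function initiateUpload(Key: string): Promise<string> {
        const { UploadId } = await s3.createMultipartUpload({ Bucket, Key }).promise();
        return UploadId!;
    }

    // Step 2: create one pre-signed URL per part for the client to PUT to.
    async function presignParts(Key: string, UploadId: string, partCount: number): Promise<string[]> {
        return Promise.all(
            Array.from({ length: partCount }, (_, i) =>
                s3.getSignedUrlPromise('uploadPart', {
                    Bucket,
                    Key,
                    UploadId,
                    PartNumber: i + 1, // part numbers start at 1
                    Expires: 60 * 60,  // URLs valid for 1 hour
                }),
            ),
        );
    }

    // Step 4: complete the upload with the ETag each part's PUT response
    // returned (the client collects these while performing step 3).
    async function completeUpload(
        Key: string,
        UploadId: string,
        parts: { ETag: string; PartNumber: number }[],
    ): Promise<void> {
        await s3.completeMultipartUpload({
            Bucket,
            Key,
            UploadId,
            MultipartUpload: { Parts: parts },
        }).promise();
    }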