I have large download files (some are larger than 5 GB) hosted on Amazon S3. My main server runs Nginx. The S3 bucket is not publicly accessible; files are served with signed URLs.
Is there a way to restrict bandwidth when serving from Amazon S3? I know there is no such option on Amazon S3 itself, but can we use Nginx as a proxy and do it there?
I am trying to use the example from this link:
https://coderwall.com/p/rlguog/nginx-as-proxy-for-amazon-s3-public-private-files
This code block:
location ~* ^/proxy_private_file/(.*) {
    set $s3_bucket      'your_bucket.s3.amazonaws.com';
    set $aws_access_key 'AWSAccessKeyId=YOUR_ONLY_ACCESS_KEY';
    set $url_expires    'Expires=$arg_e';
    set $url_signature  'Signature=$arg_st';
    # Note the "?" between the captured path and the query string;
    # without it the upstream URL is malformed
    set $url_full       '$1?$aws_access_key&$url_expires&$url_signature';

    proxy_http_version 1.1;
    proxy_set_header Host $s3_bucket;
    proxy_set_header Authorization '';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers "Set-Cookie";

    proxy_buffering off;
    proxy_intercept_errors on;

    resolver 172.16.0.23 valid=300s;
    resolver_timeout 10s;

    # Note the "/" between the bucket host and $url_full
    proxy_pass http://$s3_bucket/$url_full;
}
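As I understand it, with that block a hypothetical request such as
/proxy_private_file/some/file.zip?e=1735689600&st=SIGNATURE
gets proxied to
http://your_bucket.s3.amazonaws.com/some/file.zip?AWSAccessKeyId=YOUR_ONLY_ACCESS_KEY&Expires=1735689600&Signature=SIGNATURE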
What I don't understand is: how can I pass the signed URL created in PHP to that Nginx config, so that Nginx proxies the request to that signed URL?
I found the solution. Here it is:
First, open the http block of your Nginx config. We will create the zone needed for limiting connections per IP:
limit_conn_zone $binary_remote_addr zone=addr:10m;
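To make it concrete, here is a minimal sketch of where that directive sits (a hypothetical http block; the zone name addr and the 10 MB size match the directive above):

http {
    # Shared memory zone "addr", keyed by the client IP address;
    # it is referenced by "limit_conn addr 1;" in the location below
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    # ... the rest of your http-level config ...
    include /etc/nginx/conf.d/*.conf;  # server blocks live here
}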
Now open your server block in /etc/nginx/conf.d/sitename.conf, or wherever you defined it, and create an internal location. We will redirect the PHP request here:
location ~* ^/internal_redirect/(.*?)/(.*) {
    # Do not allow people to request this location directly;
    # only internal redirects are allowed
    internal;

    # Location-specific logging, so we can clearly see which requests
    # are passing through the proxy and what is happening there
    access_log /var/log/nginx/internal_redirect.access.log main;
    error_log /var/log/nginx/internal_redirect.error.log warn;

    # Extract the download host and URI from the request.
    # When PHP passes a full signed URL, e.g.
    # /internal_redirect/https://bucket.s3.amazonaws.com/file?signature,
    # $1 captures "https:" and $2 captures "/bucket.s3.amazonaws.com/file",
    # so "$download_host/$download_uri" below reassembles the full URL
    set $download_uri $2;
    set $download_host $1;

    # Extract the file name from the URI;
    # it is used for the Content-Disposition header below
    if ($download_uri ~* "([^/]*$)") {
        set $filename $1;
    }

    # Compose the download URL. The query string ($args) carries the
    # signed-URL parameters that S3 requires to serve the file
    set $download_url $download_host/$download_uri?$args;

    # Set download request headers
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers "Set-Cookie";

    # Activate proxy buffering; without it, limiting
    # the bandwidth in the proxy will not work!
    proxy_buffering on;
    # Buffer up to 512 KB of data (32 buffers of 16 KB each)
    proxy_buffers 32 16k;

    proxy_intercept_errors on;
    resolver 8.8.8.8 valid=300s;
    resolver_timeout 10s;

    # The next two lines can be used if your storage backend does not
    # support the Content-Disposition header, which sets the file name
    # browsers use when saving the content to disk
    proxy_hide_header Content-Disposition;
    add_header Content-Disposition 'attachment; filename="$filename"';

    # Do not touch local disks when proxying content to clients
    proxy_max_temp_file_size 0;

    # Limit connections to one per IP address
    limit_conn addr 1;

    # Limit the bandwidth to 300 kilobytes per second per connection
    # (proxy_limit_rate requires nginx 1.7.7 or later)
    proxy_limit_rate 300k;

    # Set the logging level for rejected connections to info so we can
    # see everything. Available levels: info | notice | warn | error
    limit_conn_log_level info;

    # Finally, fetch the file and send it to the client.
    # Beware that you shouldn't prepend "http://" or "https://" here:
    # the scheme is already part of $download_url, and doubling it
    # causes an "invalid port in upstream" error.
    proxy_pass $download_url;
}
Add the finishing touch in PHP by sending your signed URL to Nginx:
header( 'X-Accel-Redirect: ' . '/internal_redirect/' . $YOUR_SIGNED_URL );
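For completeness, here is a minimal sketch of that PHP side, assuming the AWS SDK for PHP v3; the bucket, region, object key and expiry are placeholder values:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Placeholder values; use your own bucket, region and object key
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'eu-west-1',
]);

$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'your_bucket',
    'Key'    => 'path/to/big-file.zip',
]);

// Create a signed URL valid for 20 minutes
$signedUrl = (string) $s3->createPresignedRequest($cmd, '+20 minutes')->getUri();

// Hand the download off to Nginx. The scheme ("https://") stays in the
// URL; the internal_redirect location reassembles it from $1 and $2
header('X-Accel-Redirect: /internal_redirect/' . $signedUrl);
exit;

Note that the query string of the signed URL survives the internal redirect and shows up in $args, which is exactly what the $download_url line in the location block relies on.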