I currently have a working upload system, including a progress bar, that uploads files from my front end using Axios to my AWS S3 bucket (via multer-s3 on a Node.js back end running on an EC2 instance).
The system works for smaller files but fails when uploading larger files.
The files my users will need to upload could be anywhere between 20 MB and 1 GB.
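For context, the relevant front-end call looks roughly like this. This is a simplified sketch: updateProgressBar, selectedFile, and the variables feeding the headers are placeholders, but the endpoint, the 'file' field name, and the header names match the server code below. Note that axios's default timeout is 0 (disabled), so I don't believe the client is the one cutting the connection:

const formData = new FormData();
formData.append('file', selectedFile); // must match uploadResources.array('file', 1)

axios.post('/uploadResources/', formData, {
    headers: {
        // these feed the S3 key built in the multer-s3 "key" function
        path: clientPath,
        pid: projectId,
        tofrom: toFrom
    },
    timeout: 0, // axios default; the client never times out on its own
    onUploadProgress: (e) => {
        // drives the progress bar
        updateProgressBar(Math.round((e.loaded * 100) / e.total));
    }
});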
I am aware that there are a number of similar questions on this, but from what I can see they either have no answers, or the answer was never accepted by the OP. Also, many answers come back with "use multipart upload", but in my (limited) understanding the Multer/multer-s3 middleware is meant to take care of that, and since I have an (almost) working system I would prefer to fix this issue with multer (assuming a multer solution exists).
The error I am getting in the console is:
POST [server ip/url] net::ERR_CONNECTION_RESET
At first I thought the cutoff was file size (it seemed to fail above roughly 275 MB), but when I timed it, the progress halts at about 60 seconds every time, so it may be a timeout issue. Note: the progress bar stops at 60 seconds, but the error does not show in the console until about 30 or 40 seconds after the upload halts.
The code for the upload is as follows:
var uploadResources = multer({
    storage: multerS3({
        s3: s3,
        bucket: bucketName,
        acl: "public-read",
        contentType: multerS3.AUTO_CONTENT_TYPE,
        metadata: function (req, file, cb) {
            cb(null, { fieldName: file.fieldname });
        },
        fileFilter, limits: {
            fileSize: MAX_SIZE
        },
        key: function (req, file, cb) {
            // the S3 object key is assembled from custom request headers
            cb(null, ClientObjects + "/" +
                req.headers.path + "/" +
                req.headers.pid + "/" +
                req.headers.tofrom + "/" +
                file.originalname);
        }
    })
});

app.post(
    '/uploadResources/', uploadResources.array('file', 1), (req, res) => {
        res.json({
            files: req.files // .array() populates req.files (req.file is for .single())
        });
    });
What I have tried (from various posts all over the net; my search history for any of the relevant keywords is totally purple!):
Increasing or removing my file size limit:
const MAX_SIZE = 9000000000; // ~9 GB, well above the largest expected file

fileFilter, limits: {
    fileSize: MAX_SIZE
},
Overriding multer limits (this time at the top level of the multer options, rather than inside the storage options):

var limits = {
    files: 1, // allow only 1 file per request
    fileSize: 900 * 1024 * 1024 // 900 MB
};

var uploadResources = multer({
    limits: limits,
    storage: multerS3({
        s3: s3,
        bucket: bucketName,
        acl: "public-read",
        contentType: multerS3.AUTO_CONTENT_TYPE,
        metadata: function (req, file, cb) {
            cb(null, { fieldName: file.fieldname });
        },
        // ...etc. (as above)
Increasing the "Expires" time:
......
storage: multerS3({
s3: s3,
bucket: bucketName,
acl: "public-read",
Expires: 3600,
contentType: multerS3.AUTO_CONTENT_TYPE,
.... etc etc .....
Changing the AWS SDK timeout (I have tried both larger values and 0, which disables the timeout entirely):

var aws = require('aws-sdk');
aws.config.httpOptions.timeout = 0; // 0 = no timeout

OR

aws.config.httpOptions.timeout = 3600; // note: this value is in milliseconds in the v2 SDK, so 3600 is only 3.6 seconds
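For reference, in the v2 SDK the timeout can also be passed per client at construction time, which sidesteps any ordering issues between setting the global config and creating the S3 client. This is a sketch rather than something from my original code:

var aws = require('aws-sdk');

// per-client override: only affects this S3 instance
var s3 = new aws.S3({
    httpOptions: { timeout: 0 } // milliseconds; 0 disables the timeout
});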
EDIT: I have also tried res.setTimeout in the route.
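That attempt looked something like the sketch below; 0 disables the per-response socket timeout (I also tried large millisecond values):

app.post(
    '/uploadResources/',
    (req, res, next) => {
        res.setTimeout(0); // disable Node's response timeout before multer takes over
        next();
    },
    uploadResources.array('file', 1), (req, res) => {
        res.json({ files: req.files });
    });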
So at this stage I am still unsure where the problem is even originating. I guess it could be the Axios front end, the Node/multer back end, the EC2 instance, or AWS itself, or a mixture of any of them.
Help me, Obi-Stack-Overflow-Kenobi, you're my only hope!
UPDATE: I tried a test: I ran my Node server locally and throttled my connection speed. A file took about 7 minutes to upload to S3 (going from my localhost front end to my localhost back end and then on to my S3 bucket) and it worked like a charm. So this tells me it has something to do with my EC2 instance.
Something about that setup is resetting the POST connection at 60 seconds during large file uploads.
UPDATE 2: I have been looking at my EC2 sysctl settings and saw that net.ipv4.tcp_fin_timeout was set to 60 (the exact time at which my connection resets). I upped this to 320, set it as the new default, and rebooted my EC2 instance; I double-checked and the change persisted... but I still get the same problem: small, quick uploads good; long, large uploads bad :( I can't be the only person on the internet who has an EC2 instance and wants to upload a large file to S3 with multer.
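As a quick way to double-check the live value from inside Node (a sketch; this reads the standard procfs path, so Linux only):

const fs = require('fs');

// current FIN timeout in seconds, as the kernel sees it
const finTimeout = fs.readFileSync('/proc/sys/net/ipv4/tcp_fin_timeout', 'utf8').trim();
console.log('net.ipv4.tcp_fin_timeout =', finTimeout);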
UPDATE 3: I bypassed my EC2 instance and set up an AWS Lightsail server running a fresh install of Ubuntu. I installed the same version of Node that my EC2 instance has, pulled my git repo over, ran the server, and tried the same tests... and got the exact same error at 60 seconds. Also confirmed that small files still worked fine... so this tells me it's not unique to EC2, but it could still be an AWS-wide limitation. This is slowly killing me.
UPDATE 4: A friend who has a Google Cloud virtual machine running Ubuntu loaded my app from GitHub and we tried uploading through that... and it worked perfectly.
So my tests have shown the following:
AWS EC2: connection reset at 60 seconds
AWS Lightsail with Ubuntu: connection reset at 60 seconds
Windows localhost: successful upload, longer than 7 minutes
Google Cloud VM with Ubuntu: successful upload, longer than 3 minutes
In my opinion, based on these results, I can only conclude that AWS has some sort of hard limit on its VMs that does not allow TCP connections longer than 60 seconds.
Anyone care to offer an alternative conclusion? I'm all ears.
ANSWER: Debugged this with the OP directly. The problem was traced back to node_modules being committed in the repo, which caused issues between the dev OS and the EC2 OS in connection handling (presumably platform-specific dependencies built on Windows misbehaving on Linux).
Solution: remove the committed node_modules on the server and reinstall:

rm -rf node_modules && npm install

This pulls in the correct OS-specific dependencies (and adding node_modules to .gitignore prevents it from being committed again).