I have been trying to figure out the difference between the following two code snippets in Node. On making a request from the browser, both code files show transfer-encoding to be chunked. So, are these two methods the same, and if not, what is the drawback of one over the other?
Method-1:
const http = require("http");
const fs = require("fs");

const server = http.createServer((req, res) => {
  const stream = fs.createReadStream("./bigFile.txt");
  stream.on("open", () => {
    stream.pipe(res);
  });
});

server.listen(3000, () => {
  console.log("server started");
});
Method-2:
const http = require("http");
const fs = require("fs");

const server = http.createServer((req, res) => {
  const stream = fs.createReadStream("./bigFile.txt");
  stream.on("data", () => {
    stream.pipe(res);
  });
});

server.listen(3000, () => {
  console.log("server started");
});
You don't have to wait for either event before piping the stream; the stream and the pipe operation handle all of that for you. So, you can just do this:
const stream = fs.createReadStream("./bigFile.txt");
stream.pipe(res);
The pipe operation will register itself for the data event and, as data arrives on the readstream, it will write it out to the res stream. So, you don't need to do either of your options.
I would guess that this option:
stream.on("data", () => {
stream.pipe(res);
});
can actually cause a problem, because the data event can occur more than once, and then you'll be trying to call .pipe() more than once. You would not have that issue with the open event, though as I said above, you don't need to hook into either event yourself, as .pipe() will do that for you.