I'm sending logs to Logstash over TCP from Node.js. The logs are ingested fine when I send the whole batch as a single JSON array (the commented-out line below, without the loop). But if I iterate over the array and write the events individually, only the first log is ingested; the rest don't even show up in the Logstash logs. I need to send them individually so Logstash can route each event to its respective Elasticsearch index, which it cannot determine when the events arrive as one array.
const net = require('net');

const client = new net.Socket();
client.connect(LOGSTASH_PORT, LOGSTASH_HOST, () => {
  // Writing each event individually: only the first one is ingested
  for (const event of allEvents) {
    const success = client.write(JSON.stringify(event), (error) => {
      if (error) {
        console.error('Error writing to socket:', error);
      } else {
        console.log('Data written successfully');
      }
    });
  }
  // Sending everything as one array works:
  // const success = client.write(JSON.stringify(allEvents));
  client.end();
});
There is no error in either case, so checking for the drain event does not help. This is what my Logstash config looks like:
input {
  tcp {
    port  => "5400"
    codec => "json"
  }
}

filter {
}

output {
  stdout { codec => json }

  elasticsearch {
    index    => "<some index name>"
    hosts    => "${ELASTIC_HOSTS}"
    user     => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    cacert   => "<some path>"
  }
}
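(For context, the per-index routing that motivates sending events individually would eventually replace the static index with a Logstash sprintf field reference. A hypothetical example, assuming each event carries a target_index field:)

elasticsearch {
  index    => "%{target_index}"   # hypothetical per-event field
  hosts    => "${ELASTIC_HOSTS}"
  user     => "${ELASTIC_USER}"
  password => "${ELASTIC_PASSWORD}"
  cacert   => "<some path>"
}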
I've tried switching the codec to json_lines, but that alone does not help. The problem could be that client.write() isn't finishing the operation before the next iteration starts, yet it never reports an error.
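For what it's worth, one way to rule out overlapping writes is to serialize them: wait for each write's completion callback (and for 'drain' if the socket buffer fills) before moving on. A minimal sketch, reusing the client and allEvents from above:

// Resolve once this chunk's write callback fires; if the socket
// buffer was full, also wait for 'drain' before continuing.
function writeAndWait(socket, chunk) {
  return new Promise((resolve, reject) => {
    const flushed = socket.write(chunk, (err) => {
      if (err) reject(err);
    });
    if (flushed) {
      resolve();
    } else {
      socket.once('drain', resolve);
    }
  });
}

client.connect(LOGSTASH_PORT, LOGSTASH_HOST, async () => {
  for (const event of allEvents) {
    await writeAndWait(client, JSON.stringify(event));
  }
  client.end();
});

(As the fix below shows, the real problem turns out to be message framing on the wire, not write timing.)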
OK, so I figured out the fix: change the input codec to json_lines and manually append a newline after each serialized JSON object. This is what the change looks like:
const success = client.write(JSON.stringify(event) + "\n");
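And the matching Logstash input block (the rest of the config is unchanged):

input {
  tcp {
    port  => "5400"
    codec => "json_lines"
  }
}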
This looks like a hacky solution, but hey, it works. The best explanation I have is that TCP is a byte stream with no message boundaries: successive client.write() calls just get concatenated on the wire, so the json codec sees something like {...}{...} and cannot tell where one event ends and the next begins. The newline gives the json_lines codec an explicit delimiter to split events on.
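For completeness, here is the full sending loop with the fix applied (same setup as in the question):

const net = require('net');

const client = new net.Socket();
client.connect(LOGSTASH_PORT, LOGSTASH_HOST, () => {
  for (const event of allEvents) {
    // Trailing newline = frame delimiter for the json_lines codec
    client.write(JSON.stringify(event) + "\n");
  }
  client.end();
});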