I'm trying to send the same log flow to two different Elasticsearch indexes, because each index is meant for users with different roles. I also use a file destination. Here is a sample of the logs:
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sonda filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] input/input.go:152 Run input
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sonda filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] log/input.go:191 Start next scan
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sensor filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] log/input.go:421 Check file for harvesting: /opt/zeek/logs/current/weird.log
When I use only one of the two configured elasticsearch-http destinations (either one), everything works fine, but when I use both, syslog-ng fails to start and systemctl complains.
Here is my /etc/syslog-ng/syslog-ng.conf file:
@version: 3.27
@include "scl.conf"
options {
    chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
    dns_cache(no); owner("root"); group("adm"); perm(0640);
    stats_freq(0); bad_hostname("^gconfd$");
};
source s_net {
    udp(
        ip(0.0.0.0)
        port(514)
        flags(no-parse)
    );
};
log {
    source(s_net);
    destination(d_es);
    destination(d_es_other_index); # commenting this out avoids the error
    destination(d_file);
};
template t_demo_filetemplate {
    template("${ISODATE} ${HOST} ${MESSAGE}\n");
};
destination d_file {
    file("/tmp/test.log" template(t_demo_filetemplate));
};
destination d_es {
    elasticsearch-http(
        index("syslog-ng-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        type("")
        user("elastic")
        password("password")
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
        time-zone("UTC")
        tls(
            ca-file("/root/elastic_certs/elastic-ca.crt")
            cert-file("/root/elastic_certs/elastic.crt")
            key-file("/root/elastic_certs/elastic.key")
            peer-verify(no)
        )
    );
};
destination d_es_other_index {
    elasticsearch-http(
        index("otherindex-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        type("")
        user("elastic")
        password("password")
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
        time-zone("UTC")
        tls(
            ca-file("/root/elastic_certs/elastic-ca.crt")
            cert-file("/root/elastic_certs/elastic.crt")
            key-file("/root/elastic_certs/elastic.key")
            peer-verify(no)
        )
    );
};
The error I get when using both elasticsearch destinations (journalctl -xe shows nothing relevant):
# systemctl restart syslog-ng.service
Job for syslog-ng.service failed because the control process exited with error code.
See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
And my syslog-ng info:
$ syslog-ng --version
syslog-ng 3 (3.27.1)
Config version: 3.22
Installer-Version: 3.27.1
Revision: 3.27.1-3build1
Compile-Date: Jul 30 2020 17:56:17
Module-Directory: /usr/lib/syslog-ng/3.27
Module-Path: /usr/lib/syslog-ng/3.27
Include-Path: /usr/share/syslog-ng/include
Available-Modules: syslogformat,afsql,linux-kmsg-format,stardate,affile,dbparser,geoip2-plugin,afprog,kafka,graphite,riemann,tfgetent,json-plugin,cef,hook-commands,basicfuncs,disk-buffer,confgen,timestamp,http,afamqp,mod-python,tags-parser,pseudofile,system-source,afsocket,afsnmp,csvparser,afstomp,appmodel,cryptofuncs,examples,afmongodb,add-contextual-data,afsmtp,afuser,xml,map-value-pairs,kvformat,redis,secure-logging,sdjournal,pacctformat
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on
Is there any way to write to these two Elasticsearch indexes at the same time?
You can check the exact error message in the journal logs, as suggested by systemctl:
See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
Alternatively, you can start syslog-ng in the foreground:
$ syslog-ng -F --stderr
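You can also ask syslog-ng to check the configuration without starting the daemon; depending on the version, this may already surface the problem:
$ syslog-ng --syntax-only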
You probably have a persist-name collision due to the matching elasticsearch-http() URLs. Try adding the persist-name() option with two unique names, for example:
destination d_es {
    elasticsearch-http(
        index("syslog-ng-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        # ...
        persist-name("d_es")
    );
};
destination d_es_other_index {
    elasticsearch-http(
        index("otherindex-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        # ...
        persist-name("d_es_other_index")
    );
};
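Some background: syslog-ng derives a default persist name from the driver's parameters (for elasticsearch-http(), essentially the url()) and uses it to track per-destination state such as disk-buffers, so two destinations pointing at identical URLs end up with clashing persist names unless you set persist-name() explicitly. Once the two names are in place, a quick way to verify and restart (assuming the root prompt from the question):
# syslog-ng --syntax-only && systemctl restart syslog-ng.service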