I'm looking for monitoring and alerting solutions for my services, and I found the following related works.
Both works use DNS service discovery to monitor multiple replicas of services.
I've tried to reproduce these setups, but I can only ever get a single backend container IP.
# dig A node-exporter
; <<>> DiG 9.10.4-P8 <<>> A node-exporter
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18749
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;node-exporter. IN A
;; ANSWER SECTION:
node-exporter. 600 IN A 10.0.0.42
;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Mon Jan 29 02:57:51 UTC 2018
;; MSG SIZE rcvd: 60
When I inspect the service, I found that the endpoint mode of node-exporter is vip:
> docker inspect 242pn4obqsly
...
"Endpoint": {
    "Spec": {
        "Mode": "vip"
    },
    "VirtualIPs": [
        {
            "NetworkID": "61fn8hmgwg0n7rhg49ju2fdld",
            "Addr": "10.0.0.3/24"
        }
    ]
...
This means that when Prometheus queries DNS, it only gets the single virtual IP that represents the service. Swarm's internal load-balancing strategy then routes incoming requests to the different backend instances.
So how do the related works succeed?
Thanks!
For Prometheus DNS service discovery, you don't want to use Docker Swarm's internal load balancing via Virtual IP (VIP).
What you're looking for is per-task service DNS. To get the IP addresses of every task of a service in your swarm, just prefix the service name with tasks., i.e. query tasks.<service-name>.
For instance, in a swarm with 3 nodes, I get:
$ nslookup tasks.node-exporter
Server: 127.0.0.11
Address 1: 127.0.0.11
Name: tasks.node-exporter
Address 1: 10.210.0.x node-exporter.xxx.mynet
Address 2: 10.210.0.y node-exporter.yyy.mynet
Address 3: 10.210.0.z node-exporter.zzz.mynet
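For comparison with the dig query in your question, the same lookup against the tasks. name should return one A record per task. A sketch (output abridged, IPs illustrative):
$ dig A tasks.node-exporter
;; ANSWER SECTION:
tasks.node-exporter. 600 IN A 10.210.0.x
tasks.node-exporter. 600 IN A 10.210.0.y
tasks.node-exporter. 600 IN A 10.210.0.z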
But when I query the service name with no prefix, I get a single IP (the VIP, which load balances requests across all containers):
$ nslookup node-exporter
Server: 127.0.0.11
Address 1: 127.0.0.11
Name: node-exporter
Address 1: 10.210.0.w ip-x-x-x-x
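Alternatively, Swarm lets you skip the VIP entirely by creating the service in DNS round-robin mode, in which case the bare service name resolves to the task IPs directly. A sketch, assuming an overlay network named mynet (names are illustrative):
$ docker service create \
    --name node-exporter \
    --network mynet \
    --endpoint-mode dnsrr \
    --mode global \
    prom/node-exporter
Note that with dnsrr no VIP is allocated, and you cannot publish ports through the routing mesh.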
You can have a look at this Q/A on SO showing 3 different ways of getting a DNS resolution in Docker Swarm. Basically, for a service named myservice in Docker Swarm:
- myservice resolves to the Virtual IP (VIP) of that service, which is internally load balanced to the individual task IP addresses.
- tasks.myservice resolves to the private IP of each container deployed in the swarm.
- docker.com does not exist as a service name, so the request is forwarded to the configured default DNS server (which you can customize).
Note: Container names resolve as well, albeit directly to their IP addresses.
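To see all three behaviors, you can run lookups from any container attached to the same overlay network (a sketch; service and network names are illustrative):
$ nslookup myservice        # single answer: the service VIP
$ nslookup tasks.myservice  # one answer per running task
$ nslookup docker.com       # no such service: forwarded to the default DNS server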
Looking at the links you provided, the node-exporter configuration uses the tasks. way of reaching services:
Using the exporter's service name, you can configure DNS discovery:
scrape_configs:
  - job_name: 'node-exporter'
    dns_sd_configs:
      - names:
          - 'tasks.node-exporter'
        type: 'A'
        port: 9100
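For this to work, Prometheus must be attached to the same overlay network as node-exporter, otherwise tasks.node-exporter will not resolve. A minimal stack sketch (service, image, and network names are assumptions):
version: '3'
services:
  node-exporter:
    image: prom/node-exporter
    networks:
      - monitoring
    deploy:
      mode: global   # one task per node
  prometheus:
    image: prom/prometheus
    networks:
      - monitoring
networks:
  monitoring:
    driver: overlay   # tasks.<service> resolution works across nodes on an overlay network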
Hope this helps!