I have hundreds of servers and every server has tens of metrics paths. I am able to scrape all hosts, but it's automatically adding /metrics after every host.
```yaml
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: 'file_sd_targets'
    metrics_path: '/configuration/1'
    file_sd_configs:
      - files:
          - /etc/prometheus/targets.json
    relabel_configs:
      - source_labels: [__address__]
        regex: '([^:]+):.*'
        target_label: instance
        replacement: '$1'
```
What I have in targets.json is:
[ { "target": "1.2.3.4:10091", "metrics_paths": ["/configuration/1", "/analytics/1"] }, { "target": "1.2.3.5:10091", "metrics_paths": ["/configuration/1", "/analytics/1"] }, { "target": "1.2.3.6:10091", "metrics_paths": ["/configuration/1", "/analytics/1"] } ]
But I am seeing this in the UI: http://1.2.3.4:10091/metrics

Expected output:

```
http://1.2.3.4:10091/configuration/1
http://1.2.3.4:10091/analytics/1
http://1.2.3.5:10091/configuration/1
http://1.2.3.5:10091/analytics/1
http://1.2.3.6:10091/configuration/1
http://1.2.3.6:10091/analytics/1
```
`metrics_path` can be set only at the job level: see the description of the configuration file format here. I believe this is because the metrics path is not stored in labels by default, so allowing the same host to be scraped on different paths within one job would create the possibility of conflicts. `file_sd_config` only allows `targets` and `labels`.

The most straightforward approach for your situation is to create a job per `metrics_path`.
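Under that approach, a minimal sketch could look like this (the job names are illustrative, and it assumes both paths apply to every host in a shared targets file):

```yaml
scrape_configs:
  # one job per metrics path, both reading the same target list
  - job_name: 'configuration'
    metrics_path: '/configuration/1'
    file_sd_configs:
      - files:
          - /etc/prometheus/targets.json
  - job_name: 'analytics'
    metrics_path: '/analytics/1'
    file_sd_configs:
      - files:
          - /etc/prometheus/targets.json
```

Note that targets.json would also need to follow the format `file_sd` expects, e.g. `[{"targets": ["1.2.3.4:10091", "1.2.3.5:10091", "1.2.3.6:10091"]}]`, rather than the custom `target`/`metrics_paths` keys.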
One alternative is to use a `<relabel_config>` and set `__metrics_path__` through it. (I haven't tested this approach, but I do believe it should work.) The targets file would look something like this:

```yaml
- targets: ["host1:9091", "host2:9091"]
  labels:
    path: path1
- targets: ["host2:9091", "host3:9091"]
  labels:
    path: path2
```

and the scrape config would include:

```yaml
relabel_configs:
  - source_labels: [path]
    target_label: __metrics_path__
```
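For completeness, a minimal sketch of how the whole job could be wired together, assuming the target groups above are saved in a file named /etc/prometheus/paths.yml (the filename is illustrative):

```yaml
scrape_configs:
  - job_name: 'file_sd_targets'
    file_sd_configs:
      - files:
          # hypothetical targets file containing the host/path groups above
          - /etc/prometheus/paths.yml
    relabel_configs:
      # copy the custom "path" label into the built-in __metrics_path__
      # so each target is scraped on its own path
      - source_labels: [path]
        target_label: __metrics_path__
```

With this layout, a host that should be scraped on two paths simply appears in two target groups, once with each `path` label, giving one scrape target per host/path combination.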