I run pods on a Kubernetes cluster inside EKS. I have Prometheus installed on the cluster, and I want to export some metrics coming from Prometheus to CloudWatch, so I followed this guide.
When it comes time to modify my cwagent config, this is what I currently have:
{
  "agent": {
    "region": "${log_region}"
  },
  "logs": {
    "metrics_collected": {
      "kubernetes": {
        "cluster_name": "${cluster_name}",
        "metrics_collection_interval": 60
      }
    },
    "force_flush_interval": 5
  }
}
Since I still want the base Kubernetes metric scraping to keep working, I was planning on leaving the "kubernetes" section in place and just adding a "prometheus" section, giving me this:
{
  "agent": {
    "region": "${log_region}"
  },
  "logs": {
    "metrics_collected": {
      "kubernetes": {
        "cluster_name": "${cluster_name}",
        "metrics_collection_interval": 60
      },
      "prometheus": {
        "prometheus_config_path": "/etc/prometheusconfig/prometheus.yaml",
        "emf_processor": {
          "metric_declaration_dedup": false,
          "metric_declaration": [
            ...
          ]
        }
      }
    },
    "force_flush_interval": 5
  }
}
But if I do so, when starting the agent I get an error stating that I cannot have "prometheus" and "kubernetes" at the same time:
error : "feature kubernetes, ecs, prometheus are mutually exclusive"
So I am not sure how I should proceed. Should I structure my configuration differently to allow multiple scrapers?
Should I completely replace the export of Kubernetes metrics with the one from Prometheus? If so, is there a way to do it easily, or at least to find the list of which metrics were there in the first place?
So the solution is actually to create a second agent. I keep one agent running as a DaemonSet on all my nodes, scraping the Kubernetes metrics, and I create a separate Deployment specifically for a CWAgent that handles the Prometheus scraping.
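For anyone hitting the same error, here is a minimal sketch of what the second (Prometheus-only) agent's config could look like. It reuses the same ${log_region} and ${cluster_name} placeholders and the prometheus_config_path from my attempt above; the single metric_declaration entry is purely illustrative (the "my-app" job label and the metric selector are assumptions to be replaced with your own metrics):

{
  "agent": {
    "region": "${log_region}"
  },
  "logs": {
    "metrics_collected": {
      "prometheus": {
        "prometheus_config_path": "/etc/prometheusconfig/prometheus.yaml",
        "emf_processor": {
          "metric_declaration_dedup": false,
          "metric_declaration": [
            {
              "source_labels": ["job"],
              "label_matcher": "^my-app$",
              "dimensions": [["ClusterName", "Namespace"]],
              "metric_selectors": ["^my_app_requests_total$"]
            }
          ]
        }
      }
    },
    "force_flush_interval": 5
  }
}

The DaemonSet keeps the original kubernetes-only config, while this Prometheus-only config is mounted (for example from its own ConfigMap) into the new single-replica Deployment, so the two mutually exclusive features never end up in the same agent instance.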