When using Google Stackdriver, I can use a log query to find the exact log statements I am looking for.
Such a query might look like this:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
However, as you can see from the query argument resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm", I am looking for a pod with the ID 7f6cf95b6c-nkkbm. Because of this I cannot keep using this Stackdriver view with this exact query once I deploy a new revision of my-app: the new pod gets a new ID, and the one in the current query becomes invalid or not locatable.
I don't want to look up the new ID every time I want to see the current logs of my-app. So I tried to add a special label stackdriver: my-app to my Kubernetes YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        stackdriver: my-app   <<<
Revisiting my newly deployed pod, I can confirm that the label stackdriver: my-app is indeed present.
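One way to check this from the command line, for example (using the dev namespace from the query above):
# list pods carrying the stackdriver=my-app label and print all their labels
$ kubectl get pods -n dev -l stackdriver=my-app --show-labels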
Now I want to use this new label as a query argument:
resource.type="k8s_container"
resource.labels.project_id="my-project"
resource.labels.location="europe-west3-a"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="dev"
resource.labels.pod_name="my-app-pod-7f6cf95b6c-nkkbm"
resource.labels.container_name="container"
resource.labels.stackdriver="my-app" <<< the Kubernetes label
As you can guess, this did not work, otherwise I'd have no reason to write this question ;) Any idea how what I am trying to do can be achieved?
Any idea how what I am trying to do can be achieved?
Yes! In fact, I've prepared an example to show you the whole process :)
Let's assume:
- a GKE cluster named gke-label
- a Deployment named nginx with the following label: stackdriver: look_here_for_me

deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
      stackdriver: look_here_for_me
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
        stackdriver: look_here_for_me
    spec:
      containers:
      - name: nginx
        image: nginx
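For completeness, a sketch of applying this manifest (assuming it is saved as deployment.yaml):
# create or update the Deployment from the manifest
$ kubectl apply -f deployment.yaml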
After applying this definition, you can send some traffic from another pod so that logs are generated. I've done it with:
$ kubectl run -it --rm --image=ubuntu ubuntu -- /bin/bash
$ apt update && apt install -y curl
$ curl NGINX_POD_IP_ADDRESS/NONEXISTING   # <-- this path is only for better visibility
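To fill in NGINX_POD_IP_ADDRESS, one way to look up the pod IP (a sketch, relying on the app=nginx label from the manifest above) is:
# print the IP of the first pod matching the app=nginx label
$ kubectl get pods -l app=nginx -o jsonpath='{.items[0].status.podIP}'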
After that you can go to GCP Cloud Console (Web UI) -> Logging (I used the new version) and run the following query:
resource.type="k8s_container"
resource.labels.cluster_name="gke-label"
labels."k8s-pod/stackdriver"="look_here_for_me" <<< the Kubernetes pod label
You should be able to see the container's logs as well as its label.
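If you prefer the command line, the same filter should also work with gcloud logging read (a sketch, assuming the gke-label cluster and the look_here_for_me label from above):
# read the 10 most recent matching log entries
$ gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.cluster_name="gke-label" AND labels."k8s-pod/stackdriver"="look_here_for_me"' \
    --limit=10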