Tags: kubernetes, kubernetes-pod, kubernetes-deployment

deployment with scale 1 has 2 pods


I have a deployment with scale=1, but when I run get pods I see 2/2... When I scale the deployment to 0 and then back to 1, I get 2 again... How is this possible? As you can see below, prometheus-server shows 2:

PS C:\dev\> kubectl.exe get pods -n monitoring
NAME                                             READY   STATUS    RESTARTS   AGE
grafana-6c79d58dd-5k8cs                          1/1     Running   0          3d21h
prometheus-alertmanager-5584c7b8d-k7zrn          2/2     Running   0          3d21h
prometheus-kube-state-metrics-6b46f67bf6-kt5dq   1/1     Running   0          3d21h
prometheus-node-exporter-fj5zv                   1/1     Running   0          3d21h
prometheus-node-exporter-vgjtt                   1/1     Running   0          3d21h
prometheus-node-exporter-xfm5h                   1/1     Running   0          3d21h
prometheus-node-exporter-zp9mw                   1/1     Running   0          3d21h
prometheus-pushgateway-6c9764ff46-s295t          1/1     Running   0          3d21h
prometheus-server-b647558d5-jxgtl                2/2     Running   0          2m18s

The deployment is:

PS C:\dev> kubectl.exe describe deployment prometheus-server -n monitoring
Name:                   prometheus-server
Namespace:              monitoring
CreationTimestamp:      Thu, 16 Jul 2020 11:46:58 +0300
Labels:                 app=prometheus
                        app.kubernetes.io/managed-by=Helm
                        chart=prometheus-11.7.0
                        component=server
                        heritage=Helm
                        release=prometheus
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: prometheus
                        meta.helm.sh/release-namespace: monitoring
Selector:               app=prometheus,component=server,release=prometheus
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=prometheus
                    chart=prometheus-11.7.0
                    component=server
                    heritage=Helm
                    release=prometheus
  Service Account:  prometheus-server
  Containers:
   prometheus-server-configmap-reload:
    Image:      jimmidyson/configmap-reload:v0.3.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --volume-dir=/etc/config
      --webhook-url=http://127.0.0.1:9090/-/reload
    Environment:  <none>
    Mounts:
      /etc/config from config-volume (ro)
   prometheus-server:
    Image:      prom/prometheus:v2.19.0
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --storage.tsdb.retention.time=15d
      --config.file=/etc/config/prometheus.yml
      --storage.tsdb.path=/data
      --web.console.libraries=/etc/prometheus/console_libraries
      --web.console.templates=/etc/prometheus/consoles
      --web.enable-lifecycle
    Liveness:     http-get http://:9090/-/healthy delay=30s timeout=30s period=15s #success=1 #failure=3
    Readiness:    http-get http://:9090/-/ready delay=30s timeout=30s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from storage-volume (rw)
      /etc/config from config-volume (rw)
  Volumes:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-server
    Optional:  false
   storage-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-server
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  prometheus-server-b647558d5 (1/1 replicas created)
NewReplicaSet:   <none>
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m32s  deployment-controller  Scaled down replica set prometheus-server-b647558d5 to 0
  Normal  ScalingReplicaSet  5m14s  deployment-controller  Scaled up replica set prometheus-server-b647558d5 to 1

The weird thing is that, as shown above, Kubernetes itself reports 1 replica desired and 1 available, as if a manual operation had been made on the pod. I have no idea what to do now :/


Solution

  • Two containers, one pod. The READY column shows ready containers / total containers per pod, so 2/2 means one pod whose two containers are both ready, not two pods. You can see both listed under Containers: in the describe output: one is Prometheus itself, and the other is a sidecar that triggers a reload when the config file changes, because Prometheus doesn't watch its config on its own.
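    The same layout can be sketched as a minimal pod spec with two entries under containers: (names, images, and args are taken from the describe output above; the pod name is hypothetical and the volumes are trimmed for brevity):

    ```yaml
    # Minimal sketch of a two-container pod, matching the deployment above.
    # READY 2/2 means both of these containers are ready inside the one pod.
    apiVersion: v1
    kind: Pod
    metadata:
      name: prometheus-server-example        # hypothetical name, for illustration
    spec:
      containers:
        - name: prometheus-server-configmap-reload   # sidecar: watches the config volume
          image: jimmidyson/configmap-reload:v0.3.0
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://127.0.0.1:9090/-/reload   # POSTs to Prometheus on change
        - name: prometheus-server                    # the main Prometheus container
          image: prom/prometheus:v2.19.0
          args:
            - --config.file=/etc/config/prometheus.yml
            - --web.enable-lifecycle                 # enables the /-/reload endpoint the sidecar calls
    ```

    The sidecar can reach Prometheus over 127.0.0.1 because containers in the same pod share a network namespace. To list just the container names of a pod, kubectl get pod <pod-name> -n monitoring -o jsonpath='{.spec.containers[*].name}' avoids reading the full describe output.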