kubernetes, kubernetes-helm

Helm upgrade and immediately show logs


Is there a way to do

helm upgrade something .

And immediately attach to the logs of the containers it creates?


Solution

  • There is not.

    For some idea of the complexity involved, consider upgrading this Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: something-server
    spec:
      replicas: 10
      selector:
        matchLabels:
          app: something
      template:
        metadata:
          labels:
            app: something
        spec:
          initContainers:
            - name: wait
              image: busybox
              command: [sleep, '60']
          containers: [...]
    

    In order to do this, you'd need to know which Deployments were being created (data Helm does have), then track through to their managed ReplicaSets and Pods, attach to each Pod as it's created, and read its logs. With this particular artificial Deployment, the sleep has to complete before a Pod becomes ready, which in turn has to happen before one of the old Pods is deleted. Rolling out one Pod at a time, that means your log-watcher tool would be waiting a minimum of 9 minutes just for all of the Pods to be created during an upgrade.

    (There's no reason such a tool couldn't be written, but it's not part of the standard helm or kubectl command set.)
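
    A very rough sketch of what such a tool could look like, using nothing but kubectl, is below. It assumes the Pods carry the app: something label from the example Deployment above (Helm's own chart scaffolding usually labels Pods with app.kubernetes.io/instance instead, so check your chart), and it is only an illustration: it re-emits names on every Pod status change, doesn't retry Pods whose containers haven't started yet, and never cleans up its background processes.

    #!/bin/sh
    # Hypothetical sketch: follow the logs of each Pod in the release as it appears.
    # Assumes the Pods are labelled app=something, as in the Deployment above.
    helm upgrade something .
    kubectl get pods -l app=something -w -o name |
      while read -r pod; do
        # a real tool would deduplicate names here and retry Pods whose
        # containers are still waiting to start
        kubectl logs -f --all-containers "$pod" &
      done
    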

    In my practical experience, a Pod does take an observable amount of time to start up. If I helm upgrade something . and then immediately run kubectl get pods, even with a single replica it can routinely take 10-15 seconds before only the new Pod is running.
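
    (If all you want is to avoid looking too early, you can also block until the rollout finishes before asking for logs; kubectl rollout status and helm upgrade --wait both do this. The Deployment name something-server below is a placeholder for whatever your chart actually creates.)

    # upgrade, then block until the new ReplicaSet is fully rolled out
    helm upgrade something .
    kubectl rollout status deployment/something-server
    
    # at this point the new Pods are ready and their logs can be read
    kubectl logs deployment/something-server
    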

    If you're dealing with a small number of Pods (I might want to do this in a developer cluster, for example, with only a single replica and less than 100% confidence that my Pod will actually start), a workaround is to wait for the Pod to exist and then look at its logs:

    # upgrade the release
    helm upgrade something .
    
    # watch the list of pods; you will see the new pod created
    # and (hopefully) the old one destroyed
    kubectl get pod -w
    
    # now get the logs for the (hopefully one) running pod for
    # the deployment
    kubectl logs deployment/something-server
    

    The kubectl logs deployment/name trick only works reliably if you have just one replica and it started up successfully. If the startup isn't successful, you'll need to know the full Pod name and use the longer form kubectl logs something-server-12345678-abcde.
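
    If you do have several replicas, or a Pod that's crash-looping, a label selector can stand in for the exact generated name. This sketch again assumes the app=something label from the example Deployment above (many Helm charts label Pods with app.kubernetes.io/instance=<release-name> instead):

    # list the release's Pods so you can copy a full generated name
    kubectl get pods -l app=something
    
    # or stream logs from every matching Pod at once, prefixed with its name
    # (with many replicas you may also need --max-log-requests)
    kubectl logs -f -l app=something --prefix
    
    # for a crash-looping Pod, --previous shows the last failed attempt
    kubectl logs something-server-12345678-abcde --previous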