
Kubectl: how to work with different clusters (contexts) at the same time


I have multiple Kubernetes clusters and want to work on different clusters at the same time. (I'll keep it to two clusters to keep things simple.)

As described in the Kubernetes documentation, I have configured two clusters (I'll call them dc1-main and dc2-main).

I log into a node where kubectl is installed, using an application support user (e.g. appuser).

I have two sessions open to the management server at the same time, both logged in as appuser.

I want to use kubectl to manage a different context in each session.

But if I set the active context as below, both sessions pick up the change, since both refer to the same config file (which contains both contexts):

kubectl config use-context dc1-main

The other option in the documentation is to pass the context as an argument with every command, which makes the commands quite long:

kubectl --context="dc2-main" get nodes

I'm looking for an easy way to switch contexts quickly without affecting the other session. An environment variable seems the most likely candidate, though I'm not sure it's the simplest approach.

I went through the kubectl project on GitHub and found that a change along these lines, involving environment variables, was requested a long time ago.

Any better suggestions?


Solution

  • The standard Kubernetes client libraries support a $KUBECONFIG environment variable. This means that pretty much every tool supports it, including Helm and any locally-built tools you have. You can set it to the path of a cluster-specific configuration file, and since it's an environment variable, each shell will have its own copy of it.

    export KUBECONFIG="$HOME/.kube/dc1-main.config"
    kubectl get nodes
    

    In your shell dotfiles, you can write a simple shell function to set this:

    # kubecfg NAME: point this shell at $HOME/.kube/NAME.config
    kubecfg() {
      export KUBECONFIG="$HOME/.kube/$1.config"
    }
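
    For example, each session can then point at its own cluster with a single short command (a sketch assuming the per-cluster config files shown above already exist):

    # session 1: work against dc1-main
    kubecfg dc1-main
    kubectl get nodes

    # session 2: work against dc2-main, unaffected by session 1
    kubecfg dc2-main
    kubectl get nodes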
    

    In my setup I only have one context (user/host/credentials) in each kubeconfig file, so I pretty much never use the kubectl config family of commands. This does mean that, however you set up the kubeconfig file initially, you either need to repeat those steps for each cluster (as in the EKS example below) or split out your existing kubeconfig file by hand (it's YAML, so that's fairly doable; see also the sketch after the example).

    # specifically for Amazon Elastic Kubernetes Service
    kubecfg dc1-main
    aws eks update-kubeconfig --name dc1-main ...
    kubecfg dc2-main
    aws eks update-kubeconfig --name dc2-main ...
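
    Alternatively, if you'd rather not split the combined file by hand, kubectl can do most of it for you. This is a sketch, assuming your existing kubeconfig contains contexts named dc1-main and dc2-main: --minify keeps only the named context, and --flatten with --raw embeds the credentials so each output file is self-contained.

    # run these while KUBECONFIG still points at the combined file
    kubectl config view --raw --minify --flatten --context=dc1-main \
      > "$HOME/.kube/dc1-main.config"
    kubectl config view --raw --minify --flatten --context=dc2-main \
      > "$HOME/.kube/dc2-main.config"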
    

    Tools that want to write the configuration also use this variable, which for me mostly comes up when I want to recreate my minikube environment. You may find it useful to chmod 0400 "$KUBECONFIG" to protect these files once you've created them.
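
    For example, a sketch of recreating a minikube environment under this scheme (assuming the kubecfg function above; if the file is still read-only from a previous run, chmod 0600 it first so minikube can rewrite it):

    kubecfg minikube
    minikube start               # minikube writes its credentials into $KUBECONFIG
    chmod 0400 "$KUBECONFIG"     # protect the file once it exists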