Tags: kubernetes, namespaces, rbac, configmap, argo

How do I fix an RBAC problem when my role appears to have the correct permissions?


I set up the namespace "sandbox" in Kubernetes and have been using it for several days without issue. Today I got the error below.
I have checked to make sure that I have all of the requisite configmaps in place.

Is there a log or something where I can find what this is referring to?

panic: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

I did find this thread (MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered) and applied the patch below to my service account, but I am still getting the same error.

automountServiceAccountToken: false
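
For reference, a patch like that can be applied straight from the command line; this is only a sketch, and the ServiceAccount name dma and namespace sandbox are assumptions (use whatever your workflow pods actually run as):

# Sketch: set automountServiceAccountToken on a ServiceAccount.
# The name "dma" and namespace "sandbox" are assumptions; adjust to your setup.
kubectl patch serviceaccount dma -n sandbox \
  -p '{"automountServiceAccountToken": false}'

# Confirm the field is now present on the ServiceAccount.
kubectl get serviceaccount dma -n sandbox -o yaml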

UPDATE: In answer to @p10l: I am working on a bare-metal cluster, version 1.23.0. No Terraform.

I am getting closer, but still not there.

This appears to be another RBAC problem, but the error does not make sense to me.

I have a user "dma". I am running workflows in the "sandbox" namespace using the context dma@kubernetes.

The error now is

Create request failed: workflows.argoproj.io is forbidden: User "dma" cannot create resource "workflows" in API group "argoproj.io" in the namespace "sandbox"

but that user indeed appears to have the correct permissions.
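
One way to double-check a claim like that is kubectl auth can-i with user impersonation; this is just a sketch, not output from the cluster above:

# Sketch: confirm which context is active, then probe the exact permission
# from the error message while impersonating the user "dma".
kubectl config current-context          # expecting dma@kubernetes
kubectl auth can-i create workflows.argoproj.io -n sandbox --as dma
# "yes" means RBAC allows it for that user; "no" points back at the Role/RoleBinding.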

This is the output of kubectl get role dma -n sandbox -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"dma","namespace":"sandbox"},"rules":[{"apiGroups":["","apps","autoscaling","batch","extensions","policy","rbac.authorization.k8s.io","argoproj.io"],"resources":["pods","configmaps","deployments","events","pods","persistentvolumes","persistentvolumeclaims","services","workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2021-12-21T19:41:38Z"
  name: dma
  namespace: sandbox
  resourceVersion: "1055045"
  uid: 94191881-895d-4457-9764-5db9b54cdb3f
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  - argoproj.io
  - workflows.argoproj.io
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

This is the output of kubectl get rolebinding -n sandbox dma-sandbox-rolebinding -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"dma-sandbox-rolebinding","namespace":"sandbox"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"dma"},"subjects":[{"kind":"ServiceAccount","name":"dma","namespace":"sandbox"}]}
  creationTimestamp: "2021-12-21T19:56:06Z"
  name: dma-sandbox-rolebinding
  namespace: sandbox
  resourceVersion: "1050593"
  uid: d4d53855-b5fc-4f29-8dbd-17f682cc91dd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dma
subjects:
- kind: ServiceAccount
  name: dma
  namespace: sandbox

Solution

  • The issue you are describing is a recurring one, described here and here, where your cluster lacks the KUBECONFIG environment variable.

    First, run echo $KUBECONFIG on all your nodes to see if it is empty. If it is, look for the config file in your cluster, copy it to all the nodes, and export the variable by running export KUBECONFIG=/path/to/config (see the sketch below). This file can usually be found at ~/.kube/config or /etc/kubernetes/admin.conf on master nodes.

    Let me know if this solution worked in your case.
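
    A condensed sketch of those steps, assuming the config was generated by kubeadm at /etc/kubernetes/admin.conf (the path is an assumption; use whichever kubeconfig your cluster actually has):

    # On each node: check whether KUBECONFIG is set at all.
    echo $KUBECONFIG

    # If it is empty, point it at an existing kubeconfig (path is an assumption).
    export KUBECONFIG=/etc/kubernetes/admin.conf

    # Optionally make it persistent for future shells of the current user.
    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

    # Quick check that kubectl can now reach the API server.
    kubectl cluster-info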