Tags: node.js, kubernetes, k3s

Kubernetes (k3s) ServiceAccount and Node Client: HTTP Error


I have a k8s cluster running k3s. In a cron job that is supposed to back up my volumes, I want to connect to the cluster via a ServiceAccount in order to read some data about my volumes, etc.

Therefore I created a Service Account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-volumes-sa
automountServiceAccountToken: false

...and mounted it into my CronJob:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-volumes-cron
spec:
  schedule: "0 4 * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: backup-volumes-sa
          automountServiceAccountToken: true
          containers:
            - name: sync
            [...]

In the job I use the Node.js Kubernetes client library (@kubernetes/client-node) like this:

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();

if (process.env.KUBECONFIG)
    kc.loadFromFile(process.env.KUBECONFIG);
else
{
    kc.loadFromDefault();
    kc.clusters[0].skipTLSVerify = true; // For testing
}

if (process.env.PRINT_CLUSTERINFO === 'true')
{
    console.log("Cluster Info:");
    console.log(kc.clusters);
    console.log(kc.users);
}

const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
// 4th positional argument is the labelSelector (called inside an async function)
const res = await k8sApi.listPersistentVolumeClaimForAllNamespaces(undefined, undefined, undefined, 'include-in-backup=true');

I used the kubeconfig for local testing, but on the cluster I switched to .loadFromDefault(), which picks up the auto-mounted ServiceAccount token.
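
For reference, loadFromDefault() falls back to the in-cluster loader when the ServiceAccount token is mounted; the same thing can be done explicitly (a minimal sketch):

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
// Reads the token and CA from /var/run/secrets/kubernetes.io/serviceaccount
// and the API server address from KUBERNETES_SERVICE_HOST/_PORT.
kc.loadFromCluster();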

I added the skipTLSVerify flag for testing and debugging, but the behavior is the same without it.

The library loads the ServiceAccount, and when printing the cluster info it outputs the correct information:

Cluster Info:
[ { name: 'inCluster',
    caFile: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
    server: 'https://10.43.0.1:443',
    skipTLSVerify: true } ]
[ { name: 'inClusterUser',
    authProvider: { name: 'tokenFile', config: [Object] } } ]

But when executing a request against the cluster's API server I get this error:

(node:25) UnhandledPromiseRejectionWarning: HttpError: HTTP request failed
    at Request._callback (/usr/local/bin/google-sync/node_modules/@kubernetes/client-node/dist/gen/api/coreV1Api.js:11112:36)
    at Request.self.callback (/usr/local/bin/google-sync/node_modules/request/request.js:185:22)
    at Request.emit (events.js:189:13)
    at Request.<anonymous> (/usr/local/bin/google-sync/node_modules/request/request.js:1154:10)
    at Request.emit (events.js:189:13)
    at IncomingMessage.<anonymous> (/usr/local/bin/google-sync/node_modules/request/request.js:1076:12)
    at Object.onceWrapper (events.js:277:13)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1125:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
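
The generic HttpError message hides the interesting part. In the request-based client (pre-1.0, which includes 0.20.0), the thrown error carries the HTTP status and the API server's response body, so wrapping the call makes the real cause visible (a sketch; a 401 points at the token, a 403 at missing RBAC):

// (inside an async function)
try
{
    const res = await k8sApi.listPersistentVolumeClaimForAllNamespaces(
        undefined, undefined, undefined, 'include-in-backup=true');
    console.log(res.body.items.map(pvc => pvc.metadata.name));
}
catch (err)
{
    // HttpError exposes statusCode and the parsed Status object
    // returned by the API server.
    console.error('Status code:', err.statusCode);
    console.error('Response body:', JSON.stringify(err.body, null, 2));
}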

I'm using k3s and haven't configured anything related to this. Does anybody know what's going wrong here?

Node Client Version: 0.20.0
k3s Version: 1.28.6+k3s2

Thank you!

EDIT: Thanks to @syed-hyder, who mentioned that my ServiceAccount needs a Role and a Binding. Unfortunately that doesn't fix it in my case. I tried it with a Role/RoleBinding as well as a ClusterRole/ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backup-volumes-role
rules:
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
      - persistentvolumeclaims
      - persistentvolumeclaims/status
    verbs:
      - create
      - delete
      - get
      - list
      - watch
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-volumes-sa
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backup-volumes-binding
subjects:
  - kind: ServiceAccount
    name: backup-volumes-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: backup-volumes-role
  apiGroup: rbac.authorization.k8s.io

Solution

  • Disclaimer: I am not familiar with NodeJS.

    I see that you are trying to list PersistentVolumeClaims across all namespaces. Have you assigned the appropriate Role and RoleBinding to your ServiceAccount?

    If not, you can quickly create a ClusterRole that grants list access to PVCs in all namespaces and bind your ServiceAccount to it with a ClusterRoleBinding; once bound, you can verify the permission from inside the pod, as in the sketch below.
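
    A quick way to confirm whether the token or the binding is the problem is to ask the API server directly with a SelfSubjectAccessReview, using the same client and the mounted token (a sketch, assuming @kubernetes/client-node 0.20.0 as above):

    const k8s = require('@kubernetes/client-node');

    const kc = new k8s.KubeConfig();
    kc.loadFromDefault(); // picks up the mounted ServiceAccount token

    const authApi = kc.makeApiClient(k8s.AuthorizationV1Api);

    // Ask the API server: may the current identity list PVCs cluster-wide?
    // (inside an async function)
    const review = await authApi.createSelfSubjectAccessReview({
        apiVersion: 'authorization.k8s.io/v1',
        kind: 'SelfSubjectAccessReview',
        spec: {
            resourceAttributes: {
                group: '',                          // core API group
                resource: 'persistentvolumeclaims',
                verb: 'list',
            },
        },
    });
    console.log('allowed:', review.body.status.allowed);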