Tags: kubernetes, k3s

Creating a link to an NFS share in K3s Kubernetes


I'm very new to Kubernetes and am trying to get Node-RED running on a small cluster of Raspberry Pis. I happily managed that, but noticed that once the cluster is powered down, the next time I bring it up the flows in Node-RED have vanished.

So I've created an NFS share on a FreeNAS box on my local network, and I can mount it from another RPi, so I know the permissions work.

However, I cannot get the mount to work in a Kubernetes deployment.

Any help as to where I have gone wrong, please?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: nodered/node-red:latest
        ports:
        - containerPort: 1880
          name: node-red-ui
        securityContext:
          privileged: true
        volumeMounts:
        - name: node-red-data
          mountPath: /data
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: TZ
          value: Europe/London
      volumes:
         - name: node-red-data
      nfs:
         server: 192.168.1.96
         path: /mnt/Pool1/ClusterStore/nodered

The error I am getting is

error: error validating "node-red-deploy.yml": error validating data: 
ValidationError(Deployment.spec.template.spec): unknown field "nfs" in io.k8s.api.core.v1.PodSpec; if 
you choose to ignore these errors, turn validation off with --validate=false
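The validator is complaining because nfs: sits at the PodSpec level rather than inside the volume entry; a volume's source must be nested under its name. A corrected volumes stanza, keeping the same server and path from the manifest above, would look something like this:

```yaml
      volumes:
      - name: node-red-data
        nfs:
          server: 192.168.1.96
          path: /mnt/Pool1/ClusterStore/nodered
```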

New Information

I now have the following

apiVersion: v1
kind: PersistentVolume
metadata:
  name: clusterstore-nodered
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/Pool1/ClusterStore/nodered
    server: 192.168.1.96 
  persistentVolumeReclaimPolicy: Recycle

claim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Now when I start the deployment it waits at Pending forever, and I see the following in the events for the PVC:

Events:
  Type     Reason                Age                   From                         Message
  ----     ------                ----                  ----                         -------
  Normal   WaitForFirstConsumer  5m47s (x7 over 7m3s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal   Provisioning          119s (x5 over 5m44s)  rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2  External provisioner is provisioning volume for claim "default/clusterstore-nodered-claim"
  Warning  ProvisioningFailed    119s (x5 over 5m44s)  rancher.io/local-path_local-path-provisioner-58fb86bdfd-rtcls_506528ac-afd0-11ea-930d-52d0b85bb2c2  failed to provision volume with StorageClass "local-path": Only support ReadWriteOnce access mode
  Normal   ExternalProvisioning  92s (x19 over 5m44s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator

I assume that this is because I don't have an NFS provisioner; in fact, if I do kubectl get storageclass I only see local-path.
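One thing worth noting here: a PVC that does not specify a storageClassName falls back to the cluster's default StorageClass (local-path on k3s), which is why the local-path provisioner tries to satisfy the claim instead of binding it to the hand-made PV. Setting storageClassName to an empty string on the claim should force static binding to the pre-created NFS PV; a sketch of the adjusted claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterstore-nodered-claim
spec:
  storageClassName: ""   # bypass the default local-path provisioner; bind to an existing PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```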

New question: how do I add a StorageClass for NFS? A little googling around has left me without a clue.


Solution

  • Ok, solved the issue. Kubernetes tutorials are really esoteric and miss a lot of assumed steps.

    My problem was down to k3s on the Pi only shipping with the local-path storage provisioner.

    I finally found a tutorial that installed an NFS client storage provisioner, and now my cluster works!

    This was the tutorial I found the information in.
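For anyone following along, a common way to get such a provisioner is the nfs-subdir-external-provisioner Helm chart; the commands below are a sketch under that assumption (the release name nfs-provisioner is arbitrary, and the server/path are taken from the question):

```shell
# Assumes Helm is installed and kubectl points at the k3s cluster.
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

# Install the provisioner, pointing it at the FreeNAS export.
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.96 \
  --set nfs.path=/mnt/Pool1/ClusterStore

# A new StorageClass (nfs-client by default) should now appear
# alongside local-path, and PVCs that request it get NFS-backed volumes.
kubectl get storageclass
```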