
NFS mounts on ubuntu nodes in Kubernetes


If I run Kubernetes on a cluster of Ubuntu machines, how does NFS work inside Kubernetes when it is mounted on each of the Ubuntu nodes?

My use case is for databases and RabbitMQ to use the storage available on the nodes where the pods are running.

Do I mount that NFS as a regular volume when deploying, or should I use NFS directly from a persistent volume and thereby avoid mounting NFS on the Ubuntu nodes? How does NFS distinguish between the running instances — are the volume claims separate for each pod/container?


Solution

  • To use NFS with Kubernetes, you create a PersistentVolume (PV) and then consume it via a PersistentVolumeClaim (PVC).

    Each PV determines which NFS server backs it, since the PV spec is where you specify the server address. Look at the sample below.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0003
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      storageClassName: slow
      nfs:
        path: /tmp
        server: 172.17.0.2
    

    If you want certain pods to use storage from a specific PV, you can set a field in the PVC called volumeName, which asks for the PVC to be bound to that particular PV.
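    As a sketch of that binding, here is a hypothetical PVC that pins itself to the pv0003 PV above via volumeName, followed by a pod that mounts the claim for a database. The names nfs-claim, db-pod, and the postgres image are illustrative, not prescribed.

    ```yaml
    # PVC bound to a specific PV by name (illustrative)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: slow
      resources:
        requests:
          storage: 5Gi
      volumeName: pv0003   # binds this claim to the PV defined above
    ---
    # Pod consuming the claim; the NFS export appears at mountPath
    apiVersion: v1
    kind: Pod
    metadata:
      name: db-pod
    spec:
      containers:
        - name: db
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs-claim
    ```

    Note that each PVC binds to exactly one PV, so separate pods that need isolated storage should use separate claims (and separate PVs or NFS export paths).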

    Generally, people set up dedicated nodes for storage, since they don't want to lose data. Keeping data on the worker nodes can be risky: if a node goes down, you lose all the data on that node unless it is backed up somewhere.

    Read more about Persistent Volumes in the Kubernetes documentation.