When using nfs-server-provisioner is it possible to set a specific persistent volume for the NFS provisioner?
At present, I'm setting the Storage Class to use via helm:
helm install stable/nfs-server-provisioner \
--namespace <chart-name>-helm \
--name <chart-name>-nfs \
--set persistence.enabled=true \
--set persistence.storageClass=slow \
--set persistence.size=25Gi \
--set storageClass.name=<chart-name>-nfs \
--set storageClass.reclaimPolicy=Retain
and the Storage Class is built via:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  type: pd-standard
  replication-type: none
This then generates the PV dynamically when requested by a PVC.
I'm using the PV to store files for a stateful CMS; using NFS allows multiple pods to connect to the same file store.
What I'd like to do now is move all those images to a new set of pods on a new cluster. Rather than backing them up and going through the process of dynamically generating a PV and restoring the files to it, is it possible to retain the current PV and then connect the new PVC to it?
When using nfs-server-provisioner is it possible to set a specific persistent volume for the NFS provisioner?
If you are asking whether it's possible to retain the data from the existing old PV that backed the old NFS server and then use it with a new NFS server, the answer is yes.
I've managed to find a way to do it. Please remember that this is only a workaround.
Steps:
Please remember that this solution shows how to create the PVs and PVCs for a new workload manually.
I've included the whole process below.
Assuming that you created your old nfs-server with a gce-pd, you can access this disk via the GCP Cloud Console to make a snapshot of it.
I've included here a safer approach, which consists of creating a copy of the gce-pd holding the data of the old nfs-server. This copy will be used by the new nfs-server.
There is also the possibility of changing the persistentVolumeReclaimPolicy on the existing old PV to Retain so that it is not deleted when the PVC of the old nfs-server is deleted. In this way, you could reuse the existing disk in the new nfs-server.
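If you go that route, this is a minimal sketch of the change (the PV name is a placeholder you will need to look up with kubectl get pv):
$ kubectl patch pv <old-nfs-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'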
Please refer to the official GCP documentation on how to create a snapshot of a persistent disk.
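If you prefer the CLI over the Cloud Console, the snapshot can also be taken with gcloud; the disk, snapshot, and zone names below are placeholders:
$ gcloud compute disks snapshot <old-nfs-disk> --snapshot-names=<nfs-data-snapshot> --zone=<zone>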
You will need to create a new gce-pd disk for your new nfs-server. The snapshot created earlier will be the source for the new disk.
Please refer to the official GCP documentation on how to create a new disk from an existing snapshot.
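For reference, a gcloud sketch of this step (names are placeholders; the disk name should match the pdName used in the PV manifest below):
$ gcloud compute disks create old-data-disk --source-snapshot=<nfs-data-snapshot> --zone=<zone> --type=pd-standard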
To ensure that the GCP disk is bound to the newly created nfs-server, you will need to create a PV and a PVC. You can use the example below, but please adjust it to your use case:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-disk
spec:
  storageClassName: standard
  capacity:
    storage: 25G
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: data-disk-pvc # reference to the PVC below
  gcePersistentDisk:
    pdName: old-data-disk # name of the disk created from the snapshot in GCP
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-disk-pvc # must match the claimRef.name in the PV above
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25G
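Assuming you saved both manifests in a file called data-disk.yaml (the filename is arbitrary), you can apply them and verify that the claim binds to the pre-created volume:
$ kubectl apply -f data-disk.yaml
$ kubectl get pv data-disk
$ kubectl get pvc data-disk-pvc
Both should report a Bound status before you continue.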
The way this nfs-server works is that it uses a disk in the GCP infrastructure to store all the data saved on the nfs-server. Each further PV/PVC created with the NFS storageClass will result in a new folder in the /export directory inside the nfs-server pod.
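For illustration only, a dynamically provisioned claim against that storageClass could look like the sketch below (the kruk-nfs class name comes from the install step further down; the claim name cms-data is hypothetical), and it would show up as a new pvc-<uid> folder under /export:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cms-data # hypothetical claim name
spec:
  storageClassName: kruk-nfs # storageClass served by the nfs-server-provisioner
  accessModes:
    - ReadWriteMany # NFS allows multiple pods to mount the same share
  resources:
    requests:
      storage: 1Gi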
You will need to pull the Helm chart of the nfs-server-provisioner, as it requires reconfiguration. You can do it by invoking the command below:
$ helm pull --untar stable/nfs-server-provisioner
The changes to make in the templates/statefulset.yaml file are the following:
Delete the parts guarded by .Values.persistence.enabled (at the bottom of the file). These parts are responsible for creating the storage, which you already have:
{{- if not .Values.persistence.enabled }}
      volumes:
        - name: data
          emptyDir: {}
{{- end }}
{{- if .Values.persistence.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ {{ .Values.persistence.accessMode | quote }} ]
        {{- if .Values.persistence.storageClass }}
        {{- if (eq "-" .Values.persistence.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        {{- end }}
        {{- end }}
        resources:
          requests:
            storage: {{ .Values.persistence.size | quote }}
{{- end }}
Add a volume pointing to your PVC in the spec.template.spec part, as shown in Kubernetes.io: Configure persistent volume storage: Create a pod:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-disk-pvc # name of the PVC created from the disk
You will need to run this Helm chart from local storage instead of installing it from the web. The command to run it is the following:
$ helm install kruk-nfs . --set storageClass.name=kruk-nfs --set storageClass.reclaimPolicy=Retain
This syntax is specific to Helm 3. The above parameters are necessary to specify the name of the storageClass as well as its reclaimPolicy.
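If you are still on Helm 2 (as in the install command from the question), the equivalent would be:
$ helm install --name kruk-nfs . --set storageClass.name=kruk-nfs --set storageClass.reclaimPolicy=Retain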
Below is an example of creating a PVC linked to an existing folder in the nfs-server.
Assuming that the /export directory looks like this:
bash-5.0# ls
ganesha.log
lost+found
nfs-provisioner.identity
pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf # folder we will create a PV and PVC for
v4old
v4recov
vfs.conf
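To get a listing like the one above, you can exec into the nfs-server pod; the pod name below is an assumption based on the release name and chart naming used earlier:
$ kubectl exec -it kruk-nfs-nfs-server-provisioner-0 -- ls /export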
A tip! When you create a PVC with the storageClass of this nfs-server, it will create a folder with the name of this PVC.
You will need to create a PV for your share:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  storageClassName: kruk-nfs
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  nfs:
    path: /export/pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf # directory to mount as the PV
    server: 10.73.4.71 # clusterIP of the nfs-server pod's Service
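The server field above is the ClusterIP of the Service in front of the nfs-server pod; you can look it up with something like the following (the Service name is assumed to follow the release name):
$ kubectl get svc kruk-nfs-nfs-server-provisioner -o jsonpath='{.spec.clusterIP}'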
And a PVC for your PV:
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
name: pv-example-claim
spec:
storageClassName: kruk-nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
volumeName: pv-example # name of the PV created for a folder
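Once both manifests are applied, you can confirm that the claim bound to the volume:
$ kubectl get pv pv-example
$ kubectl get pvc pv-example-claim
Both should show a STATUS of Bound.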
The above manifests will create a PVC named pv-example-claim that makes the contents of the pvc-2c16cccb-da67-41da-9986-a15f3f9e68cf directory available for use. You can mount this PVC in a pod by following this example:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: storage-mounting
      persistentVolumeClaim:
        claimName: pv-example-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/storage"
          name: storage-mounting
After that, you should be able to check that the data is present in the folder specified in the manifest above:
$ kubectl exec -it task-pv-pod -- cat /storage/hello
hello there