I am trying to install bitnami/mongodb-sharded on my Rancher (RKE) Kubernetes cluster, but I couldn't create a valid PV for this Helm chart.
The error that I am getting: "no persistent volumes available for this claim and no storage class is set"
This is the Helm chart documentation section about persistence: https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded/#persistence
These are the StorageClass and PersistentVolume YAMLs that I created for this Helm chart's PVCs:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-nfs-storage
provisioner: nope
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: db-nfs
spec:
  storageClassName: ssd-nfs-storage # same storage class as the PVC
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 142.251.33.78 # IP address of the NFS server
    path: "/bitnami/mongodb" # path to the exported directory
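(For a statically provisioned PV like the one above to bind, the chart's PVCs would have to request the same storage class; a minimal values override for that, assuming the chart's global.storageClass value, would look like:

```yaml
# hypothetical values override for bitnami/mongodb-sharded,
# making every PVC request the ssd-nfs-storage class defined above
global:
  storageClass: ssd-nfs-storage
```
)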
This is the PVC YAML that was created by the Helm chart:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2021-06-06T17:50:40Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: shardsvr
    app.kubernetes.io/instance: sam-db
    app.kubernetes.io/name: mongodb-sharded
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/name: {}
        f:spec:
          f:accessModes: {}
          f:resources:
            f:requests:
              .: {}
              f:storage: {}
          f:volumeMode: {}
        f:status:
          f:phase: {}
      manager: kube-controller-manager
      operation: Update
      time: "2021-06-06T17:50:40Z"
  name: datadir-sam-db-mongodb-sharded-shard1-data-0
  namespace: default
  resourceVersion: "960381"
  uid: c4313ed9-cc99-42e9-a64f-82bea8196629
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeMode: Filesystem
status:
  phase: Pending
Can you tell me what I am missing?
Here are the bitnami/mongodb-sharded installation instructions with an NFS server on Rancher (v2.5.8).

I have three CentOS 8 VMs: one NFS server (let's say 1.1.1.1) and two k8s nodes (let's say 8.8.8.8 and 9.9.9.9) in the k8s cluster. I am using RKE (aka Rancher Kubernetes Engine). The plan: set up the NFS server and mount the share on both nodes, add the nfs-subdir-external-provisioner HELM repository to the Rancher Chart Repositories and install it via Rancher Charts, then add the bitnami HELM repo and install mongodb-sharded via Rancher Charts.

First, set up the NFS server:

# nfs server install
dnf install nfs-utils -y
systemctl start nfs-server.service
systemctl enable nfs-server.service
systemctl status nfs-server.service
# you can verify the version
rpcinfo -p | grep nfs
# nfs daemon config: /etc/nfs.conf
# nfs mount config: /etc/nfsmount.conf
mkdir /mnt/storage
# allow clients to create files
# for mongodb-sharded: /mnt/storage
chown -R nobody: /mnt/storage
chmod -R 777 /mnt/storage
# restart service again
systemctl restart nfs-utils.service
# grant access to the client
vi /etc/exports
/mnt/storage 8.8.8.8(rw,sync,no_all_squash,root_squash)
/mnt/storage 9.9.9.9(rw,sync,no_all_squash,root_squash)
# check exporting
exportfs -arv
exportfs -s
# exporting 8.8.8.8:/mnt/storage
# exporting 9.9.9.9:/mnt/storage
# nfs client install
dnf install nfs-utils nfs4-acl-tools -y
# see from the client shared folder
showmount -e 1.1.1.1
# create mounting folder for client
mkdir /mnt/cstorage
# mount server folder to the client folder
mount -t nfs 1.1.1.1:/mnt/storage /mnt/cstorage
# check the mounted folder via nfs
mount | grep -i nfs
# mount persistent upon a reboot
vi /etc/fstab
# add the following line
1.1.1.1:/mnt/storage /mnt/cstorage nfs defaults 0 0
# all done
Bonus: unmounting the share from a node.
# unmount and delete the mount point on the client
umount -f -l /mnt/cstorage
rm -rf /mnt/cstorage
# remove the line added above from fstab
vi /etc/fstab
Next, add the nfs-subdir-external-provisioner HELM repository to the Rancher Chart Repositories. Helm repository URL: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

Install nfs-subdir-external-provisioner via Charts.

Then add the bitnami HELM repo to the Rancher Chart Repositories. Bitnami HELM URL: https://charts.bitnami.com/bitnami

Finally, install mongodb-sharded via Rancher Charts:

Rancher -->
Cluster Explorer -->
Apps & Marketplace -->
Charts -->
Find mongodb-sharded -->
Select -->
Give a name (my-db) -->
Select Values YAML -->
Add global.storageClass: nfs-client (the storage class created by nfs-subdir-external-provisioner) -->
Install
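For reference, the Rancher UI steps above can also be sketched from the helm CLI. This is a hedged equivalent, not the Rancher-tested path: the release names (nfs-provisioner, my-db) are placeholders, the repository URLs and the nfs-client class match the steps above, and nfs.server/nfs.path point at the example NFS server and export used earlier.

```shell
# add the two chart repositories (same URLs as above)
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# install the provisioner, pointing it at the NFS server and export;
# by default it creates a StorageClass named nfs-client
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=1.1.1.1 \
  --set nfs.path=/mnt/storage

# install mongodb-sharded, making its PVCs request the nfs-client storage class
helm install my-db bitnami/mongodb-sharded \
  --set global.storageClass=nfs-client
```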