I'm deploying a MongoDB cluster with the MongoDB Community Operator, using a ReplicaSet configuration.
I followed this example for my configuration. The difference is that I want one pod on each node, with its PV on that same node, and the same pod/PV-to-node placement at every deploy.
I have deployed a MongoDB StatefulSet with two replicas. I want the StatefulSet's pods, pod-0 and pod-1, to land on specific nodes: node-0 and node-1.
For the two pods I deployed two persistent volumes of type hostPath, one for each node: pv-0 and pv-1.
All seems fine, but there is a problem:
Sometimes the PVC of pod-0 (forced onto node-0) is bound to pv-1 (forced onto node-1), or vice versa. The pod then can't start because of the node conflict.
Is there a way to force pod-0 onto the same node as pv-0?
Maybe with MongoDBCommunity.spec.statefulSet.spec.volumeClaimTemplates, but I can't figure out how.
I read HERE, but I can't figure out how to apply it to a StatefulSet.
My YAMLs follow. StatefulSet:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: my-mongo
  labels:
    app: my-mongo
  namespace: mongo-system
spec:
  members: 2
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            storageClassName: hostpath
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 11Gi
            selector:
              matchLabels:
                type: data
PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume-db-0
  labels:
    type: data
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volumes/db-data
    type: ""
  nodeAffinity:
    required:
      # This is just an example matchExpression;
      # the value of this field depends on the specifics
      # of the environment the resource is deployed in.
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume-db-1
  labels:
    type: data
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volumes/db-data
    type: ""
  nodeAffinity:
    required:
      # This is just an example matchExpression;
      # the value of this field depends on the specifics
      # of the environment the resource is deployed in.
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
I resolved it using claimRef in the PV YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume-db-1
  labels:
    type: data
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 11Gi
  # Reserve this PV for the PVC of pod-1, so the control plane
  # never binds it to pod-0's claim.
  claimRef:
    namespace: mongo-system
    name: data-volume-db-1
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volumes/db-data
    type: ""
  nodeAffinity:
    required:
      # This is just an example matchExpression;
      # the value of this field depends on the specifics
      # of the environment the resource is deployed in.
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
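For completeness, a sketch of the matching PV for the node-0 side. The PVC name data-volume-db-0 and namespace mongo-system are assumptions based on my naming above; check the name of the PVC your StatefulSet actually creates (kubectl get pvc -n mongo-system) and put that in claimRef:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume-db-0
  labels:
    type: data
spec:
  storageClassName: hostpath
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 11Gi
  # Reserve this PV for pod-0's claim. PVC name and namespace
  # are assumptions; substitute your actual generated PVC name.
  claimRef:
    namespace: mongo-system
    name: data-volume-db-0
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/volumes/db-data
    type: ""
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-0
```

With claimRef set on both PVs, each PVC can only bind to its reserved PV, so the PV's node affinity and the pod's scheduling can no longer end up on different nodes.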