I am trying to connect a Filestore instance to my GKE cluster. I have two deployments on my GKE cluster, writing to two separate volume claims. What I want to know is: can both deployments share a single Filestore instance?
The other option I see is to create two different Filestore instances. This seems to solve my problem, but I am currently setting up my cluster architecture and it seems like a huge waste of resources (I have ~1 TB of data, but my pods use around 50 GB each, so I was thinking of partitioning the instance, but the GCP docs are too cryptic).
Any other suggestions for what I am trying to achieve are also welcome :)
Filestore volumes support ReadWriteMany, so you can mount the same PV on multiple pods with write access. You'd create a single PersistentVolumeClaim with accessModes set to ReadWriteMany:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sharedpvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
```
and then mount it in each of your deployments:
```yaml
volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: sharedpvc
```
So the same Filestore instance is now mounted on all the pods/deployments, but you still need your own mechanism to ensure the pods don't clobber each other's data. The simplest approach is to give each deployment its own subdirectory on the share, for example via a subPath mount.
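One common way to keep the deployments' data separate is a subPath mount, so each deployment writes to its own subdirectory of the shared volume. A minimal sketch (the container name, image, and mount path here are hypothetical, not from your cluster):

```yaml
# Fragment of a Deployment's pod template spec.
# Each deployment uses a different subPath on the same shared claim.
containers:
  - name: app-a              # assumed container name
    image: my-app:latest     # assumed image
    volumeMounts:
      - name: shared
        mountPath: /data     # assumed mount path inside the container
        subPath: app-a       # per-deployment subdirectory on the Filestore share
volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: sharedpvc
```

The second deployment would use, say, `subPath: app-b`, so both write to the same Filestore instance but never see each other's files.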
You can't create separate partitions on a Filestore instance, and you can't limit the storage each deployment consumes. But this way you can have multiple pods writing to the same instance rather than creating multiple 1 TB instances.