I would like to run Redis for caching in a separate Pod in Kubernetes, using the https://github.com/helm/charts/tree/master/stable/redis chart.
Redis manages its own PVC and uses it for persistence, so in my opinion my application's Pod should only need to connect to Redis as a service; this host:port pair should be enough, as far as I can tell. The situation is the same as for any database.
My doubt is: do I need any extra configuration in the application's YAML for the volume that belongs to Redis or PostgreSQL? In other words, should the application's Pod mount it as well? What is the common best practice for connecting to Redis or a database from an application's Pod?
i.e. the volume-related part of the Redis chart's configuration:
enabled: true
path: /data
subPath: ""
accessModes:
  - ReadWriteOnce
size: 8Gi
matchLabels: {}
matchExpressions: {}
Application's deployment.yaml:
env:
  - name: REDIS_HOST
    value: redis-master
  - name: REDIS_PORT
    value: "6379"
  - name: POSTGRES_HOST
    valueFrom:
      configMapKeyRef:
        name: {{ .Release.Name }}-config
        key: POSTGRES_HOST
  - name: POSTGRES_PORT
    valueFrom:
      configMapKeyRef:
        name: {{ .Release.Name }}-config
        key: POSTGRES_PORT
  - name: POSTGRES_DB
    valueFrom:
      configMapKeyRef:
        name: {{ .Release.Name }}-config
        key: POSTGRES_DB
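For completeness, the ConfigMap that these env entries reference could look like the sketch below. The key names are taken from the env block above; the ConfigMap name follows the same `{{ .Release.Name }}-config` template, and the values are placeholders you would replace with your own service names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  # Placeholder values: point these at your actual PostgreSQL service
  POSTGRES_HOST: my-release-postgresql
  POSTGRES_PORT: "5432"
  POSTGRES_DB: mydb
```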
As far as I understand, in your case you have to configure a PV/PVC. The proper way is to create the PVC and reference it directly in the Deployment definition:
Example for Redis, creating the PVC (only if you have dynamic provisioning enabled):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-redis-pv-claim
  labels:
    app: redis
spec:
  storageClassName: your-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
In the Redis Deployment configuration file, add the following lines to the Pod spec section:
volumes:
  - name: your-redis-persistent-storage
    persistentVolumeClaim:
      claimName: your-redis-pv-claim
You have to follow the same steps for PostgreSQL. Remember to check that you have a StorageClass; otherwise you will have to provision the volume manually. Also remember to define the path where the specific volume should be mounted.
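The mount path is defined in the container spec with volumeMounts. A minimal sketch (the volume name here is a placeholder and must match the name declared under volumes: in the Deployment; /data is where Redis keeps its data by default):

```yaml
containers:
  - name: redis
    image: redis
    volumeMounts:
      # must match the entry under "volumes:" in the Pod spec
      - name: your-redis-persistent-storage
        mountPath: /data
```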
Storage provisioning in the cloud:
Static
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
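As a sketch, a statically provisioned PV created by an administrator could look like this (hostPath is used here only as a simple example backend; in a cloud you would use the provider's volume type instead):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  # hostPath is for single-node testing only
  hostPath:
    path: /mnt/data/redis
```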
Dynamic
When none of the static PVs the administrator created match a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves.
To enable dynamic storage provisioning based on storage class, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server. This can be done, for example, by ensuring that DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component. For more information on API server command-line flags, check kube-apiserver documentation.
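For reference, a StorageClass can be marked as the cluster default with an annotation, so PVCs that do not name a class use it. A sketch (the AWS EBS provisioner and gp2 type are just examples; use whatever your cloud provides):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # makes this the default class for PVCs with no storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```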
You can also use shared volumes; two containers in the same Pod can then use such a volume to communicate.
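A minimal sketch of such a shared volume, using an emptyDir that both containers in one Pod mount (all names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-example
spec:
  volumes:
    # emptyDir lives as long as the Pod and is shared by its containers
    - name: shared-data
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```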
You can find more information here: pvc, pvc-kubernetes, pvc-kubernetes-pod.