I have created a ConfigMap from a file using Kustomize, and I am passing the key/value pairs from the ConfigMap to the container as environment variables. When I exec into the container and run "env", the pairs seem to show up, but when I try to access one of them, like "echo $VARNAME", nothing shows up.
First, I wrote this test 'config1.properties' file:
JIN=bombay
RUM=bacardi
WHISKY=jhonny walker
Then I converted it to a ConfigMap using Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- k8s.yaml
configMapGenerator:
- name: map1
  files:
  - config1.properties
Then I exposed the map as environment variables in the deployment file (this goes under the 'containers' section):
env:
- name: map-envs
  valueFrom:
    configMapKeyRef:
      name: map1-kt62t9247m
      key: config1.properties
Finally, when I exec into the container, the variable does show up in the output of "env". But when I try to access the variable, it comes back empty, and the same happens when my Node.js app tries to read it.
I think the problem is that the whole config file is being treated as a single key of 'map1-kt62t9247m', but I haven't managed to fix it myself.
Your existing kustomization.yaml file results in this ConfigMap:
apiVersion: v1
data:
  config1.properties: |
    JIN=bombay
    RUM=bacardi
    WHISKY=jhonny walker
kind: ConfigMap
metadata:
  name: map1-24h4mcf5gh
That is, there is a single key -- config1.properties -- and the value is the contents of the config1.properties file.
If you want each of the properties in config1.properties to be exposed as a key in the ConfigMap, you need to tell Kustomize that this file contains a list of environment variables by using the envs keyword instead of files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- k8s.yaml
configMapGenerator:
- name: map1
  envs:
  - config1.properties
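(As an aside, for a small, fixed set of values, configMapGenerator can also take the pairs inline via the literals keyword instead of reading a file -- a sketch that would generate the same keys for this example:)

configMapGenerator:
- name: map1
  literals:
  - JIN=bombay
  - RUM=bacardi
  - WHISKY=jhonny walker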
This produces:
apiVersion: v1
data:
  JIN: bombay
  RUM: bacardi
  WHISKY: jhonny walker
kind: ConfigMap
metadata:
  name: map1-6b95d4kbkm
(There are examples of this in the documentation.)
Then we need to use an envFrom directive in our Pod template, rather than setting a single environment variable as you are currently doing:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: map1-6b95d4kbkm
        image: docker.io/alpinelinux/darkhttpd
        name: example
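(Note: because the Deployment is listed under resources in the same kustomization, you can also write just the base name in the configMapRef; Kustomize rewrites references to generated ConfigMaps with the hashed name at build time. A sketch of that fragment:)

        envFrom:
        - configMapRef:
            name: map1   # rewritten to map1-<hash> by Kustomize at build time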
Once our Pod is running, we see:
/ $ env
JIN=bombay
RUM=bacardi
WHISKY=jhonny walker
...