I have a problem deploying multiple secrets from my deployment.yaml template. For some reason, when my app tries to find the secret file inside the container, it cannot be found. The secrets are fetched from gopass by a Groovy script.
Here is the actual, simplified version of the file (indentation levels should be correct):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "test-app"
spec:
  template:
    spec:
      containers:
        - name: "some-container"
          image: "imgtag"
          volumeMounts:
            - name: app-secrets
              mountPath: /app/secrets
      volumes:
        - name: app-secrets
          projected:
            sources:
              - secret:
                name: secret1
              - secret:
                name: secret2
```
The old version (this one properly created private_key.pem):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "test-app"
spec:
  template:
    spec:
      containers:
        - name: "some-container"
          image: "imgtag"
          volumeMounts:
            - name: app-secrets
              mountPath: /app/secrets
      volumes:
        - name: app-secrets
          secret:
            secretName: secret1
```
secrets.groovy:

```groovy
def secrets() {
    [
        [type: "fromFile", name: "secret1", key: "private_key.pem", gopassPath: "firstGopassPath"],
        [type: "fromFile", name: "secret2", key: "credentials.txt", gopassPath: "secondGopassPath"]
    ]
}
return this
```
When I added a delay (to keep the pod from crashing), I could see that these files simply weren't mounted anywhere.
The pod description says the following (this was before updating the kube client):

```
Volumes:
  app-secrets:
    <unknown>
```

and this after updating the kube client from 1.12.1 to 1.18:

```
Volumes:
  app-secrets:
    Type: Projected (a volume that contains injected data from multiple sources)
```
--UPDATE--

`kubectl get secret secret1 -o yaml`:

```yaml
apiVersion: v1
data:
  old_private_key.pem: somekey
kind: Secret
metadata:
  creationTimestamp: "2020-04-22T15:31:43Z"
  name: jpd-sales-force-private-key
  namespace: default
  resourceVersion: "137791226"
  selfLink: /api/v1/namespaces/default/secrets/secret1
  uid: a4f71c36-81d0-44f8-87a0-a6100c6f9f01
type: Opaque
```
(Note: I was trying to rename the file. The name shown here, old_private_key.pem, is the original one; private_key.pem used earlier in the post is really the new name. So it looks like the new file name never made it into the secret.)
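For context on why the key name matters: each key under a Secret's `data` becomes a file name in the mounted volume, and each value is the base64-encoded file content. A minimal sketch of that mapping (`"somekey"` here is just a placeholder standing in for the redacted value):

```python
import base64

# Each data key becomes a file under the volume's mountPath; the value
# is the base64-encoded file content.
secret_data = {"old_private_key.pem": base64.b64encode(b"somekey").decode()}

# The container therefore sees /app/secrets/old_private_key.pem, not
# /app/secrets/private_key.pem -- the data key, not the gopass path,
# decides the file name.
for filename, encoded in secret_data.items():
    content = base64.b64decode(encoded).decode()
    print(f"/app/secrets/{filename}: {content!r}")
```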
Does anyone have an idea what may be wrong?
Solution for my problem:

```
kubectl delete secret secret1
```

(so that the secret could be recreated with the new key name), and: secret1 and secret2 were nested one level too shallow, i.e. the name field was not indented under secret:. Improved version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "test-app"
spec:
  template:
    spec:
      containers:
        - name: "some-container"
          image: "imgtag"
          volumeMounts:
            - name: app-secrets
              mountPath: /app/secrets
      volumes:
        - name: app-secrets
          projected:
            sources:
              - secret:
                  name: secret1
              - secret:
                  name: secret2
```
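The root cause is easy to reproduce with a YAML parser: when name sits at the same indentation level as the secret: key, it becomes a sibling of secret (which parses as null) instead of a field inside it. A minimal sketch, assuming PyYAML is available:

```python
import yaml  # PyYAML

# Broken variant: "name" is at the same level as "secret", so it becomes
# a sibling key in the list item and "secret" itself parses as null.
broken = yaml.safe_load("""
sources:
- secret:
  name: secret1
""")

# Fixed variant: "name" is indented one level deeper, inside "secret".
fixed = yaml.safe_load("""
sources:
- secret:
    name: secret1
""")

print(broken["sources"][0])  # {'secret': None, 'name': 'secret1'}
print(fixed["sources"][0])   # {'secret': {'name': 'secret1'}}
```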