I would like to provide secrets from a HashiCorp Vault to the Apache Flink jobs running in a Kubernetes cluster. These credentials will be used to access a state backend for checkpointing and savepoints. The state backend could be, for example, MinIO S3 storage. Could someone please provide a working example for the FlinkApplication operator, given the following setup?
Vault secrets for username and password (or an access key):
vault kv put vvp/storage/config username=user password=secret
vault kv put vvp/storage/config access-key=minio secret-key=minio123
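Not shown above, but required before the injector annotations below will authenticate: a Vault policy that can read the secret, and a Kubernetes auth role whose name matches the vault.hashicorp.com/role annotation. A minimal sketch, assuming the Kubernetes auth method is enabled at auth/kubernetes and the Flink pods run under the default service account (both assumptions):

vault policy write vvp-flink-job - <<EOF
path "vvp/data/storage/config" {
  capabilities = ["read"]
}
EOF

vault write auth/kubernetes/role/vvp-flink-job \
    bound_service_account_names=default \
    bound_service_account_namespaces=default \
    policies=vvp-flink-job \
    ttl=1h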
k8s manifest of the Flink application custom resource:
apiVersion: flink.k8s.io/v1beta1
kind: FlinkApplication
metadata:
  name: processor
  namespace: default
spec:
  image: stream-processor:0.1.0
  deleteMode: None
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: vvp-flink-job
        vault.hashicorp.com/agent-inject-secret-storage-config.txt: vvp/data/storage/config
  flinkConfig:
    taskmanager.memory.flink.size: 1024mb
    taskmanager.heap.size: 200
    taskmanager.network.memory.fraction: 0.1
    taskmanager.network.memory.min: 10mb
    web.upload.dir: /opt/flink
  jobManagerConfig:
    resources:
      requests:
        memory: "1280Mi"
        cpu: "0.1"
    replicas: 1
  taskManagerConfig:
    taskSlots: 2
    resources:
      requests:
        memory: "1280Mi"
        cpu: "0.1"
  flinkVersion: "1.14.2"
  jarName: "stream-processor-1.0-SNAPSHOT.jar"
  parallelism: 3
  entryClass: "org.StreamingJob"
  programArgs: >
    --name value
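One sketch that gets close to the end goal further down: next to the existing annotations under spec.template.metadata.annotations, an agent-inject-template annotation can render the secret directly in flink-conf.yaml syntax instead of the plain .txt dump. This assumes the access-key/secret-key variant of the secret; the secret name storage-config is arbitrary:

vault.hashicorp.com/agent-inject-secret-storage-config: vvp/data/storage/config
vault.hashicorp.com/agent-inject-template-storage-config: |
  {{- with secret "vvp/data/storage/config" -}}
  s3.access-key: {{ index .Data.data "access-key" }}
  s3.secret-key: {{ index .Data.data "secret-key" }}
  {{- end }}

The agent then writes /vault/secrets/storage-config in a form that can simply be appended to $FLINK_HOME/conf/flink-conf.yaml at container start (see the entrypoint sketch after the Dockerfile below).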
Dockerfile of the Flink application:
FROM maven:3.8.4-jdk-11 AS build
ARG revision
WORKDIR /
COPY src /src
COPY pom.xml /
RUN mvn -B -Drevision=${revision} package
# runtime
FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
The flink-conf.yaml contains the following (commented-out) examples:
# state.backend: filesystem
# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints
# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints
The end goal is to replace the hardcoded secrets below, or to set them somehow from Vault:
state.backend: filesystem
s3.endpoint: http://minio:9000
s3.path.style.access: true
s3.access-key: minio
s3.secret-key: minio123
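In other words, the non-secret settings could stay in flink-conf.yaml and only the credentials would be appended at startup from the Vault-rendered file; the bucket paths below are made up, and an S3 filesystem plugin (e.g. flink-s3-fs-presto) has to be enabled for the s3:// scheme to resolve:

state.backend: filesystem
state.checkpoints.dir: s3://flink/checkpoints
state.savepoints.dir: s3://flink/savepoints
s3.endpoint: http://minio:9000
s3.path.style.access: true
# s3.access-key and s3.secret-key appended at startup from /vault/secrets/storage-config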
Thank you.
Once you have the Vault secrets set up, you can add annotations to the deployment to get the values out of Vault and into the pod:
annotations:
  vault.hashicorp.com/agent-image: <Agent image>
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-secrets: kv/<Path-of-secret>
  vault.hashicorp.com/agent-inject-template-secrets: |2
    {{- with secret "kv/<Path-of-secret>" -}}
    #!/bin/sh
    set -e
    {{- range $key, $value := .Data.data }}
    export {{ $key }}={{ $value }}
    {{- end }}
    exec "$@"
    {{- end }}
  vault.hashicorp.com/auth-path: auth/<K8s cluster for auth>
  vault.hashicorp.com/role: app
This will create the file inside your pod. When your application runs, it should execute this file first so that the environment variables get injected into the pod.
So the Vault annotation will create one file, just like the .txt file you are already getting, but instead we render it like this:
{{- range $key, $value := .Data.data }}
export {{ $key }}={{ $value }}
{{- end }}
This keeps each key and value and puts export in front of them, so the file becomes a small shell script; once it is executed at application startup, it injects the variables at the OS level.
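Applied to the vvp/storage/config secret from the question (the username/password variant, since shell variable names cannot contain hyphens like access-key), the rendered file would look roughly like:

#!/bin/sh
set -e
export password=secret
export username=user
exec "$@"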
Keep this file in the repo and add it to the Docker image as ./bin/runapp:
#!/bin/bash
if [ -f '/vault/secrets/secrets' ]; then
  source '/vault/secrets/secrets'
fi
node <path-inside-docker>/index.js # Sorry, I don't know Scala or Java
package.json
"start": "./bin/runapp",
Dockerfile
ADD ./bin/runapp ./
EXPOSE 4444
CMD ["npm", "start"]
Inside the pod, the Vault-injected file at /vault/secrets/secrets (or your configured path) will look something like:
#!/bin/sh
set -e
export development=false
export production=true
exec "$@"