I am deploying ECK in an on-premises Kubernetes cluster with Istio installed. We drew a security perimeter at our gateway, meaning all services are reachable only through the gateway, where TLS termination and authentication are done. Since we use Istio, our K8s services themselves don't need TLS (the Envoy sidecar brings mTLS to each Pod).
Elasticsearch requires TLS in order to enable auth via OIDC. I don't understand why, but ok.
I cannot configure Elasticsearch with valid certificates using cert-manager: the elasticsearch-es-http K8s service is not reachable from the internet.
If I configure TLS on Elasticsearch with a self-signed certificate, I would need to add its CA to the trust store of every service communicating with ES, which seems unreasonable.
Why? Why am I forced to enable TLS on ES for OIDC to function?
Where in the OAuth2 flow is the resource server (ES in this case) required to have TLS? The IdP does not send requests directly to the resource server anyway, and the authorization server has TLS configured.
What am I missing here?
Looks like this is an open issue. These days, any OAuth client or resource server should be configurable over plain HTTP so that a cloud-native platform can manage TLS on its behalf.
It is a little unclear from your question how clients and users interact with the Elasticsearch resource server. I assume you have some kind of app that sends an OAuth 2.0 token in requests to get Elasticsearch data, and you want to restrict access to Keycloak users, or a subset of them.
INTERNAL TLS
To get OIDC working, it looks like you will need a component such as cert-manager to issue internal certificates and keys. I remember playing around with this a while back: I used a self-signed issuer and was then able to mount Certificate resources inside Elasticsearch pods like this. I believe the Certificate resource auto-renews certificates and keys when they are close to expiry.
kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: elasticsearch-cert
spec:
  secretName: elasticsearch-pkcs12
  issuerRef:
    name: ca-issuer
    kind: Issuer
  commonName: elasticsearch-svc.default.svc
  dnsNames:
  - elasticsearch-svc
  - elasticsearch-svc.default.svc
  - elasticsearch-svc.default.svc.cluster.local
  keystores:
    pkcs12:
      create: true
      passwordSecretRef:
        name: elasticsearch-pkcs12-password
        key: password
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.4.1
        env:
        - name: discovery.type
          value: 'single-node'
        - name: xpack.security.enabled
          value: 'true'
        - name: xpack.security.autoconfiguration.enabled
          value: 'true'
        - name: xpack.security.http.ssl.enabled
          value: 'true'
        - name: xpack.security.http.ssl.keystore.path
          value: '/usr/share/elasticsearch/config/certs/keystore.p12'
        - name: xpack.security.http.ssl.keystore.password
          value: 'Password1'
        - name: xpack.security.http.ssl.certificate_authorities
          value: '/usr/share/elasticsearch/config/certs/ca.crt'
        - name: ELASTIC_PASSWORD
          value: 'Password1'
        volumeMounts:
        - name: elasticsearch-ssl-cert
          mountPath: /usr/share/elasticsearch/config/certs
          readOnly: true
      volumes:
      - name: elasticsearch-ssl-cert
        secret:
          secretName: elasticsearch-pkcs12
---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-svc
spec:
  selector:
    app: elasticsearch
  ports:
  - protocol: "TCP"
    port: 9200
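For completeness, the Certificate above references an Issuer named ca-issuer, which is not shown. A minimal sketch of how such an issuer chain is typically bootstrapped in cert-manager (a self-signed root signs an internal CA certificate, which then backs a CA issuer) might look like the following; the names selfsigned-issuer, internal-ca, and internal-ca-secret are my own placeholders, not from the original setup:

```yaml
# Hypothetical sketch: self-signed root -> internal CA -> ca-issuer.
kind: Issuer
apiVersion: cert-manager.io/v1
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
# A CA certificate signed by the self-signed issuer; its key pair
# is stored in internal-ca-secret.
kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: internal-ca
spec:
  isCA: true
  commonName: internal-ca
  secretName: internal-ca-secret
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
---
# The CA issuer that leaf certificates (like elasticsearch-cert)
# reference via issuerRef.
kind: Issuer
apiVersion: cert-manager.io/v1
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: internal-ca-secret
```

The resulting leaf secrets then include a ca.crt entry, which is what the Elasticsearch certificate_authorities setting above points at.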
If there are many clients, consider also putting a dedicated utility gateway (such as Kong or NGINX) in front of Elasticsearch, which accepts plain HTTP from internal clients and makes the HTTPS requests to Elasticsearch on their behalf, limiting the scope of where you need to use internal certificates and configure trust.
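As one sketch of that gateway idea, assuming an ingress-nginx controller is available, an Ingress can be annotated to speak HTTPS to the backend and verify it against the internal CA; the resource name, host, and secret name here are illustrative placeholders:

```yaml
# Hypothetical sketch (ingress-nginx): clients reach the gateway over
# plain HTTP; the controller makes the HTTPS call to Elasticsearch and
# verifies the server certificate against the internal CA secret.
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: elasticsearch-gateway
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/internal-ca-secret"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
spec:
  rules:
  - host: elasticsearch.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: elasticsearch-svc
            port:
              number: 9200
```

This way only the gateway needs to trust the internal CA, rather than every client service.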
Other solutions are possible for working with internal TLS explicitly, such as using SPIFFE to secure database connections. These all involve explicit configuration of trust chains though, which I can see you want to minimize.