I think I have exhausted all the possibilities; every link on Google is already purple. I have Kafka and ZooKeeper installed on machine 1, and Kerberos is configured there as well.
I want to run Druid on machine 2 using Docker Compose, so I built an image plus a docker-compose file for the individual services.
Now I have a problem connecting Druid to a Kafka topic. When it tries to connect I get this error:
Unable to create RecordSupplier: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
I more or less understand that this is about the client being forced to use a specific authentication mechanism, but as far as I can tell the right one is set everywhere in the configs.
Does my configuration contain any errors that would cause the login to fail?
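For context, this is roughly how I understand the SASL/GSSAPI settings end up in the Kafka supervisor spec that Druid uses to build the consumer; the topic name and bootstrap server below are placeholders, and dataSchema/tuningConfig are left out:
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "topic": "my-topic",
      "consumerProperties": {
        "bootstrap.servers": "machine1.example.com:9092",
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanism": "GSSAPI",
        "sasl.kerberos.service.name": "kafka"
      }
    }
  }
}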
Dockerfile:
FROM debian:bookworm-slim AS builder
RUN apt-get update && apt-get install -y \
    krb5-user \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
FROM apache/druid:30.0.0
COPY --from=builder /usr/bin/kinit /usr/bin/kinit
COPY --from=builder /usr/bin/klist /usr/bin/klist
COPY --from=builder /etc/krb5.conf /etc/krb5.conf
COPY --from=builder /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/
COPY stag/krb5.conf /etc/krb5.conf
COPY stag/jaas.conf /etc/security/jaas.conf
COPY stag/keytab.keytab /etc/security/keytab.keytab
ENV JAVA_OPTS="-Djava.security.auth.login.config=/etc/security/jaas.conf"
CMD ["/bin/bash", "-c", "kinit -kt /etc/security/keytab.keytab user/[email protected] && druid/bin/start-micro-quickstart"]
Docker-compose:
version: "2.2"
volumes:
metadata_data: {}
middle_var: {}
historical_var: {}
broker_var: {}
coordinator_var: {}
router_var: {}
druid_shared: {}
overlord_var: {}
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- metadata_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=SecretPassword
- POSTGRES_USER=user
- POSTGRES_DB=exampledb
coordinator:
image: my-druid-with-kerberos:latest
container_name: coordinator
volumes:
- druid_shared:/opt/shared
- coordinator_var:/opt/druid/var
depends_on:
- postgres
ports:
- "8081:8081"
command:
- coordinator
env_file:
- environment
environment:
- druid.zk.service.host=zookeeper:2181
- DRUID_OPTS=-Djava.security.auth.login.config=/etc/security/jaas.conf
broker:
image: my-druid-with-kerberos:latest
container_name: broker
volumes:
- broker_var:/opt/druid/var
- /path/to/krb5.conf:/etc/krb5.conf
- /path/to/jaas.conf:/etc/security/jaas.conf
- /path/to/keytab.keytab:/etc/security/keytab.keytab
depends_on:
- postgres
- coordinator
ports:
- "8082:8082"
command:
- broker
env_file:
- environment
environment:
- druid.zk.service.host=zookeeper:2181
- DRUID_OPTS=-Djava.security.auth.login.config=/etc/security/jaas.conf
historical:
image: my-druid-with-kerberos:latest
container_name: historical
volumes:
- druid_shared:/opt/shared
- historical_var:/opt/druid/var
- /path/to/krb5.conf:/etc/krb5.conf
- /path/to/jaas.conf:/etc/security/jaas.conf
- /path/to/keytab.keytab:/etc/security/keytab.keytab
depends_on:
- postgres
- coordinator
ports:
- "8083:8083"
command:
- historical
env_file:
- environment
environment:
- druid.zk.service.host=zookeeper:2181
- DRUID_OPTS=-Djava.security.auth.login.config=/etc/security/jaas.conf
middlemanager:
image: my-druid-with-kerberos:latest
container_name: middlemanager
volumes:
- druid_shared:/opt/shared
- middle_var:/opt/druid/var
- /path/to/krb5.conf:/etc/krb5.conf
- /path/to/jaas.conf:/etc/security/jaas.conf
- /path/to/keytab.keytab:/etc/security/keytab.keytab
depends_on:
- postgres
- coordinator
ports:
- "8091:8091"
- "8100-8105:8100-8105"
command:
- middleManager
env_file:
- environment
environment:
- druid.zk.service.host=zookeeper:2181
- DRUID_OPTS=-Djava.security.auth.login.config=/etc/security/jaas.conf
router:
image: my-druid-with-kerberos:latest
container_name: router
volumes:
- router_var:/opt/druid/var
- /path/to/krb5.conf:/etc/krb5.conf
- /path/to/jaas.conf:/etc/security/jaas.conf
- /path/to/keytab.keytab:/etc/security/keytab.keytab
depends_on:
- postgres
- coordinator
ports:
- "8888:8888"
command:
- router
env_file:
- environment
environment:
- druid.zk.service.host=zookeeper:2181
- DRUID_OPTS=-Djava.security.auth.login.config=/etc/security/jaas.conf
overlord:
image: my-druid-with-kerberos:latest
container_name: overlord
volumes:
- druid_shared:/opt/shared
- overlord_var:/opt/druid/var
- /path/to/krb5.conf:/etc/krb5.conf
- /path/to/jaas.conf:/etc/security/jaas.conf
- /path/to/keytab.keytab:/etc/security/keytab.keytab
depends_on:
- postgres
ports:
- "8090:8090"
command:
- overlord
env_file:
- environment
environment:
- druid.zk.service.host=zookeeper:2181
- DRUID_OPTS=-Djava.security.auth.login.config=/etc/security/jaas.conf
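To rule out mount problems, I would expect something like this to show the Kerberos files inside a running container (broker is the container name from the compose file above):
# verify the mounted files are where the JVM option points
docker exec broker ls -l /etc/krb5.conf /etc/security/jaas.conf /etc/security/keytab.keytab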
Environment file for Druid (the environment file referenced by env_file above):
DRUID_SINGLE_NODE_CONF=micro-quickstart
druid_emitter_logging_logLevel=debug
druid_extensions_loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-multi-stage-query","druid-kafka-indexing-service"]
druid_zk_service_host=zookeeper
druid_metadata_storage_host=
druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/exampledb
druid_metadata_storage_connector_user=user
druid_metadata_storage_connector_password=SecretPassword
druid_coordinator_balancer_strategy=cachingCost
druid_indexer_runner_javaOptsArray=["-server", "-Xmx1g", "-Xms1g", "-XX:MaxDirectMemorySize=3g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]
druid_indexer_fork_property_druid_processing_buffer_sizeBytes=256MiB
druid_storage_type=local
druid_storage_storageDirectory=/opt/shared/segments
druid_indexer_logs_type=file
druid_indexer_logs_directory=/opt/shared/indexing-logs
druid_processing_numThreads=2
druid_processing_numMergeBuffers=2
druid.kafka.consumer.security.protocol=SASL_PLAINTEXT
druid.kafka.consumer.sasl.mechanism=GSSAPI
druid.kafka.consumer.sasl.kerberos.service.name=kafka
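One thing I am not sure about: the ingestion task runs in a separate peon JVM forked by the middleManager, so maybe the JAAS option also has to be passed to the task JVMs. If so, I assume it would look roughly like this in the same environment file (the added flag is the last entry):
druid_indexer_runner_javaOptsArray=["-server", "-Xmx1g", "-Xms1g", "-XX:MaxDirectMemorySize=3g", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager", "-Djava.security.auth.login.config=/etc/security/jaas.conf"]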
jaas.conf:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=false
    useKeyTab=true
    storeKey=true
    debug=true
    serviceName="kafka"
    keyTab="/etc/security/keytab.keytab"
    principal="[email protected]";
};
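I also know the Kafka client can take the JAAS settings inline through the sasl.jaas.config consumer property instead of the java.security.auth.login.config file; I assume that would look something like this (same keytab path and principal as in jaas.conf):
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/etc/security/keytab.keytab" principal="[email protected]";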
krb5.conf:
[libdefaults]
    ccache_type = 4
    default_realm = EXAMPLE.COM
    fcc-mit-ticketflags = true
    forwardable = true
    kdc_timesync = 1
    proxiable = true
    renew_lifetime = 30d
    ticket_lifetime = 3d

[realms]
    EXAMPLE.COM = {
        admin_server = kdc1.example.com
        default_domain = example.com
        kdc = kdc1.domain.com
        kdc = kdc2.domain.com
        kdc = kdc3.domain.com
    }

[domain_realm]
    example.com = domain.com
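Since the Dockerfile kinit uses user/[email protected] while jaas.conf uses [email protected], I assume it is also worth confirming which principals the keytab actually contains:
# list the principals stored in the keytab
klist -kt /etc/security/keytab.keytab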
Does anyone see any problems here? :)
I have tried many different configurations, dug through the documentation and forums, and even tried AI.
Not an expert in this by any stretch... but does this article help?
https://support.imply.io/hc/en-us/articles/360008709653-Kerberized-Kafka-Ingestion