
Kafka SASL/SCRAM Failed authentication


I tried to add security to my Kafka cluster, following the documentation.

I added the user with this command:

kafka-configs.sh --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

I modified server.properties:

broker.id=1
listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
default.replication.factor=3
min.insync.replicas=2
log.dirs=/var/lib/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

Created the JAAS file:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};

Created the file kafka_opts.sh in /etc/profile.d:

export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf

But when I start Kafka, it throws the following error:

[2020-05-04 10:54:08,782] INFO [Controller id=1, targetBrokerId=1] Failed authentication with kafka1/kafka1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)

Note that in place of kafka1, kafka2, kafka3, zookeeper1, zookeeper2, and zookeeper3 I use the respective IP of each server. Can someone help me with my issue?


Solution

  • My main problem was this configuration:

    zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
    

    This setting in server.properties was needed to keep Kafka's metadata organized under its own chroot path in ZooKeeper, but it also changes how the kafka-configs.sh command has to be run (see the example just below), so I will explain the steps I had to follow.
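
    For example, with the chroot in place the credentials must be created under /kafka; pointed at the ZooKeeper root, the same command writes them to a path the brokers never read (the hostnames stand in for the real IPs, as in the question):

    # Wrong with this setup: stores the SCRAM credentials at the ZooKeeper root
    kafka-configs.sh --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin

    # Right: stores them under the /kafka chroot the brokers actually use
    kafka-configs.sh --zookeeper zookeeper1:2181/kafka --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin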

    1. First, modify ZooKeeper.

    I downloaded ZooKeeper from the official site: https://zookeeper.apache.org/releases.html

    I modified the zoo.cfg file and added the security configuration:

    tickTime=2000
    dataDir=/var/lib/zookeeper/
    clientPort=2181
    initLimit=5
    syncLimit=2
    server.1=zookeeper1:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=zookeeper3:2888:3888
    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
    requireClientAuthScheme=sasl
    
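    Also note that because zoo.cfg declares server.1 through server.3, each node needs a myid file in its dataDir whose content matches its server.N id; a minimal sketch for the first node, assuming the dataDir from the config above:

    # On zookeeper1 (server.1): write the matching id into dataDir
    echo 1 > /var/lib/zookeeper/myid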

    I created the JAAS file for ZooKeeper:

    Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_admin="admin_secret";
    };
    
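    The DigestLoginModule turns every user_<name>="<password>" option into one account, so the block above defines a single user admin with password admin_secret. Additional accounts could be declared the same way; a sketch in which the second kafka account is purely illustrative:

    Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_admin="admin_secret"
        user_kafka="kafka_secret";   // hypothetical extra account
    };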

    I created the file java.env in the conf/ directory and added the following:

    SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf"
    

    With these files you are telling ZooKeeper to use the JAAS file so that Kafka can authenticate to ZooKeeper. To validate that ZooKeeper is picking up the file, you only need to run:

    zkServer.sh print-cmd
    

    It will respond:

    /usr/bin/java
    ZooKeeper JMX enabled by default
    Using config: /opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
    "java"  -Dzookeeper.log.dir="/opt/apache-zookeeper-3.6.0-bin/bin/../logs" ........-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf....... "/opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg" > "/opt/apache-zookeeper-3.6.0-bin/bin/../logs/zookeeper.out" 2>&1 < /dev/null
    
    2. Modify Kafka.

    I downloaded Kafka from the official site: https://www.apache.org/dyn/closer.cgi?path=/kafka/2.5.0/kafka_2.12-2.5.0.tgz

    I modified/added the following configuration in the server.properties file:

    listeners=SASL_PLAINTEXT://kafka1:9092
    advertised.listeners=SASL_PLAINTEXT://kafka1:9092
    sasl.enabled.mechanisms=SCRAM-SHA-256
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
    security.inter.broker.protocol=SASL_PLAINTEXT
    authorizer.class.name=kafka.security.authorizer.AclAuthorizer
    allow.everyone.if.no.acl.found=false
    super.users=User:admin
    
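    Keep in mind that with AclAuthorizer enabled and allow.everyone.if.no.acl.found=false, every principal except the admin super user needs explicit ACLs. A hedged sketch granting read access (the alice user and the test topic are made-up examples):

    kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper1:2181/kafka --add --allow-principal User:alice --operation Read --topic test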

    I created the JAAS file for Kafka:

    KafkaServer {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="admin"
        password="admin_secret";
    };
    Client {
       org.apache.zookeeper.server.auth.DigestLoginModule required
       username="admin"
       password="admin_secret";
    };
    

    One important thing to understand: the credentials in the Client section must match the ones in the ZooKeeper JAAS file (this is what Kafka uses to authenticate to ZooKeeper), while the KafkaServer section is used for inter-broker communication.

    I also need to tell Kafka to use the JAAS file, which can be done by setting the KAFKA_OPTS variable:

    export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf
    
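    To make the variable survive new shells, the same line can go into a profile script, as the question already does with /etc/profile.d/kafka_opts.sh:

    # /etc/profile.d/kafka_opts.sh
    export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf
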
    3. Create the admin user for the Kafka brokers.

    Run the following command:

    kafka-configs.sh --zookeeper zookeeper:2181/kafka --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin
    
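    To confirm the credentials landed under the chroot, the user can be described through the same address (again with the /kafka suffix):

    kafka-configs.sh --zookeeper zookeeper:2181/kafka --describe --entity-type users --entity-name admin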

    As I mentioned before, my error was that I wasn't adding the /kafka part to the ZooKeeper address (note that every tool that talks to ZooKeeper needs the /kafka suffix after the host). Now, if you start ZooKeeper and Kafka, everything works as expected.
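
    As a final end-to-end check, a client can authenticate with the same SCRAM credentials. A minimal sketch, where the client.properties filename and the test topic are arbitrary choices:

    # client.properties
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";

    # send a test message through the secured listener
    kafka-console-producer.sh --broker-list kafka1:9092 --topic test --producer.config client.properties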