spring-cloud-dataflow

Spring Cloud Data Flow for Kubernetes - Could not configure multiple kafka brokers


I'm trying to migrate my SCDF local server deployments to the Kubernetes-based solution, but I've run into problems configuring the Kafka broker list for the apps on the server side.

I followed the instructions here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.2.RELEASE/reference/htmlsingle and downloaded the sample configuration from https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes at branch v1.7.2.RELEASE.

Because we've already deployed a Kafka cluster, I'd like to configure the broker and ZooKeeper nodes in the server-config-kafka.yaml file so that we can reuse that cluster.

I configured my environmentVariables like this:

    deployer:
      kubernetes:
        environmentVariables: >
            SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',
            SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'

but I got an error when trying to deploy my SCDF stream: Invalid environment variable declared: 172.16.3.193:9092

How should I configure it to make it work? Thanks in advance.


Solution

  • Remove the > in your YAML

    The `>` creates a folded block scalar (a single multi-line string), not a list or map of environment variables; see the Stack Overflow question "In YAML, how do I break a string over multiple lines?" for how block scalars behave.

    Also, if you're using CoreDNS in Kubernetes, you should probably use a service DNS name such as kafka.default.cluster.local for the value rather than IP addresses, and similarly for Zookeeper.
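
    As a sketch, here is the same property with the `>` removed, so environmentVariables stays a single comma-separated string on one line. The IPs are taken from the question; whether the inner single quotes are required around comma-containing values may depend on your SCDF deployer version, so treat this as an assumption to verify:

        deployer:
          kubernetes:
            environmentVariables: "SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'"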