Trying to set up an externally accessible Kafka on GCP with the Bitnami Helm chart

I am trying to set up an externally accessible Kafka cluster with the Bitnami Helm chart (kafka-26.4.3). By externally accessible I mean that any service that knows the IP address / domain name of my Kafka cluster should be able to connect after successful authentication.

I tried using the following values for the Helm chart:

kafka:
  service:
    ports: 
      external: 9095
  sasl:
    existingSecret: nocodex-kafka-secrets
    client:
      users:
        - nocodex-kafka-svcu
  extraEnvVars:
    - name: KAFKA_CFG_MAX_REQUEST_SIZE
      value: "10000000"
    - name: KAFKA_CFG_LOG_RETENTION_HOURS
      value: "168"
    - name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
      value: "10000000"
  externalAccess:
    enabled: true
    broker:
      service:
        ports:
          external: 9095
    controller:
      service:
        forceExpose: true      
        containerPorts:
          external: 9095
        type: LoadBalancer
        loadBalancerIPs:
          - xx.xxx.xx.xxx
          - xx.xxx.xx.xxx
          - xx.xxx.xx.xxx

This successfully creates my Kafka cluster, and I am able to access it from inside the same K8s cluster by referring to the Kafka cluster as "nocode-x-kafka.default.svc.cluster.local:9092".
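
For reference, the in-cluster services connect roughly like this (a minimal sketch, not my exact code; the SASL_PLAINTEXT protocol and SCRAM-SHA-256 mechanism are assumptions based on the chart defaults, and the password placeholder stands for the value in the nocodex-kafka-secrets secret):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class InternalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // In-cluster bootstrap address (client listener on port 9092), which works today.
        props.put("bootstrap.servers", "nocode-x-kafka.default.svc.cluster.local:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // SASL_PLAINTEXT + SCRAM-SHA-256 are assumptions based on the chart defaults.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"nocodex-kafka-svcu\" password=\"<password-from-secret>\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "test-topic" is just an illustrative topic name.
            producer.send(new ProducerRecord<>("test-topic", "key", "hello from inside the cluster"));
            producer.flush();
        }
    }
}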

It also creates several services in my K8s cluster, three of which expose an IP externally:

[screenshot of the services created by the chart, including three LoadBalancer services with external IPs]

You cannot see it in the image, but they use the IPs that I have configured in the values of the Helm chart. I do not understand, however, why they expose port 9094 instead of port 9095, which is what I set in my values.
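
One way I plan to double-check what the brokers actually advertise for the EXTERNAL listener is to dump the broker config over the working internal listener, roughly like this (a sketch; broker id "0" and the SASL settings are the same assumptions as in the snippet above):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class AdvertisedListenersCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Connect over the in-cluster listener that already works.
        props.put("bootstrap.servers", "nocode-x-kafka.default.svc.cluster.local:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"nocodex-kafka-svcu\" password=\"<password-from-secret>\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Broker id "0" is an assumption; repeat for every node id in the cluster.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker)).all().get().get(broker);
            // Should print the EXTERNAL advertised listener, including the port actually in use.
            System.out.println(config.get("advertised.listeners").value());
        }
    }
}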

I also tried to connect to one of these IP addresses and got the following error:

08:52:16.480 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient -- [Producer clientId=producer-1] Connection to node -1 (xx.xxx.xxx.xx.bc.googleusercontent.com/xx.xxx.xxx.xx:9094) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
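
The connection attempt that produced this warning uses essentially the same client settings as the in-cluster sketch above, only with the load balancer IP and the externally exposed port as bootstrap server (again a sketch; the IP is masked as in the services above):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExternalConnectionAttempt {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Load balancer IP (masked) and the port the external service actually exposes.
        props.put("bootstrap.servers", "xx.xxx.xx.xxx:9094");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Same SASL assumptions as in the in-cluster sketch.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"nocodex-kafka-svcu\" password=\"<password-from-secret>\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fetching metadata forces a connection; this is where the warning above shows up.
            producer.partitionsFor("test-topic").forEach(System.out::println);
        }
    }
}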

I have also tried to create a firewall rule that should allow ingress on my three IPs:

[screenshot of the GCP firewall rule allowing ingress to the three IPs]

I don't think it is a failed authentication, because the credentials check out and I see these log messages at startup of the service that tries to connect to the Kafka cluster:

[screenshot of the connecting service's startup logs]

What am I overlooking here? Any ideas?
