Kafka in KRaft mode with SASL_SSL


I'm trying to set up 3 Kafka brokers and 3 Kafka controllers with fully encrypted (SSL) communication.

Right now, I can authenticate over SASL_PLAINTEXT (for example, a user creating a topic as a super user), but not when SSL is added. Inter-broker communication is already set to SSL.

What I can't get working is a config where clients and brokers communicate over SASL_SSL with the PLAIN (or SCRAM-SHA-512) mechanism.

Does anyone have an idea of a config that makes this work in KRaft mode?

I've already read that KRaft didn't support this, but that post was many months ago.
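For context, my understanding of a minimal SASL_SSL listener in KRaft mode, based on the security docs, is the sketch below (the CLIENT listener name, port 9094, the host name, and the credentials are placeholders, not my real values):

```
# Declare a dedicated SASL_SSL listener next to the SSL inter-broker listener.
listeners=SSL://0.0.0.0:9092,CLIENT://0.0.0.0:9094
advertised.listeners=SSL://:9092,CLIENT://broker-host:9094
listener.security.protocol.map=CONTROLLER:SSL,SSL:SSL,CLIENT:SASL_SSL

# Enable the PLAIN mechanism on that listener (SCRAM-SHA-512 would be declared the same way).
sasl.enabled.mechanisms=PLAIN
listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin-secret" \
   user_admin="admin-secret";
```

Is this roughly right, or does KRaft mode require something more?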

Here is my configuration:

```
# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=EXTERNAL://0.0.0.0:9143,INTERNAL://0.0.0.0:9192,SSL://0.0.0.0:9092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=SSL://:9092
advertised.listeners=EXTERNAL://gateway-external:9143,INTERNAL://proxy-internal:9192,SSL://:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:SSL,SASL_SSL:SASL_SSL,SSL:SSL,EXTERNAL:SASL_PLAINTEXT,INTERNAL:SSL

sasl.enabled.mechanisms=PLAIN

listener.name.external.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="admin" \
   password="admin-secret" \
   user_admin="admin-secret" \
   user_alice="alice-secret" \
   user_bitnami="bitsecret" \
   user_kafkabroker1="kafkabroker1-secret";
#
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/bitnami/kafka/data

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
#zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
#group.initial.rebalance.delay.ms=0

allow.everyone.if.no.acl.found=false
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
auto.create.topics.enable=false
controller.listener.names=CONTROLLER
controller.quorum.voters=0@kafkac-0:9093,1@kafkac-1:9093,2@kafkac-2:9093
inter.broker.listener.name=SSL

node.id=3
process.roles=broker
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1/L,DEFAULT
super.users=User:kafkac-0;User:kafkac-1;User:kafkac-2;User:kafkab-0;User:kafkab-1;User:kafkab-2;User:kafka-cli;User:schema-registry;User:kafka-rest
ssl.client.auth=required
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.key.password=kafka-amd
ssl.keystore.location=/opt/bitnami/kafka/config/certs/kafka.keystore.jks
ssl.truststore.location=/opt/bitnami/kafka/config/certs/kafka.truststore.jks
ssl.keystore.password=kafka-amd
ssl.truststore.password=kafka-amd
# listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_user="bitnami";
# listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;
# listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;

############################# Metrics Settings #############################
metric.reporters=org.apache.kafka.common.metrics.JmxReporter
#com.sun.management.jmxremote=true
#com.sun.management.jmxremote.authenticate=false
#com.sun.management.jmxremote.ssl=false
#java.rmi.server.hostname=kafkab-0
#java.net.preferIPv4Stack=true
#com.sun.management.jmxremote.local.only=false
## com.sun.management.jmxremote.rmi.port=5556
#com.sun.management.jmxremote.port=5556
```
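
For completeness, the matching client-side properties for SASL_SSL with PLAIN would, as far as I understand, look like this (host names, store paths and passwords are placeholders):

```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
   username="alice" \
   password="alice-secret";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
# Client keystore is needed here because the brokers set ssl.client.auth=required.
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
# For SCRAM-SHA-512 instead of PLAIN, the credentials would first have to be created, e.g.:
# kafka-configs.sh --bootstrap-server broker-host:9094 --alter \
#   --add-config 'SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
```

From what I've read, recent Kafka releases do support SCRAM in KRaft mode, so I'd expect this to work once the credentials exist, but I may be missing something.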
