Configure Pulse to Connect to Kafka With SCRAM and SSL
This page describes how to configure Pulse to securely access and collect metrics from a Kafka cluster that uses SCRAM authentication and SSL encryption.
Pulse supports Kafka clusters secured with SCRAM and SSL only in ODP-managed or standalone deployments.
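For context, a cluster secured this way exposes its brokers over the SASL_SSL protocol with a SCRAM mechanism. A minimal sketch of the corresponding standard Kafka client properties (paths, password, and credentials below are placeholders, not values from your deployment):

```
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
ssl.truststore.location=/path/to/cacerts
ssl.truststore.password=changeit
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="pulse-user" \
    password="pulse-password";
```

Pulse builds an equivalent client configuration from the prompts and files described in the sections below.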
Configure Pulse for Kafka on the ODP cluster
To allow Pulse to securely connect to Kafka that uses SCRAM and SSL on an ODP cluster, follow these steps:
- Run the cluster configuration command:

```
accelo config cluster
```

- If Kafka is using SCRAM authentication, you are prompted with the following question. Select Y and provide the full path to your SCRAM config file.
Note: Copy the kafka_jass.conf file to the Pulse node and provide its location when prompted during Pulse installation.
```
Is SCRAM authentication enabled for Kafka [y/n]: Y
Enter full path to the SCRAM config file (eg: /root/kafkaScramJAASLogin.conf): /home/acceldata/kafka_jass.conf
```

- If Kafka is using TLS/SSL, you are prompted with the following question. Select Y and provide the full path to your certificates.
Note: Copy the cacerts and jssecacert certificates to the Pulse node and provide their locations when prompted during Pulse installation.
```
Is HTTPS Enabled in the Cluster on UI Endpoint? [Y/N]: y
Enter the Java Keystore cacerts File Path: /path/to/cacerts
Enter the Java Keystore jsseCaCerts File Path: /path/to/jssecacert
```

These prompts appear in the Accelo CLI only when Kafka in the ODP cluster is configured for SCRAM-based authentication and TLS/SSL. Specifically, the sasl.enabled.mechanisms parameter must be set to either SCRAM-SHA-256 or SCRAM-SHA-512.

To verify or configure this setting in ODP (Ambari):

1. Open the Ambari UI.
2. Navigate to Kafka > Configs > Advanced kafka-broker.
3. Locate the parameter sasl.enabled.mechanisms.
4. Ensure the value is set to either SCRAM-SHA-256 or SCRAM-SHA-512.
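The SCRAM config file referenced in the prompts above is a standard Kafka client JAAS file. A minimal sketch of what such a file typically contains (the username and password are placeholders for your actual SCRAM credentials):

```
KafkaClient {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="pulse-user"
    password="pulse-password";
};
```

The ScramLoginModule entry name and class shown here are the standard Kafka client JAAS convention; your file's credentials must match a SCRAM user already created on the brokers.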
Configure Pulse for Standalone Kafka Cluster
To allow Pulse to securely connect to Kafka that uses SCRAM and SSL on a standalone cluster, follow these steps:
- Run the cluster configuration command:

```
accelo config cluster
```

- If Kafka is using SCRAM authentication, you are prompted to select the SCRAM security type.
Note: Copy the kafka_jass.conf file to the Pulse node and provide its location when prompted during Pulse installation.
```
Select the Security Type
[x] SCRAM
Enter the full path to the SCRAM config file: /home/acceldata/kafka_jass.conf
```

- If Kafka is using TLS/SSL, you are prompted with the following question. Select Y and provide the full path to your certificates.
Note: Copy the cacerts and jssecacert certificates to the Pulse node and provide their locations when prompted during Pulse installation.
```
Do you use TLS ?: Y
Enter TLS certificate file path: /path/to/cacerts
Enter TLS CA file path: /path/to/jssecacert
```

Update Kafka Connector Configuration
- Edit the file: $AcceloHome/config/docker/addons/ad-kafka-connector.yml.
- Set the following parameters:

```
SCRAM_ENABLED = true
KERBEROS_ENABLED = false
```

Push the Configuration
Apply the updated configuration to the database:

```
accelo admin database push-config -a
```

Troubleshooting
If you encounter the following error while configuring Kafka with SCRAM and SSL, update the ad-kafka-connector file with the specified parameters.
```
java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
    at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:104)
    at sun.security.validator.Validator.getInstance(Validator.java:181)
    at sun.security.ssl.X509TrustManagerImpl.getValidator(X509TrustManagerImpl.java:302)
    at sun.security.ssl.X509TrustManagerImpl.checkTrustedInit(X509TrustManagerImpl.java:176)
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:247)
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
    at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
    at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)
    at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)
    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:443)
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:532)
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:381)
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:301)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:585)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1504)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1435)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
    at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
    at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
    at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
    at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:102)
    ... 25 common frames omitted
```

To resolve the issue, update the following fields in the ad-kafka-connector file:
- Environment variable:

```
JAVA_OPTS=-Djavax.net.ssl.trustStore=/tmp/config/cacerts -Djavax.net.ssl.trustStorePassword=kafka-ssl-password -Djavax.net.ssl.trustStoreType=PKCS12
```

- Volume mount:

```
/data01/acceldata/config/security/cacerts:/tmp/config/cacerts
```

After adding the environment variable and volume mount details, the ad-kafka-connector configuration file appears as follows.
```
version: "2"
services:
  ad-kafka-connector:
    image: ad-kafka-connector
    container_name: ""
    environment:
      - MONGO_URI=ZN4v8cuUTXYvdnDJIDp+R8Z+ZsVXXjv8zDOvh8UwQXqyScAm+LrS8Y9EWT8A8/30
      - MONGO_ENCRYPTED=true
      - MONGO_SECRET=Ah+MqxeIjflxE8u+/wcqWA==
      - KERBEROS_ENABLED=false
      - OTEL_JAVAAGENT_ENABLED=false
      - JAVA_OPTS=-Djavax.net.ssl.trustStore=/tmp/config/cacerts -Djavax.net.ssl.trustStorePassword=kafka-ssl-password -Djavax.net.ssl.trustStoreType=PKCS12
      - SCRAM_ENABLED=true
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/hosts:/etc/hosts:ro
      - /data01/acceldata/config/security/cacerts:/tmp/config/cacerts
    ulimits: {}
    ports: []
    depends_on: []
    opts: {}
    restart: ""
    extra_hosts: []
    network_alias: []
label: Kafka Connector
```
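The "trustAnchors parameter must be non-empty" error almost always means the JVM loaded a truststore file that is missing, empty, or not a valid keystore at the configured path. Before restarting the connector, a quick pre-flight check can rule this out. The script below is a hypothetical helper, not part of Pulse; the path it checks is the host side of the volume mount shown above, and the single-byte 0x30 check relies on the fact that PKCS12 truststores are DER-encoded and always begin with an ASN.1 SEQUENCE tag (a JKS truststore starts with different magic bytes and would need a different check):

```python
import os


def check_truststore(path: str) -> int:
    """Sanity-check a PKCS12 truststore file before the JVM tries to load it.

    Returns the file size in bytes. Raises if the file is missing, empty,
    or does not start with the ASN.1 SEQUENCE tag (0x30) that every
    DER-encoded PKCS12 file begins with.
    """
    if not os.path.isfile(path):
        raise FileNotFoundError(f"truststore not found: {path}")
    size = os.path.getsize(path)
    if size == 0:
        raise ValueError(f"truststore is empty: {path}")
    with open(path, "rb") as f:
        if f.read(1) != b"\x30":
            raise ValueError(f"{path} does not look like a DER/PKCS12 keystore")
    return size
```

A typical invocation on the Pulse node would be `check_truststore("/data01/acceldata/config/security/cacerts")`, matching the host path in the volume mount; if it raises, fix the certificate file before pushing the configuration again. Note that this only verifies the file's shape, not that it contains the broker's CA certificate.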