This page describes how to configure Pulse to monitor a standalone Kafka cluster. This configuration enables real-time observability and health monitoring of Kafka components.
## Configure the Pulse Core Component for a Standalone Kafka Cluster
Pulse supports only Kafka versions 1.0.0 and higher.
- Run the following command to get started:

```shell
accelo config cluster
```

The Acceldata CLI prompts you for information about your environment. The following table lists the questions asked by the CLI and guidelines to help you answer them.
| Questions asked by CLI | Guidelines for answering the questions |
|---|---|
| Is this current node an 'Edge Node' of the cluster? [y/n]: | Type y if the node on which you are installing Pulse is an Edge Node; otherwise, type n. |
| Which distribution do you use? | Select Standalone |
| Stand-alone configuration selected: | Displays the configuration selected. |
- The CLI asks a few more questions to generate the configuration file.
| Questions asked by CLI | Guidelines for answering the questions |
|---|---|
| Enter Your Cluster’s Name: | Enter the name of the cluster. |
| Enter Your Cluster’s Display Name: | Enter the display name for the cluster. |
| Select the components you would like to install a) MEMSQL b) KAFKA | Select the components that you want to install. Use the arrow keys to move, Space to select, and type to filter. |
| Is your Kafka version greater than '0.11.0'? [y/n] | Type y if the Kafka version is greater than 0.11.0; otherwise, type n. |
| Enter one of the Kafka bootstrap server's URI | Enter the Kafka bootstrap server URI. Example: host1.kf.com:6667 |
| What security do you use? a. None b. Plain/SSL c. Kerberos | Select your security type. |
| Do you use TLS ? a. No b. Yes | Select Yes if you use TLS; otherwise, select No. |
| Detected Kafka brokers: | Based on your responses, the CLI detects and displays the Kafka brokers. |
| Is the above information correct? [y/n]: | Type y to confirm that the detected Kafka brokers are correct; otherwise, type n. |
| Enter the Zookeeper Server URLs (comma separated with http/https & port): | Enter the ZooKeeper server URLs as a comma-separated list, each including the http/https scheme and port. |
| Enter Kafka's Zookeeper Chroot Name (enter '/' for root): | Enter the Kafka ZooKeeper chroot path. Enter '/' for the root. |
| Would you like to continue with the above configuration? [y/n]: | Type y to continue with the configuration; otherwise, type n. |
| Would you like to enable LDAP? [y/n]: | Type y to enable LDAP; otherwise, type n. |
| Is Kerberos enabled in this cluster? [y/n]: | Type y if Kerberos is enabled. You are then prompted for the Kerberos keytab username, which must have the required HDFS permissions. |
| Enter the cluster name to use (MUST be all lowercase & unique): | Enter the cluster name to be used. Ensure that the name is all lowercase and unique. |
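Two of the answers above have a required shape: the bootstrap server URI must be of the form host:port, and the cluster name must be all lowercase. The following is a minimal sketch of pre-checking those values before you type them in; the helper names and example values are illustrative assumptions, not part of the Pulse CLI.

```python
import re

def valid_bootstrap_uri(uri):
    """Check a Kafka bootstrap URI of the form host:port, e.g. host1.kf.com:6667."""
    m = re.fullmatch(r"([A-Za-z0-9.-]+):(\d{1,5})", uri)
    return bool(m) and 0 < int(m.group(2)) <= 65535

def valid_cluster_name(name):
    """The CLI requires the cluster name to be all lowercase (and unique)."""
    return bool(name) and name == name.lower()

print(valid_bootstrap_uri("host1.kf.com:6667"))  # True
print(valid_cluster_name("kafka-prod"))          # True
```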
You will see the following message:

```
INFO: Trying to generate host-roles-map file ... ✓ Success
INFO: Trying to generate alert-endpoints file ...
INFO: Trying to generate FSAnalytics scripts ...
WARN: Cannot find any HDFS installations from this cluster. Skipping the FS Analytics configuration.
INFO: Edit the config files 'ad-core-connectors.yaml' and 'ad-fsanalyticsv2-connector.yaml'
ERROR: Cannot find the required hadoop conf directory at the path '/etc/hadoop/conf'
IMPORTANT: Please make sure this directory exists at the path '/etc/hadoop/conf' and contains the files 'core-site.xml' and 'hdfs-site.xml'
INFO: Please run 'accelo deploy core' to deploy APM core using this configuration.
```

- You may be asked to answer the following questions if Kerberos is enabled.
| Questions asked by CLI | Guidelines for answering the questions |
|---|---|
| Would you like to continue configuring Kerberos? [y/n]: | Type y to continue configuring Kerberos; otherwise, type n. |
| Authentication realm | Add the authentication realm. Example: AZ.PULSE.COM |
| KDC address | Add the KDC address. Example: host1.kdc.com:88 |
| Kerberos principal | Add the Kerberos principal. Example: admin/admin@AZ.PULSE.COM |
| KDC admin server address | Add the KDC administration server address. Example: host1.kdcadmin.com:749 |
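The Kerberos answers above also follow conventional shapes: an uppercase realm, host:port addresses for the KDC and admin server, and a principal ending in @REALM. The sketch below validates those shapes before you answer the prompts; the function and its checks are illustrative assumptions, not part of Pulse.

```python
import re

def check_kerberos_answers(realm, kdc, principal, admin_server):
    """Flag common mistakes in the Kerberos values from the table above.
    Returns a list of problems; an empty list means the shapes look right."""
    problems = []
    if realm != realm.upper():
        problems.append("realm is conventionally uppercase, e.g. AZ.PULSE.COM")
    if not re.fullmatch(r"[^:]+:\d+", kdc):
        problems.append("KDC address should be host:port, e.g. host1.kdc.com:88")
    if not re.fullmatch(r"[^@]+@" + re.escape(realm), principal):
        problems.append("principal should end with @" + realm)
    if not re.fullmatch(r"[^:]+:\d+", admin_server):
        problems.append("admin server should be host:port, e.g. host1.kdcadmin.com:749")
    return problems

# Example values from the table above: no problems reported.
print(check_kerberos_answers("AZ.PULSE.COM", "host1.kdc.com:88",
                             "admin/admin@AZ.PULSE.COM", "host1.kdcadmin.com:749"))  # []
```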
## Configure Kafka when Security Is Disabled
If security is not enabled in your standalone Kafka or Kafka 3 cluster, update the acceldata.conf and ad-kafka-connector.yml configuration files as described below.
- Update the acceldata.conf file.
Update consumerSecurityProtocol and securityProtocol:
- From "SASL_PLAINTEXT"
- To "PLAINTEXT".
```
enableConsumerGroupCache = true
batchSize = 5
zk_secure = false
consumerSecurityProtocol = "PLAINTEXT"
filterConsumerGroups = ""
filterConsumerGroupsState = ""
securityProtocol = "PLAINTEXT"
SASLEnabled = "false"
TLSEnabled = "false"
```

- Create the ad-kafka-connector.yml file (if not present).
Run the following command to generate the configuration file:
```shell
accelo admin makeconfig ad-kafka-connector
```

- Update the ad-kafka-connector.yml file.
In the $AcceloHome/config/docker/addons/ad-kafka-connector.yml file, set KERBEROS_ENABLED to false.
```yaml
version: "2"
services:
  ad-kafka-connector:
    image: ad-kafka-connector
    container_name: ""
    environment:
      - MONGO_URI=<URI>
      - MONGO_ENCRYPTED=true
      - MONGO_SECRET=<Secret>
      - KERBEROS_ENABLED=false
      - OTEL_JAVAAGENT_ENABLED=false
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/hosts:/etc/hosts:ro
    ulimits: {}
    ports: []
    depends_on: []
    opts: {}
    restart: ""
    extra_hosts: []
    network_alias: []
label: Kafka Connector
```

## Checking the Configuration
Open the file located at /data01/acceldata/config/acceldata<CLUSTER_NAME>.conf to verify that all the configurations are correct.
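The verification above can be sketched as a small script that checks the values edited in this section. The tiny key = value parser and the inline samples are illustrative assumptions; the real acceldata.conf is HOCON and the connector file is YAML, so this is only a quick sanity check, not a full parser.

```python
def parse_flat_conf(text):
    """Tiny parser for the flat key = value lines shown in this section
    (illustrative only; Pulse's real acceldata.conf is HOCON)."""
    conf = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip().strip('"')
    return conf

# Samples mirroring the snippets above; in practice, read the real files.
ACCELDATA_CONF = '''
consumerSecurityProtocol = "PLAINTEXT"
securityProtocol = "PLAINTEXT"
SASLEnabled = "false"
TLSEnabled = "false"
'''

CONNECTOR_YML = '''
    environment:
      - KERBEROS_ENABLED=false
'''

conf = parse_flat_conf(ACCELDATA_CONF)
assert conf["securityProtocol"] == "PLAINTEXT"
assert conf["consumerSecurityProtocol"] == "PLAINTEXT"
assert conf["SASLEnabled"] == "false"
assert "KERBEROS_ENABLED=false" in CONNECTOR_YML
print("security-disabled configuration looks consistent")
```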