CDP Deployment for Single KDC
This document provides a step-by-step process to deploy a single Pulse instance for Cloudera clusters with a single KDC.
Prerequisites
Keep the following information handy:
- CM URL (https://<Alias/FQDN of the CM URL>:<CM Port>)
- CM Username
- CM Password
- Spark History HDFS path & Spark3 History HDFS path
- Kafka Version
- HBase Version
- Hive Version
- Hive Metastore DB Connection URL
- Hive Metastore Database Name
- Hive Metastore DB Username
- Hive Metastore DB Password
- Oozie DB Name
- Oozie DB URL
- Oozie DB Username
- Oozie DB Password
- Kerberos Keytab
- krb5.conf file
- Principal
- Kerberos Username
- cacerts/jssecacerts
- YARN Scheduler Type
- Kafka Interbroker protocol
- Certificate File: cert.crt
- Certificate Key: cert.key
- CA Certificate: ca.crt (optional)
- Decide whether to keep the HTTP port (default: 4000) open or not
- Decide which port to use for HTTPS (default: 443)
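Optionally, you can sanity-check a few of these items from the node that will host Pulse before starting. The following is a minimal sketch, not part of the official procedure; the URL, keytab path, and principal are placeholders for the values gathered above.

# Hypothetical pre-flight checks; adjust placeholders to your environment.
CM_URL="https://<Alias/FQDN of the CM URL>:<CM Port>"
KEYTAB="/path/to/your.keytab"
PRINCIPAL="<principal>"

# Cloudera Manager should answer over HTTPS (-k tolerates self-signed certificates).
curl -sk -o /dev/null -w "CM HTTP status: %{http_code}\n" "$CM_URL"

# The keytab should list the principal and allow a ticket to be obtained.
klist -kt "$KEYTAB"
kinit -kt "$KEYTAB" "$PRINCIPAL" && echo "kinit OK"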
Uninstallation
- To uninstall agents, follow the Cloudera Parcel Agent Uninstall document.
- You must also remove the Pulse JARs and the Hive and Tez configuration.
- Acceldata will then run the following commands to back up and uninstall the existing Pulse.
a. Create a backup directory.

mkdir -p /data01/backup

b. For the backup, copy the whole config and work directories.

cp -R $AcceloHome/config /data01/backup/
cp -R $AcceloHome/work /data01/backup/
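Optionally, verify that the copies are complete before uninstalling; a minimal sketch with diff, assuming the backup paths above:

# Recursively compare originals with the backup copies (no diff output means they match).
diff -rq $AcceloHome/config /data01/backup/config && echo "config backup OK"
diff -rq $AcceloHome/work /data01/backup/work && echo "work backup OK"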
c. Uninstall the existing Pulse setup by running the following command:

accelo uninstall local

OUTPUT

[root@nifihost1:data01 (ad-default)]$ accelo uninstall local
You're about to uninstall the local AccelData setup. This will also DELETE all persistent data from the current node. However, NONE of the remote nodes will be affected. Please confirm your action [y/n]: : y
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
Uninstalling the AccelData components from local machine ...

d. Log out of the terminal session.
Download the Binaries and Docker Images and Load Them
- Download the JARs, hystaller, accelo binaries, and Docker images from the download links provided by Acceldata.
- Create the following directory for the Docker images and JARs:

mkdir -p /data01/images

- Copy the binaries and tar files into the /data01/images folder.

cp </path/to/binaries/tar> /data01/images

- Change the directory.

cd /data01/images

- Extract the single tar file.
tar xvf <name_of_tar_file>.tar

OUTPUT

[root@nifihost1 images]# tar xvf pulse-333-beta.tar
./ad-alerts.tgz
./ad-connectors.tgz
./ad-dashplots.tgz
./ad-database.tgz
./ad-deployer.tgz
./ad-director.tgz
./ad-elastic.tgz
./ad-events.tgz
./ad-fsanalyticsv2-connector.tgz
./ad-gauntlet.tgz
./ad-graphql.tgz
./ad-hydra.tgz
./ad-impala-connector.tgz
./ad-kafka-0-10-2-connector.tgz
./ad-kafka-connector.tgz
./ad-ldap.tgz
./ad-logsearch-curator.tgz
./ad-logstash.tgz
./ad-notifications.tgz
./ad-oozie-connector.tgz
./ad-pg.tgz
./ad-proxy.tgz
./ad-pulsemon-ui.tgz
./ad-recom.tgz
./ad-sparkstats.tgz
./ad-sql-analyser.tgz
./ad-streaming.tgz
./ad-vminsert.tgz
./ad-vmselect.tgz
./ad-vmstorage.tgz
./accelo.linux
./admon
./hystaller

- Load the Docker images by running the following command:

ls -1 *.tgz | xargs --no-run-if-empty -L 1 docker load -i

- Check that all the images are loaded into the server.

docker images | grep 3.3.3
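As an optional sanity check (a sketch, not an Accelo command), compare the number of image archives with the number of images loaded for the release tag:

# Each extracted .tgz should have produced one loaded image for tag 3.3.3.
expected=$(ls -1 /data01/images/*.tgz | wc -l)
loaded=$(docker images | grep -c 3.3.3)
echo "archives: $expected, loaded images: $loaded"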
Config Cluster
- Validate all the hosts files.
- Create the acceldata directory by running the following commands:

cd /data01/
mkdir -p acceldata

- Copy the Spark hosts and Zookeeper hosts files into the acceldata directory by running the following command:

cp </path/to/hosts_files> /data01/acceldata

- Place the accelo binary in the /data01/acceldata directory.

cp </path/to/accelo/binary> /data01/acceldata

- Rename the accelo.linux binary to accelo and make it executable.

mv /data01/acceldata/accelo.linux accelo
chmod +x /data01/acceldata/accelo

- Change the directory.

cd /data01/acceldata

- Run the following command to do accelo init:

./accelo init

- Enter the appropriate answers when prompted.
- Source the ad.sh file.

source /etc/profile.d/ad.sh

- Run the init command to provide the Pulse version.

accelo init

OUTPUT

[root@nifihost1:~ (ad-default)]$ accelo init
Enter the AccelData ImageTag: : 3.3.3
✓ Done, AccelData Init Successful.

Provide the correct Pulse version; in this case it is 3.3.3.
- Now run the accelo info command to get the initial info.

accelo info

OUTPUT

[root@nifihost1:~ (ad-default)]$ accelo info
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
[ACCELDATA ASCII-art banner]
Accelo CLI Version: 3.3.3-beta
Accelo CLI Build Hash: 8ba4727f11e5b3f3902547585a37611b6ec74e7c
Accelo CLI Build ID: 1700746329
Accelo CLI Builder ID: ZEdjMmxrYUdGdWRGOWhZMk5sYkdSaEVLCg==
Accelo CLI Git Branch Hash: TXdLaTlCVDFBdE56STNvPQo=
AcceloHome: /data01/acceldata
AcceloStack: ad-default
AccelData Registry: 191579300362.dkr.ecr.us-east-1.amazonaws.com/acceldata
AccelData ImageTag: 3.3.3-beta
Active Cluster Name: NotFound
AcceloConfig Mongo DB Retention days: 15
AcceloConfig Mongo DB HDFS Reports Retention days: 15
AccelConfig TSDB Retention days: 31d
Number of AccelData stacks found in this node: 0
- Run the config cluster command to configure the cluster in Pulse.

accelo config cluster

- Provide appropriate answers when prompted.
[root@pulsecdp01:acceldata (ad-default)]$ accelo config cluster
INFO: Configuring the cluster ...
INFO: Using default API Version v10 for CM API
Is the 'Database Service' up and running? [y/n]: : n
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
✔ Cloudera
Enter Your Cluster's Display Name: : cdp1
Enter Cloudera URL (with http/https): : https://cdpssl01.acceldata.dvl:7183
Enter Cloudera Username: : admin
IMPORTANT: This password will be securely encrypted and stored in this machine.
Enter Cloudera User Password: : *****
Enter the cluster name to use (MUST be all lowercase & unique): : cdp1
ERROR: stat /data01/acceldata/.activecluster: no such file or directory
INFO: Creating Post dirs.
✔ Cluster1
INFO: Using lower case for CDP Service name API
Enter the installed Kafka version (ex: 0.10.2): : 0.11.0
Enter the installed HBase service version (ex: 0.9.4): : 0.9.4
Enter the installed Hive service version (ex: 2.0.0): : 2.0.0
✓ Found Kerberos Realm: ADSRE.COM
Enter the Spark History HDFS path: : /user/spark/applicationHistory
Oozie DB URL: : jdbc:postgresql://cdpssl01.acceldata.dvl:7432/oozie_oozie_server
Enter the Oozie DB Username: : oozie_oozie_server
Enter the Oozie DB Password: : **********
Enter the Oozie DB JODA Timezone (Example: Asia/Kolkata): : Asia/Kolkata
Enter the hive metastore Database Name : : hive
Hive Metastore PostgreSQL DB Connection URL: : jdbc:postgresql://cdpssl01.acceldata.dvl:7432/hive
Enter the hive metastore DB Username : : hive
Enter the hive metastore DB Password : : **********
INFO: core-site.xml file has been updated
INFO: hdfs-site.xml file has been updated
---------------------------Discovered configurations----------------------------------------
✓ Cluster Type: CDH
✓ CDH Version: 7.1.7
✓ Discovered Cluster Name: cdp1
✓ Discovered Services: ✓ PULSEHYDRAAGENT ✓ SOLR ✓ SPARK_ON_YARN ✓ KAFKA ✓ LIVY ✓ HUE ✓ HIVE_ON_TEZ ✓ HBASE ✓ QUEUEMANAGER ✓ RANGER ✓ IMPALA ✓ ATLAS ✓ ZOOKEEPER ✓ OOZIE ✓ HIVE ✓ YARN ✓ HDFS
✓ Yarn RM URI: https://cdpssl02.acceldata.dvl:8090,https://cdpssl03.acceldata.dvl:8090
✓ MapReduce Job History URI: https://cdpssl02.acceldata.dvl:19890
✗ Yarn ATS is not enabled
✓ HDFS Namenode URI: swebhdfs://nameservice1
✓ Hive Metastore URI: thrift://cdpssl02.acceldata.dvl:9083
✗ Hive LLAP is not enabled
✓ Spark History Server URIs: https://cdpssl02.acceldata.dvl:18488
✓ Impala URI: http://cdpssl04.acceldata.dvl:25000,http://cdpssl05.acceldata.dvl:25000,http://cdpssl01.acceldata.dvl:25000
✓ Kafka Broker URI: https://cdpssl04.acceldata.dvl:9093,https://cdpssl05.acceldata.dvl:9093,https://cdpssl03.acceldata.dvl:9093
✓ Zookeeper Server URI: http://cdpssl01.acceldata.dvl:2181,http://cdpssl02.acceldata.dvl:2181,http://cdpssl03.acceldata.dvl:2181
Would you like to continue with the above configuration? [y/n]: : y
Is Kerberos enabled in this cluster? [y/n]: : y
✓ Found Kerberos Realm: ADSRE.COM
Enter your Kerberos keytab username (Must have required HDFS permissions): : hdfs
INFO: min-reports is set to default value 10
INFO: Purging old config files
✓ acceldata.conf file generated successfully.
Setting up Kerberos Config
Setting up kerberos..
Enter the principal: : hdfs/cdpssl03.acceldata.dvl@ADSRE.COM
Enter full path to the Keytab file (eg: /root/hdfs.keytab): : /data01/security/kerberos_cluster1.keytab
Enter the krb5Conf file path: : /data01/security/krb5_cluster1.conf
WARN: /data01/acceldata/config/users/passwd already being generated
✓ Done, Kerberos setup completed.
INFO: Creating post config files
INFO: Writing the .dist files
INFO: Clustername : cdp1
INFO: Performing PreCheck of Files
Is HTTPS Enabled in the Cluster on UI Endpoint? [Y/N]: : Y
Enter the Java Keystore cacerts File Path: : /data01/security/cacerts
Enter the Java Keystore jsseCaCerts File Path: : /data01/security/cacerts
INFO: Setting the active cluster
WARN: Cannot find the pulse.yaml file, getting the values from acceldata.conf file
WARN[1090] cannot find the spark on yarn thriftserver service ports
WARN[1090] Atlas Server not installed
WARN[1090] Hive Server Interactive not installed
Creating hydra inventory
Is the agent deployment Parcel Based? [Y/N] : : Y
pulsecdp01.acceldata.dvl is the hostname of the Pulse Server, Is this correct? [Y/N]: : y
? Select the components you would like to install: Impala, Metastore, Hdfs, HiveServer2, Zookeeper, Yarn, Hbase
Is Kerberos Enabled for Impala?: y
Enter the JMX Port for hive_metastore: : 8009
Enter the JMX Port for zookeeper_server: : 9010
Enter the Kafka Broker Port: : 9092
Do you want to enable Impala Agent: [Y/N]? : Y
Would you like to setup LogSearch? [y/n]: : y
? Select the logs for components that are installed/enabled in your target cluster: kafka_server, yarn_timelinereader, impala_catalogd, yarn_timelineserver, hue_runcpserver, hive_server, oozie_jpa, ranger_audit, yarn_resourcemanager, hdfs_audit, oozie_error, hbase_regionserver, hue_error, impala_impalad, hdfs_datanode, yarn_nodemanager, mapred_historyserver, hbase_master, kafka_state_change, hdfs_namenode, kafka_server_gc, kafka_controller, kafka_err, yarn_application, kafka_log_cleaner, hive_server_interactive, oozie_audit, zookeeper, oozie_tomcat, hue_migrate, hue_access, syslog, oozie_ops, oozie_server
✓ Generated the vars.yml file successfully
INFO: /data01/acceldata/work/cdp1/fsanalytics/update_fsimage.sh - ✓ Done
INFO: /data01/acceldata/work/cdp1/fsanalytics/kinit_fsimage.sh - ✓ Done
INFO: /data01/acceldata/work/cdp1/fsanalytics/update_fsimage_csv.sh - ✓ Done
Configuring notifications
✓ Generated the notifications.yml file successfully
Configuring notifications
✓ Generated the actions notifications.yml file successfully
INFO: Please run 'accelo deploy core' to deploy APM core using this configuration.

Copy the License
Place the license file provided by Acceldata in the work directory.
cp </path/to/license> /data01/acceldata/work

Deploy Core
Deploy the Pulse core components by running the following command:
accelo deploy core

OUTPUT
[root@nifihost1:acceldata (ad-default)]$ accelo deploy core
ERROR: Cannot connect to DB, Because: cannot connect to mongodb
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
Have you verified the acceldata config file at '/data01/acceldata/config/acceldata_spark341.conf' ? [y/n]: : y
✓ accelo.yml file found and parsed
✓ AcceloEvents - events.json file found and parsed
✓ acceldata conf file found and parsed
✓ .dist file found and parsed
✓ hydra_hosts.yml file found and parsed
✓ vars.yml file found and parsed
✓ alerts notification.yml file found and parsed
✓ actions notification.yml file found and parsed
✓ alerts default-endpoints.yml file found and parsed
✓ override.yml file found and parsed
✓ gauntlet_mongo_spark341.yml file found and parsed
✓ gauntlet_elastic.yml file found and parsed
INFO: No existing AccelData networks found. Current stack 'ad-default' is missing.
INFO: Trying to create a new network ..
INFO: If you're setting up AccelData for the first time give 'y' to the below.
Would you like to initiate DB with the config file '/data01/acceldata/config/acceldata'? [y/n]: : y
Creating group monitors [=========================>---] 83.33%
INFO: Pushing the hydra_hosts.yml to mongodb
Deployment Completed [========================>----] 81.82% 28s
✓ Done, Core services deployment completed.
Now, you can access the AccelData APM Server at the configured port of this node.
To deploy the AccelData addons, Run './accelo deploy addons'
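Optionally, confirm that the core containers came up; a minimal check assuming the Docker daemon on the Pulse node (the ad- name prefix matches the images loaded earlier):

# List running Pulse containers and their status; core services should show 'Up'.
docker ps --format '{{.Names}}\t{{.Status}}' | grep ad-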
Configure SSL For Connectors and Streaming

If you have TLS/SSL enforced for any of the Hadoop components in the target cluster, copy the cacerts and jsseCaCerts certificates to the Pulse node and enter their paths when the Accelo CLI asks the following questions.
- Select Y if SSL/TLS is enabled.

Is HTTPS Enabled in the Cluster on UI Endpoint? [Y/N]: y

- Enter the certificate paths.

Enter the Java Keystore cacerts File Path: /path/to/cert
Enter the Java Keystore jsseCaCerts File Path: /path/to/jsseCaCert

Only the following services establish connections to the corresponding Hadoop components of the cluster via the HTTPS URI:

- ad-connectors
- ad-sparkstats
- ad-streaming
- ad-kafka-connector
- ad-kafka-0-10-2-connector
- ad-fsanalyticsv2-connector
For the Kafka connectors, first verify the version of Kafka running in your cluster, and then generate the configurations accordingly.
Ensure that the permissions of these files are set to 0655, i.e., readable by all users.

chmod 0655 config/security/*

It is not obligatory to have both files available for a target cluster; sometimes only one is accessible. In that case, simply use the available file and disregard the other.
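For example, the files can be staged as follows; this is a sketch assuming the keystores were placed under /data01/security, as in the transcript above:

# Stage the keystores where the volume mounts below expect them, then open permissions.
mkdir -p $AcceloHome/config/security
cp /data01/security/cacerts $AcceloHome/config/security/
cp /data01/security/jssecacerts $AcceloHome/config/security/   # if available
chmod 0655 $AcceloHome/config/security/*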
AD-CONNECTORS & AD-SPARKSTATS
- Generate the ad-core-connectors configuration file if not present:
accelo admin makeconfig ad-core-connectors

- Edit the file at <$AcceloHome>/config/docker/addons/ad-core-connectors.yml and add the following lines under the volumes section of both the ad-connectors and ad-sparkstats service blocks.

./config/security/cacerts:/usr/local/openjdk-8/lib/security/cacerts
./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/jssecacerts

- If you only have the jssecacerts file available and not the cacerts file, you can mount the jssecacerts file as the cacerts file inside the container, as demonstrated below:

./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/cacerts
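After the edit, the relevant part of the ad-connectors block might look like the following sketch. This is illustrative only; your generated file will contain more fields, and only the last two mount lines are the additions from this step:

ad-connectors:
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - ./config/security/cacerts:/usr/local/openjdk-8/lib/security/cacerts
    - ./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/jssecacerts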
AD-STREAMING
- Generate the ad-core configuration file if not present:
accelo admin makeconfig ad-core

- Edit the file at <$AcceloHome>/config/docker/ad-core.yml and add the following lines under the volumes section of the ad-streaming service block.

./config/security/cacerts:/usr/local/openjdk-8/lib/security/cacerts
./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/jssecacerts

- If you only have the jssecacerts file available and not the cacerts file, you can mount the jssecacerts file as the cacerts file inside the container, as demonstrated below:

./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/cacerts

AD-FSANALYTICSV2-CONNECTOR
- Generate the ad-fsanalyticsv2-connector configuration file if not present:
accelo admin makeconfig ad-fsanalyticsv2-connector

- Edit the file at <$AcceloHome>/config/docker/addons/ad-fsanalyticsv2-connector.yml and add the following lines under the volumes section of ad-fsanalyticsv2-connector.

./config/security/cacerts:/usr/local/openjdk-8/lib/security/cacerts
./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/jssecacerts

- If you only have the jssecacerts file available and not the cacerts file, you can mount the jssecacerts file as the cacerts file inside the container, as demonstrated below:

./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/cacerts

AD-KAFKA-CONNECTOR
- Generate the ad-kafka-connector configuration file if not present:

accelo admin makeconfig ad-kafka-connector

- Edit the file at <$AcceloHome>/config/docker/addons/ad-kafka-connector.yml and add the following lines under the volumes section of ad-kafka-connector.

./config/security/cacerts:/usr/local/openjdk-8/lib/security/cacerts
./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/jssecacerts

- If you only have the jssecacerts file available and not the cacerts file, you can mount the jssecacerts file as the cacerts file inside the container, as demonstrated below:

./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/cacerts

AD-KAFKA-0-10-2-CONNECTOR
- Generate the ad-kafka-0-10-2-connector configuration file if not present:

accelo admin makeconfig ad-kafka-0-10-2-connector

- Edit the file at <$AcceloHome>/config/docker/addons/ad-kafka-0-10-2-connector.yml and add the following lines under the volumes section of ad-kafka-0-10-2-connector.

./config/security/cacerts:/usr/local/openjdk-8/lib/security/cacerts
./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/jssecacerts

- If you only have the jssecacerts file available and not the cacerts file, you can mount the jssecacerts file as the cacerts file inside the container, as demonstrated below:

./config/security/jssecacerts:/usr/local/openjdk-8/lib/security/cacerts

Deploy Addons
Run the following command to deploy the Pulse addons, and then select the components needed for your cluster:
accelo deploy addons

OUTPUT

[root@nifihost1:acceldata (ad-default)]$ accelo deploy addons
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
INFO: Active Cluster: spark341
? Select the components you would like to install: Alerts (Agents MUST be configured), Core Connectors, Dashplot, Director (Agents MUST be configured), HYDRA, LogSearch, Notifications
Starting the deployment ..
Completed [=============================] 137.50% 29s
✓ Done, Addons deployment completed.

Configure Alerts Notifications
- For setting the active cluster, run the following command:
accelo set

- Configure the alerts notifications.

accelo config alerts notifications

OUTPUT

[root@nifihost1:acceldata (ad-default)]$ accelo config alerts notifications
Enter the JODA Timezone value (Example: Asia/Jakarta): : Asia/Kolkata
? Select the metric groups you would like to enable: druid, nifi, ntpd, anomaly, chrony, customApp
? Select the notifications you would like to enable: email
INFO: Configuring Email Notifications:
Enter Email DefaultToEmailIds (comma separated list): :
Enter Email DefaultSnoozeIntervalInSecs: : 0
Enter Email MaxEmailThreshold: : 1
✓ Done, Alerts Notifications Configuration file generated
✓ Done, Alerts Notifications pushed to Pulse DB

- Set cluster2 as the active cluster.
accelo set

- Configure the alerts for the second cluster.

[root@nifihost1:acceldata (ad-default)]$ accelo config alerts notifications
Enter the JODA Timezone value (Example: Asia/Jakarta): : Asia/Kolkata
? Select the metric groups you would like to enable: druid, nifi, ntpd, anomaly, chrony, customApp
? Select the notifications you would like to enable: email
INFO: Configuring Email Notifications:
Enter Email DefaultToEmailIds (comma separated list): :
Enter Email DefaultSnoozeIntervalInSecs: : 0
Enter Email MaxEmailThreshold: : 1
✓ Done, Alerts Notifications Configuration file generated
✓ Done, Alerts Notifications pushed to Pulse DB

- Set cluster3 as the active cluster.
accelo set

- Configure the alerts for the third cluster.

[root@nifihost1:acceldata (ad-default)]$ accelo config alerts notifications
Enter the JODA Timezone value (Example: Asia/Jakarta): : Asia/Kolkata
? Select the metric groups you would like to enable: druid, nifi, ntpd, anomaly, chrony, customApp
? Select the notifications you would like to enable: email
INFO: Configuring Email Notifications:
Enter Email DefaultSnoozeIntervalInSecs: : 0
Enter Email MaxEmailThreshold: : 1
✓ Done, Alerts Notifications Configuration file generated
✓ Done, Alerts Notifications pushed to Pulse DB

- Restart the alerts notifications.
accelo restart ad-alerts

OUTPUT

[root@nifihost1:spark341 (ad-default)]$ accelo restart ad-alerts
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
You're about to restart AccelData services. This will restart all or any specified service. However, any persistent data will be left untouched. Please confirm your action [y/n]: : y
Completed [=============================] 100.00% 1s
Restart ad-alerts completed ✓

Database Push Configuration
Run the following command to push the configuration to the database:

accelo admin database push-config -a

Configure Gauntlet
Updating the Gauntlet Crontab Duration
- Check whether the ad-core.yml file is present by running the following command:

ls -al $AcceloHome/config/docker/ad-core.yml

- If the file is not present, generate it by running the following command:

accelo admin makeconfig ad-core

- Edit the ad-core.yml file.

a. Open the file.

vi $AcceloHome/config/docker/ad-core.yml

b. Update the CRON_TAB_DURATION env variable in the ad-gauntlet section.

CRON_TAB_DURATION=0 0 */2 * *

This makes Gauntlet run every two days at midnight.
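For reference, CRON_TAB_DURATION uses standard five-field cron syntax, so other schedules can be derived the same way; for example:

# minute hour day-of-month month day-of-week
# 0 0 */2 * *   -> 00:00 on every 2nd day of the month
# 30 1 * * *    -> 01:30 every day
# 0 */6 * * *   -> every 6 hours, on the hour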
c. The updated file will look something like this:

ad-gauntlet:
  image: ad-gauntlet
  container_name: ad-gauntlet
  environment:
    - MONGO_URI=ZN4v8cuUTXYvdnDJIDp+R8Z+ZsVXXjv8zDOvh8UwQXosC8vfVkGYGWGPNnX64ZVSp9yHgErQknPBAfYZ9cOG1A==
    - MONGO_ENCRYPTED=true
    - ELASTIC_ADDRESSES=http://ad-elastic:9200
    - DRY_RUN_ENABLE=true
    - CRON_TAB_DURATION=0 0 */2 * *
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /root/acceldata/config/logsearch/gauntlet_elastic.yml:/gauntlet/config/config.yml
    - /root/acceldata/logs/logsearch/:/gauntlet/logs/
  ulimits: {}
  ports: []
  depends_on: []
  opts: {}
  restart: ""
  extra_hosts: []
  network_alias: []

d. Save the file.
- Restart the Gauntlet service by running the following command:

accelo restart ad-gauntlet

Updating the Gauntlet Dry Run Mode
- Check whether the ad-core.yml file is present by running the following command:

ls -al $AcceloHome/config/docker/ad-core.yml

- If the file is not present, generate it by running the following command:

accelo admin makeconfig ad-core

- Edit the ad-core.yml file.

a. Open the file.

vi $AcceloHome/config/docker/ad-core.yml

b. Update the DRY_RUN_ENABLE env variable in the ad-gauntlet section.

DRY_RUN_ENABLE=false

This makes Gauntlet delete the older Elastic indices and MongoDB data.
c. The updated file will look something like this:

ad-gauntlet:
  image: ad-gauntlet
  container_name: ad-gauntlet
  environment:
    - MONGO_URI=ZN4v8cuUTXYvdnDJIDp+R8Z+ZsVXXjv8zDOvh8UwQXosC8vfVkGYGWGPNnX64ZVSp9yHgErQknPBAfYZ9cOG1A==
    - MONGO_ENCRYPTED=true
    - ELASTIC_ADDRESSES=http://ad-elastic:9200
    - DRY_RUN_ENABLE=false
    - CRON_TAB_DURATION=0 0 */2 * *
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /root/acceldata/config/logsearch/gauntlet_elastic.yml:/gauntlet/config/config.yml
    - /root/acceldata/logs/logsearch/:/gauntlet/logs/
  ulimits: {}
  ports: []
  depends_on: []
  opts: {}
  restart: ""
  extra_hosts: []
  network_alias: []

d. Save the file.
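After saving, you can quickly confirm the flag changed with a simple grep (not an Accelo command):

# Should now show DRY_RUN_ENABLE=false in the ad-gauntlet section.
grep -n 'DRY_RUN_ENABLE' $AcceloHome/config/docker/ad-core.yml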
- Restart the Gauntlet service by running the following command:

accelo restart ad-gauntlet

Configuring Gauntlet for Multi Node and Multi Cluster Deployment
- Run the following command to generate the gauntlet config files:
accelo admin database push-config -s -a

- Change the directory to config/gauntlet/.

cd $AcceloHome/config/gauntlet

- Check that the files are present for all the clusters.

[root@cdp5007:gauntlet (ad-default)]$ accelo admin database push-config -a -s
Is the 'Database Service' up and running? [y/n]: : y
INFO: Working on cluster: cl1
Creating group monitors [=========================>---] 83.33%
INFO: Pushing the hydra_hosts.yml to mongodb
INFO: Pushing the LDAP configuration to the mongo DB
Done [==========>------------------] 33.33% 0s
Push completed successfully!
INFO: Working on cluster: cl2
Creating group monitors [=========================>---] 83.33%
INFO: Pushing the hydra_hosts.yml to mongodb
INFO: Pushing the LDAP configuration to the mongo DB
Done [==========>------------------] 33.33% 0s
Push completed successfully!
[root@cdp5007:gauntlet (ad-default)]$ ls -al
total 28
drwxr-xr-x.  2 root root  130 Nov 24 09:58 .
drwxr--r--. 14 root root 4096 Nov 24 09:56 ..
-rw-r--r--.  1 root root  866 Nov 24 09:56 gauntlet_elastic_cl1.yml
-rw-r--r--.  1 root root  866 Nov 24 09:56 gauntlet_elastic_cl2.yml
-rw-r--r--.  1 root root 6404 Nov 24 09:58 gauntlet_mongo_cl1.yml
-rw-r--r--.  1 root root 6404 Nov 24 09:58 gauntlet_mongo_cl2.yml

- Modify the gauntlet_elastic_<clustername>.yml file.
vi gauntlet_elastic_<clustername>.yml

- Edit the elastic address in the file for the multi node setup.

version: 1
elastic_servers:
  - version: v8
    address: "http://<Elastic Server Hostname>:<Elastic Server Port>"
    basic_auth: true
    username: "pulse"
    #EncryptedPassword
    password: "pPBrVKaoB0QsmCJZNZyYAw=="
    enable_tls: false
    client_certificate_path: ""
    client_key_path: ""
    client_ca_cert: ""

- Modify the elastic address for both clusters.
- Push the config to the database.

accelo admin database push-config -a

- Restart the Gauntlet service.

accelo restart ad-gauntlet

Updating MongoDB Cleanup and Compaction Frequency in Hours
By default, when dry run is disabled, MongoDB cleanup and compaction run once a day. To configure the frequency, follow the steps below:
- Run the following command:
accelo config retention

- Answer the prompts. If you are unsure how many days of data to retain, proceed with the default values.

✔ How many days of data would you like to retain at Mongo DB ?: 15
✔ How many days of data would you like to retain at Mongo DB for HDFS reports ?: 15
✔ How many days of data would you like to retain at TSDB ?: 31

- When the following prompt comes up, specify the hours of the day during which MongoDB cleanup and compaction should run. The value must be a comma-separated list of hours in 24-hour notation.

✔ How often should Mongo DB clean up & compaction run, provide a comma separated string of hours (valid values are [0,23] (Ex. 8,12,15,18)?: 0,6,12,18

- Run the following command. The next time Gauntlet runs, MongoDB cleanup and compaction will run at the specified hours, once per hour.

accelo admin database push-config

Enabling (TLS) HTTPS for Pulse Web UI Configuration Using ad-proxy
Deployment and Configuration
- Copy the cert.crt, cert.key, and ca.crt (optional) files to the $AcceloHome/config/proxy/certs location.
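A minimal sketch of this step, assuming the certificate files are in your current directory:

# Create the certs directory and stage the TLS files that ad-proxy will mount.
mkdir -p $AcceloHome/config/proxy/certs
cp cert.crt cert.key $AcceloHome/config/proxy/certs/
cp ca.crt $AcceloHome/config/proxy/certs/   # optional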
- Check whether the ad-core.yml file is present.

ls -al $AcceloHome/config/docker/ad-core.yml

- If the ad-core.yml file is not present, generate it.

accelo admin makeconfig ad-core

OUTPUT
[root@hostname:addons (ad-default)]$ accelo admin makeconfig ad-core
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
✓ Done, Configuration file generated
IMPORTANT: Please edit/verify the file '/data01/acceldata/config/docker/ad-core.yml'.
If the stack is already up and running, use './accelo admin recreate' to recreate the whole environment with the new configuration.

- Modify the ad-core.yml file.
a. Open the ad-core.yml file.
vi $AcceloHome/config/docker/ad-core.yml

b. Remove the ports: field in the ad-graphql section of ad-core.yml.

ports:
  - 4000:4000

c. The resulting ad-graphql section will look like this:

ad-graphql:
  image: ad-graphql
  container_name: ""
  environment:
    - MONGO_URI=ZN4v8cuUTXYvdnDJIDp+R8Z+ZsVXXjv8zDOvh8UwQXosC8vfVkGYGWGPNnX64ZVSp9yHgErQknPBAfYZ9cOG1A==
    - MONGO_ENCRYPTED=true
    - MONGO_SECRET=Ah+MqxeIjflxE8u+/wcqWA==
    - UI_PORT=4000
    - LDAP_HOST=ad-ldap
    - LDAP_PORT=19020
    - SSL_ENFORCED=false
    - SSL_ENABLED=false
    - SSL_KEYDIR=/etc/acceldata/ssl/
    - SSL_KEYFILE=ssl.key
    - SSL_CERTDIR=/etc/acceldata/ssl/
    - SSL_CERTFILE=ssl.crt
    - SSL_PASSPHRASE=""
    - DS_HOST=ad-query-estimation
    - DS_PORT=8181
    - 'FEATURE_FLAGS={ "ui_regex": { "regex": "ip-([^.]+)", "index": 1 }, "rename_nav_labels":{}, "timezone": "", "experimental": true, "themes": false, "hive_const":{ "HIVE_QUERY_COST_ENABLED": false, "HIVE_MEMORY_GBHOUR_COST": 0, "HIVE_VCORE_HOUR_COST": 0 }, "spark_const": { "SPARK_QUERY_COST_ENABLED": false, "SPARK_MEMORY_GBHOUR_COST": 0, "SPARK_VCORE_HOUR_COST": 0 }, "queryRecommendations": false, "hostIsTrialORLocalhost": false, "data_temp_string": "" }'
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /etc/hosts:/etc/hosts:ro
    - /data01/acceldata/work/license:/etc/acceldata/license:ro
  ulimits: {}
  depends_on:
    - ad-db
  opts: {}
  restart: ""
  extra_hosts: []
  network_alias: []

d. Save the file.
- Restart the ad-graphql container.

accelo restart ad-graphql

- Check that the port is no longer exposed to the host.

docker ps

OUTPUT

ea4eb6fd540f   191579300362.dkr.ecr.us-east-1.amazonaws.com/acceldata/ad-graphql:3.2.1   "docker-entrypoint.s…"   9 minutes ago   Up 9 minutes   4000/tcp   ad-graphql_default

- Check whether there are any errors in the ad-graphql container.

docker logs -f ad-graphql_default

- To deploy the ad-proxy addon, run the following command, select Proxy from the list, and press Enter.

accelo deploy addons

- Now you can access the Pulse UI at https://<pulse-server-hostname>. By default, port 443 is used.
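You can verify the endpoint from the command line as well; a quick check assuming the default port and a certificate that may not be trusted locally:

# -k skips certificate verification; -I fetches response headers only.
curl -kI https://<pulse-server-hostname>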
Configuration
If you want to change the SSL port to another port, follow the steps below:
- Check whether the ad-proxy.yml file is present.

ls -altrh $AcceloHome/config/docker/addons/ad-proxy.yml

- Generate the ad-proxy.yml file if it is not present.

accelo admin makeconfig ad-proxy

OUTPUT
[root@hostname:addons (ad-default)]$ accelo admin makeconfig ad-proxy
WARN: Gauntlet is running in dry run mode. Disable this to delete indices from elastic and purge data from mongo DB
✓ Done, Configuration file generated
IMPORTANT: Please edit/verify the file '/data01/acceldata/config/docker/addons/ad-proxy.yml'.
If the addon is already up and running, use './accelo deploy addons' to remove and recreate the addon service.

- Modify the ad-proxy.yml file.

a. Open the ad-proxy.yml file.

vi $AcceloHome/config/docker/addons/ad-proxy.yml

b. Change the host port in the ports list to the desired port.

ports:
  - <DESIRED_HOST_PORT>:443

The final file will look like this if the host port is 6003:

version: "2"
services:
  ad-proxy:
    image: ad-proxy
    container_name: ""
    environment: []
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /data01/acceldata/config/proxy/traefik.toml:/etc/traefik/traefik.toml
      - /data01/acceldata/config/proxy/config.toml:/etc/traefik/conf/config.toml
      - /data01/acceldata/config/proxy/certs:/etc/acceldata
    ulimits: {}
    ports:
      - 6003:443
    depends_on: []
    opts: {}
    restart: ""
    extra_hosts: []
    network_alias: []
label: Proxy

c. Save the file.
- Restart the ad-proxy container.

accelo restart ad-proxy

- Check whether there are any errors.

docker logs -f ad-proxy_default

- Now you can access the Pulse UI at https://<pulse-server-hostname>:6003.
Set Up LDAP for Pulse UI
- Check whether the ldap.conf file is present.

ls -al $AcceloHome/config/ldap/ldap.conf

- Run the configure command to generate the default ldap.conf if it is not already present.

accelo configure ldap

OUTPUT

There is no ldap config file available
Generating a new ldap config file
Please edit '$AcceloHome/config/ldap/ldap.conf' and rerun this command

- Edit the file at $AcceloHome/config/ldap/ldap.conf.

vi $AcceloHome/config/ldap/ldap.conf

Configure the file for the following properties:
- LDAP FQDN: the FQDN where the LDAP server is running.
  host = [FQDN]
- If port 389 is being used, set:
  insecureNoSSL = true
- SSL root CA certificate:
  rootCA = [CERTIFICATE_FILE_PATH]
- bindDN: the DN used for ldapsearch; it must be a member of the admin group.
- bindPW: the <encrypted-password-string> to store in the database.
- encryptedPassword = true enables the use of an encrypted password.
- baseDN used for the user search, for example: cn=users,cn=accounts,dc=acceldata,dc=io
- Filter used for the user search, for example: (objectClass=person)
- baseDN used for the group search, for example: cn=groups,cn=accounts,dc=acceldata,dc=io
- Object class used for the group search, for example: (objectClass=posixgroup)
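Putting these together, a hypothetical ldap.conf sketch is shown below. The key names for the user and group search sections are illustrative; the generated file is the authority on exact key names and layout.

# Hypothetical ldap.conf sketch; placeholders and search key names are illustrative.
host = ldap.acceldata.io
insecureNoSSL = false            # set to true only if plain port 389 is used
rootCA = /path/to/rootCA.crt
bindDN = uid=admins,cn=users,cn=accounts,dc=acceldata,dc=io
bindPW = <encrypted-password-string>
encryptedPassword = true
# user search (illustrative key names)
userBaseDN = cn=users,cn=accounts,dc=acceldata,dc=io
userFilter = (objectClass=person)
# group search (illustrative key names)
groupBaseDN = cn=groups,cn=accounts,dc=acceldata,dc=io
groupObjectClass = (objectClass=posixgroup)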
Here is the command to check whether the user has search entry access and group access in the LDAP directory:

ldapsearch -x -h <hostname> -p 389 -D "uid=admins,cn=users,dc=acceldata,dc=io" -W -b "cn=accounts,dc=acceldata,dc=io" "(&(objectClass=person)(uid=admins))"

If the file is already generated, the command asks for the LDAP credentials to validate the connectivity and configuration, as described in the steps below.
- Run the configure command.
accelo configure ldap

- It will ask for the LDAP user credentials.

Checking LDAP connection
Enter LDAP username: gs
Enter LDAP password: *******

- If everything went correctly, it shows the following confirmation message:

performing ldap search ou=users,dc=acceldata,dc=io sub (&(objectClass=inetOrgPerson)(uid=gs))
username "gs" mapped to entry cn=gs,ou=users,dc=acceldata,dc=io
✗ Do you want to use this configuration: y

- Press 'y' and then press 'Enter'.
OUTPUT
Ok, Updating login properties.
✓ Done, You can now login using LDAP.

- Push the LDAP config.

accelo admin database push-config -a

- Run the deploy addons command.

accelo deploy addons

- Select LDAP from the list shown and press Enter.

[ ] Job Runner
[ ] Kafka 0.10.2 Connector
[ ] Kafka Connector
> [x] LDAP
[ ] Log Reduce
[ ] LogSearch
[ ] Memsql Connector

OUTPUT

Starting the deployment ..
Completed [=============================] 100.00% 0s
✓ Done, Addons deployment completed.

- Run the restart command.
accelo restart ad-graphql

- Open the Pulse Web UI and create the default roles.
- Create an ops role with the necessary access permissions. Any users who log in via LDAP will automatically be assigned to this role.