Upgrade from v2.1.1 to v3.0.3
This document describes the steps to migrate from Pulse version 2.1.1 to version 3.0.3. You must perform the steps mentioned in this document on all your clusters.
Backup Steps
- Take a backup of Dashplots charts using the Export option.

- Take a backup of Alerts using the Export option.

Migration Steps
- Requires Pulse server downtime.
- Requires re-installation of the Pulse agents running on all the cluster nodes.
Please plan your migrations accordingly.
- (Optional) Execute the following steps only on the standalone nodes of a multi-node Pulse deployment.
a. Generate the encrypted string for the mongodb://accel:<MONGO_PASSWORD>@<PULSE_MASTER_HOST>:27017 mongo URI by executing the following command.
accelo admin encrypt
b. Add the following environment variables to the /etc/profile.d/ad.sh file.
MONGO_URI="<<Output of step 1.a>>" # You must replace the content in double quotes with the output of step 1.a.
MONGO_ENCRYPTED=true
PULSE_SA_NODE=true
Once you execute the above steps, the variables are set in the /etc/profile.d/ad.sh file.

c. Source the /etc/profile.d/ad.sh file by executing the following command.
source /etc/profile.d/ad.sh
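To confirm the variables are available in the current shell, you can optionally print them. This is only a quick sanity check, not part of the official procedure.
echo "$MONGO_URI"
echo "$MONGO_ENCRYPTED"
echo "$PULSE_SA_NODE"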
- Stop the ad-streaming and ad-connectors containers by executing the following commands.
docker stop ad-streaming_default
docker stop ad-connectors_default
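Optionally, you can confirm that both containers have stopped. If the following command prints nothing, they are no longer running.
docker ps | grep -E 'ad-streaming_default|ad-connectors_default'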
- Enter the Mongo container by executing the following command.
docker exec -it ad-db_default bash
- Log in to the Mongo shell by issuing the following command.
mongo mongodb://accel:<PASSWORD>@localhost:27017/admin
- Execute the following commands.
show databases;
use <db_name>;
- Rename the collection by executing the following command.
db.yarn_tez_queries.renameCollection("yarn_tez_queries_details")
You must get a response which says { "ok": 1 }.
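While still in the Mongo shell, you can optionally verify the rename with the following command; the output should include yarn_tez_queries_details and no longer include yarn_tez_queries.
db.getCollectionNames()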
- Exit the Mongo shell by executing the following command.
exit
- Ensure that you are still in the ad-db container bash shell. Use the following command to export the past 7 days of data with the required fields from the yarn_tez_queries_details collection. You can use any date-to-epoch converter for the <last 7 days epoch millisec> value; a sample command is shown after the expected response below.
mongoexport --username="<username>" --password="<password>" --host=localhost:27017 --authenticationDatabase=admin --db="<db_name>" --collection=yarn_tez_queries_details -f '__id,callerId,user,status,timeTaken,queue,appId,hiveAddress,dagId,uid,queue,counters,tablesUsed,startTime,endTime,llap' --query='{"startTime": {"$gte": <last 7 days epoch millisec>}}' --out=/tmp/tqq.json
You must get a response indicating the number of documents that were exported.
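As a reference for the <last 7 days epoch millisec> value, on a host with GNU date the epoch value in milliseconds for 7 days ago can be computed with a command such as the following.
date -d "7 days ago" +%s%3N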
- Using the following command, import the data file generated by the preceding command into the yarn_tez_queries collection.
mongoimport --username="<username>" --password="<password>" --host=localhost:27017 --authenticationDatabase=admin --db="<db_name>" --collection=yarn_tez_queries --file=/tmp/tqq.json
- Delete the /tmp/tqq.json file after executing the above step.
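For example, while still inside the ad-db container:
rm -f /tmp/tqq.json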
- Exit the ad-db container by executing the following command.
exit
- Execute the following command to complete the migration.
accelo admin database index-db
You must receive the following response.
Trying to create indices for the MongoDB database ..
INFO: Indices created successfully with the following output.
OUTPUT:
MongoDB shell version v4.2.19
connecting to: mongodb://localhost:27017/ad_hdp_qe?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5cb5bb0a-c399-49bb-8a0f-7945a4daccac") }
MongoDB server version: 4.2.19
- Download the new CLI with the 3.0.3 version.
- Execute the following migration command steps.
a. Execute the following command to set the active cluster.
accelo set
b. Execute the following CLI migrate command.
accelo migrate -v 3.0.0
c. Based on whether you want to migrate as a Root user or a non-root user, execute the commands from one of the following columns.
Non Root User | Root User |
---|---|
a. Disable all the Pulse Services by executing the following command. | a. If accelo CLI is going to be run as a root user, execute the following command: |
b. Change the ownership of all data directories to 1000:1000 by executing the following commands. | |
c. Execute the following migration command with the | |
d. Execute the following command to uninstall the Pulse Hydra agent from all the current active cluster nodes. | |
You must repeat the steps 14.a, 14.b, and 14.c for all the clusters configured on the Pulse server, one by one.
- Execute the following command to deploy the Pulse core components.
accelo deploy core
- Execute the following command to deploy the required addons.
accelo deploy addons
- Execute the following command to reconfigure all the clusters configured in the Pulse server. The reconfigure command updates the configurations for all the clusters.
accelo reconfig cluster -a
- Execute the following steps to deploy the hydra agents for all the clusters configured in the Pulse server.
a. Set the active cluster by executing the following command.
accelo set
b. Deploy the hydra agent for the current active cluster nodes.
accelo deploy hydra
c. Repeat steps 18.a and 18.b for each of the clusters configured in Pulse server, one by one.
- (Optional) Execute the following commands to deploy auto action playbooks, if you have the ad-director add-on component deployed.
accelo deploy playbooks
accelo restart ad-director
- Execute the following command to update the HDFS dashboard data.
accelo admin fsa load
New Dashplots Version and Generic Reporting Feature
- Splashboards and Dashplots are never overwritten automatically. To get the new version, either delete the existing dashboard or set the OVERWRITE_SPLASHBOARDS and OVERWRITE_DASHPLOTS environment variables to overwrite the existing dashboards with the newer versions.
- To access the most recent dashboard, delete the HDFS Analytics dashboard from the splashboard studio and then refresh the configuration.
- Navigate to ad-core.yml.
- In graphql, set the OVERWRITE_SPLASHBOARDS and OVERWRITE_DASHPLOTS environment variables to true (the default value is false); see the sketch below.
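The exact layout of ad-core.yml varies between deployments; the following is only a hypothetical sketch of how the two variables might appear under the graphql service's environment section. The service key name and structure are placeholders, not the actual file contents.
ad-graphql:
  environment:
    OVERWRITE_SPLASHBOARDS: "true"
    OVERWRITE_DASHPLOTS: "true"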
- Export all the dashplots that are not seeded by default to a file before performing the upgrade.
- Log in to the ad-pg_default Docker container with the following command after the upgrade to 3.0.3.
docker exec -ti ad-pg_default bash
- Copy and paste the following snippet as is, and press Enter to execute it.
psql -v ON_ERROR_STOP=1 --username "pulse" <<-EOSQL
\connect ad_management dashplot
BEGIN;
truncate table ad_management.dashplot_hierarchy cascade;
truncate table ad_management.dashplot cascade;
truncate table ad_management.dashplot_visualization cascade;
truncate table ad_management.dashplot_variables cascade;
INSERT INTO ad_management.dashplot_variables (stock_version,"name",definition,dashplot_id,dashplot_viz_id,"global") VALUES
(1,'appid','{"_id": "1", "name": "appid", "type": "text", "query": "", "shared": true, "options": [], "separator": "", "description": "AppID to be provided for user customization", "defaultValue": "app-20210922153251-0000", "selectionType": "", "stock_version": 1}',NULL,NULL,true),
(1,'appname','{"_id": "2", "name": "appname", "type": "text", "query": "", "shared": true, "options": [], "separator": "", "description": "", "defaultValue": "Databricks Shell", "selectionType": "", "stock_version": 1}',NULL,NULL,true),
(1,'FROM_DATE_EPOC','{"id": 0, "_id": "3", "name": "FROM_DATE_EPOC", "type": "date", "query": "", "global": true, "shared": false, "options": [], "separator": "", "dashplot_id": null, "description": "", "displayName": "", "defaultValue": "1645641000000", "selectionType": "", "stock_version": 1, "dashplot_viz_id": null}',NULL,NULL,true),
(1,'TO_DATE_EPOC','{"id": 0, "_id": "4", "name": "TO_DATE_EPOC", "type": "date", "query": "", "global": true, "shared": false, "options": [], "separator": "", "dashplot_id": null, "description": "", "displayName": "", "defaultValue": "1645727399000", "selectionType": "", "stock_version": 1, "dashplot_viz_id": null}',NULL,NULL,true),
(1,'FROM_DATE_SEC','{"_id": "5", "name": "FROM_DATE_SEC", "type": "date", "query": "", "shared": true, "options": [], "separator": "", "defaultValue": "1619807400", "selectionType": ""}',NULL,NULL,true),
(1,'TO_DATE_SEC','{"_id": "6", "name": "TO_DATE_SEC", "type": "date", "query": "", "shared": true, "options": [], "separator": "", "defaultValue": "1622477134", "selectionType": ""}',NULL,NULL,true),
(1,'dbcluster','{"_id": "7", "name": "dbcluster", "type": "text", "query": "", "shared": true, "options": [], "separator": "", "description": "", "defaultValue": "job-4108-run-808", "selectionType": "", "stock_version": 1}',NULL,NULL,true),
(1,'id','{"_id": "8", "name": "id", "type": "text", "query": "", "shared": true, "options": [], "separator": "", "defaultValue": "hive_20210906102708_dcd9df9d-91b8-421a-a70f-94beed03e749", "selectionType": ""}',NULL,NULL,true),
(1,'dagid','{"_id": "9", "name": "dagid", "type": "text", "query": "", "shared": true, "options": [], "separator": "", "description": "", "defaultValue": "dag_1631531795196_0009_1", "selectionType": "", "stock_version": 1}',NULL,NULL,true),
(1,'hivequeryid','{"_id": "10", "name": "hivequeryid", "type": "text", "query": "", "shared": true, "options": [], "separator": "", "defaultValue": "hive_20210912092701_cef246fa-cb3c-4130-aece-e6cac82751bd", "selectionType": ""}',NULL,NULL,true);
INSERT INTO ad_management.dashplot_variables (stock_version,"name",definition,dashplot_id,dashplot_viz_id,"global") VALUES
(1,'TENANT_NAME','{"_id": "11", "name": "TENANT_NAME", "type": "text", "query": "", "global": true, "shared": false, "options": [], "separator": "", "description": "", "defaultValue": "acceldata", "selectionType": "", "stock_version": 1}',NULL,NULL,true);
COMMIT;
EOSQL
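Optionally, while still inside the ad-pg_default container, you can verify that the seed variables were inserted. Assuming the same connection used in the snippet above works, the count should be 11 (ten rows from the first INSERT statement and one from the second).
psql --username "pulse" <<-EOSQL
\connect ad_management dashplot
SELECT count(*) FROM ad_management.dashplot_variables;
EOSQL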
- Go to the dashplot studio and import the zip file exported in step 1 of this section with the **< 3.0.3 dashboard** check box selected.
Troubleshooting
After the upgrade, if you encounter the following exception in the fsanalytics connector after executing the fsa load command, execute the following steps to troubleshoot the issue.
22-11-2022 09:59:40.341 [fsanalytics-connector-akka.actor.default-dispatcher-1206] INFO c.a.p.f.metastore.MetaStoreMap - Cleared meta-store.dat
[ERROR] [11/22/2022 09:59:40.342] [fsanalytics-connector-akka.actor.default-dispatcher-1206] [akka://fsanalytics-connector/user/$a] The file /etc/fsanalytics/hdp314/meta-store.dat the map is serialized from has unexpected length 0, probably corrupted. Data store size is 286023680
java.io.IOException: The file /etc/fsanalytics/hdp314/meta-store.dat the map is serialized from has unexpected length 0, probably corrupted. Data store size is 286023680
at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1800)
at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1640)
at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1563)
at com.acceldata.plugins.fsanalytics.metastore.MetaStoreMap.getMap(MetaStoreConnection.scala:71)
at com.acceldata.plugins.fsanalytics.metastore.MetaStoreMap.initialize(MetaStoreConnection.scala:42)
at com.acceldata.plugins.fsanalytics.metastore.MetaStoreConnection.<init>(MetaStoreConnection.scala:111)
at com.acceldata.plugins.fsanalytics.FsAnalyticsConnector.execute(FsAnalyticsConnector.scala:49)
at com.acceldata.plugins.fsanalytics.FsAnalyticsConnector.execute(FsAnalyticsConnector.scala:17)
at com.acceldata.connectors.core.SchedulerImpl.setUpOneShotSchedule(Scheduler.scala:48)
at com.acceldata.connectors.core.SchedulerImpl.schedule(Scheduler.scala:64)
at com.acceldata.connectors.core.DbListener$$anonfun$receive$1.$anonfun$applyOrElse$5(DbListener.scala:64)
at com.acceldata.connectors.core.DbListener$$anonfun$receive$1.$anonfun$applyOrElse$5$adapted(DbListener.scala:54)
at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:320)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:976)
at com.acceldata.connectors.core.DbListener$$anonfun$receive$1.applyOrElse(DbListener.scala:54)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at com.acceldata.connectors.core.DbListener.aroundReceive(DbListener.scala:26)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:614)
at akka.actor.ActorCell.invoke(ActorCell.scala:583)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Execute the following steps to resolve the above exception.
- Remove the ${ACCELOHOME}/data/fsanalytics/${ClusterName}/meta-store.dat file (a sample removal command is provided at the end of this section).
- Restart the ad-fsanalytics container using the following command.
accelo restart ad-fsanalyticsv2-connector
- Execute the following command to generate the meta store data again.
accelo admin fsa load
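For reference, the file removal in the first step above can be done with a command along these lines, where ACCELOHOME and ClusterName are set to the values for your deployment.
rm -f ${ACCELOHOME}/data/fsanalytics/${ClusterName}/meta-store.dat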