Controlling ODP Services Manually
Start and stop the ODP services in the precise order described below.
Starting ODP Services
Make sure to start the Hadoop services in the prescribed order.
About this task
- Ranger
- ZooKeeper
- HDFS
- Yarn
- HBase
- Hive Metastore
- HiveServer2
- Oozie
- Nifi
- Kafka
- Spark2
- Impala
- Knox
Procedure
- Start Ranger. Execute the following commands on the Ranger host machine:
a. Ranger Admin
/usr/odp/current/ranger-admin/ews/ranger-admin-start
b. Ranger Usersync
/usr/odp/current/ranger-usersync/ranger-usersync-start
c. Ranger KMS
/usr/odp/current/ranger-kms/ranger-kms-services.sh start
- Start ZooKeeper. Execute this command on the ZooKeeper host machine(s):
su - zookeeper -c "export ZOOCFGDIR=/usr/odp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/odp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/odp/current/zookeeper-server/bin/zkServer.sh start"
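On a multi-node ensemble the same command must run on every ZooKeeper host. A minimal sketch, assuming passwordless SSH from an admin node; the host names and the `zk_start_all` helper are assumptions, not part of ODP, and the `echo` would be replaced by a real `ssh` invocation on a live cluster:

```shell
# Hedged sketch: fan the ZooKeeper start command out to every ensemble host.
# Host names and the function name are assumptions, not ODP tooling.
zk_start_all() {
  # $@ = ZooKeeper host names; prints the per-host command (dry run)
  zk_cmd='export ZOOCFGDIR=/usr/odp/current/zookeeper-server/conf; export ZOOCFG=zoo.cfg; source /usr/odp/current/zookeeper-server/conf/zookeeper-env.sh; /usr/odp/current/zookeeper-server/bin/zkServer.sh start'
  for host in "$@"; do
    # Replace "echo" with: ssh "$host" su - zookeeper -c "$zk_cmd"
    echo "ssh $host 'su - zookeeper -c \"$zk_cmd\"'"
  done
}
```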
- Start HDFS
- If you are running NameNode HA (High Availability), start the JournalNodes by executing this command on each JournalNode host machine:
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh start journalnode"
- Execute this command on the NameNode host machine(s):
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
- If you are running NameNode HA, start the ZooKeeper Failover Controller (ZKFC) by executing the following command on all NameNode machines. The starting sequence of the ZKFCs determines which NameNode will become Active.
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start zkfc"
- If you are not running NameNode HA, execute the following command on the Secondary NameNode host machine. If you are running NameNode HA, the Standby NameNode takes on the role of the Secondary NameNode.
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start secondarynamenode"
- Execute these commands on all DataNodes:
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
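After the NameNode and DataNodes are up, `hdfs dfsadmin -report` shows a line such as `Live datanodes (3):`. A hedged helper for pulling that count out of the report text; `live_datanodes` is a hypothetical name, not an ODP tool:

```shell
# Sketch: extract the live-DataNode count from "hdfs dfsadmin -report"
# output. live_datanodes is a hypothetical helper, not part of ODP.
live_datanodes() {
  # Reads the report on stdin; prints the N from "Live datanodes (N):"
  sed -n 's/^Live datanodes (\([0-9]*\)).*/\1/p'
}
```

On a live cluster this would be used as `hdfs dfsadmin -report | live_datanodes` to confirm all DataNodes registered.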
- Start YARN
- Execute this command on the ResourceManager host machine(s):
su -l yarn -c "/usr/odp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh start resourcemanager"
- Execute this command on the History Server host machine:
su -l mapred -c "/usr/odp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh start historyserver"
- Execute this command on the Timeline Server host machine:
su -l yarn -c "/usr/odp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh start timelineserver"
- Execute this command on all NodeManagers:
su -l yarn -c "/usr/odp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager"
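Once the NodeManagers register, `yarn node -list` prints one line per node with its state. A hedged helper that counts the RUNNING entries in that listing; `running_nodemanagers` is a hypothetical name:

```shell
# Sketch: count RUNNING NodeManagers in "yarn node -list" output.
# running_nodemanagers is a hypothetical helper, not part of ODP.
running_nodemanagers() {
  # Reads the listing on stdin; prints the number of RUNNING lines
  grep -c ' RUNNING ' || true
}
```

On a live cluster: `yarn node -list | running_nodemanagers` should match the number of NodeManager hosts you started.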
- Start HBase
- Execute this command on the HBase Master host machine:
su -l hbase -c "/usr/odp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"
- Execute this command on all RegionServers:
su -l hbase -c "/usr/odp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver"
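The `sleep 25` above gives the HBase Master time to initialize before the RegionServers start. A more robust pattern is to poll a health probe with retries; the following `wait_until` helper is an illustrative sketch, not an ODP utility, and on a real cluster the probe command would be something like an `hbase shell` status check:

```shell
# Sketch: retry a probe command with a delay between attempts, as one
# might do while the HBase Master initializes. wait_until is a
# hypothetical helper, not part of ODP.
wait_until() {
  # $1 = max attempts, $2 = seconds between tries, rest = probe command
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```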
- Start the Hive Metastore. On the Hive Metastore host machine, execute the following commands:
su $HIVE_USER
nohup /usr/odp/current/hive-metastore/bin/hive --service metastore >/var/log/hive/hive.out 2>/var/log/hive/hive.log &
Where $HIVE_USER is the Hive user. For example, hive.
- Start HiveServer2. On the HiveServer2 host machine, execute the following commands:
su $HIVE_USER
nohup /usr/odp/current/hive-server2/bin/hiveserver2 &
Where $HIVE_USER is the Hive user. For example, hive.
- Start Oozie. Execute the following command on the Oozie host machine:
su -l oozie -c "/usr/odp/current/oozie-server/bin/oozied.sh start"
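After startup, `oozie admin -status` reports `System mode: NORMAL` when the server is ready. A hedged helper for checking that output; `oozie_is_normal` is a hypothetical name:

```shell
# Sketch: succeed only if "oozie admin -status" output reports NORMAL
# mode. oozie_is_normal is a hypothetical helper, not part of ODP.
oozie_is_normal() {
  grep -q '^System mode: NORMAL$'
}
```

On a live host: `oozie admin -oozie http://localhost:11000/oozie -status | oozie_is_normal && echo ready` (the URL is an assumption; use your Oozie endpoint).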
- Start Kafka with the following commands:
su $KAFKA_USER
/usr/odp/current/kafka-broker/bin/kafka start
where $KAFKA_USER is the operating system user that installed Kafka. For example, kafka.
- Start Spark with the following commands:
a. History Server
su $SPARK_USER
/usr/odp/current/spark2-historyserver/sbin/start-history-server.sh
b. Thrift Server
su $SPARK_USER
/usr/odp/current/spark2-thriftserver/sbin/start-thriftserver.sh --properties-file /usr/odp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf
- Start Knox. Execute the following commands on the Knox host machine:
su $KNOX_USER
/usr/odp/current/knox-server/bin/gateway.sh start
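The whole start sequence above can be sketched as a simple driver that walks the services in order. This is a dry-run illustration only; `print_start_order` is a hypothetical helper, and on a real cluster each `echo` would be replaced by the corresponding commands from the procedure:

```shell
# Sketch: the overall ODP start order as a dry-run driver. Service names
# mirror the procedure above; print_start_order is a hypothetical helper.
print_start_order() {
  for svc in ranger zookeeper hdfs yarn hbase hive-metastore \
             hiveserver2 oozie kafka spark2 knox; do
    echo "starting: $svc"
  done
}
```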
Stopping ODP Services
Before performing any upgrades or uninstalling software, stop all of the Hadoop services in the prescribed order.
About this task
- Ranger
- Knox
- Oozie
- HiveServer2
- Hive Metastore
- HBase
- YARN
- HDFS
- ZooKeeper
- Kafka
- Spark2
Procedure
- Stop Ranger. Execute the following commands on the Ranger host machine:
a. Ranger Admin
/usr/odp/current/ranger-admin/ews/ranger-admin-stop
b. Ranger Usersync
/usr/odp/current/ranger-usersync/ranger-usersync-stop
c. Ranger KMS
/usr/odp/current/ranger-kms/ranger-kms-services.sh stop
- Stop Knox. Execute the following command on the Knox host machine:
su -l knox -c "/usr/odp/current/knox-server/bin/gateway.sh stop"
- Stop Oozie. Execute the following command on the Oozie host machine:
su -l oozie -c "/usr/odp/current/oozie-server/bin/oozied.sh stop"
- Stop Hive. Execute this command on the Hive Metastore and HiveServer2 host machines:
ps aux | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1
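A caveat with this pipeline is that the `grep hive` process can match itself in the `ps` output. A hedged variant that filters out grep lines while extracting PIDs; `hive_pids` is a hypothetical helper, and on a real host `pgrep -u hive` is usually simpler:

```shell
# Sketch: extract PIDs from ps-style "USER PID COMMAND..." text for
# lines mentioning hive, skipping any grep process that matches itself.
# hive_pids is a hypothetical helper; prefer "pgrep -u hive" where available.
hive_pids() {
  awk '/hive/ && !/grep/ {print $2}'
}
```

On a live host this would be used as `ps aux | hive_pids | xargs kill`.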
- Stop HBase
- Execute this command on all RegionServers:
su -l hbase -c "/usr/odp/current/hbase-regionserver/bin/hbase-daemon.sh stop regionserver"
- Execute this command on the HBase Master host machine:
su -l hbase -c "/usr/odp/current/hbase-master/bin/hbase-daemon.sh stop master"
- Stop YARN
- Execute this command on all NodeManagers:
su -l yarn -c "/usr/odp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh stop nodemanager"
- Execute this command on the History Server host machine:
su -l mapred -c "/usr/odp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh stop historyserver"
- Execute this command on the timeline server host machine(s):
su -l yarn -c "/usr/odp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh stop timelineserver"
- Execute this command on the ResourceManager host machine(s):
su -l yarn -c "/usr/odp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh stop resourcemanager"
- Stop HDFS
- Execute this command on all DataNodes:
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh stop datanode"
- If you are not running NameNode HA, execute this command on the Secondary NameNode host machine:
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop secondarynamenode"
- Execute this command on the NameNode host machine(s):
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop namenode"
- If you are running NameNode HA, stop the ZooKeeper Failover Controllers (ZKFC) by executing this command on the NameNode host machines:
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh stop zkfc"
- If you are running NameNode HA, stop the JournalNodes by executing these commands on the JournalNode host machines:
su -l hdfs -c "/usr/odp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh stop journalnode"
- Stop ZooKeeper. Execute this command on the ZooKeeper host machine(s):
su - zookeeper -c "export ZOOCFGDIR=/usr/odp/current/zookeeper-server/conf ; export ZOOCFG=zoo.cfg; source /usr/odp/current/zookeeper-server/conf/zookeeper-env.sh ; /usr/odp/current/zookeeper-server/bin/zkServer.sh stop"
- Stop Kafka. Execute this command on the Kafka host machine(s):
su $KAFKA_USER
/usr/odp/current/kafka-broker/bin/kafka stop
where $KAFKA_USER is the operating system user that installed Kafka. For example, kafka.
- Stop Spark2. Execute the following stop commands on the Spark2 host machine(s):
a. History Server
su $SPARK_USER
/usr/odp/current/spark2-historyserver/sbin/stop-history-server.sh
b. Thrift Server
su $SPARK_USER
/usr/odp/current/spark2-thriftserver/sbin/stop-thriftserver.sh