Refer to: Setting up pseudo/single-node Hadoop 2.X
Refer to: Setting up distributed/multi-node Hadoop 2.X
This post shows how to start and stop the Hadoop daemons from the master and slave nodes.
All of these scripts are available under $HADOOP_HOME/sbin.
start-all.sh & stop-all.sh
Used to start and stop all Hadoop daemons at once. Issuing these on the master machine will start/stop the daemons on all nodes of the cluster. (In Hadoop 2.x these two scripts are deprecated; they simply delegate to start-dfs.sh and start-yarn.sh, described below.) A short example session follows the list of daemons.
The Hadoop daemons are: NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer, DataNode, and NodeManager.
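A minimal example session on the master (a sketch; it assumes your working directory is the Hadoop install, and uses jps only to list the running Java daemons):

cd $HADOOP_HOME
sbin/start-all.sh   # starts the HDFS and YARN daemons on every node
jps                 # lists the Java daemons now running on this node
sbin/stop-all.sh    # stops them all again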
start-dfs.sh, stop-dfs.sh
Start/stop only the HDFS daemons on all nodes from the master machine. (The HDFS daemons are NameNode, SecondaryNameNode, and DataNode.)
On the master node:
NameNode, SecondaryNameNode
On the slave nodes:
DataNode
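For example, from the master (a sketch; the jps output shown is what you would expect on the master):

sbin/start-dfs.sh   # NameNode/SecondaryNameNode here, DataNode on each slave
jps                 # on the master: NameNode, SecondaryNameNode (plus Jps)
sbin/stop-dfs.sh    # shuts the HDFS daemons down again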
start-yarn.sh, stop-yarn.sh
Start/stop only the YARN daemons on all nodes from the master machine. (The YARN daemons are ResourceManager and NodeManager.)
On the master node:
ResourceManager
On the slave nodes:
NodeManager
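Similarly for YARN, from the master (a sketch):

sbin/start-yarn.sh  # ResourceManager here, NodeManager on each slave
jps                 # on a slave: NodeManager (and DataNode if HDFS is running)
sbin/stop-yarn.sh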
Start individual Hadoop daemons
hadoop-daemon.sh namenode/datanode &
yarn-daemon.sh resourcemanager/nodemanager
Use these to start individual daemons on a single machine manually. You need to log in to the particular node and issue the command there, e.g.:
sbin/hadoop-daemon.sh start datanode
Use case: In a distributed Hadoop cluster, suppose you have added a new DataNode to your cluster and need to start the DataNode daemon on that machine only.
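A new slave typically runs a NodeManager as well, so on that same machine you would usually issue both commands (a sketch):

sbin/hadoop-daemon.sh start datanode     # HDFS storage daemon on this node only
sbin/yarn-daemon.sh start nodemanager    # YARN worker daemon on this node only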
All the DataNodes in the cluster can be started from the master with:
sbin/hadoop-daemons.sh start datanode
Use case: In a distributed Hadoop cluster, suppose you want to stop/start all the DataNodes in your cluster from the master node.
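For example (a sketch; the *-daemons.sh scripts read the slaves file to find the target hosts, and yarn-daemons.sh does the same for NodeManagers):

sbin/hadoop-daemons.sh stop datanode      # stop the DataNode on every slave
sbin/hadoop-daemons.sh start datanode     # bring them all back up
sbin/yarn-daemons.sh start nodemanager    # the YARN equivalent for NodeManagers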
Start the JobHistoryServer with:
sbin/mr-jobhistory-daemon.sh start historyserver
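The same script stops it again:

sbin/mr-jobhistory-daemon.sh stop historyserver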
Note:
1) To start/stop the DataNode and NodeManager daemons from the master, the script is *-daemons.sh, not *-daemon.sh. The *-daemon.sh scripts do not look up the slaves file and hence will only start processes on the master.
2) Passwordless SSH from the master to all nodes must be enabled if you want to start all the daemons on all the nodes from one machine.
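A common way to set this up is key-based (passwordless) SSH from the master to every node, for example (a sketch; the user and slave hostnames are illustrative):

ssh-keygen -t rsa -P ""         # generate a key pair on the master (accept defaults)
ssh-copy-id hduser@slave1       # copy the public key to each slave
ssh-copy-id hduser@slave2
ssh hduser@slave1 hostname      # should run without prompting for a password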