ERROR 11)
Hadoop Master cannot start a Slave that has a different HADOOP_PATH (or) a Slave with a different HADOOP_PATH fails to be started by the Hadoop Master node.
This issue also covers the case of Master and Slave nodes running different Operating Systems (OS). For example, the Master runs on Ubuntu and a Slave runs on a different OS, say Windows. In that case the Windows Slave does not have the same path as the Master's HADOOP_PATH on Ubuntu.
In my case:
Master node HADOOP_PATH : /opt/hadoop-1.1.0
Slave node HADOOP_PATH : /opt/softwares/hadoop-1.1.0
While I'm starting the Master node with
hduser@solaiv[bin]$ start-dfs.sh
it throws the error:
slave: bash: line 0: cd: /opt/hadoop-1.1.0/libexec/..: No such file or directory
slave: bash: /opt/hadoop-1.1.0/bin/hadoop-daemon.sh: No such file or directory
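To see why the Master's own path shows up in these errors: start-dfs.sh reaches the slaves through Hadoop's slaves.sh helper, which SSHes to every host listed in conf/slaves and runs hadoop-daemon.sh with the path it resolved on the Master. Roughly, it behaves like this simplified sketch (not the actual script; the real options and quoting are omitted):

# simplified view of how start-dfs.sh reaches each slave; $HADOOP_HOME is the
# Master's install directory, i.e. what this post calls HADOOP_PATH
for slave in $(cat "$HADOOP_HOME/conf/slaves"); do
  ssh "$slave" "cd $HADOOP_HOME/libexec/.. ; $HADOOP_HOME/bin/hadoop-daemon.sh start datanode" &
done
wait

Because the path is expanded on the Master, each Slave is told to cd into a directory it may not have, which is exactly the error above.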
SOLUTION 11)
In this case, the Master is searching for the same path on the Slave as on itself, but the Slave has a different path. Create the same user on both Master and Slave nodes, create the same path (directory) on the Slave as on the Master, and then create a symbolic link to the actual path.
This issue can be solved in two ways:
1) Create the same path on the Slave and create a symbolic link from it to the actual path (see the sketch below), (OR)
2) Ignore the error from start-dfs.sh and start the Slave daemons manually on each Slave node:
root@solaiv[bin]# ./hadoop-daemon.sh start datanode
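For option 1, a minimal sketch using the paths from this post, run on the Slave node (the hostname slave in the prompt is only illustrative); the symbolic link makes the Master's path resolve to the Slave's actual installation:

root@slave[~]# ln -s /opt/softwares/hadoop-1.1.0 /opt/hadoop-1.1.0

After this, start-dfs.sh on the Master can find /opt/hadoop-1.1.0/bin/hadoop-daemon.sh on the Slave.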
ERROR 12)
Hadoop Master cannot start a Slave that runs under a different user (or) a Slave with a different user fails to be started by the Hadoop Master node.
In another scenario, I tried to set up a Hadoop multi-node cluster with different users for Master and Slave.
For example, the Master runs on Ubuntu with hduser, one Slave runs on Debian with hduser, and another Slave runs on BOSS 5.0 with the root user.
In my case:
Master : hduser
Slave1 : hduser
Slave2 : root
While I'm starting the Master node with
hduser@solaiv[bin]$ start-dfs.sh
it asks for the password of hduser@slave2 instead of root@slave2, because start-dfs.sh connects to each slave over SSH as the same user that runs it on the Master.
SOLUTION 12)
1) Ignore the password prompt that start-dfs.sh raises for slave2, and
2) Start the Slave daemons manually on that Slave node, as shown below:
root@solaiv[bin]# ./hadoop-daemon.sh start datanode
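For illustration, on slave2 from this example (the hostname in the prompt is hypothetical), the Slave daemons can be started as root from the Slave's own bin directory; the TaskTracker line applies once start-mapred.sh is also used on the Master:

root@slave2[bin]# ./hadoop-daemon.sh start datanode
root@slave2[bin]# ./hadoop-daemon.sh start tasktracker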
Both ERROR 11 & 12)
Note: The Hadoop framework does not require SSH; the DataNode and TaskTracker daemons can be started manually on each node. So it is not required that every Slave be configured with the same path. Simply ignore the errors reported by start-dfs.sh and start-mapred.sh and start the Slave daemons manually. However, make sure to include all the slaves (IP/DNS) in the Master's conf/slaves file.
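For reference, the Master's conf/slaves file is a plain text file listing one Slave per line (IP or DNS name); the entries below are only illustrative:

slave1
slave2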
Comments:
My master is on Ubuntu 12.04.4 LTS (GNU/Linux 3.8.0-29-generic x86_64) and all 20 slaves are on armv7l GNU/Linux.
I have set up passwordless SSH from master to slaves, and all my slave IPs are in the hosts file.
My namenode, secondary namenode and resource manager start, but the node manager and datanode do not.
Can you share your datanode & node manager logs?
I am also having the same issue: when I try to start the master, the slaves do not start automatically.
How do I set up HADOOP_PATH on both master and slave when, as in my case, the two paths are different?