Wednesday, October 23, 2013

Hadoop multi-node cluster configuration : Error & Solution - part-II

        This post is a continuation of part-I; look here for part-I errors & solutions on Hadoop multi-node cluster setup.


         Hadoop Master cannot start a Slave that has a different HADOOP_PATH (or) a Slave with a different HADOOP_PATH fails to be started by the Hadoop Master node.

This issue also covers the case where the Master and Slave nodes run different Operating Systems (OS).
For example, the Master runs on Ubuntu while a Slave runs on a different OS, say Windows. In that case, Windows does not have the same path as the Master's HADOOP_PATH on Ubuntu.

 In my case
    Master node HADOOP_PATH : /opt/hadoop-1.1.0
    Slave Node HADOOP_PATH : /opt/softwares/hadoop-1.1.0
When I start the daemons from the Master node, it throws the error:
    slave: bash: line 0: cd: /opt/hadoop-1.1.0/libexec/..: No such file or directory
    slave: bash: /opt/hadoop-1.1.0/bin/ No such file or directory

         In this case, the Master looks for the same path on the Slave as on itself, but the Slave has a different path. This issue can be solved in two ways:

1) Create the same user on both Master and Slave nodes, create the same path (dir) on the Slave as on the Master, and make it a symbolic link to the actual path. (OR)
2) Start the Slave daemons manually:
    root@solaiv[bin]# ./hadoop-daemon.sh start datanode
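The symbolic-link idea can be sketched in the shell as follows. To keep the sketch safe to run anywhere, it builds the two paths inside a scratch directory; on a real slave you would use the actual paths from this example (commented below) and likely need sudo:

```shell
# Real command on the slave would be something like:
#   sudo ln -s /opt/softwares/hadoop-1.1.0 /opt/hadoop-1.1.0
ROOT=$(mktemp -d)                             # scratch dir standing in for /
mkdir -p "$ROOT/opt/softwares/hadoop-1.1.0"   # slave's real HADOOP_PATH
# Make the path the master expects resolve to the slave's actual install
ln -s "$ROOT/opt/softwares/hadoop-1.1.0" "$ROOT/opt/hadoop-1.1.0"
readlink "$ROOT/opt/hadoop-1.1.0"             # prints the actual install path
```

After the link exists, the master's `cd /opt/hadoop-1.1.0/...` over SSH resolves to the slave's real install, so the start scripts find their files.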

        Hadoop Master cannot start a Slave that runs under a different user (or) a Slave with a different user fails to be started by the Hadoop Master node.
        In another scenario, I tried to set up a Hadoop multi-node cluster with different users on the Master and the Slaves.
        For example, the Master runs on Ubuntu as hduser, one Slave runs on Debian as hduser, and another Slave runs on BOSS 5.0 as the root user.
 In my case
    Master : hduser
    Slave1 : hduser
    Slave2 : root
When I start the daemons from the Master node, both the Master and Slave1 start successfully, but it prompts for Slave2's password (even though password-less SSH is configured).

It is asking for the password of hduser@slave2 instead of root@slave2.


Start the Slave daemons manually:
    root@solaiv[bin]# ./hadoop-daemon.sh start datanode
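Another option worth trying (an assumption on my part, based on Hadoop 1.x's bin/slaves.sh passing each line of conf/slaves directly to ssh; it is not stated in this post): give the remote user per host in the Master's conf/slaves file, so the Master connects as root@slave2 instead of hduser@slave2:

```
# conf/slaves on the master
slave1
root@slave2
```

Password-less SSH would then need to be configured for root@slave2 as well.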

Note (applies to both errors above):
The Hadoop framework does not strictly require SSH; the DataNode and TaskTracker daemons can be started manually on each node. So each slave need not be configured with the same path: simply ignore the error and start the slave daemons manually. However, make sure to include all the slaves (IP/DNS) in the master's conf/slaves file.


Vishal Mehta said...

My master is on
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.8.0-29-generic x86_64)

and all 20 slaves on
armv7l GNU/Linux

I have setup password less ssh from master to slave. All my slave ips are in hosts file.

My namenode,secondary namenode and resource manager starts but node manager and datanode does not.

solaimurugan.v said...

Can you share your DataNode & NodeManager logs?


ZapakAmitav said...

I am also having the same issue: when I try to start the master, the slaves do not start automatically.

How do I set up HADOOP_PATH on both master and slave when, in my case, the two paths are different?
