Wednesday, October 23, 2013

Hadoop multi-node cluster configuration : Error & Solution - part-II

        This post continues part-I; look here for part-I's errors and solutions on Hadoop multi-node cluster setup.

ERROR 11)

         The Hadoop Master cannot start a Slave that has a different HADOOP_PATH (or) a Slave with a different HADOOP_PATH fails to be started by the Hadoop Master node.

This issue also covers the case of the Master and Slave nodes running different Operating Systems (OS).
Say, for example, the Master runs on Ubuntu and a Slave runs on a different OS, such as Windows. In this case Windows does not have the same path as the Master's HADOOP_PATH on Ubuntu.

 In my case
    Master node HADOOP_PATH : /opt/hadoop-1.1.0
    Slave Node HADOOP_PATH : /opt/softwares/hadoop-1.1.0
while I'm starting the Master node with,
    hduser@solaiv[bin]$ start-dfs.sh
it throws the error:
    slave: bash: line 0: cd: /opt/hadoop-1.1.0/libexec/..: No such file or directory
    slave: bash: /opt/hadoop-1.1.0/bin/hadoop-daemon.sh: No such file or directory

SOLUTION 11)
         In this case, the start script looks for the same path on the slave as on the master, but the slave has a different path. Create the same user for both the Master and Slave nodes, create the master's path (dir) on the slave, and make it a symbolic link to the actual install path.
This issue can be solved in two ways:

1) Create the same path (directory) on the slave as a symbolic link to the actual path. (OR)
2) Start the Slave daemons manually on each slave:
    root@solaiv[bin]# ./hadoop-daemon.sh start datanode
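The symbolic-link approach in option 1 can be sketched as below. The /opt paths are the ones from this post; the ROOT variable is only a stand-in so the commands can be dry-run without root privileges (on a real slave, drop ROOT and run against / as a suitably privileged user):

```shell
# Run on the slave. ROOT is a temp dir here for a safe dry run;
# on a real node work directly under /.
ROOT="${ROOT:-$(mktemp -d)}"

# Stand-in for the slave's actual install (already present on a real slave).
mkdir -p "$ROOT/opt/softwares/hadoop-1.1.0/bin"

# Make the path the master expects point at the real install.
ln -s "$ROOT/opt/softwares/hadoop-1.1.0" "$ROOT/opt/hadoop-1.1.0"

# The master's expected path now resolves on the slave.
ls "$ROOT/opt/hadoop-1.1.0"
```

After this, /opt/hadoop-1.1.0/bin/hadoop-daemon.sh on the slave resolves through the link, so the master's start scripts find it.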



ERROR 12)
        The Hadoop Master cannot start a Slave that runs under a different user (or) a Slave with a different user fails to be started by the Hadoop Master node.
        In another scenario, I tried to set up a Hadoop multi-node cluster with different users on the Master and Slaves.
        Say, for example, the Master runs on Ubuntu as hduser, one slave runs on Debian as hduser, and another Slave runs on BOSS 5.0 as the root user.
 In my case
    Master : hduser
    Slave1 : hduser
    Slave2 : root
while I'm starting the Master node with,
    hduser@solaiv[bin]$ start-dfs.sh
both the Master and Slave1 start successfully, but it prompts for Slave2's password (even though passwordless SSH is configured).

It asks for the password of hduser@slave2 instead of root@slave2, because start-dfs.sh connects to each slave as the same user that launched it on the master.

SOLUTION 12)

Start the Slave2 daemons manually on the slave itself:
    root@solaiv[bin]# ./hadoop-daemon.sh start datanode
    root@solaiv[bin]# ./hadoop-daemon.sh start tasktracker
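An alternative worth trying (an assumption on my part, not something I tested in the setup above): tell SSH on the master which user to use for slave2, so that start-dfs.sh connects as root@slave2 instead of hduser@slave2. Add an entry to ~/.ssh/config of the master's hduser (slave2 is the hostname from this post; root SSH login must be permitted on the slave):

```
# ~/.ssh/config for hduser on the master
Host slave2
    User root
```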

Both ERROR 11 & 12)
Note : The Hadoop framework does not strictly require SSH; the DataNode and TaskTracker daemons can be started manually on each node. So each slave need not be configured with the same path. Simply ignore the error and start the slave daemons manually on each slave (hadoop-daemon.sh start datanode and hadoop-daemon.sh start tasktracker). However, make sure to include all the slaves (IP/DNS) in the master's conf/slaves file.
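For reference, the master's conf/slaves file is just one slave hostname or IP per line (the hostnames below are the ones from this post's scenario):

```
slave1
slave2
```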
                 