Friday, August 17, 2012

Hadoop : Bad connection to FS. command aborted.

I had successfully configured Hadoop on Debian, but then I got the following error message while running this command:
hduser@boss:/opt/hadoop-0.21.0/bin$ ./hadoop fs  -ls /

... INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
12/08/16 15:46:12 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
............
Bad connection to FS. command aborted.
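
You can confirm the symptom directly by probing the port the client keeps retrying (54310 in the log above). This is just a minimal check using netcat; nc -z simply tests whether the port accepts connections:

$ nc -z localhost 54310 || echo "nothing is listening on port 54310"

If nothing answers on that port, the DFS client retries a few times and then aborts with the message above.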
The solution to this issue:

First, check whether all the Hadoop ports are up and listening:
hduser@boss:/opt/hadoop-0.21.0/bin$ netstat -nltp

tcp        0      0 127.0.0.1:8020    0.0.0.0:*    LISTEN   9469/java
tcp        0      0 127.0.0.1:8021    0.0.0.0:*    LISTEN   9879/java
tcp        0      0 0.0.0.0:37301     0.0.0.0:*    LISTEN   9879/java
tcp        0      0 0.0.0.0:50070     0.0.0.0:*    LISTEN   9469/java
............
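
You can also cross-check with jps (the JDK's Java process lister) to see which daemons those java PIDs belong to; the NameNode has to be up for hadoop fs to work at all. The daemon-to-PID mapping below is only illustrative:

hduser@boss:/opt/hadoop-0.21.0/bin$ jps
9469 NameNode
9879 JobTracker
............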

In my case, the port mentioned in core-site.xml was not the one the client was connecting to. Then I figured out the reason: the property fs.default.name was mentioned in both hdfs-site.xml and core-site.xml.

In hdfs-site.xml:

<property>
  <name>fs.default.name</name>
  <value>localhost:9000</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
And in core-site.xml:


<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>
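
A quick way to catch this kind of duplication is to grep the whole conf directory for the property name. A sketch, assuming the default conf/ directory of this install (the output shown is what you would expect when both files set it):

hduser@boss:/opt/hadoop-0.21.0/bin$ grep -l "fs.default.name" ../conf/*.xml
../conf/core-site.xml
../conf/hdfs-site.xml

If more than one file shows up, one definition is going to override the other.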
After I removed the fs.default.name property from hdfs-site.xml, everything worked fine.
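
Note that the daemons only read their configuration at startup, so restart HDFS before retrying; a sketch using the scripts shipped in the same bin directory:

hduser@boss:/opt/hadoop-0.21.0/bin$ ./stop-dfs.sh
hduser@boss:/opt/hadoop-0.21.0/bin$ ./start-dfs.sh
hduser@boss:/opt/hadoop-0.21.0/bin$ ./hadoop fs -ls /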
