When I start HBase, the HMaster starts successfully, but after some time (within one minute) it aborts, while all the other daemons keep running.
Here is the log.
2014-05-21 15:45:05,075 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
2014-05-21 15:45:16,794 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2014-05-21 15:45:16,794 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2014-05-21 15:45:16,794 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2014-05-21 15:45:16,794 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2014-05-21 15:45:16,794 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2014-05-21 15:45:16,794 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
java.lang.RuntimeException: HMaster Aborted
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:160)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java
SOLUTION
In my hbase-site.xml, I had missed out the hbase.rootdir property:
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54310/hbase</value>
</property>
Once I added it and restarted HBase, HMaster is running fine.
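For what it's worth, hbase.rootdir has to point at the host and port the NameNode is actually listening on, i.e. the same URI as fs.default.name (fs.defaultFS on Hadoop 2) in core-site.xml. The "Connection refused" on localhost:8020 in the log above suggests HBase was falling back to the default NameNode RPC port instead of the 54310 used here. A minimal core-site.xml counterpart, assuming the single-node setup from the fix above:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>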
13 comments:
Hi,
I have set up a 3-node cluster.
All the region servers and the backup master are running fine, but my HBase master is not running. Any idea how to resolve this?
I am getting the same error as you have mentioned in the thread.
Hi Harish,
Can you post the error?
2015-01-04 18:51:54,188 FATAL [master:xxxx:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call From xxxx.xxx.com/192.168.61.63 to xxxx.xxxx.com:9001 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Harish,
There are several ways to look into the issue:
1) Check that the HBase home directory has enough rights to create the log files.
2) Check the hbase-site.xml configuration file.
3) Change 127.0.1.1 to 127.0.0.1 in the /etc/hosts file.
4) Make sure the hostname in Hadoop's core-site.xml matches the one HBase uses (see the sketch after this list).
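For points 3) and 4), this is roughly how the files should line up, using Harish's 192.168.61.63 address as an example and node1.example.com as a placeholder hostname:
# /etc/hosts -- no 127.0.1.1 entry for the node's own hostname
127.0.0.1 localhost
192.168.61.63 node1.example.com node1
# core-site.xml and hbase-site.xml should then use that same hostname
<property>
<name>fs.default.name</name>
<value>hdfs://node1.example.com:9001</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://node1.example.com:9001/hbase</value>
</property>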
Let me know the status.
Hi Solaimurugan,
Since I set up a 3-node cluster with HA for Hadoop and HBase, my core-site.xml file has the following:
<property>
<name>fs.default.name</name>
<value>hdfs://mycluster:9001</value>
</property>
I tried changing the above to hdfs://192.168.61.63:9001, and my hbase-site.xml file has:
<property>
<name>hbase.master</name>
<value>192.168.61.63:60000</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://192.168.61.63:9001/hbase</value>
</property>
In /etc/hosts the entries are correct.
Could you please help me out on this?
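One thing that stands out in the config above (an observation, not something stated in the original thread): with HDFS HA, the logical nameservice is normally referenced without a port, and HBase has to be able to resolve it, which usually means making hdfs-site.xml (with the dfs.nameservices / dfs.ha.namenodes.* properties) visible on HBase's classpath, for example by copying or symlinking it into the HBase conf directory. A sketch, assuming the nameservice really is called mycluster:
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://mycluster/hbase</value>
</property>
Pointing hbase.rootdir at a single NameNode's IP, as above, only works while that NameNode happens to be the active one.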
The issue has been resolved. I made sure that the NameNode was up and running, and then I started HBase. It works fine.
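For anyone hitting the same thing, a quick sanity check before starting HBase (a generic sketch; 9001 is the NameNode RPC port used in this thread, substitute your own address):
jps | grep -i namenode
hadoop fs -ls hdfs://192.168.61.63:9001/
start-hbase.sh
The jps line should show a running NameNode, and the listing should succeed without a ConnectException before HBase is started.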
Hadoop HA fails:
I killed the active NameNode manually by running kill -9 on its PID. The standby NameNode becomes active and works fine.
But when I shut down the active NN server by turning off the VM, the standby NN does not become active, and the ZKFC on the standby server logs:
2015-01-20 12:43:47,628 ERROR org.apache.hadoop.ha.NodeFencer: Unable to fence service by any configured method.
2015-01-20 12:43:47,629 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
java.lang.RuntimeException: Unable to fence NameNode at redhat6.localadmin.com/192.168.61.63:9001
2015-01-20 12:43:44,629 INFO org.apache.hadoop.ha.NodeFencer: ====== Beginning Service Fencing Process... ======
2015-01-20 12:43:44,629 INFO org.apache.hadoop.ha.NodeFencer: Trying method 1/1: org.apache.hadoop.ha.SshFenceByTcpPort(null)
2015-01-20 12:43:44,631 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Connecting to redhat6.localadmin.com...
2015-01-20 12:43:44,632 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Connecting to redhat6.localadmin.com port 22
2015-01-20 12:43:47,626 WARN org.apache.hadoop.ha.SshFenceByTcpPort: Unable to connect to redhat6.localadmin.com as user harish
com.jcraft.jsch.JSchException: java.net.NoRouteToHostException: No route to host
2015-01-20 12:43:47,628 WARN org.apache.hadoop.ha.NodeFencer: Fencing method org.apache.hadoop.ha.SshFenceByTcpPort(null) was unsuccessful.
2015-01-20 12:43:47,628 ERROR org.apache.hadoop.ha.NodeFencer: Unable to fence service by any configured method.
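The fencing failure above is expected when the active NameNode's host is powered off: sshfence cannot reach it ("No route to host"), so the failover is never allowed to proceed. A common way around this, not mentioned in the original comment, is to add a fallback fencing method in hdfs-site.xml so that failover can continue when the old active is unreachable; only do this if a powered-off host cannot cause a split-brain in your setup:
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
shell(/bin/true)</value>
</property>
sshfence also needs dfs.ha.fencing.ssh.private-key-files pointing at a key the ZKFC user can use to log in to the other NameNode.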
2015-03-07 17:24:24,576 FATAL [master:node1:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call From node1/203.145.207.134 to node1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source)
2015-03-07 17:24:24,659 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3031)
Hi,
I have installed a three-node Hadoop cluster and am planning to install HBase on it.
I am new to NoSQL. Could someone please provide a reference document for installing HBase in fully distributed mode?
Sure, I will share the document with you.
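In the meantime, a fully distributed setup mostly comes down to three properties plus the regionservers file (a sketch with placeholder hostnames, namenode.example.com for the NameNode and node1..node3 for the ZooKeeper quorum; substitute your own):
<property>
<name>hbase.rootdir</name>
<value>hdfs://namenode.example.com:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>node1.example.com,node2.example.com,node3.example.com</value>
</property>
Then list each region server hostname, one per line, in conf/regionservers, and make sure hbase.rootdir matches the fs.default.name / fs.defaultFS of your Hadoop cluster.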
Hi,
I am looking for live HBase backup and restore steps. I am storing all my data in Hadoop through HBase, so I need to know the best way to back up and restore HBase without affecting Hadoop or HBase.
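On recent HBase versions (0.94.6 and later), snapshots are the usual way to get a consistent backup without taking tables offline. A sketch from the HBase shell; 'mytable' is a placeholder table name:
hbase shell
snapshot 'mytable', 'mytable-backup-20150301'
list_snapshots
# to restore later, the table has to be disabled first
disable 'mytable'
restore_snapshot 'mytable-backup-20150301'
enable 'mytable'
For copying a snapshot to another cluster there is also the ExportSnapshot MapReduce job, and for simpler cases the per-table Export/Import tools work as well.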