RegionServer crashing with error "Direct buffer memory"

The RegionServer might crash with the error below:

INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=ip-100-122-218-159.us-east-1.ec2.aws.symcpe.net:2181,ip-******.us-east-1.ec2.aws.net:2181,ip-*****.us-east-1.ec2.aws.net:2181 sessionTimeout=180000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@253c1256
ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
 at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2666)
 at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
 at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
 at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2681)
Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
 at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2664)
 ... 5 more
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
 at java.nio.Bits.reserveMemory(Bits.java:658)
 at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
 at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
 at org.apache.zookeeper.ClientCnxnSocket.<init>(ClientCnxnSocket.java:51)
 at org.apache.zookeeper.ClientCnxnSocketNIO.<init>(ClientCnxnSocketNIO.java:48)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
 at java.lang.Class.newInstance(Class.java:442)
 at org.apache.zookeeper.ZooKeeper.getClientCnxnSocket(ZooKeeper.java:1779)
 at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:447)
 at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
 at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
 at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:136)
 at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:171)
 at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:593)
 ... 10 more

This can happen when the configured direct memory size is not large enough to hold the off-heap block cache (if configured) plus the additional off-heap buffers used by the HDFS client (which are usually small).

To recover the RegionServer, increase the MaxDirectMemorySize parameter.

You can use Ambari to update "HBase off-heap MaxDirectMemorySize", or you can edit the "-XX:MaxDirectMemorySize=" value directly in the "hbase-env" file.
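As a minimal sketch only, assuming the RegionServer JVM options are passed through HBASE_REGIONSERVER_OPTS in hbase-env (the 4g value below is purely illustrative; size it to cover your off-heap block cache plus some headroom):

# Illustrative value only -- pick a size that fits the off-heap block cache plus headroom
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=4g"

Restart the RegionServer after changing the value so the new JVM option takes effect.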

HMaster error during startup

The HMaster throws the error below during startup:

FATAL [ip-:16000.activeMasterManager] master.HMaster: The coprocessor org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor threw java.lang.RuntimeException: java.io.FileNotFoundException: /etc/hbase/2.5.0.55-1/0/xasecure-audit.xml (No such file or directory)

This happens because the Ranger coprocessor is configured to load in the XML config, but the Ranger plugin itself is not installed. Remove the properties below from hbase-site.xml (a sample entry is shown after the list):

  • hbase.coprocessor.master.classes
  • hbase.coprocessor.region.classes
  • hbase.coprocessor.regionserver.classes
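For reference, a Ranger-enabled hbase-site.xml typically carries entries similar to the sample below (illustrative only; the exact class value on your cluster may differ):

<!-- Sample entry to remove; the region and regionserver properties look similar -->
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor</value>
</property>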
Now restart the HMaster process. 

Unable to start NameNode. Error "couldn't find resource file location"

Sometimes starting the NameNode throws the error below.
  
ERROR config.RangerConfiguration (RangerConfiguration.java:addResourceIfReadable(110)) - addResourceIfReadable(ranger-hdfs-security.xml): couldn't find resource file location
INFO  provider.AuditProviderFactory (AuditProviderFactory.java:<init>(77)) - AuditProviderFactory: creating..
FATAL conf.Configuration (Configuration.java:loadResource(2672)) - error parsing conf file:/etc/hadoop/2.5.0.55-1/0/xasecure-audit.xml
java.io.FileNotFoundException: /etc/hadoop/2.5.0.55-1/0/xasecure-audit.xml (No such file or directory)

This happens because the Ranger plugin is not installed, yet the Ranger authorization provider class is still registered. Your config (hdfs-site.xml) will contain the attribute below with this value:
 
ATTRIBUTE : dfs.namenode.inode.attributes.provider.class
VALUE: org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer
    

Search for this attribute in hdfs-site.xml and remove it (a sample of the entry is shown below). After that, you are good to start the service.
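For illustration, the entry added by the Ranger HDFS plugin looks like the following in hdfs-site.xml (the property name and value match those listed above):

<!-- Entry to remove when the Ranger plugin is not installed -->
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>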
