I have to connect CloudHub to HBase. I tried the community-edition HBase connector but did not succeed, and then tried plain Java code, which also failed. The HBase team has given me only the master IP (10.99.X.X), the port (2181), and a username (hadoop).
I have tried the following options:
Through Java code:
public Object transformMessage(MuleMessage message, String outputEncoding) throws TransformerException {
    try {
        Configuration conf = HBaseConfiguration.create();
        //conf.set("hbase.rootdir", "/hbase");
        conf.set("hbase.zookeeper.quorum", "10.99.X.X");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.client.retries.number", "3");
        logger.info("############# Config Created ##########");

        // Read a single row from the consignment table
        logger.info("############# Starting Consignment Test ##########");
        HTable table = new HTable(conf, "consignment");
        logger.info("############# HTable instance Created ##########");

        // Create a Get object for one row key
        Get get = new Get(Bytes.toBytes("6910358750"));
        logger.info("############# RowKey Created ##########");

        // Restrict the Get to the column family to be queried
        get.addFamily(Bytes.toBytes("consignment_detail"));
        logger.info("############# CF Created ##########");

        // Perform the get and capture the result
        Result result = table.get(get);
        logger.info("############# Result Created ##########");

        // Print consignment data
        logger.info(result);
        logger.info(" #### Ending Consignment Test ###");

        // Beginning consignment item scanner: partial row-key scan
        // (actual rowkey design: <consignment_id>-<trn>-<orderline>)
        logger.info("############# Starting Consignmentitem test ##########");
        HTable table1 = new HTable(conf, "consignmentitem");
        logger.info("############# HTable instance Created ##########");

        // Create a Scan with start and stop row keys
        Scan scan = new Scan(Bytes.toBytes("6910358750"), Bytes.toBytes("6910358751"));
        logger.info("############# Partial RowKeys Created ##########");

        // Perform the scan and iterate over the results, printing each one
        ResultScanner scanner = table1.getScanner(scan);
        for (Result result1 = scanner.next(); result1 != null; result1 = scanner.next()) {
            logger.info("Printing Records\n");
            logger.info(result1);
        }
        return scanner;
    } catch (MasterNotRunningException e) {
        logger.error("HBase connection failed! --> MasterNotRunningException");
        logger.error(e);
    } catch (ZooKeeperConnectionException e) {
        logger.error("Zookeeper connection failed! --> ZooKeeperConnectionException");
        logger.error(e);
    } catch (Exception e) {
        logger.error("Main Exception Found! -- Exception");
        logger.error(e);
    }
    return "Not Connected";
}
The above code gives the following error:
java.net.UnknownHostException: unknown host: ip-10-99-X-X.ap-southeast-2.compute.internal
It seems that CloudHub is not able to resolve the host name, because the CloudHub worker is not configured with our internal DNS.
When I tried the community-edition HBase connector, it gave the following exception:
org.apache.hadoop.hbase.MasterNotRunningException: Retried 3 times
Please suggest a way forward.
Regards,
Nilesh
Email: bit.nilesh.kumar@gmail.com
It appears that you are configuring your client to connect to the ZooKeeper quorum at a private IP address (10.99.X.X). I'll assume you have already set up a VPC, which is required for your CloudHub worker to reach your private network.
Your UnknownHostException implies that the HBase server you are connecting to is hosted on AWS, which assigns private domain names of exactly the form shown in the error message:
ip-10-99-X-X.ap-southeast-2.compute.internal
So what might be happening is this: your worker reaches ZooKeeper at the IP address just fine, but ZooKeeper then advertises the HBase master by its internal AWS hostname, and the CloudHub worker cannot resolve that name because it only exists in AWS's private DNS. (Note that the internal name encodes the private IP, so the master's address is the same 10.99.X.X you were given.)
Unfortunately, if this is what's going on, it will take some networking changes to fix. The FAQ in the VPC discovery form says this about private DNS:
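You can confirm this theory from the worker itself with a plain JDK lookup, no HBase involved: resolve the internal hostname from the error message and see whether it fails. The class and helper name below are mine, purely for illustration:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Returns the resolved IP address as a string, or null when the
    // name cannot be resolved from this host.
    static String resolveOrNull(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // On a CloudHub worker, the AWS-internal name from the error
        // (ip-10-99-X-X.ap-southeast-2.compute.internal) would be expected
        // to come back null, while a literal IP resolves trivially.
        System.out.println(resolveOrNull("127.0.0.1"));
        System.out.println(resolveOrNull("localhost"));
    }
}
```

If the internal name returns null but the bare IP works, the problem is DNS visibility, not network reachability.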
Currently we don't have the ability to relay DNS queries to internal DNS servers. You would need to either use IP addresses or public DNS entries. Beware of connecting to systems which may redirect to a Virtual IP endpoint by using an internal DNS entry.
You could use public DNS (and possibly an Elastic IP) to get around this problem, but that would require exposing your HBase cluster to the internet.
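If you do go that route, the cluster also has to advertise names the worker can resolve, not its internal AWS hostnames. Newer HBase releases have server-side properties for overriding the advertised hostname; the sketch below assumes `hbase.master.hostname` and `hbase.regionserver.hostname` are available in your version (verify against your release's documentation), and the DNS names are placeholders:

```xml
<!-- hbase-site.xml on the cluster side: a sketch, not verified config.
     Property availability depends on your HBase version; the hostnames
     here are hypothetical public DNS entries. -->
<property>
  <name>hbase.master.hostname</name>
  <value>hbase-master.example.com</value>
</property>
<property>
  <name>hbase.regionserver.hostname</name>
  <value>hbase-rs1.example.com</value>
</property>
```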