I am trying to read data from Bigtable in Google Cloud Dataproc. Below is the code I am using to read from Bigtable.
import com.google.cloud.bigtable.dataflow.CloudBigtableIO;
import com.google.cloud.bigtable.dataflow.CloudBigtableScanConfiguration;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.Read;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
options.setRunner(BlockingDataflowPipelineRunner.class);

// Scan that returns only the first key-value of each row.
Scan scan = new Scan();
scan.setFilter(new FirstKeyOnlyFilter());

Pipeline p = Pipeline.create(options);
p.apply(Read.from(CloudBigtableIO.read(new CloudBigtableScanConfiguration.Builder()
        .withProjectId("xxxxxxxx").withZoneId("xxxxxxx")
        .withClusterId("xxxxxx").withTableId("xxxxx").withScan(scan).build())))
 .apply(ParDo.named("Reading data from big table").of(new DoFn<Result, Mutation>() {
     @Override
     public void processElement(DoFn<Result, Mutation>.ProcessContext arg0) throws Exception {
         System.out.println("Inside printing");
         if (arg0 == null) {
             System.out.println("arg0 is null");
         } else {
             System.out.println("arg0 is not null");
             System.out.println(arg0.element());
         }
     }
 }));
p.run();
Whenever I call arg0.element() in my method, I get the error below.
2017-03-21T12:29:28.884Z: Error: (deec5a839a59cbca): java.lang.ArrayIndexOutOfBoundsException: 12338
at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:1231)
at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:1190)
at com.google.bigtable.repackaged.com.google.cloud.hbase.adapters.read.RowCell.toString(RowCell.java:234)
at org.apache.hadoop.hbase.client.Result.toString(Result.java:804)
at java.lang.String.valueOf(String.java:2994)
at java.io.PrintStream.println(PrintStream.java:821)
at com.slb.StarterPipeline$2.processElement(StarterPipeline.java:102)
Can anyone let me know what I am doing wrong here?
This is unfortunately a known issue. We fixed the underlying implementation, and we're hoping to release a new version of our client in the next week or so. I would suggest changing this line:
System.out.println(arg0.element());
To something like:
System.out.println(Bytes.toStringBinary(arg0.element().getRow()));
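If you need to see more than the row key, here is a sketch of the same ParDo that walks the cells directly instead of going through Result.toString() (which is where the exception is thrown). It assumes imports for org.apache.hadoop.hbase.util.Bytes, org.apache.hadoop.hbase.Cell, and org.apache.hadoop.hbase.CellUtil, all of which ship with the HBase client the Bigtable connector already depends on:

// Sketch only: avoids Result.toString(), the code path that hits the
// ArrayIndexOutOfBoundsException in the current client.
.apply(ParDo.named("Reading data from big table").of(new DoFn<Result, Mutation>() {
    @Override
    public void processElement(DoFn<Result, Mutation>.ProcessContext arg0) throws Exception {
        Result result = arg0.element();
        // Print the row key in a binary-safe form.
        System.out.println(Bytes.toStringBinary(result.getRow()));
        // Walk the cells directly; with FirstKeyOnlyFilter each Result
        // carries only the first cell of the row.
        for (Cell cell : result.rawCells()) {
            System.out.println(Bytes.toStringBinary(CellUtil.cloneQualifier(cell))
                + " = " + Bytes.toStringBinary(CellUtil.cloneValue(cell)));
        }
    }
}));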
Sorry for your troubles.