I have written a UDF that reads an input file and segregates the data into (String, Integer) or (String, Double) pairs.
My UDF is working fine. I have also written a Pig script that uses the jar on HDFS.
Now I want to integrate this code with Talend for Big Data. How can I achieve this?
The Java code of the UDF is below:
package com.test.udf;

import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class CheckDataType extends EvalFunc<Tuple> {

    @Override
    public Tuple exec(Tuple input) throws IOException {
        String valString = null;
        Integer valInt = null;
        Double valDouble = null;

        String str = (String) input.get(0);
        Tuple outputTuple = TupleFactory.getInstance().newTuple(2);

        if (str != null) {
            try {
                // First try to parse the value as an Integer.
                // Field 0 stays null for numeric input because valString is never assigned.
                valInt = Integer.parseInt(str);
                outputTuple.set(0, valString);
                outputTuple.set(1, valInt);
            } catch (Exception e) {
                try {
                    // Not an Integer, so try Double.
                    valDouble = Double.parseDouble(str);
                    outputTuple.set(0, valString);
                    outputTuple.set(1, valDouble);
                } catch (Exception ew) {
                    // Not numeric at all: keep the raw string in field 0.
                    outputTuple.set(0, str);
                    outputTuple.set(1, null);
                }
            }
        }
        return outputTuple;
    }
}
The Pig script I have written is below:
REGISTER 'CONVERT.jar';
data_load = LOAD '/tmp/input/testfile.txt' USING PigStorage(',') AS (col1:chararray, col2:chararray, col3:chararray, col4:chararray, col5:chararray);
data_grp = GROUP data_load BY ($input_col);
data_flatten = FOREACH data_grp GENERATE FLATTEN(com.test.udf.CheckDataType(*));
rmf /tmp/output;
STORE data_flatten INTO '/tmp/output' USING PigStorage(',');
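Note that $input_col in the GROUP statement is a Pig parameter, so it has to be supplied when the script runs (for example pig -param input_col=col1 myscript.pig, where the script name and column are just placeholders), or it can be given a default at the top of the script:
%default input_col 'col1';   -- 'col1' is only an example default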
How can I integrate this with Talend for Big Data?
Updated answer:
You need to split your Pig script into three components, tPigLoad, tPigCode and tPigStoreResult, and connect them. The UDF can either be included as code or added as a separate jar registered in the tPigLoad component.
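For illustration, the LOAD goes into tPigLoad, the STORE goes into tPigStoreResult, and only the grouping plus the UDF call stay in the tPigCode script. The sketch below is a rough guide, not the exact code Talend generates: the relation names row1 (data coming in from tPigLoad) and row2 (data passed on to tPigStoreResult) and the grouping column col1 are assumptions, so use the names your own job exposes:
-- Register the UDF jar here if you do not register it in tPigLoad.
REGISTER 'CONVERT.jar';
-- Same logic as the standalone script; row1/row2/col1 are assumed names.
row1_grp = GROUP row1 BY col1;
row2 = FOREACH row1_grp GENERATE FLATTEN(com.test.udf.CheckDataType(*));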
Step-by-step instructions can be found here: https://www.evernote.com/l/AJONeXS0_sBNwpDfmPByJSUVS0vmAs04EGM