Tags: hadoop, mapreduce, apache-pig

Getting an exception while trying to execute a Pig Latin script


I am learning Pig on my own, and while exploring a dataset I am encountering an exception. What is wrong with the script, and why?

movies_data = LOAD '/movies_data' using PigStorage(',') as (id:chararray,title:chararray,year:int,rating:double,duration:double);
high   = FILTER movies_data by rating > 4.0;
high_rated = FOREACH high GENERATE movies_data.title,movies_data.year,movies_data.rating,movies_data.duration;
DUMP high_rated;

At the end of the MapReduce execution I get the error below.

2018-07-22 20:11:07,213 [main] ERROR org.apache.pig.tools.grunt.Grunt

ERROR 1066: Unable to open iterator for alias high_rated. 
Backend error : org.apache.pig.backend.executionengine.ExecException: 
ERROR 0: Scalar has more than one row in the output. 
1st : (1,The Nightmare Before Christmas,1993,3.9,4568.0), 
2nd :(2,The Mummy,1932,3.5,4388.0) 
(common cause: "JOIN" then "FOREACH ... GENERATE foo.bar" should be "foo::bar" )

Solution

  • First, let's see how to fix your problem. You don't need to prefix the fields with the relation name; your third line could simply be:

    high_rated = FOREACH high GENERATE title, year, rating, duration;
    

    If you want to qualify the fields with the relation name for some reason, you should use the disambiguation operator (::), as the ERROR message itself suggests. Your line would then look like:

    high_rated = FOREACH high GENERATE movies_data::title, movies_data::year, movies_data::rating, movies_data::duration;
    

    Next, let's look at the exact reason behind the error message. When you access fields with the dot operator (.), Pig assumes the alias is a scalar, i.e. a relation containing exactly one row. Since your relation has more than one row, Pig complains. You can read more about scalars in Pig here: https://issues.apache.org/jira/browse/PIG-1434

    In the JIRA's release-notes section, you will notice at the end that the documented error message matches the one you are getting:

    If a relation contains more than single tuple, a runtime error is generated: 
    "Scalar has more than one row in the output"
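For contrast, here is a minimal sketch (relation names `grouped`, `max_rate`, and `best` are hypothetical, not from your script) of the case where dot notation on an alias is legitimate: when the relation is guaranteed to contain a single row, such as the result of a `GROUP ... ALL` aggregation.

```pig
-- movies_data loaded as in the question
grouped  = GROUP movies_data ALL;                                 -- always exactly one row
max_rate = FOREACH grouped GENERATE MAX(movies_data.rating) AS m; -- still one row

-- max_rate has a single row, so Pig can safely treat max_rate.m as a scalar
best = FILTER movies_data BY rating == max_rate.m;
DUMP best;
```

Here the dot access `max_rate.m` succeeds because `max_rate` can only ever hold one tuple; in your script, `movies_data.title` pointed at a many-row relation, which is exactly the situation the runtime check rejects.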