Tags: hadoop, hive, hiveql, impyla

Impyla return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask when querying HiveServer2


I am using Impyla to query some results from Hive; however, I ran into this problem:

From Impyla:

impala.error.OperationalError: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

and from HiveServer2:

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hduser_20180827031927_fdb148b0-725b-434c-a0f8-98b6843d4348
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

and my source code is:

from impala.dbapi import connect
from pprint import pprint
import sys


# First CLI argument: database name; second: the HiveQL query to run
dbName = sys.argv[1]
query = sys.argv[2]

conn = connect(host='192.168.0.10', port=10000, database=dbName,
               auth_mechanism='NOSASL', use_ssl=True)
cursor = conn.cursor()
cursor.execute(query, configuration={
    'hive.exec.reducers.bytes.per.reducer': '100000',
    'hive.auto.convert.join.noconditionaltask': 'false',
    'mapreduce.job.reduces': '1',
    'hive.auto.convert.join': 'false',
})
returnData = []
for row in cursor:
    returnData.append(row[0])  # keep the first column of each result row
pprint(returnData)

As you can see, I have already added several configuration overrides, but the query still fails.


Solution

  • From that error alone, there is no way to tell what actually happened.

    I'm not sure how to enable debug logging in impyla, so you'll need to go to the YARN UI to find the failed query and read its logs.

    If YARN isn't running, I would expect a more descriptive error such as "unable to submit job", though maybe that error is not being propagated from HiveServer2.
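
  • A couple of client-side things you could try in the meantime (sketches only, not verified against your cluster):

    impyla is a pure-Python client and, as far as I know, logs through the standard logging module, so turning the root logger up to DEBUG should surface the Thrift/HiveServer2 traffic around the failing statement:

    import logging
    from impala.dbapi import connect

    # Assumption: impyla uses Python's stdlib logging, so a DEBUG-level root
    # logger is enough to see its internal messages on stderr.
    logging.basicConfig(level=logging.DEBUG)

    conn = connect(host='192.168.0.10', port=10000,
                   auth_mechanism='NOSASL', use_ssl=True)
    cursor = conn.cursor()
    cursor.execute('SELECT 1')  # trivial statement, just to trace the round trip
    print(cursor.fetchall())

    Also, since HiveServer2 itself warns that Hive-on-MR is deprecated, it may be worth adding 'hive.execution.engine': 'tez' (or 'spark') to the configuration dict you already pass to cursor.execute, assuming one of those engines is actually installed on the cluster.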