
Table in Pyspark shows headers from CSV File


I have a CSV file with the contents below, which has a header in the first line:

id,name
1234,Rodney
8984,catherine

Now I was able to create a table in Hive that skips the header and reads the data appropriately. Table in Hive:

CREATE EXTERNAL TABLE table_id(
  `tmp_id` string, 
  `tmp_name` string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
WITH SERDEPROPERTIES ( 
  'field.delim'=',', 
  'serialization.format'=',') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://some-testing/test/data/'
TBLPROPERTIES ("skip.header.line.count"="1");

Results in Hive:

select * from table_id;
OK
1234    Rodney
8984    catherine
Time taken: 1.219 seconds, Fetched: 2 row(s)

But when I use the same table in PySpark (running the same query), the header row from the file also shows up in the results, as below:

>>> spark.sql("select * from table_id").show(10,False)
+------+---------+                                                              
|tmp_id|tmp_name |
+------+---------+
|id    |name     |
|1234  |Rodney   |
|8984  |catherine|
+------+---------+

Now, how can I keep the header row from showing up in the results in PySpark? I'm aware that we can read the CSV file and add .option("header", True) to achieve this, but I want to know if there's a way to do something similar in PySpark while querying tables.
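
For reference, this is the reader-based approach I mean (a minimal sketch, pointing at the same S3 location the table is built on):

>>> df = spark.read.option("header", True).csv("s3://some-testing/test/data/")
>>> df.show(10, False)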

Can someone suggest a way? Thanks 🙏 in advance!


Solution

  • You can use the two sets of properties below, SerDe properties and table properties; with them you will be able to access the table from both Hive and Spark, skipping the header in both environments.

    CREATE EXTERNAL TABLE `student_test_score_1`(
      `student` string,
      `age` string)
    ROW FORMAT SERDE
      'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
    WITH SERDEPROPERTIES (
      'delimiter'=',',
      'field.delim'=',',
      'header'='true',
      'skip.header.line.count'='1',
      'path'='hdfs:<path>')
    LOCATION
      'hdfs:<path>'
    TBLPROPERTIES (
      'spark.sql.sources.provider'='CSV');
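
    With the table recreated this way, the same query should now drop the header row on the Spark side too (a quick check; student_test_score_1 is just the example table from the DDL above):

    >>> spark.sql("select * from student_test_score_1").show(10, False)

    The key is 'spark.sql.sources.provider'='CSV': Spark then treats this as a data source table and reads it through its built-in CSV reader, with the SerDe properties ('header'='true', 'path', the delimiter) passed along as reader options, so the header row is dropped. The Hive side is covered by skip.header.line.count, as in the question's original table.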