Tags: java, python, apache-spark, jupyter, jupyterhub

java.io.IOException: Could not read footer for file FileStatus when trying to read parquet file from Spark cluster from IBM Cloud Object Storage


I have created a Spark cluster with 3 workers on Kubernetes and a JupyterHub deployment attached to it so I can run large queries.

My parquet files are stored in IBM Cloud Object Storage (COS), and when I run simple code to read from COS I get the following error:

Could not read footer: java.io.IOException: Could not read footer for file FileStatus{path=file:/path/myfile.parquet/_common_metadata; isDirectory=false; length=413; replication=0; blocksize=0; modification_time=0; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false} at parquet.hadoop.ParquetFileReader.readAllFootersInParallel

I have added all the required libraries to the /jars directory under SPARK_HOME on the driver.

This is the code I'm using to connect:

# Initial Setup - Once
import os

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

# The SparkContext has to exist before it is wrapped in a SparkSession
sc = SparkContext.getOrCreate(SparkConf())
spark_session = SparkSession(sc)


credentials_staging_parquet = {
  'bucket_dm':'mybucket1',
  'bucket_eid':'bucket2',
  'secret_key':'XXXXXXXX',
  'iam_url':'https://iam.ng.bluemix.net/oidc/token',
  'api_key':'XXXXXXXX',
  'resource_instance_id':'crn:v1:bluemix:public:cloud-object-storage:global:a/XXXXX:XXXXX::',
  'access_key':'XXXXX',
  'url':'https://s3-api.us-geo.objectstorage.softlayer.net'
}

conf = {
    'fs.cos.service.access.key': credentials_staging_parquet.get('access_key'),
    'fs.cos.service.endpoint': credentials_staging_parquet.get('url'),
    'fs.cos.service.secret.key': credentials_staging_parquet.get('secret_key'),
    'fs.cos.service.iam.endpoint': credentials_staging_parquet.get('iam_url'),
    'fs.cos.service.iam.service.id': credentials_staging_parquet.get('resource_instance_id'),
    'fs.stocator.scheme.list': 'cos',
    'fs.cos.impl': 'com.ibm.stocator.fs.ObjectStoreFileSystem',
    'fs.stocator.cos.impl': 'com.ibm.stocator.fs.cos.COSAPIClient',
    'fs.stocator.cos.scheme': 'cos',
    'fs.cos.client.execution.timeout': '18000000',
    'fs.stocator.glob.bracket.support': 'true'
}

hadoop_conf = sc._jsc.hadoopConfiguration()
for key in conf:
    hadoop_conf.set(key, conf.get(key))

parquet_path = 'store/MY_FILE/*'
cos_url = 'cos://{bucket}.service/{parquet_path}'.format(bucket=credentials_staging_parquet.get('bucket_eid'), parquet_path=parquet_path)

df2 = spark_session.read.parquet(cos_url)
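A quick way to confirm the configuration is being picked up is to inspect the DataFrame right after the read; the count forces the executors to scan the files, which is exactly where a connector missing on the workers will fail:

df2.printSchema()            # schema resolved from the parquet footers
print(df2.count())           # forces the workers to actually read the files
df2.show(5, truncate=False)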

Solution

  • Found the cause of my issue: the required libraries were not available on all workers in the cluster.

    There are 2 ways to fix that:

    • Make sure you add the dependencies to the spark-submit command so they are distributed to the whole cluster. In this case it should be done in the kernel.json file for JupyterHub, located at /usr/local/share/jupyter/kernels/pyspark/kernel.json (assuming you created that); see the sketch after this list.

    OR

    • Add the dependencies to the /jars directory under SPARK_HOME for each worker in the cluster and for the driver (if you haven't already).

    I used the second approach: during my Docker image build I added the libraries, so when I start the cluster all containers already have the required libraries.
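For the first approach, the equivalent from the notebook side is to let Spark download the connector and ship it to every executor via spark.jars.packages (in kernel.json this maps to adding --packages or --jars to PYSPARK_SUBMIT_ARGS). A minimal sketch, assuming the Stocator connector from Maven Central; the artifact version here is only illustrative and should be matched to your Spark build:

from pyspark.sql import SparkSession

# spark.jars.packages makes the driver fetch the artifact and distribute it
# to every executor, so nothing needs to be pre-installed on the workers.
spark_session = (
    SparkSession.builder
    .appName('cos-parquet-read')
    .config('spark.jars.packages', 'com.ibm.stocator:stocator:1.1.3')  # version is an assumption
    .getOrCreate()
)

Note that spark.jars.packages only takes effect if it is set before the SparkContext is created, so it cannot be added to an already running session.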