I am trying to do a simple operation on a Spark cluster by running the following code in pyspark --master yarn:
op = spark.read.format("csv")
op = op.options(header=True, sep=";")
# This is actually a custom S3 endpoint on an AWS Snowball Edge device
op = op.load("s3a://some-bucket/some/path/file_*.txt")
No errors show, but the operation never completes. If I pass a nonexistent S3 path, it throws an error saying the path does not exist, and reading from HDFS works fine. So it seems to be a communication issue with S3 when reading data.
Here are the details of my stack:
spark: https://dlcdn.apache.org/spark/spark-3.2.1/spark-3.2.1-bin-hadoop3.2.tgz
awscli: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
hadoop: https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
hive: https://dlcdn.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz
hadoop_aws: https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.3.1/hadoop-aws-3.3.1.jar
aws_java_sdk_bundle: https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.874/aws-java-sdk-bundle-1.11.874.jar
My core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://34.223.14.233:9000</value>
</property>
<property>
<name>fs.s3a.endpoint</name>
<value>http://172.16.100.1:8080</value>
</property>
<property>
<name>fs.s3a.access.key</name>
<value>foo</value>
</property>
<property>
<name>fs.s3a.secret.key</name>
<value>bar</value>
</property>
<property>
<name>fs.s3a.connection.ssl.enabled</name>
<value>false</value>
</property>
<property>
<name>fs.s3a.impl</name>
<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
<name>fs.s3a.connection.maximum</name>
<value>100</value>
</property>
</configuration>
Any ideas on troubleshooting this issue? Thank you so much!
I ended up here while investigating a similar problem: s3a against a custom endpoint stalling (i.e. freezing or hanging). However, my setup is different – I set the Hadoop configuration in code instead of in a configuration XML.
The order of the configuration statements in code matters: fs.s3a.endpoint has to be set first, and only after that can fs.s3a.access.key and fs.s3a.secret.key be set. What led me to this solution was logging all Hadoop conf values and noticing that fs.s3a.endpoint was empty.
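For reference, here is a minimal sketch of that ordering in PySpark, run inside the pyspark shell where spark is already defined. I am reusing the endpoint and dummy keys from the question's core-site.xml purely as placeholders; the only point is that the endpoint is set before the credentials:

# Set the endpoint BEFORE the credentials -- this ordering was the fix in my case
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.endpoint", "http://172.16.100.1:8080")
hadoop_conf.set("fs.s3a.access.key", "foo")
hadoop_conf.set("fs.s3a.secret.key", "bar")
hadoop_conf.set("fs.s3a.connection.ssl.enabled", "false")

# Sanity check -- in my broken setup this came back empty
print(hadoop_conf.get("fs.s3a.endpoint"))

df = (spark.read.format("csv")
      .options(header=True, sep=";")
      .load("s3a://some-bucket/some/path/file_*.txt"))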