Tags: csv, pyspark, apache-spark-sql, jupyter-notebook

pyspark.sql error reading csv file: WARN FileStreamSink: Assume no metadata directory. Error while looking for metadata directory in the path


I am starting out with pyspark.sql and I am trying to read a simple CSV file in a Jupyter notebook. See the code below:

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .getOrCreate()

data_path = '//Users/myuser/pysparktest/'
utilization_path = data_path + '/utilization.csv'
user_df = spark.read.csv(utilization_path)

However, I am getting the following error, which I have not been able to solve:

24/06/05 23:14:32 WARN FileStreamSink: Assume no metadata directory. Error while looking for metadata directory in the path: //Users/myuser/pysparktest/utilization.csv.
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "null"
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3443)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
    at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:53)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:229)
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:211)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:538)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.lang.Thread.run(Thread.java:750)

Can anybody help me figure out what is missing here?

Thanks,

I tried to install the native hadoop library following this tutorial:

https://medium.com/@GalarnykMichael/install-spark-on-mac-pyspark-453f395f240b#.be80dcqat.

I have also tried uninstalling and reinstalling Spark, PySpark, and Jupyter multiple times.

Expectation:

Being able to read a simple CSV file.


Solution

  • First of all, your path string starts with two slashes. On top of that, you concatenate two strings where the first ends with a slash and the second begins with one, which produces //Users/myuser/pysparktest//utilization.csv (the traceback shows it collapsed to //Users/myuser/pysparktest/utilization.csv). The leading double slash is what trips Hadoop up: a path beginning with // is parsed as a URI with an authority component, so no filesystem scheme can be resolved, hence the "No FileSystem for scheme \"null\"" error. The path should simply be: /Users/myuser/pysparktest/utilization.csv
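A minimal sketch of the fix (using the same example directory from the question): build the path with os.path.join instead of manual string concatenation, so duplicated slashes never appear regardless of whether the base directory carries a trailing slash.

```python
import os

# Single leading slash; no trailing slash needed
data_path = '/Users/myuser/pysparktest'

# os.path.join inserts exactly one separator between components
utilization_path = os.path.join(data_path, 'utilization.csv')
print(utilization_path)  # /Users/myuser/pysparktest/utilization.csv

# With the clean path, the original read should work as intended:
# user_df = spark.read.csv(utilization_path)
```

Alternatively, pathlib's `Path` objects offer the same guarantee (`Path(data_path) / 'utilization.csv'`), and `str()` of the result can be passed to `spark.read.csv`.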