I am trying to read from/write to Azure Blob Storage but constantly get the error "No FileSystem for scheme: wasbs". Here is what my Gradle file looks like:
plugins {
    // Apply the scala plugin to add support for Scala
    id 'scala'
    id 'idea'
    id 'application'
}

repositories {
    mavenLocal()
    jcenter()
    maven {
        url "https://repository.mulesoft.org/nexus/content/repositories/public"
    }
}

dependencies {
    // Spark SQL subsumes Spark Core
    compileOnly 'org.apache.spark:spark-sql_2.12:3.0.3'
    implementation group: 'org.scala-lang', name: 'scala-library', version: '2.12.1'
    implementation group: 'com.typesafe', name: 'config', version: '1.4.1'
    implementation group: 'com.microsoft.azure', name: 'azure-storage', version: '8.6.6'
    implementation group: 'org.apache.hadoop', name: 'hadoop-azure', version: '3.3.1'
}

jar {
    manifest {
        attributes('Main-Class': 'AppRunner')
    }
    from {
        configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) }
    }
    exclude 'META-INF/*.RSA'
    exclude 'META-INF/*.SF'
    exclude 'META-INF/*.DSA'
    duplicatesStrategy(DuplicatesStrategy.EXCLUDE)
}
I am building a fat jar that bundles the required hadoop-azure and azure-storage dependencies.
This is the relevant part of my Scala code:
spark.conf.set("fs.azure.account.key.<blob-name>.blob.core.windows.net", "<blob-key>")
spark.sparkContext.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
val df = spark.read.parquet("wasbs://<container-name>@<blob-name>.blob.core.windows.net/data/")
My Spark setup is currently on a VM in the Azure environment where I am running Spark 3.1.2 in standalone mode.
My spark-submit command looks like this:
./spark-3.1.2-bin-hadoop2.7/bin/spark-submit --master "local[*]" --jars jars/hadoop-azure-3.3.1.jar,jars/azure-storage-8.6.6.jar compiled-job.jar
I should not need to pass the jars as a parameter, but I included them for testing because it seems the Spark job cannot find the wasbs filesystem.
Here is the exception I receive when I run the jar file:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: wasbs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
Any idea what I am doing wrong here?
Just add this configuration, which tells Hadoop which FileSystem implementation handles the wasbs scheme:

spark.conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")

Hadoop resolves a URI scheme to a FileSystem class through a property of the form fs.&lt;scheme&gt;.impl. The fs.azure key you are setting does not follow that form, so it registers nothing for the wasbs scheme, and FileSystem.getFileSystemClass throws "No FileSystem for scheme: wasbs".
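Putting it together, here is a minimal sketch of the read path with the scheme mapping in place. The account, container, and key values are placeholders, and the app name is arbitrary:

```scala
import org.apache.spark.sql.SparkSession

// Build (or reuse) a SparkSession. Master and app name here are illustrative.
val spark = SparkSession.builder()
  .appName("AzureBlobRead")
  .master("local[*]")
  .getOrCreate()

// Map the wasbs:// scheme to a FileSystem implementation so Hadoop can resolve it.
spark.conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")

// Provide the storage account key (placeholders as in the question).
spark.conf.set("fs.azure.account.key.<blob-name>.blob.core.windows.net", "<blob-key>")

// Now the wasbs:// URI resolves and the parquet read succeeds.
val df = spark.read.parquet("wasbs://<container-name>@<blob-name>.blob.core.windows.net/data/")
df.show()
```

Alternatively, you can pass the same mapping at submit time without touching the code, since Spark copies `spark.hadoop.*` properties into the Hadoop configuration: add `--conf spark.hadoop.fs.wasbs.impl=org.apache.hadoop.fs.azure.NativeAzureFileSystem` to your spark-submit command.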