I need some clarity on Databricks DBFS.
In simple terms, what is it, what is its purpose, and what does it allow me to do?
The Databricks documentation says, to this effect:
"Files in DBFS persist to Azure Blob storage, so you won’t lose data even after you terminate a cluster."
Any insight would be helpful; I haven't been able to find documentation that goes into the details from an architecture and usage perspective.
I have experience with DBFS. It is a great storage layer that holds data you can upload from your local computer using the DBFS CLI! The CLI setup is a bit tricky, but once you get it working, you can easily move whole folders around in this environment (remember to use --overwrite!).
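If you would rather stay inside a notebook than use the CLI, the same kind of folder shuffling can be done with dbutils.fs, which Databricks predefines in notebooks. A minimal sketch (the folder names below are just placeholders):

// Copy a whole folder within DBFS; recurse = true includes nested files.
dbutils.fs.cp("dbfs:/staging/input/", "dbfs:/foldername/", recurse = true)

// List the destination to confirm what landed.
dbutils.fs.ls("dbfs:/foldername/").foreach(f => println(f.path))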
With Scala you can easily pull in the data you store in DBFS with code like this:
val df1 = spark // spark is the SparkSession predefined in Databricks notebooks
  .read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("dbfs:/foldername/test.csv")
  .select("some_column_name") // placeholder column name
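Because inferSchema is on, it is worth checking what Spark actually inferred before relying on the types (the column name above is just a placeholder):

df1.printSchema() // shows the column names and inferred types
df1.show(5)       // peek at the first few rows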
Or read in the whole folder to process all the CSV files available:
val df1 = spark
  .read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("dbfs:/foldername/*.csv") // wildcard reads every CSV in the folder
  .select("some_column_name") // placeholder column name
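And because DBFS persists independently of any cluster (that is the Blob storage behaviour the documentation quote refers to), you can write your results back there and they will still be around after the cluster is terminated. A minimal sketch, with a placeholder output path:

df1
  .write
  .mode("overwrite") // replace the folder if it already exists
  .parquet("dbfs:/foldername/output") // Parquet keeps the schema, unlike plain CSV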
I think it is easy to use and learn; I hope you find this info helpful!