scala, apache-spark, dataframe, rdd

Save file without brackets


I would like my final result to be without brackets

I have tried this, but it produced errors:

.map(x => x.mkString(",").saveAsTextFile("/home/amel/new")

This is my code:

    val x = sc.textFile("/home/amel/1MB")
      .filter(!_.contains("NULL"))
      .filter(!_.contains("Null"))

    val re = x.map(row => {
      val cols = row.split(",")
      val Cycle = cols(2)
      val Duration = Cycle match {
        case "Licence"    => "3 years"
        case "Master"     => "2 years"
        case "Ingéniorat" => "5 years"
        case "Ingeniorat" => "5 years"
        case "Doctorat"   => "3 years"
        case _            => "NULL"
      }
      (cols(1).split("-")(0) + "," + Cycle + "," + Duration + "," + cols(3), 1)
    }).reduceByKey(_ + _)

    re.collect.foreach(println)

This is the result I get:

    (1999,2 years,Master,IC,57)
    (2013,3 years,Doctorat,SI,44)
    (2013,3 years,Licence,IC,73)
    (2009,5 years,Ingeniorat,IC,58)
    (2011,2 years,Master,SI,61)
    (2003,5 years,Ingeniorat,IC,65)
    (2019,3 years,Doctorat,SI,80)

I would like to remove the brackets at the beginning and end of each line.


Solution

  • A tuple has no mkString method, which is why your map(x => x.mkString(",")) attempt does not compile; the brackets you see come from the tuple's toString. Instead of collecting and printing the tuples directly with re.collect.foreach(println)

    you can go through productIterator, which exposes the tuple's fields, like this:

    val x: Seq[(Int, String, String, String, Int)] =
      Seq((1999, "2 years", "Master", "IC", 57), (2013, "3 years", "Doctorat", "SI", 44))
    x.map(p => p.productIterator.mkString(",")).foreach(println)
    

    Result:

    1999,2 years,Master,IC,57
    2013,3 years,Doctorat,SI,44
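
    Applied to your code, the same idea also fixes the failing saveAsTextFile attempt; a minimal sketch, assuming re is the RDD[(String, Int)] built above and the output directory does not already exist:

    re.map(_.productIterator.mkString(","))  // "1999,2 years,Master,IC,57", ...
      .saveAsTextFile("/home/amel/new")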
    

    Or you can simply use DataFrames to achieve the same result:

    import org.apache.log4j.Level
    import org.apache.spark.sql.SparkSession

    object TupleTest {
      org.apache.log4j.Logger.getLogger("org").setLevel(Level.ERROR)

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName(this.getClass.getName)
          .config("spark.master", "local")
          .getOrCreate()
        spark.sparkContext.setLogLevel("ERROR")
        import spark.implicits._

        // toDF turns the RDD of tuples into a DataFrame with named columns
        val rdd = spark.sparkContext.parallelize(Seq((1, "Spark"), (2, "Databricks"), (3, "Notebook")))
        val df = rdd.toDF("Id", "Name")

        // coalesce(1) produces a single CSV part file; csv() writes plain
        // comma-separated lines with no brackets
        df.coalesce(1).write.mode("overwrite").csv("./src/main/resources/single")
      }
    }
    

    Result saved in a text file:

    1,Spark
    2,Databricks
    3,Notebook
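
    The same DataFrame route works for your aggregated RDD as well; a minimal sketch, assuming a SparkSession named spark as above and re as built in the question (the column names and output path are assumptions):

    import spark.implicits._  // needed for toDF on an RDD of tuples

    // split the comma-joined key back into separate columns before writing
    val out = re.map { case (key, count) =>
      val Array(year, cycle, duration, dept) = key.split(",")
      (year, cycle, duration, dept, count)
    }.toDF("year", "cycle", "duration", "dept", "count")

    out.coalesce(1).write.mode("overwrite").csv("/home/amel/new")

    Splitting the key first matters: the CSV writer would otherwise quote a single column whose value itself contains commas.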