Different behavior of Spark reading a CSV vs. a text file encoded in ISO-8859-1

Problem: I'm having a problem with encoding conversion when reading a text file, a problem that doesn't happen when I read the same file as CSV.

OS: Ubuntu 23.10

Scala: 2.13.12

Spark: 3.5.0


package sct

import org.apache.spark.sql.{DataFrame, DataFrameReader, Dataset, SparkSession}

object EncodingApp {
  def main(args: Array[String]): Unit = {
    val inFile: String = "ISO_8859_1.txt" // iso-8859-1 encoded file with only one line: "José, André"
    val spark: SparkSession = SparkSession.builder.appName("Encoding Application").getOrCreate()
    val reader: DataFrameReader = spark.read.option("encoding", "ISO-8859-1")

    val text: Dataset[String] = reader.textFile(inFile)
    val csv: DataFrame = reader.csv(inFile)

    text.show()
    csv.show()
  }
}

Output of text.show():

+-----------+
|      value|
+-----------+
|Jos�, Andr�|
+-----------+

Output of csv.show():

+----+------+
| _c0|   _c1|
+----+------+
|José| André|
+----+------+

What am I doing wrong?


  • The difference in behavior you're observing comes from how the Spark DataFrameReader applies the encoding option to each data source: the CSV reader honors "encoding", while the text source appears to ignore it and decodes the file as UTF-8, which is why textFile returns mojibake and csv does not.
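
    The mojibake itself is the classic symptom of ISO-8859-1 bytes being decoded as UTF-8; you can reproduce it without Spark at all:

    import java.nio.charset.StandardCharsets

    // 'é' is the single byte 0xE9 in ISO-8859-1, which is not a valid UTF-8 sequence on its own
    val bytes = "José".getBytes(StandardCharsets.ISO_8859_1)

    new String(bytes, StandardCharsets.UTF_8)      // "Jos�" — 0xE9 becomes the replacement character
    new String(bytes, StandardCharsets.ISO_8859_1) // "José" — decoded correctly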

    I would approach it as follows. First, import the required implicits and functions:

    import spark.implicits._
    import org.apache.spark.sql.functions._

    Then read the file as binary, so we can apply the right encoding ourselves:

    val text = spark.sparkContext.binaryFiles(inFile)
      .map { case (_, stream) => stream.toArray }       // raw bytes of the file
      .flatMap(bytes => new String(bytes, "ISO-8859-1") // decode with the right charset
        .split("\n"))                                   // split lines here
      .toDF("value")

    However, while it is now read properly, the data is still in a single column:

    +-----------+
    |      value|
    +-----------+
    |José, André|
    +-----------+

    So, in order to create the DataFrame as you want it, you can do the following:

    val csvData = text
      .withColumn("_tmp", split(col("value"), ","))  // split on the separator
      .select(
        col("_tmp").getItem(0).as("_c0"),
        trim(col("_tmp").getItem(1)).as("_c1"))      // drop the leading space


    +----+-----+
    | _c0|  _c1|
    +----+-----+
    |José|André|
    +----+-----+

    Note: you will probably want to wrap this whole thing in a function and adjust it to taste, but I think it's a good start.
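
    For instance, a minimal sketch of such a helper (the name readLatin1Csv and the two-column assumption are mine, adjust to taste):

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions.{col, split, trim}

    // Reads a delimited file in an arbitrary charset by decoding the raw bytes
    // ourselves; assumes exactly two columns, like the example file.
    def readLatin1Csv(spark: SparkSession, path: String,
                      charset: String = "ISO-8859-1", sep: String = ","): DataFrame = {
      import spark.implicits._
      val lines = spark.sparkContext.binaryFiles(path)
        .map { case (_, stream) => stream.toArray }               // raw bytes
        .flatMap(bytes => new String(bytes, charset).split("\n")) // decode, then split lines
        .toDF("value")
      val parts = split(col("value"), sep)
      lines.select(parts.getItem(0).as("_c0"), trim(parts.getItem(1)).as("_c1"))
    }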