
What's the most robust way to efficiently parse CSV using awk?


Given a CSV, as might be generated by Excel or other tools, with embedded newlines and/or double quotes and/or commas in fields, and with empty fields, like:

$ cat file.csv
"rec1, fld1",,"rec1"",""fld3.1
"",
fld3.2","rec1
fld4"
"rec2, fld1.1

fld1.2","rec2 fld2.1""fld2.2""fld2.3","",rec2 fld4
"""""","""rec3,fld2""",

What's the most robust and efficient way to use awk to identify the separate records and fields:

Record 1:
    $1=<rec1, fld1>
    $2=<>
    $3=<rec1","fld3.1
",
fld3.2>
    $4=<rec1
fld4>
----
Record 2:
    $1=<rec2, fld1.1

fld1.2>
    $2=<rec2 fld2.1"fld2.2"fld2.3>
    $3=<>
    $4=<rec2 fld4>
----
Record 3:
    $1=<"">
    $2=<"rec3,fld2">
    $3=<>
----

so they can be used as records and fields internally by the rest of the awk script.

A valid CSV would be one that conforms to RFC 4180 or can be generated by MS-Excel.

The solution must tolerate the end of record being just LF (\n), as is typical for UNIX files, rather than the CRLF (\r\n) that the standard requires and that Excel or other Windows tools would generate. It must also tolerate unquoted fields mixed with quoted fields. It specifically does not need to tolerate escaping "s with a preceding backslash (i.e. \" instead of "") as some other CSV formats allow - if you have that then adding a gsub(/\\"/,"\"\"") up front would handle it, and trying to handle both escaping mechanisms automatically in one script would make the script unnecessarily fragile and complicated.
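
For example, that up-front conversion could be done as a separate first pass (a sketch, assuming \" only ever represents an escaped quote):

awk '{ gsub(/\\"/,"\"\"") } 1' file.csv > normalized.csv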


Solution

  • Updated to reflect the release of GNU awk 5.3, whose --csv option enables CSV parsing (also now available in Kernighan's One True Awk, so I expect the gawk 5.3 scripts below will also work there, but I don't have a copy of onetrueawk to test with):

    If you have GNU awk 5.3 or later for --csv (or equivalently -k) and you neither need to retain the input quotes nor use a separator other than ,:

    --csv reads multi-line records and splits quoted fields correctly.

    awk --csv -v OFS=',' '
        {
            printf "Record %d:\n", NR
            for (i=1;i<=NF;i++) {
                printf "    $%d=<%s>\n", i, $i
            }
            print "----"
        }
    '
    

    This automatically reads multi-line records that have newlines within quoted fields but you can only use , as a separator and " as the field-surrounding quote with --csv.

    Note that --csv automatically strips quotes from around quoted fields and converts escaped quotes ("") within fields to single quotes ("). That is usually the desired behavior but if it's not what you want there are alternatives below.
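
    For example, a quick demo of that stripping/unescaping:

    $ printf '"a,b","c""d"\n' | awk --csv '{print $1 "|" $2}'
    a,b|c"d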

    If you have GNU awk 4.0 or later for FPAT:

    FPAT splits quoted fields correctly but does not automatically read multi-line records so you need to implement that part.

    awk -v FPAT='[^,]*|("([^"]|"")*")' -v OFS=',' '
        function readRec(      line,rec) {
            # Keep reading lines until we have an even number of quotes
            # as an incomplete record will have an uneven number.
            rec = $0
            while ( (gsub(/"/,"&",rec) % 2) && ((getline line) > 0) ) {
                rec = rec RS line
                NR--
                FNR--
            }
            $0 = rec
        }
    
        {
            readRec()
            printf "Record %d:\n", ++recNr
            for (i=1;i<=NF;i++) {
                # Convert <"foo"> to <foo> and <"foo""bar"> to <foo"bar>
                gsub(/^"|"$/, "", $i)
                gsub(/""/, "\"", $i)
    
                printf "    $%d=<%s>\n", i, $i
            }
            print "----"
        }
    '
    

    You can use any character as a separator and any character as the field-surrounding quote with FPAT since you define those and write some of the code to parse the input for them, but you have to write your own code to read multi-line records.

    See https://www.gnu.org/software/gawk/manual/gawk.html#More-CSV for info on the FPAT setting I use above. That would also be a good choice if you have gawk 5.3+ but --csv isn't a good option for you, e.g. because you need to retain quotes or use a different separator character.
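
    For example, here's a minimal sketch of the same FPAT approach for a ;-separated file (the separator only needs to change in the bracket expression and, if you print multiple fields, in OFS; the fields keep their quotes):

    $ printf 'a;"b;c";d\n' | awk -v FPAT='[^;]*|("([^"]|"")*")' -v OFS=';' '{print $1, $2}'
    a;"b;c"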

    If you have any modern* awk:

    With neither --csv nor FPAT you have to implement both reading multi-line records and splitting quoted fields correctly.

    awk -F',' -v OFS=',' '
        function readRec(      line,rec) {
            # Keep reading lines until we have an even number of quotes
            # as an incomplete record will have an uneven number.
            rec = $0
            while ( (gsub(/"/,"&",rec) % 2) && ((getline line) > 0) ) {
                rec = rec RS line
                NR--
                FNR--
            }
            $0 = rec
        }
    
        function splitRec(      orig,tail,fpat,fldNr) {
            # You must call this function every time you change $0 to
            # repopulate the fields taking quoted fields into account.
        
            orig = tail = $0
            $0 = ""
            fpat = "([^" FS "]*)|(\"([^\"]|\"\")*\")"
            while ( (tail != "") && match(tail,fpat) ) {
                $(++fldNr) = substr(tail,RSTART,RLENGTH)
                tail = substr(tail,RSTART+RLENGTH+1)
            }
        
            # If the original $0 ended with a null field we would exit the
            # loop above before handling it so handle it here.
            if ( orig ~ ("["FS"]$") ) {
                $(++fldNr) = ""
            }
        }
        
        {
            readRec()
            splitRec()
            printf "Record %d:\n", NR
            for (i=1;i<=NF;i++) {
                # Convert <"foo"> to <foo> and <"foo""bar"> to <foo"bar>
                gsub(/^"|"$/, "", $i)
                gsub(/""/, "\"", $i)
        
                printf "    $%d=<%s>\n", i, $i
            }
            print "----"
        }
    '
    

    You can use any character as a separator and any character as the field-surrounding quote with the above "any modern* awk" solution since you define those and write all of the code to parse the input for them, and you also have to write your own code to read multi-line records.

    Decrementing NR and FNR above is done because getline increments them every time it reads a line to add to the record. Changing the value of NR and FNR is undefined behavior per POSIX, though, so if your awk doesn't support that then just create and use your own Fnr and Nr or similarly named variables to keep track of record numbers.
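
    For example, a minimal sketch of readRec() that leaves NR/FNR alone and uses its own recNr counter instead (FPAT version from above):

    awk -v FPAT='[^,]*|("([^"]|"")*")' '
        function readRec(      line,rec) {
            rec = $0
            while ( (gsub(/"/,"&",rec) % 2) && ((getline line) > 0) ) {
                rec = rec RS line       # note: no NR--/FNR-- here
            }
            $0 = rec
        }
        {
            readRec()
            recNr++     # use recNr, not NR, as the record number below
            printf "Record %d has %d fields\n", recNr, NF
        }
    '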


    Some related information/scripts:

    To add quotes to output fields when using --csv

    By default given this input:

    $ cat file
    foo,"field 2 ""contains"" quotes, and comma",bar
    

    you might expect this script which prints the first 2 fields:

    awk --csv -v OFS=',' '{print $1,$2}' file
    

    to produce this output:

    foo,"field 2 ""contains"" quotes, and comma"
    

    but it doesn't, it produces this output instead:

    $ awk --csv -v OFS=',' '{print $1,$2}' file
    foo,field 2 "contains" quotes, and comma
    

    Note that field 2 from the input has now, probably undesirably, become 2 separate fields in the output since its surrounding quotes were stripped so the , that was inside those quotes is now a separator.

    If you want the output fields quoted then you need to manually add the quotes by doing whichever of the following makes sense for your application:

    To add quotes around a field or any other string:

    function enquote(str) {
        gsub(/"/,"\"\"",str)
        return "\"" str "\""
    }
    

    To add quotes around all fields:

    function enquote_all(   i) {
        for ( i=1; i<=NF; i++ ) {
            $i = enquote($i)
        }
    }
    

    To add quotes around only the fields that NEED to be quoted:

    function enquote_needed(   i) {
        for ( i=1; i<=NF; i++ ) {
            if ( $i ~ /[\n,"]/ ) {
                $i = enquote($i)
            }
        }
    }
    
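    Putting those together, for example, here's a sketch using --csv on the file above to produce the quoted output that was desired:

    $ awk --csv -v OFS=',' '
        function enquote(str) {
            gsub(/"/,"\"\"",str)
            return "\"" str "\""
        }
        function enquote_needed(   i) {
            for ( i=1; i<=NF; i++ ) {
                if ( $i ~ /[\n,"]/ ) {
                    $i = enquote($i)
                }
            }
        }
        { enquote_needed(); print $1, $2 }
    ' file
    foo,"field 2 ""contains"" quotes, and comma"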

    To retain the quotes that were present in the input

    You need to use readRec() and/or splitRec() above, depending on whether you're using --csv or FPAT or neither. There is no way to make --csv not remove quotes when populating fields, and you cannot use --csv together with FPAT, FS or FIELDWIDTHS as using --csv tells awk to ignore those other variables that otherwise control field splitting.

    So, assuming your fields retain their original quotes, you can do the following or similar to conditionally reproduce those quotes in the output (and to also add new quotes if the field would otherwise be invalid CSV):

    function dequote_cond(fldNr,    had_quotes) {
        # Remove the surrounding quotes first, then unescape any ""s inside.
        # These must be 2 separate gsub() calls - combining them with ||
        # would short-circuit and skip the unescaping for quoted fields.
        had_quotes = ( gsub(/^"|"$/,"",$fldNr) > 0 )
        gsub(/""/,"\"",$fldNr)
        return had_quotes
    }
    function enquote_cond(fldNr,had_quotes,    got_quotes) {
        if ( had_quotes || ($fldNr ~ /[\n",]/) ) {
            gsub(/"/,"\"\"",$fldNr)
            $fldNr = "\"" $fldNr "\""
            got_quotes = 1
        }
        return got_quotes+0     # probably unused, for symmetry with dequote_cond()
    }
    ...
    q = dequote_cond(i)
    $i = new value
    enquote_cond(i,q)
    
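    For example, here's a sketch that upper-cases $2 in single-line records while preserving its original quoting (using FPAT so the fields keep their quotes, and assuming the dequote_cond() and enquote_cond() definitions above):

    awk -v FPAT='[^,]*|("([^"]|"")*")' -v OFS=',' '
        # ... dequote_cond() and enquote_cond() definitions from above ...
        {
            q = dequote_cond(2)
            $2 = toupper($2)
            enquote_cond(2,q)
            print
        }
    '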

    To print fields with quotes removed if your CSV cannot contain newlines:

    • With GNU awk 5.3 or later for --csv: Exactly the same as the above --csv case that includes newlines in fields.
    • With GNU awk 4.0 or later for FPAT: The same as the above FPAT case that includes newlines in fields except you don't need to define or call readRec().
    • With any modern* awk: The same as the above "any modern* awk" case that includes newlines in fields except you don't need to define or call readRec().

    If you don't want the quotes removed then: if you're using --csv, add quotes back as described above; if you're using either of the other solutions, just don't remove the quotes with the 2 gsub()s.

    To convert newlines within fields to blanks and commas within fields to semi-colons

    If all you actually want to do is convert your CSV to a simpler version with individual lines by, say, replacing newlines with blanks and commas with semi-colons inside quoted fields (and have quoted fields in the output) then:

    With GNU awk 3.0 or later for RT:

    awk -v RS='"([^"]|"")*"' -v ORS= '{
        gsub(/\n/," ",RT)
        gsub(/,/,";",RT)
        print $0 RT
    }'
    
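    For example, using the simpler single-line file from earlier:

    $ awk -v RS='"([^"]|"")*"' -v ORS= '{
        gsub(/\n/," ",RT)
        gsub(/,/,";",RT)
        print $0 RT
    }' file
    foo,"field 2 ""contains"" quotes; and comma",bar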

    Otherwise: use one of the solutions discussed earlier that use readRec() and add the 2 gsub()s above in a loop on the fields.
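
    For example, a sketch combining readRec() and splitRec() from the "any modern* awk" solution above with those 2 gsub()s:

    awk -F',' -v OFS=',' '
        # ... readRec() and splitRec() definitions from above ...
        {
            readRec()
            splitRec()
            for (i=1; i<=NF; i++) {
                if ( $i ~ /^"/ ) {      # only quoted fields can contain newlines or commas
                    gsub(/\n/," ",$i)
                    gsub(/,/,";",$i)
                }
            }
            print
        }
    '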

    If you want a CSV but have a different character than , as your delimiter

    If you have a CSV-like file that uses tabs or ;s or |s or some other character to separate fields then you can do the following using any awk to convert your file to CSV:

    $ cat changeSeps.awk
    BEGIN {
        FS = OFS = "\""
    
        if ( (old == "") || (new == "") ) {
            printf "Error: old=\047%s\047 and/or new=\047%s\047 separator string missing.\n", old, new |"cat>&2"
            printf "Usage: awk -v old=\047;\047 -v new=\047,\047 -f changeSeps.awk infile [> outfile]\n" |"cat>&2"
            err = 1
            exit
        }
    
        sanitized_old = old
        sanitized_new = new
    
        # Ensure all regexp and replacement chars get treated as literal
        gsub(/[^^\\]/,"[&]",sanitized_old)  # regexp: char other than ^ or \ -> [char]
        gsub(/\\/,"\\\\",sanitized_old)     # regexp: \ -> \\
        gsub(/\^/,"\\^",sanitized_old)      # regexp: ^ -> \^
        gsub(/[&]/,"\\\\&",sanitized_new)   # replacement: & -> \\&
    }
    {
        $0 = prev ors $0
        prev = $0
        ors = ORS
    }
    NF%2 {
        for ( i=1; i<=NF; i+=2 ) {
            cnt += gsub(sanitized_old,sanitized_new,$i)
        }
        print
        prev = ors = ""
    }
    END {
        if ( !err ) {
            printf "Converted %d \047%s\047 field separators to \047%s\047s.\n", cnt+0, old, new |"cat>&2"
        }
        exit err
    }
    

    which you'd call like this, for example, to change ;-separated format to ,-separated format:

    awk -v old=';' -v new=',' -f changeSeps.awk file
    
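    For example, with a hypothetical ;-separated file semi.csv:

    $ cat semi.csv
    a;b;"c;d";e
    $ awk -v old=';' -v new=',' -f changeSeps.awk semi.csv
    a,b,"c;d",e
    Converted 3 ';' field separators to ','s.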

    If you have DOS/Windows line endings

    With Windows \r\n line endings, e.g. as in a CSV exported from Excel, processing can be simpler than the above: the record endings are \r\n while any "newlines" within quoted fields are often just line feeds (i.e. \ns), so if you set RS='\r\n' (using GNU awk for multi-char RS) the \ns within fields will not be treated as record endings, and then all you additionally need to parse the CSV is to set FPAT. Sometimes, though, the underlying C primitives will not pass the \rs along to gawk and you'd need to add -v BINMODE=3 to the gawk command line to see them.
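
    For example, a minimal sketch under those assumptions (GNU awk for multi-char RS and FPAT; the fields keep their quotes, so strip them with the 2 gsub()s shown earlier if that's what you want):

    awk -v RS='\r\n' -v FPAT='[^,]*|("([^"]|"")*")' '
        {
            printf "Record %d:\n", NR
            for (i=1; i<=NF; i++) {
                printf "    $%d=<%s>\n", i, $i
            }
            print "----"
        }
    '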

    To print fields from a CSV by the column header names rather than field numbers

    Using any awk:

    $ cat file.csv
    foo,bar,etc
    17,35,21
    

    $ awk '
        BEGIN { FS=OFS="," }
        NR==1 {
            for ( i=1; i<=NF; i++ ) {
                f[$i] = i
            }
        }
        { print $(f["etc"]), $(f["foo"]) }
    ' file.csv
    etc,foo
    21,17
    

    Other notes

    *I say "modern awk" above because there are apparently extremely old (i.e. circa 2000) versions of tawk and mawk1 still around which have bugs in their gsub() implementations such that gsub(/^"|"$/,"",fldStr) would not remove the start/end "s from fldStr. If you're using one of those then get a new awk, preferably gawk, as there could be other issues with them too, but if that's not an option then I expect you can work around that particular bug by changing this:

    gsub(/^"|"$/,"",fldStr)
    

    to this:

    sub(/^"/,"",fldStr)
    sub(/"$/,"",fldStr)
    

    Thanks to the following people for identifying and suggesting solutions to the stated issues with the original version of this answer:

    1. @mosvy for escaped double quotes within fields.
    2. @datatraveller1 for multiple contiguous pairs of escaped quotes in a field and null fields at the end of records.

    Related: also see How do I use awk under cygwin to print fields from an excel spreadsheet? for how to generate CSVs from Excel spreadsheets.