I am working with plotting extremely large files, each with N relevant data entries (N varies between files).
In each of these files, comments are automatically generated at the start and end of the file, and I would like to filter these out before recombining the files into one grand data set.
Unfortunately, I am on OS X, where I run into issues when trying to remove the last line of a file. I have read that the most efficient way is to use the head/tail commands to cut off sections of data. Since head -n -1 does not work on OS X, I installed coreutils through Homebrew, where the ghead command works wonderfully. However, the command
tail -n+9 $COUNTER/test.csv | ghead -n -1 $COUNTER/test.csv >> gfinal.csv
does not work. A less than pleasing workaround was I had to separate the commands, use ghead > newfile, then use tail on newfile > gfinal. Unfortunately, this will take while as I have to write a new file with the first ghead.
Is there a workaround to incorporating both GNU Utils with the standard Mac Utils?
Thanks, Keven
The problem with your command is that you specify the file operand again for the ghead command, instead of letting it take its input from stdin via the pipe; this causes ghead to ignore stdin, so the first pipe segment is effectively ignored. Simply omit the file operand for the ghead command:
tail -n+9 "$COUNTER/test.csv" | ghead -n -1 >> gfinal.csv
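To see the difference, here is a small illustrative test; the sample file and the values are made up purely for demonstration:

printf '%s\n' 1 2 3 4 5 > sample.txt          # hypothetical 5-line sample file
printf '%s\n' a b c | ghead -n -1 sample.txt  # file operand wins: prints 1-4, the pipe is ignored
printf '%s\n' a b c | ghead -n -1             # no file operand: reads the pipe, prints a and b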
That said, if you only want to drop the last line, there's no need for GNU head; OS X's own BSD sed will do:
tail -n +9 "$COUNTER/test.csv" | sed '$d' >> gfinal.csv
$ matches the last line, and d deletes it (meaning it won't be output).
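For example (illustrative input):

printf '%s\n' one two three | sed '$d'    # prints one and two; three, the last line, is deleted

An added benefit is that $d behaves the same in BSD and GNU sed, so this variant is portable.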
Finally, as @ghoti points out in a comment, you could do it all using sed:
sed -n '9,$ {$!p;}' file
Option -n tells sed to produce output only when explicitly requested; 9,$ matches everything from line 9 through (,) the end of the file (the last line, $), and {$!p;} prints (p) every line in that range except (!) the last ($).
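To tie this back to your use case, here is a minimal sketch of the full filtering loop; the directory names and loop bounds are hypothetical, inferred from the $COUNTER/test.csv paths in your question:

# Drop the first 8 lines and the last line of each file, then append the rest
# to the combined data set; one sed pass per file, no temporary files.
for COUNTER in 1 2 3; do
    sed -n '9,$ {$!p;}' "$COUNTER/test.csv" >> gfinal.csv
done

Because each file is processed in a single sed invocation, nothing is written twice, which avoids the slowdown of your ghead > newfile workaround.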