Tags: linux, grep, pipeline, uniq, wc

print search term with line count


Hello, bash beginner question here. I want to look through multiple files, find the lines that contain a search term, count the number of unique matching lines, and then print the following to a text file:

  1. the input file name
  2. the search term used
  3. the count of unique lines

So an example output line for the file 'Firstpredictoroutput.txt', using the search term 'Stop_gained', where there are 10 unique matching lines, would be:

Firstpredictoroutput.txt Stop_gained 10 

I can get the unique count for a single file using:

grep 'Search_term' inputfile.txt | sort | uniq | wc -l >> output.txt 

But I don't know enough yet about implementing loops in pipelines using bash. All my input files end with *predictoroutput.txt

Any help is greatly appreciated.

Thanks in advance,

Rubal


Solution

  • You can write a function, called fun here, and call it with two arguments: the filename and the search pattern

    $ fun() { echo "$1 $2 $(grep -c "$2" "$1")"; }
    $ fun input.txt Stop_gained
    input.txt Stop_gained 2
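
    To cover every input file, you can wrap the function in a for loop over the *predictoroutput.txt glob and append each result to output.txt. Here is a minimal sketch, assuming the files are in the current directory and the search term is Stop_gained. Note that grep -c counts all matching lines; if you specifically need the number of unique matching lines, the hypothetical fun2 variant below pipes the matches through sort -u first.

    $ for f in *predictoroutput.txt; do fun "$f" Stop_gained >> output.txt; done

    $ # variant counting unique matching lines instead of all matching lines
    $ fun2() { echo "$1 $2 $(grep "$2" "$1" | sort -u | wc -l)"; }
    $ for f in *predictoroutput.txt; do fun2 "$f" Stop_gained >> output.txt; done

    Quoting "$f", "$1", and "$2" keeps the loop working even if a filename or pattern contains spaces.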