I have a log file that looks somewhat like this after running grep my_function $LOG_FILE:
[0] my_function took 96.78581194020808 ms
[1] my_function took 82.0779490750283 ms
[2] my_function took 187.79653799720109 ms
[1] my_function took 98.69955899193883 ms
[0] my_function took 10.296131949871778 ms[1] my_function took 2.5152561720460653 ms
[1] my_function took 2.210912061855197 ms
[2] my_function took 3.418975044041872 ms
From this file, I would like to extract only the numbers. Normally I would use awk '{print $4}' for this, but the log contains a few lines with two entries, so I sometimes need to pull two separate values out of a single line. How can I do this with bash/GNU tools?
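For the sample above, the desired output would be just the numbers, presumably one value per line:

96.78581194020808
82.0779490750283
187.79653799720109
98.69955899193883
10.296131949871778
2.5152561720460653
2.210912061855197
3.418975044041872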
With your shown samples, please try the following awk solutions. There is no need to use grep to search for the string first and then print the required value(s); awk can do both of these itself.
1st solution: Using GNU awk here.
awk '
{
  # Repeatedly match "my_function took <value>"; the third argument of
  # match() (a GNU awk feature) captures the value into arr[1].
  while(match($0,/my_function took (\S+)/,arr)){
    print arr[1]
    # Chop off everything up to the end of the current match so the next
    # loop iteration can find a second entry on the same line.
    $0=substr($0,RSTART+RLENGTH)
  }
}
' Input_file
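In case a non-GNU awk has to be used, the three-argument form of match() is not available; the following is a rough POSIX-awk sketch of the same idea (not part of the solutions above):

awk '
{
  # POSIX match() only sets RSTART/RLENGTH, so take the matched text
  # with substr() and strip the fixed prefix afterwards.
  while(match($0,/my_function took [^ \t]+/)){
    val=substr($0,RSTART,RLENGTH)
    sub(/^my_function took /,"",val)
    print val
    $0=substr($0,RSTART+RLENGTH)
  }
}
' Input_file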
2nd solution: Setting RS to my_function took (\\S+) in GNU awk, then using RT and split() to get the required output as per the shown samples.
awk -v RS='my_function took (\\S+)' 'RT && split(RT,arr,FS){print arr[3]}' Input_file
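For readability, that one-liner could be expanded roughly as follows (same logic, just spread out):

awk -v RS='my_function took (\\S+)' '
  # RT (GNU awk) holds the text that matched RS,
  # e.g. "my_function took 96.78581194020808".
  RT{
    split(RT,arr,FS)   # arr[1]="my_function", arr[2]="took", arr[3]=the number
    print arr[3]
  }
' Input_file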