I have a large file in which many lines are duplicated, and I'm trying to delete every second occurrence of lines matching a pattern. I tried searching for a similar question on SO but had no luck.
I can delete all lines matching a pattern with ":g/pattern/d", but I don't want to lose data.
Sample pattern to delete: "John-------Doe"
Sample data:
Time--------FName---------LName
11:05-------John------------Doe
11:05-------John------------Doe
11:06-------Michael---------Lawrence
11:06-------Michael---------Lawrence
Expected result:
11:05-------John------------Doe
11:06-------Michael---------Lawrence
:%!uniq
if the whole file is data and it is sorted (duplicates are adjacent)
:%!sort -u
if the whole file is data but unsorted
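A quick shell illustration of why sortedness matters here (sample lines taken from the question): uniq only collapses *adjacent* duplicate lines, while sort -u removes duplicates anywhere in the input at the cost of reordering it.

```shell
# adjacent duplicates: uniq removes the repeat
printf '%s\n' '11:05-------John------------Doe' \
              '11:05-------John------------Doe' \
              '11:06-------Michael---------Lawrence' | uniq

# non-adjacent duplicates: uniq keeps both copies...
printf '%s\n' 'John' 'Michael' 'John' | uniq
# ...but sort -u removes them (output order becomes sorted)
printf '%s\n' 'John' 'Michael' 'John' | sort -u
```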
:S,E!uniq
where S is the starting line number of the data and E is the ending line number, assuming the data is sorted
:S,E!sort -u
where S is the starting line number of the data and E is the ending line number, for unsorted data
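If the data is unsorted but you need to keep the original line order (sort -u reorders the lines), one common alternative is filtering through awk, which keeps only the first occurrence of each line:

```shell
# keep the first occurrence of each line, preserving the original order
printf '%s\n' \
  '11:06-------Michael---------Lawrence' \
  '11:05-------John------------Doe' \
  '11:05-------John------------Doe' \
  '11:06-------Michael---------Lawrence' |
awk '!seen[$0]++'
```

In Vim that would be ":%!awk '!seen[$0]++'" for the whole file, or ":S,E!awk '!seen[$0]++'" for a range.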