Tags: regex, awk, sed, sublime-text-3

Remove duplicate lines from file


I have a list of URLs, most of which are duplicates:

> http://example.com/some/a-test-link.html
> http://example.com/some/a-test-link.html
> http://example.com/some/another-link.html
> http://example.com/some/another-link.html
> http://example.com/some/again-link.html
> http://example.com/some/again-link.html

I don't need the same link twice, so I want to remove the duplicates and keep only one copy of each link. How can I do this with regular expressions, sed, or awk (I am not sure which tool would be best)? I am using Ubuntu, and my editor is Sublime Text 3.


Solution

  • Very trivial using awk:

    awk '!seen[$0]++' file
    

    which is shorthand for the more explicit form (note the single quotes; with double quotes the shell itself would expand `$0`):

    awk '!($0 in seen) { seen[$0]; print }' file
    

    So the first time a line appears, `seen[$0]` is still zero: the condition is true, awk prints the line and increments the counter. Every later occurrence of the same line has a non-zero counter, so it is skipped.

    $ cat file
    http://example.com/some/a-test-link.html
    http://example.com/some/a-test-link.html
    http://example.com/some/another-link.html
    http://example.com/some/another-link.html
    http://example.com/some/again-link.html
    http://example.com/some/again-link.html
    $ awk '!seen[$0]++' file
    http://example.com/some/a-test-link.html
    http://example.com/some/another-link.html
    http://example.com/some/again-link.html
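
    Note that awk writes to stdout, so to change the file itself you have to write the result back. A minimal sketch (the temporary file name `file.dedup` is just an illustration):

    awk '!seen[$0]++' file > file.dedup && mv file.dedup file

    With GNU awk 4.1 or later you can also use the bundled `inplace` extension to edit the file directly:

    gawk -i inplace '!seen[$0]++' file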
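
    If you don't care about preserving the original order, `sort -u` gives the same result; and since your duplicates happen to sit on adjacent lines, plain `uniq` would also work:

    sort -u file    # deduplicates, but the output comes out sorted
    uniq file       # removes only adjacent duplicate lines

    If you'd rather stay in the editor, Sublime Text 3 can do this too: select the lines and use Edit → Permute Lines → Unique.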