Tags: perl, download, www-mechanize

How can I download link targets from a web site using Perl?


I just made a script that grabs links from a website and saves them into a text file.

Now I'm working on my regexes so it will grab, from that text file, the links that contain php?dl= in the URL:

E.g.: www.example.com/site/admin/a_files.php?dl=33931

It's pretty much the address you get when you hover over the dl button on the site, from which you can click to download or right-click and save.

I'm just wondering how to achieve this: downloading the content at the given address, which will be a *.txt file, all from within the script of course.
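
For context, the filtering step I have in mind looks roughly like this (a rough sketch; links.txt stands in for the file my script writes):

    use strict;
    use warnings;

    # 'links.txt' is a placeholder name for the intermediate file of links
    open my $fh, '<', 'links.txt' or die "Can't open links.txt: $!";

    while ( my $line = <$fh> ) {
        chomp $line;
        print "$line\n" if $line =~ /php\?dl=/;    # keep only the download links
    }

    close $fh;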


Solution

  • Make WWW::Mechanize your new best friend.

    Here's why:

    • It can identify links on a webpage that match a specific regex (/php\?dl=/ in this case)
    • It can follow those links through the follow_link method (see the sketch after this list)
    • It can get the targets of those links and save them to file

    All this without needing to save your wanted links in an intermediate file! Life's sweet when you have the right tool for the job...
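
    For instance, fetching just the first matching link and saving its target could be as short as this (a minimal sketch; the url_regex pattern and the output filename are assumptions, not part of the question):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new();
    $mech->get( 'http://www.example.com/' );

    # Follow the first link whose URL matches the download pattern,
    # then dump the fetched content to disk
    $mech->follow_link( url_regex => qr/php\?dl=/ );
    $mech->save_content( 'download.txt' );    # placeholder filename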


    Example

    use strict;
    use warnings;
    use WWW::Mechanize;
    
    my $url  = 'http://www.example.com/';
    my $mech = WWW::Mechanize->new();
    
    $mech->get ( $url );
    
    my @linksOfInterest = $mech->find_all_links ( url_regex => qr/php\?dl=/ );    # match on the URL, not the link text
    
    my $fileNumber = 1;
    
    foreach my $link (@linksOfInterest) {
    
        # Save each link target straight to a numbered *.txt file on disk
        $mech->get ( $link->url, ':content_file' => "file".($fileNumber++).".txt" );
        $mech->back();    # return to the listing page before the next fetch
    }
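
    If you'd rather name each file after its dl= number instead of a running counter, you can capture the id from each link's URL first (a sketch; the capture pattern assumes URLs shaped like the example in the question):

    foreach my $link (@linksOfInterest) {
        my ($id) = $link->url =~ /php\?dl=(\d+)/;    # e.g. 33931
        $mech->get ( $link->url, ':content_file' => "dl_$id.txt" );
        $mech->back();
    }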