Tags: html, perl, file, slurp

How do I read a file's contents into a Perl scalar?


What I am trying to do is get the contents of a file from another server. Since I'm not in tune with Perl, nor do I know its modules and functions, I've gone about it this way:

 my $fileContents;
 if( $md5Con =~ m/\.php$/g ) {
     my $ftp = Net::FTP->new($DB_ftpserver, Debug => 0) or die "Cannot connect to some.host.name: $@";
     $ftp->login($DB_ftpuser, $DB_ftppass) or die "Cannot login ", $ftp->message;
     $ftp->get("/" . $root . $webpage, "c:/perlscripts/" . md5_hex($md5Con) . "-code.php") or die $ftp->message;
     open FILE, ">>c:/perlscripts/" . md5_hex($md5Con) . "-code.php" or die $!;
     $fileContents = <FILE>;
     close(FILE);
     unlink("c:/perlscripts/" . md5_hex($md5Con) . "-code.php");
     $ftp->quit;
 }

What I thought I'd do is get the file from the server, put it on my local machine, edit the content, upload it to wherever, and then delete the temp file.

But I cannot seem to figure out how to get the contents of the file:

open FILE, ">>c:/perlscripts/" . md5_hex($md5Con) . "-code.php" or die $!;
$fileContents = <FILE>;
close(FILE);

I keep getting the error:

Use of uninitialized value $fileContents

Which I'm guessing means it isn't returning a value.

Any help much appreciated.
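
A likely cause, as an aside: the open above uses ">>", which opens the temp file for appending, so reading from that handle returns nothing and $fileContents never gets a value. A minimal sketch of a read-mode slurp of the same temp file (essentially what the edit below ends up doing):

    my $path = "c:/perlscripts/" . md5_hex($md5Con) . "-code.php";
    open my $fh, '<', $path or die "Cannot open $path: $!";
    my $fileContents = do { local $/; <$fh> };   # undef $/ slurps the whole file
    close $fh;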

>>>>>>>>>> EDIT <<<<<<<<<<

my $fileContents;
if( $md5Con =~ m/\.php$/g ) {
    my $ftp = Net::FTP->new($DB_ftpserver, Debug => 0) or die "Cannot connect to some.host.name: $@";
    $ftp->login($DB_ftpuser, $DB_ftppass) or die "Cannot login ", $ftp->message;
    $ftp->get("/" . $root . $webpage, "c:/perlscripts/" . md5_hex($md5Con) . "-code.php") or die $ftp->message;
    my $file = "c:/perlscripts/" . md5_hex($md5Con) . "-code.php";
    {
        local( $/ ); # undefine the record separator
        open FILE, "<", $file or die "Cannot open:$!\n";
        my $fileContents = <FILE>;
        #print $fileContents;
        my $bodyContents;
        my $headContents;

        if( $fileContents =~ m/<\s*body[^>]*>.*$/gi ) {
            print $0 . $1 . "\n";
            $bodyContents = $dbh->quote($1);    
        }
        if( $fileContents =~ m/^.*<\/head>/gi ) {
            print $0 . $1 . "\n";
            $headContents = $dbh->quote($1);    
        }

        $bodyTable = $dbh->quote($bodyTable);
        $headerTable = $dbh->quote($headerTable);
        $dbh->do($createBodyTable) or die " error: Couldn't create body table: " . DBI->errstr;
        $dbh->do($createHeadTable) or die " error: Couldn't create header table: " . DBI->errstr;
        $dbh->do("INSERT INTO $headerTable ( headData, headDataOutput ) VALUES ( $headContents, $headContents )") or die " error: Couldn't connect to database: " . DBI->errstr;
        $dbh->do("INSERT INTO $bodyTable ( bodyData, bodyDataOutput ) VALUES ( $bodyContents, $bodyContents )") or die " error: Couldn't connect to database: " . DBI->errstr;
        $dbh->do("INSERT INTO page_names (linkFromRoot, linkTrue, page_name, table_name, navigation, location) VALUES ( $linkFromRoot, $linkTrue, $page_name, $table_name, $navigation, $location )") or die " error: Couldn't connect to database: " . DBI->errstr;

        unlink("c:/perlscripts/" . md5_hex($md5Con) . "-code.php");
    }
    $ftp->quit;
}

The above, using print, WILL print the whole file. BUT, for some reason, the two regular expressions are returning false. Any idea why?

    if( $fileContents =~ m/<\s*body[^>]*>.*$/gi ) {
        print $0 . $1 . "\n";
        $bodyContents = $dbh->quote($1);
    }
    if( $fileContents =~ m/^.*<\/head>/gi ) {
        print $0 . $1 . "\n";
        $headContents = $dbh->quote($1);
    }
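
As an aside, a likely reason the captures come back empty: neither pattern has capturing parentheses, so a match never sets $1, and without the /s modifier . will not cross newlines, which makes ^ ... $ patterns fail on slurped multi-line content. A hedged sketch of the same captures with groups and /s added:

    if( $fileContents =~ m/(<\s*body[^>]*>.*)$/si ) {
        $bodyContents = $dbh->quote($1);
    }
    if( $fileContents =~ m/^(.*<\/head>)/si ) {
        $headContents = $dbh->quote($1);
    }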

Solution

  • This is covered in section 5 of the Perl FAQ included with the standard distribution.

    How can I read in an entire file all at once?

    You can use the slurp method from Path::Class::File to do it in one step.

    use Path::Class;
    $all_of_it = file($filename)->slurp; # entire file in scalar
    @all_lines = file($filename)->slurp; # one line per element
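
    Applied to the temp file from the question, that might look like this (a sketch reusing the question's $md5Con and temp-file path):

    use Path::Class;
    my $tmp = "c:/perlscripts/" . md5_hex($md5Con) . "-code.php";
    my $fileContents = file($tmp)->slurp;   # whole file in one scalar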
    

    The customary Perl approach for processing all the lines in a file is to do so one line at a time:

    open (INPUT, $file) || die "can't open $file: $!";
    while (<INPUT>) {
        chomp;
        # do something with $_
    }
    close(INPUT)        || die "can't close $file: $!";
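
    The same loop written with a lexical filehandle and three-argument open, which is the more common modern style (a sketch):

    open my $in, '<', $file or die "can't open $file: $!";
    while ( my $line = <$in> ) {
        chomp $line;
        # do something with $line
    }
    close $in or die "can't close $file: $!";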
    

    This is tremendously more efficient than reading the entire file into memory as an array of lines and then processing it one element at a time, which is often—if not almost always—the wrong approach. Whenever you see someone do this:

    @lines = <INPUT>;
    

    you should think long and hard about why you need everything loaded at once. It's just not a scalable solution. You might also find it more fun to use the standard Tie::File module, or the DB_File module's $DB_RECNO bindings, which allow you to tie an array to a file so that accessing an element of the array actually accesses the corresponding line in the file.
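
    For example, a minimal Tie::File sketch (Tie::File ships with Perl; the tied array is a live view of the file, so changing an element rewrites that line):

    use Tie::File;
    tie my @lines, 'Tie::File', $file or die "can't tie $file: $!";
    print $lines[0], "\n";        # first line of the file
    $lines[0] =~ s/foo/bar/;      # hypothetical edit; writes straight back to the file
    untie @lines;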

    You can read the entire filehandle contents into a scalar.

    {
    local(*INPUT, $/);
    open (INPUT, $file) || die "can't open $file: $!";
    $var = <INPUT>;
    }
    

    That temporarily undefs your record separator, and will automatically close the file at block exit. If the file is already open, just use this:

    $var = do { local $/; <INPUT> };
    

    For ordinary files you can also use the read function.

    read( INPUT, $var, -s INPUT );
    

    The third argument tests the byte size of the data on the INPUT filehandle and reads that many bytes into the buffer $var.
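
    A sketch of the same call with a lexical filehandle and error checking (the binmode is an assumption here, since -s reports a size in bytes):

    open my $in, '<', $file or die "can't open $file: $!";
    binmode $in;                                 # read raw bytes
    my $size = -s $in;                           # byte size of the file
    defined( read( $in, my $var, $size ) ) or die "read failed on $file: $!";
    close $in;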