Tags: perl, file, http, out-of-memory, cgi

Out of memory when serving a very big binary file over HTTP


The code below is the original code of a Perl CGI script we are using. It seems to work even for very big files, but it runs out of memory for really huge ones.

The current code is:

$files_location = $c->{target_dir}.'/'.$ID;
open(DLFILE, "<$files_location") || Error('open', 'file');
@fileholder = <DLFILE>;
close (DLFILE) || Error ('close', 'file');

print "Content-Type:application/x-download\n";
print "Content-Disposition:attachment;filename=$name\n\n";
print @fileholder;
binmode $DLFILE;

If I understand the code correctly, it loads the whole file into memory before "printing" it. I suppose it would be a lot better to read and print it in chunks? But after reading many forums and tutorials, I am still not sure how best to do this with standard Perl libraries...

Last question: why is "binmode" specified at the end?

Thanks a lot for any hint or advice,


Solution

  • I have no idea what binmode $DLFILE is for. $DLFILE has nothing to do with the file handle DLFILE, and it's a bit late to set the binmode of the file now that it has been read to the end. It's probably just a mistake.
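
    For reference, binmode acts on a file handle, not a scalar, and it has to be called before any data is read. Assuming the original bareword-handle style were kept, a minimal sketch of the correct placement would be (the whole-file-in-memory problem would of course remain):

    open(DLFILE, '<', $files_location) or Error('open', 'file');
    binmode DLFILE;               # binary mode is set on the handle, before reading
    @fileholder = <DLFILE>;       # still slurps the entire file into memory
    close(DLFILE) or Error('close', 'file');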

    You can use this instead. It uses modern Perl best practices, and it reads and sends the file in 8K chunks.

    The file name seems to be made from $ID, so I'm not sure that $name is correct, but I can't tell.

    Make sure to keep the braces: the block makes Perl restore the old value of $/ and close the open file handle when it ends. A version that uses an explicit read buffer instead is sketched after the block.

    my $files_location = "$c->{target_dir}/$ID";
    
    {
        print "Content-Type: application/x-download\n";
        print "Content-Disposition: attachment; filename=$name\n\n";
    
        open my $fh, '<:raw', $files_location or Error('open', "file $files_location");
        local $/ = \( 8 * 1024 );    # read fixed 8K records rather than lines

        print while <$fh>;           # print each record as soon as it is read
    }
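
    If you would rather avoid the $/ record-size trick, an explicit buffer does the same job. A minimal sketch under the same assumptions (headers already printed as above, same Error routine), using the core read function with an 8K buffer:

    open my $fh, '<:raw', $files_location or Error('open', "file $files_location");
    my $buffer;
    while ( read( $fh, $buffer, 8 * 1024 ) ) {
        print $buffer;            # send each 8K chunk as soon as it is read
    }
    close $fh or Error('close', "file $files_location");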