I have a Perl Mojolicious server running, and when I POST to a certain URL, a script spawns a subprocess for a very long job (around a minute).
The process runs for about 30 seconds and then crashes, with no exceptions thrown and nothing written to the logs.
My natural assumption was that this has something to do with a connection timeout, so I increased the server's timeout. That said, I'm fairly confident the problem is not the server process itself but rather the Perl script timing out.
I came across this note in the Mojo::IOLoop::Subprocess docs:
Note that it does not increase the timeout of the connection, so if your forked process is going to take a very long time, you might need to increase that using "inactivity_timeout" in Mojolicious::Plugin::DefaultHelpers.
The DefaultHelpers docs say:
inactivity_timeout
$c = $c->inactivity_timeout(3600);
Use "stream" in Mojo::IOLoop to find the current connection and increase timeout if possible.
Longer version
Mojo::IOLoop->stream($c->tx->connection)->timeout(3600);
but I'm not exactly sure how (or where) to define the inactivity timeout, or what exactly the $c variable in the docs refers to.
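My best guess is that $c is just the controller object (the same thing my handler below receives as $self), so the call would go inside the action before the subprocess is started, roughly like this (untested):

sub long_process {
  my ($self) = @_;

  # guessing: $self is what the docs call $c, so this should raise the
  # inactivity timeout for the current request's connection
  $self->inactivity_timeout(3600);

  # ... start the subprocess here ...
}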
My Code:
sub long_process {
  my ($self) = @_;
  my $fc = Mojo::IOLoop::Subprocess->new;
  $fc->run(
    sub {
      my @args = @_;
      sleep(60);
    },
    [],
  );
}
Here is a minimal example:
use Mojolicious::Lite;

get '/' => sub {
  my $c = shift;
  say Mojo::IOLoop->stream($c->tx->connection)->timeout;
  # increase this connection's inactivity timeout so the 20 second job can finish
  $c->inactivity_timeout(60);
  say Mojo::IOLoop->stream($c->tx->connection)->timeout;
  my $fc = Mojo::IOLoop::Subprocess->new;
  $fc->run(
    sub {
      my @args = @_;
      sleep(20);
      return 'Hello Mojo!';
    },
    sub {
      my ($subprocess, $err, $result) = @_;
      say $result;
      $c->stash(result => $result);
      $c->render(template => 'foo');
    }
  );
};
app->start;
__DATA__
@@ foo.html.ep
%== $result
The second callback passed to run() is executed in the parent process once the subprocess has finished; that is where the result is picked up and the response rendered. See Mojo::IOLoop::Subprocess for details.
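Note also that if the code inside the first (child) callback dies, the error is not thrown as an exception in the parent; it is passed to the second callback as $err, which would explain seeing no exceptions or log output. The roughly 30 seconds before the crash also lines up with the default inactivity timeout of 30 seconds. Here is a sketch that checks $err, with the route name, timeout value and sleep duration made up for illustration:

use Mojolicious::Lite;
use Mojo::IOLoop::Subprocess;

post '/long' => sub {
  my $c = shift;

  # keep this connection open long enough for the job to finish
  $c->inactivity_timeout(120);

  # disable automatic rendering; the response is produced in the callback below
  $c->render_later;

  Mojo::IOLoop::Subprocess->new->run(
    sub {             # child: runs in a forked process
      sleep 60;       # stand-in for the real long-running work
      return 'done';
    },
    sub {             # parent: runs once the child has finished
      my ($subprocess, $err, $result) = @_;
      return $c->reply->exception($err) if $err;    # surface errors from the child
      $c->render(text => $result);
    }
  );
};

app->start;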