Tags: kubernetes, azure-aks, laravel-9, laravel-jobs, laravel-scheduler

Laravel 9 scheduled job executed but not queued


I noticed a strange behavior after upgrading Laravel from 8 to 9. I have a scheduler that executes some jobs.

Kernel.php:

$schedule->job(new ImportAzApplications, 'imports')
    ->everyFiveMinutes()
    ->onOneServer()
    ->onFailure(function () {
        Log::error('Scheduled "ImportAzApplications" failed');
    });
$schedule->job(new ImportServicePrincipals, 'imports')
    ->everyFiveMinutes()
    ->onOneServer()
    ->onFailure(function () {
        Log::error('Scheduled "ImportServicePrincipals" failed');
    });

The scheduler is triggered by a cron job in Kubernetes, and the artisan schedule:run command is executed every 5 seconds.
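
For reference, such a CronJob can be created roughly like this (the image name and the one-minute schedule are illustrative, not my actual manifest; a Kubernetes CronJob cannot fire more often than once per minute):

# Illustrative only: a CronJob that runs the Laravel scheduler once per minute
kubectl create cronjob laravel-scheduler \
  --image=registry.example.com/laravel-app:latest \
  --schedule='* * * * *' \
  -- php artisan schedule:run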

Logs:

[2022-04-23T10:55:06+00:00] Running scheduled command: App\Jobs\ImportServicePrincipals
[2022-04-23T10:55:06+00:00] Running scheduled command: App\Jobs\ImportAzApplications

Now I would expect to have two jobs in the imports queue. And I do, but only on my dev machine, not on the staging server.

One of the jobs looks like this:

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class ImportAzApplications implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct()
    {
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        $instance = new AzureApplications();
        $params = $instance->azApplicationImportParams();
        try {
            $instance->import($params);
            Log::debug('ImportAzApplications: Processing AzureApplications::importAzApplications');
        } catch (Exception $exception) {
            Log::error('ImportAzApplications: '.$exception->getMessage(), $params);
        }
    }
}
What I have tried so far:

  • I enabled the debug log level to fetch debug logs
  • I ran the unit tests successfully locally
  • I validated the scheduler workflow on the staging server with artisan schedule:list, artisan schedule:run, artisan schedule:test and artisan queue:work --queue=imports
  • I monitored the database via tinker with DB::table('jobs')->get() and DB::table('failed_jobs')->get(); both remain empty
  • I ran the jobs manually in tinker with (new ImportAzApplications)->handle() successfully
  • I executed a different job (an export task) in the same queue successfully

I'm pretty sure it's something super simple that I just can't see at the moment, but I'm running out of ideas and hope someone has some further suggestions.

Many thanks

Edit:

During the weekend some magic happened and the jobs were executed, according to the new logs. I'm going to investigate the voodoo magic further on Kubernetes.


Solution

  • I have to apologize to the community. The problem was actually caused by the pipeline workflow.

    I run Laravel on Kubernetes. There are multiple pods for different purposes:

    app

    the actual Laravel application

    queue

    the same container image with a slightly different configuration that processes the jobs in the queues using the php artisan queue:work --queue=imports command

    jobs

    a Kubernetes CronJob that runs the php artisan schedule:run command every minute to execute the scheduled jobs defined in App\Console\Kernel
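
    Since all three workloads are built from the same image, a quick sanity check is whether they are actually running the same build. Something along these lines shows it (the resource names are illustrative, not my real ones):

    # Illustrative only: print the container image each workload is configured with
    for d in app queue; do
      printf '%s: ' "$d"
      kubectl get deployment "$d" -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
    done
    kubectl get cronjob jobs -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].image}{"\n"}'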

    Once a new release is pushed to the registry, a kubectl set env command sets the new build version, which forces the app pod to pull the new container image. This is defined in the Kubernetes deployment strategy.
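
    For illustration, that release step looks roughly like this (the deployment and variable names are placeholders, not the real ones):

    # Illustrative only: bumping an env var on the app deployment triggers a new
    # rollout, so the app pods are recreated with the new container image
    kubectl set env deployment/app APP_BUILD_VERSION="${NEW_BUILD}"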

    However, the queue pod was still running with the old image. I forced the queue to get the latest image (no voodoo magic involved) and the jobs were executed.
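
    In practice that means rolling the queue workload as well, for example by applying the same bump to it (placeholder names again; a plain rollout restart only picks up a new image if the tag moves and imagePullPolicy is Always):

    # Illustrative only: make the queue workers roll to the new image too
    kubectl set env deployment/queue APP_BUILD_VERSION="${NEW_BUILD}"
    # or force a fresh rollout and wait for it to finish
    kubectl rollout restart deployment/queue
    kubectl rollout status deployment/queue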

    Thanks to everyone who took their time to investigate and try to reproduce this behavior. That wasn't easy.

    Cheers!