Have you ever had trouble optimizing or configuring your queues? Jobs are great, but they sometimes seem to fall apart and you cannot tell what is going on. Then this could be the right article for you. Let’s start at the beginning.
Basic configuration
For the sake of simplicity I am going to use the database queue connection and supervisor in this tutorial.
Let’s view our connection in the queue.php config file:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
    'after_commit' => false,
],
Default queue
The default queue is ‘default’ but feel free to name it however you want.
Retry After
The ‘retry_after’ option specifies the number of seconds a job may be processed before it is released back onto the queue.
Retry After – Tip number 1
Your biggest concern here is to ensure that your jobs always time out before the specified ‘retry_after’ value is reached. To do this, set the --timeout option of the queue:work command to a reasonable value, smaller than the ‘retry_after’ option. The default timeout for the php artisan queue:work command is 60 seconds.
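As a sketch, starting a worker on the database connection from the config above, with a timeout safely below the 90-second ‘retry_after’, could look like this:

```shell
# timeout (60s) must stay below retry_after (90s),
# otherwise a slow job could be processed twice
php artisan queue:work database --queue=default --timeout=60
```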
One more thing about Supervisor: you should always set a stopwaitsecs option in your worker configuration in order to prevent your jobs from being killed immediately when a SIGQUIT or SIGTERM signal is emitted. This allows your worker to finish its current job before shutting down.
E.g. I always set stopwaitsecs to the same number as ‘retry_after’.
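A minimal Supervisor worker configuration illustrating this might look like the sketch below (the program name, project path, and process count are assumptions for the example):

```ini
[program:my-worker]
command=php /home/my-user/my-project/artisan queue:work database --timeout=60
numprocs=4
autostart=true
autorestart=true
; give the worker up to 90 seconds (same as 'retry_after') to finish its
; current job after a SIGQUIT/SIGTERM, instead of killing it immediately
stopwaitsecs=90
```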
Retry After – Tip number 2
Although we have configured a reasonable timeout, there is still one more problem to solve: IO-blocking processes such as sockets or outgoing HTTP connections. E.g. if a mail server does not respond, these processes may not respect your specified timeout. Therefore you should always configure a connection and request timeout.
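With Laravel’s HTTP client, that could look like the following sketch (the URL and the concrete timeout values are assumptions for the example):

```php
use Illuminate\Support\Facades\Http;

// Fail after 5 seconds if the connection cannot be established,
// and after 10 seconds if the full response has not arrived.
// Both limits keep the request well below the job's --timeout.
$response = Http::connectTimeout(5)
    ->timeout(10)
    ->get('https://example.com/api/shares');
```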
After commit
The ‘after_commit’ option becomes relevant when dispatching jobs inside database transactions. You can either set this option to true in order to dispatch jobs only after the parent transactions are committed, which affects all jobs dispatched inside transactions, or configure it locally per job via Job::dispatch()->afterCommit().
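As a sketch of the per-job variant (the Order model and the ProcessOrder job are assumptions for the example):

```php
use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    $order = Order::create(['status' => 'pending']);

    // Without 'after_commit' => true (or ->afterCommit()), a worker could
    // pick this job up before the transaction commits and not find the order.
    ProcessOrder::dispatch($order)->afterCommit();
});
```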
Long running jobs
If you ever need to allow jobs to run for e.g. 1 hour, simply create another connection with the desired configuration.
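Such a connection could look like this sketch in config/queue.php (the connection name and values are assumptions; note that ‘retry_after’ must stay above the worker’s timeout):

```php
'database-long-running' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    // one hour of processing time plus a safety margin
    'retry_after' => 3700,
    'after_commit' => false,
],
```

Workers for this connection would then be started with a matching timeout, e.g. php artisan queue:work database-long-running --timeout=3600.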
Deployments
As you are reading this, you have probably already written some jobs. Everything works great, but then you make that one change in the code, everything starts falling apart on your testing server, and you ask yourself: “Why?”. Let me tell you.
Deployments – Tip number 1
php artisan queue:restart is not enough. If you have used this command in your deployment script to restart the workers, there is still the possibility that your code changes will not be reflected immediately: workers might still process existing jobs using the old code. Instead, explicitly stop all the workers at the beginning of your deployment script, which waits until the current workers have finished the jobs they are working on, and start them again at the end of the script so that new jobs are processed with the new code, the newest database migrations, etc.
E.g.
sudo supervisorctl stop my-worker:*
git pull
composer install
php artisan migrate
npm install
npm run prod
…
sudo supervisorctl start my-worker:*
Deployments – Tip number 2
Avoiding memory leaks can be quite challenging but there is an easy fix for this. Restart your workers more often. You can do this by adding the php artisan queue:restart command to your /etc/crontab.
E.g. 0 * * * * my-user php /home/my-user/my-project/artisan queue:restart
The command will restart the workers every hour. Don’t worry about deployments: once the sudo supervisorctl stop my-worker:* command has been run, the queue:restart command will not start the workers again. They can only be started using the sudo supervisorctl start my-worker:* command.
Workers scalability
One simple technique to scale your workers based on workload is to start some workers every x minutes while the queue is not empty.
E.g.
*/10 * * * * my-user php /home/my-user/my-project/artisan queue:work --stop-when-empty --max-time=480
The --stop-when-empty option stops the worker once the queue is empty. --max-time=480 ensures that the worker stops after 8 minutes of activity. This prevents the cron from spawning too many overlapping workers, which in turn keeps resource consumption balanced.
Working with batches – Tip number 1
Avoid using the $this variable inside batch callbacks. Batch callbacks are serialized and executed at a later time by the queue, so you should not reference $this within them.
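A sketch of the pattern (the job class is an assumption; note that the callbacks only work with the Batch instance they receive):

```php
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

Bus::batch([
    new ImportShareFromForeignBroker($share),
])->then(function (Batch $batch) {
    // All jobs completed successfully.
    // Do NOT reference $this in here; the closure is serialized.
})->catch(function (Batch $batch, Throwable $e) {
    // First batch job failure detected.
})->dispatch();
```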
Working with batches – Tip number 2
Jobs on different queues do not work within batches: all jobs inside a batch must use the same queue. You can, however, specify on which connection/queue the whole batch should be dispatched.
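For example (the connection and queue names here are assumptions):

```php
use Illuminate\Support\Facades\Bus;

// Every job in the batch runs on the 'database' connection's 'imports' queue.
Bus::batch([
    new ImportShareFromForeignBroker($share),
])->onConnection('database')
    ->onQueue('imports')
    ->dispatch();
```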
Working with batches – Tip number 3
When dispatching many jobs inside a batch, it is recommended to dispatch them in chunks in order to avoid a MySQL lock wait timeout. Instead of adding all jobs up front, add them to the batch from within multiple “chunk” jobs.
E.g. there was an app which should have imported shares from foreign brokers.

class ImportSharesFromForeignBroker implements ShouldQueue
{
    // Class declaration reconstructed; the original snippet only preserved
    // the body of handle().
    use Batchable, Queueable;

    public function __construct(public User $user, public string $broker)
    {
    }

    public function handle(): void
    {
        if ($this->batch()->canceled()) {
            return;
        }

        $config = [
            'url' => 'some-url',
            'query' => [
                'broker' => $this->broker,
            ],
        ];

        $shares = Http::get(...$config)->collect()->transform(function (array $share) {
            return new ImportShareFromForeignBroker($share);
        });

        $this->batch()->add($shares);
    }
}
Bus::batch([
    new ImportSharesFromForeignBroker($user, 'broker-1'),
    new ImportSharesFromForeignBroker($user, 'broker-2'),
    new ImportSharesFromForeignBroker($user, 'broker-3'),
    new ImportSharesFromForeignBroker($user, 'broker-4'),
])->dispatch();
Instead of dispatching all the jobs which import the shares directly, and thus triggering the lock timeout, we dispatch the ImportShareFromForeignBroker jobs from within “chunk” jobs (ImportSharesFromForeignBroker). This allows the batch to finish dispatching before any worker attempts to process an ImportShareFromForeignBroker job.
Conclusion
That’s pretty much it, folks. I’ve presented the caveats and solutions for configuring and optimizing queues. Let me know what you think about these solutions. Stay healthy, wear your seat belt, and as always, happy coding!