docs/jobs-queue/queues.mdx
Queues are the final aspect of Payload's Jobs Queue and deal with how to run your jobs. Up to this point, all we've covered is how to queue up jobs to run, but so far, we aren't actually running any jobs.

<Banner type="default">
  A **Queue** is a grouping of jobs that should be executed in order of when they were added.
</Banner>

When you go to run jobs, Payload will query for any jobs that are added to the queue and then run them. By default, all queued jobs are added to the `default` queue.
But, imagine if you wanted to have some jobs that run nightly, and other jobs which should run every five minutes.
By specifying the queue name when you queue a new job using `payload.jobs.queue()`, you can queue certain jobs with `queue: 'nightly'`, while other jobs can be left on the `default` queue.
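For example, here is a sketch of queuing one job to the `nightly` queue and another to the default queue. `generateReport` is a hypothetical task slug, and the `payload` parameter is typed loosely so the sketch stands alone; in a real app you'd pass your initialized Payload instance.

```typescript
// Sketch: routing jobs to different queues at queue time.
type JobsAPI = {
  jobs: { queue: (args: Record<string, unknown>) => Promise<unknown> }
}

export async function queueNightlyReport(payload: JobsAPI) {
  // Explicit queue name - picked up by a worker watching 'nightly'
  return payload.jobs.queue({
    task: 'generateReport',
    input: { date: new Date().toISOString() },
    queue: 'nightly',
  })
}

export async function queueDefaultJob(payload: JobsAPI) {
  // No `queue` specified - the job lands in the 'default' queue
  return payload.jobs.queue({
    task: 'generateReport',
    input: {},
  })
}
```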
Then, you could configure two different runner strategies:
1. A cron that runs nightly, querying for jobs added to the `nightly` queue
2. A cron that runs jobs from the `default` queue every ~5 minutes or so

As mentioned above, you can queue jobs, but the jobs won't run unless a worker picks them up and runs them. This can be done in four ways: the bin script, `autoRun`, the REST endpoint, or the Local API.
For dedicated servers, the recommended approach is to use Payload's bin script to run jobs. This creates a completely separate process from your Next.js server, making it easier to deploy, scale, and manage workers independently.
```sh
# Basic usage - run jobs from the default queue
pnpm payload jobs:run

# Run with a custom queue and limit
pnpm payload jobs:run --queue myQueue --limit 15

# Run on a cron schedule (recommended for production)
pnpm payload jobs:run --cron "*/5 * * * *" --queue myQueue

# Run and also handle schedules (both queuing and running)
pnpm payload jobs:run --cron "*/5 * * * *" --queue myQueue --handle-schedules

# Run jobs from all queues
pnpm payload jobs:run --all-queues
```
Benefits of using bin scripts:

- Workers run in a completely separate process from your Next.js server
- Workers can be deployed, scaled, and managed independently
- Different queues can run on different workers, or even on different servers
Deployment example:

```yml
# docker-compose.yml or similar
services:
  nextjs:
    # Your main Next.js app
    command: pnpm start

  worker-default:
    # Worker for the default queue
    command: pnpm payload jobs:run --cron "*/5 * * * *" --queue default

  worker-nightly:
    # Worker for the nightly queue
    command: pnpm payload jobs:run --cron "* * * * *" --queue nightly --handle-schedules
```
This makes it easy to run multiple workers for different queues, scale them independently, and deploy them to different servers if needed.
The `jobs.autoRun` property allows you to configure cron jobs that automatically execute jobs from your queues at specified intervals. These crons run within your Next.js process.

Note that `autoRun` does not add jobs to the queue. Jobs are queued by either:

- `payload.jobs.queue()` in your code, or
- the `schedule` property on tasks/workflows (see Job Schedules)

Think of it this way: queuing and running are separate steps, and `autoRun` (this section) only handles the running part.
Example:
```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      {
        slug: 'processPayment',
        handler: async ({ input }) => {
          // Payment processing logic
          return { output: { success: true } }
        },
      },
      {
        slug: 'generateReport',
        // This task auto-queues itself daily at midnight
        schedule: [
          {
            cron: '0 0 * * *',
            queue: 'nightly',
          },
        ],
        handler: async () => {
          // Report generation logic
          return { output: { reportId: '123' } }
        },
      },
    ],
    // autoRun processes queued jobs from the specified queues.
    // autoRun can optionally be a function that receives `payload` as an argument.
    autoRun: [
      {
        cron: '*/5 * * * *', // Check every 5 minutes
        queue: 'default', // Process the 'default' queue (manually queued jobs)
        limit: 50,
      },
      {
        cron: '* * * * *', // Check every minute
        queue: 'nightly', // Process the 'nightly' queue (scheduled jobs)
        limit: 100,
      },
    ],
    shouldAutoRun: async (payload) => {
      // Tell Payload whether it should run jobs. This function is optional
      // and returns true by default. It is invoked each time Payload goes
      // to pick up and run jobs. If it ever returns false, the cron
      // schedule will be stopped.
      return true
    },
  },
})
```
In this example:

- `processPayment` jobs are queued manually (e.g., from API endpoints using `payload.jobs.queue()`)
- `generateReport` jobs are auto-queued at midnight via the `schedule` property
- The first `autoRun` entry processes manually-queued jobs from the `default` queue every 5 minutes
- The second `autoRun` entry processes scheduled jobs from the `nightly` queue every minute

You can also execute jobs by making a fetch request to the `/api/payload-jobs/run` endpoint:
```ts
// Here, we're saying we want to run only 100 jobs for this invocation
// and we want to pull jobs from the `nightly` queue:
await fetch('/api/payload-jobs/run?limit=100&queue=nightly', {
  method: 'GET',
  headers: {
    Authorization: `Bearer ${token}`,
  },
})
```
This endpoint is automatically mounted for you and is helpful in conjunction with serverless platforms like Vercel, where you might want to use Vercel Cron to invoke a serverless function that executes your jobs.

The endpoint accepts the following query parameters:

- `limit`: The maximum number of jobs to run in this invocation (default: 10).
- `queue`: The name of the queue to run jobs from. If not specified, jobs will be run from the `default` queue.
- `allQueues`: If set to `true`, jobs from all queues will be run, and the `queue` parameter is ignored.

If you're deploying on Vercel, you can add a `vercel.json` file in the root of your project that configures Vercel Cron to invoke the run endpoint on a cron schedule.
Here's an example of what this file will look like:
```json
{
  "crons": [
    {
      "path": "/api/payload-jobs/run",
      "schedule": "*/5 * * * *"
    }
  ]
}
```
The configuration above schedules the endpoint `/api/payload-jobs/run` to be invoked every 5 minutes.
The last step will be to secure your run endpoint so that only the proper users can invoke the runner.
To do this, you can set an environment variable on your Vercel project called `CRON_SECRET`, which should be a random string, ideally 16 characters or longer.

Then, you can modify the `access` function for running jobs to ensure that only Vercel Cron can invoke your runner.
```ts
import type { PayloadRequest } from 'payload'

export default buildConfig({
  // Other configurations...
  jobs: {
    access: {
      run: ({ req }: { req: PayloadRequest }): boolean => {
        // Allow logged-in users to execute this endpoint (default)
        if (req.user) return true

        const secret = process.env.CRON_SECRET
        if (!secret) return false

        // If there is no logged-in user, then check
        // for the Vercel Cron secret to be present as an
        // Authorization header:
        const authHeader = req.headers.get('authorization')
        return authHeader === `Bearer ${secret}`
      },
    },
    // Other job configurations...
  },
})
```
This works because Vercel automatically provides the `CRON_SECRET` environment variable as the `Authorization` header when the endpoint is triggered by Vercel Cron, ensuring that the jobs can be run securely.

After the project is deployed to Vercel, Vercel Cron will automatically trigger the `/api/payload-jobs/run` endpoint on the specified schedule, running the queued jobs in the background.
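The authorization check in the `access.run` function above reduces to a constant string comparison, sketched here as a standalone function:

```typescript
// Mirrors the header check from the access function: only requests
// carrying `Bearer <CRON_SECRET>` in the Authorization header pass.
export function isCronRequestAuthorized(
  authHeader: string | null,
  secret: string | undefined,
): boolean {
  if (!secret) return false // fail closed if CRON_SECRET is unset
  return authHeader === `Bearer ${secret}`
}

isCronRequestAuthorized('Bearer s3cret', 's3cret') // true
isCronRequestAuthorized('Bearer wrong', 's3cret') // false
isCronRequestAuthorized('Bearer s3cret', undefined) // false - no secret configured
```

For defense in depth you could swap the `===` for a timing-safe comparison such as Node's `crypto.timingSafeEqual`.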
If you want to process jobs programmatically from your server-side code, you can use the Local API:
Run all jobs:
```ts
// Run jobs from the `default` queue - default limit is 10
const results = await payload.jobs.run()

// You can customize the queue name and limit by passing them as arguments:
await payload.jobs.run({ queue: 'nightly', limit: 100 })

// Run jobs from all queues:
await payload.jobs.run({ allQueues: true })

// You can provide a where clause to filter the jobs that should be run:
await payload.jobs.run({
  where: { 'input.message': { equals: 'secret' } },
})
```
Run a single job:
```ts
const results = await payload.jobs.runByID({
  id: myJobID,
})
```
By default, jobs are processed first in, first out (FIFO): the first job added to the queue is the first one processed. You can change this order via the `processingOrder` property in the jobs configuration. This mimics the Payload `sort` property used in functionality such as `payload.find()`.
```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: '-createdAt', // Process jobs in reverse order of creation = LIFO
  },
})
```
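The `'-createdAt'` value follows Payload's sort syntax, where a leading `-` means descending. Below is a minimal, self-contained sketch of the resulting ordering semantics - not Payload's actual implementation, and handling only the `createdAt` field:

```typescript
type QueuedJob = { id: string; createdAt: string }

// 'createdAt' sorts oldest-first (FIFO); '-createdAt' sorts newest-first (LIFO).
export function orderJobs(jobs: QueuedJob[], processingOrder: string): QueuedJob[] {
  const desc = processingOrder.startsWith('-')
  return [...jobs].sort((a, b) =>
    desc
      ? b.createdAt.localeCompare(a.createdAt)
      : a.createdAt.localeCompare(b.createdAt),
  )
}

const jobs: QueuedJob[] = [
  { id: 'a', createdAt: '2024-01-01T00:00:00Z' },
  { id: 'b', createdAt: '2024-01-02T00:00:00Z' },
]

export const fifoFirst = orderJobs(jobs, 'createdAt')[0].id // 'a' - oldest runs first
export const lifoFirst = orderJobs(jobs, '-createdAt')[0].id // 'b' - newest runs first
```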
You can also set this on a queue-by-queue basis:
```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: {
      default: 'createdAt', // FIFO
      queues: {
        nightly: '-createdAt', // LIFO
        myQueue: '-createdAt', // LIFO
      },
    },
  },
})
```
If you need even more control over the processing order, you can pass a function that returns the processing order - this function will be called every time a queue starts processing jobs.
```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: ({ queue }) => {
      if (queue === 'myQueue') {
        return '-createdAt' // LIFO
      }
      return 'createdAt' // FIFO
    },
  },
})
```
You can also set the `processingOrder` for an individual job by passing it to the `payload.jobs.queue` method.
```ts
const createdJob = await payload.jobs.queue({
  workflow: 'createPostAndUpdate',
  input: {
    title: 'my title',
  },
  processingOrder: '-createdAt', // Process jobs in reverse order of creation = LIFO
})
```
Here are typical patterns for organizing your queues:
Separate jobs by priority to ensure critical tasks run quickly:
```ts
export default buildConfig({
  jobs: {
    tasks: [
      /* ... */
    ],
    autoRun: [
      {
        cron: '* * * * *', // Every minute
        limit: 100,
        queue: 'critical',
      },
      {
        cron: '*/5 * * * *', // Every 5 minutes
        limit: 50,
        queue: 'default',
      },
      {
        cron: '0 2 * * *', // Daily at 2 AM
        limit: 1000,
        queue: 'batch',
      },
    ],
  },
})
```
Then queue jobs to appropriate queues:
```ts
// Critical: Password resets, payment confirmations
await payload.jobs.queue({
  task: 'sendPasswordReset',
  input: { userId: '123' },
  queue: 'critical',
})

// Default: Welcome emails, notifications
await payload.jobs.queue({
  task: 'sendWelcomeEmail',
  input: { userId: '123' },
  queue: 'default',
})

// Batch: Analytics, reports, cleanups
await payload.jobs.queue({
  task: 'generateAnalytics',
  input: { date: new Date() },
  queue: 'batch',
})
```
Only run jobs on specific servers:
```ts
export default buildConfig({
  jobs: {
    tasks: [
      /* ... */
    ],
    shouldAutoRun: async (payload) => {
      // Only run jobs if this env var is set
      return process.env.ENABLE_JOB_WORKERS === 'true'
    },
    autoRun: [
      {
        cron: '*/5 * * * *',
        limit: 50,
        queue: 'default',
      },
    ],
  },
})
```
Use cases:

- Run workers only on dedicated worker machines, not on web-facing servers
- Disable job processing in preview or staging environments
Group jobs by feature or domain:
```ts
autoRun: [
  { cron: '*/2 * * * *', queue: 'emails', limit: 100 },
  { cron: '*/10 * * * *', queue: 'images', limit: 50 },
  { cron: '0 * * * *', queue: 'analytics', limit: 1000 },
]
```
This makes it easy to tune the run frequency and limit for each domain, and to scale or monitor each queue independently.
Here's a quick guide to help you choose:
| Method | Best For | Pros | Cons |
|---|---|---|---|
| Bin script | Dedicated servers (Recommended) | Separate process, easy to deploy/scale | Requires dedicated server |
| autoRun | Dedicated servers (Alternative) | Simple setup, automatic execution | Runs in Next.js process |
| Endpoint | Serverless platforms (Vercel, Netlify) | Works with serverless, easy to trigger | Requires external cron (Vercel Cron, etc.) |
| Local API | Custom scheduling, testing | Full control, good for tests | Must implement your own scheduling |
Recommendations:

- On a dedicated server, use the bin script with the `--cron` flag.
- On serverless platforms, trigger the endpoint from an external cron such as Vercel Cron.
- For testing or custom scheduling, use the Local API, e.g. `payload.jobs.runByID()`.

If you configured `autoRun` but jobs aren't executing, check these common causes in order:
**Using `autoRun` without queuing any jobs**

**Symptom:** `autoRun` is configured but no jobs ever execute.

**Diagnosis:** `autoRun` runs jobs that are already in the queue. If nothing is adding jobs to the queue, there's nothing to run.

**Solution:** Queue jobs either:

- via `payload.jobs.queue()` in your code (e.g., in hooks or endpoints), OR
- via the `schedule` property on tasks/workflows

Example of the problem:
```ts
// This alone won't do anything - no jobs are being queued!
jobs: {
  tasks: [{ slug: 'myTask', handler: async () => {} }],
  autoRun: [{ cron: '* * * * *', queue: 'default' }],
}
```
Solution:

```ts
// Option A: Queue manually in your code
await payload.jobs.queue({ task: 'myTask' })

// Option B: Add a schedule property to auto-queue
jobs: {
  tasks: [
    {
      slug: 'myTask',
      schedule: [{ cron: '0 8 * * *', queue: 'default' }], // Now jobs will be queued
      handler: async () => {},
    },
  ]
}
```
**Queue name mismatch**

**Symptom:** Jobs appear in the `payload-jobs` collection but never execute.

**Diagnosis:** The queue name in your task's `schedule` doesn't match the queue name in `autoRun`.
Example of the problem:

```ts
// Task queues to 'reports'
schedule: [{ cron: '0 8 * * *', queue: 'reports' }]

// But autoRun processes 'default'
autoRun: [{ queue: 'default' }] // Won't pick up the job!
```
**Solution:** Make sure queue names match exactly:

```ts
// Task queues to 'reports'
schedule: [{ cron: '0 8 * * *', queue: 'reports' }]

// autoRun also processes 'reports'
autoRun: [{ cron: '* * * * *', queue: 'reports' }] // Now they match!
```
**`autoRun` on serverless platforms**

**Symptom:** Jobs work locally but don't run in production on serverless platforms.

**Diagnosis:** `autoRun` requires a long-running server and won't work on serverless platforms where instances shut down between requests.

**Solution:** Use the endpoint method with Vercel Cron instead of `autoRun`.
**Jobs deferred by `waitUntil`**

**Symptom:** Jobs exist in the `payload-jobs` collection with `completedAt: null` but aren't running.

**Diagnosis:** Jobs may have `waitUntil` set to a future date - they won't run until that time passes.

**Solution:** Check the `payload-jobs` collection:
```ts
const jobs = await payload.find({
  collection: 'payload-jobs',
  where: {
    completedAt: { exists: false },
  },
})

console.log(jobs.docs[0].waitUntil) // Check if this is in the future
```
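To flag such deferred jobs programmatically, a tiny helper (illustrative only, not a Payload API) might look like:

```typescript
// A job whose `waitUntil` is in the future is deferred and will not
// be picked up until that time passes.
export function isDeferred(
  job: { waitUntil?: string | null },
  now: Date = new Date(),
): boolean {
  return Boolean(job.waitUntil && new Date(job.waitUntil) > now)
}

isDeferred({ waitUntil: '2999-01-01T00:00:00.000Z' }) // true - far future
isDeferred({ waitUntil: '2020-01-01T00:00:00.000Z' }) // false - already passed
isDeferred({}) // false - no waitUntil set
```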
**Hot reload (HMR) in development**

**Symptom:** Jobs work initially but stop running after you make code changes in development.

**Diagnosis:** Hot Module Replacement (HMR) in Next.js disrupts cron schedules. This is expected behavior in development.

**Solution:** Restart your dev server after making changes to job configurations. This is not an issue in production.
**Is `shouldAutoRun` returning `true`?**

```ts
jobs: {
  shouldAutoRun: async (payload) => {
    console.log('shouldAutoRun called') // Add logging
    return true
  },
}
```
**Is `autoRun` configured correctly?**

```ts
// Invalid cron syntax
autoRun: [{ cron: 'every 5 minutes' }]

// Valid cron syntax
autoRun: [{ cron: '*/5 * * * *' }]
```
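If you want a quick sanity check on the shape of an expression, something like the sketch below works. It is illustrative only and far looser than a real cron parser; Payload's job runner does its own parsing.

```typescript
// Rough shape check for a 5-field cron expression
// (minute, hour, day-of-month, month, day-of-week).
// Does NOT validate field ranges like a real cron parser would.
export function looksLikeCron(expr: string): boolean {
  const fields = expr.trim().split(/\s+/)
  if (fields.length !== 5) return false
  // Numbers, '*', steps ('/'), ranges ('-'), and lists (',')
  return fields.every((f) => /^[\d*,\/-]+$/.test(f))
}

looksLikeCron('*/5 * * * *') // true
looksLikeCron('every 5 minutes') // false - not five cron fields
```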
**Are jobs in the correct queue?**

```ts
// Queuing to the 'critical' queue
await payload.jobs.queue({ task: 'myTask', queue: 'critical' })

// But autoRun only processes the 'default' queue
autoRun: [{ queue: 'default' }] // Won't pick up the job
```
**Check the jobs collection**

Enable the jobs collection in the admin panel:

```ts
jobsCollectionOverrides: ({ defaultJobsCollection }) => ({
  ...defaultJobsCollection,
  admin: {
    ...defaultJobsCollection.admin,
    hidden: false,
  },
})
```
Look for jobs with:

- `processing: true` but stuck → the worker may have crashed
- `hasError: true` → check the `log` field for errors
- `completedAt: null` → the job hasn't run yet

Check the job logs in the `payload-jobs` collection:
```ts
const job = await payload.findByID({
  collection: 'payload-jobs',
  id: jobId,
})

console.log(job.log) // View execution log
console.log(job.processingErrors) // View errors
```
**Increase the limit**

```ts
autoRun: [
  { cron: '*/5 * * * *', limit: 100 }, // Process more jobs per run
]
```
**Run more frequently**

```ts
autoRun: [
  { cron: '* * * * *', limit: 50 }, // Run every minute instead of every 5
]
```
**Add more workers**

Scale horizontally by running multiple servers with `ENABLE_JOB_WORKERS=true`.