Queues

docs/jobs-queue/queues.mdx
Queues are the final aspect of Payload's Jobs Queue and deal with how to run your jobs. Up to this point, we've covered how to queue up jobs, but we haven't actually run any yet.

<Banner type="default"> A **Queue** is a grouping of jobs that should be executed in order of when they were added. </Banner>

When you go to run jobs, Payload will query for any jobs that are added to the queue and then run them. By default, all queued jobs are added to the default queue.

But imagine you want some jobs to run nightly, and others to run every five minutes.

By specifying the queue name when you queue a new job using payload.jobs.queue(), you can queue certain jobs with queue: 'nightly', and other jobs can be left as the default queue.

Then, you could configure two different runner strategies:

  1. A cron that runs nightly, querying for jobs added to the nightly queue
  2. Another that runs every five minutes or so, querying for jobs added to the default queue
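For example, using the Local API you might queue one job onto the nightly queue and another onto the default queue. A sketch, assuming a configured `payload` instance and tasks with these hypothetical slugs:

```ts
// Goes to the 'nightly' queue, picked up by the nightly runner
await payload.jobs.queue({
  task: 'generateReport', // hypothetical task slug
  input: { day: '2024-01-01' },
  queue: 'nightly',
})

// No `queue` specified, so this job lands in the 'default' queue
await payload.jobs.queue({
  task: 'sendWelcomeEmail', // hypothetical task slug
  input: { userId: '123' },
})
```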

Executing jobs

As mentioned above, you can queue jobs, but they won't run unless a worker picks them up and executes them. This can be done in four ways:

  1. Bin script - a dedicated worker process (recommended for dedicated servers)
  2. autoRun - cron-based execution inside your Next.js process
  3. Endpoint - an HTTP trigger (recommended for serverless platforms)
  4. Local API - programmatic execution from server-side code

Bin Scripts (Recommended for Dedicated Servers)

For dedicated servers, the recommended approach is to use Payload's bin script to run jobs. This creates a completely separate process from your Next.js server, making it easier to deploy, scale, and manage workers independently.

```sh
# Basic usage - run jobs from default queue
pnpm payload jobs:run

# Run with custom queue and limit
pnpm payload jobs:run --queue myQueue --limit 15

# Run on a cron schedule (recommended for production)
pnpm payload jobs:run --cron "*/5 * * * *" --queue myQueue

# Run and also handle schedules (both queuing and running)
pnpm payload jobs:run --cron "*/5 * * * *" --queue myQueue --handle-schedules

# Run all jobs from all queues
pnpm payload jobs:run --all-queues
```

Benefits of using bin scripts:

  • Separate process: Runs completely independently from your Next.js server, preventing any impact on API response times
  • Easy deployment: Can be deployed as a separate service/container, making it simple to scale workers independently
  • Simple monitoring: Each worker process can be monitored, restarted, and scaled independently
  • No Next.js overhead: Workers don't load the entire Next.js application, making them lighter and faster

Deployment example:

```yaml
# docker-compose.yml or similar
services:
  nextjs:
    # Your main Next.js app
    command: pnpm start

  worker-default:
    # Worker for default queue
    command: pnpm payload jobs:run --cron "*/5 * * * *" --queue default

  worker-nightly:
    # Worker for nightly queue
    command: pnpm payload jobs:run --cron "* * * * *" --queue nightly --handle-schedules
```

This makes it easy to run multiple workers for different queues, scale them independently, and deploy them to different servers if needed.

autoRun (Alternative for Dedicated Servers)

The jobs.autoRun property allows you to configure cron jobs that automatically execute jobs from your queue at specified intervals. This runs within your Next.js process.

<Banner type="warning"> **Important Distinction:** `autoRun` does NOT queue new jobs—it only **runs** jobs that are already in the queue.
  • To manually queue jobs: Use payload.jobs.queue() in your code
  • To automatically queue recurring jobs: Use the schedule property on tasks/workflows (see Job Schedules)
  • To execute queued jobs: Use autoRun (this section)
</Banner>

Think of it this way:

  • Queuing = Adding jobs to the database to be run later
  • Running = Actually executing the job handler functions

autoRun only handles the running part.
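To make the distinction concrete, here is a minimal sketch (assuming a configured `payload` instance and a hypothetical `myTask` task):

```ts
// Queuing: adds a job document to the queue - nothing executes yet
await payload.jobs.queue({ task: 'myTask', input: {} })

// Running: picks up queued jobs and executes their handlers
// (this is the part that autoRun automates on a cron schedule)
await payload.jobs.run({ queue: 'default', limit: 10 })
```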

Example:

```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      {
        slug: 'processPayment',
        handler: async ({ input }) => {
          // Payment processing logic
          return { output: { success: true } }
        },
      },
      {
        slug: 'generateReport',
        // This task auto-queues itself daily at midnight
        schedule: [
          {
            cron: '0 0 * * *',
            queue: 'nightly',
          },
        ],
        handler: async () => {
          // Report generation logic
          return { output: { reportId: '123' } }
        },
      },
    ],

    // autoRun processes queued jobs from specified queues
    // autoRun can optionally be a function that receives `payload` as an argument
    autoRun: [
      {
        cron: '*/5 * * * *', // Check every 5 minutes
        queue: 'default', // Process 'default' queue (for manually queued jobs)
        limit: 50,
      },
      {
        cron: '* * * * *', // Check every minute
        queue: 'nightly', // Process 'nightly' queue (for scheduled jobs)
        limit: 100,
      },
    ],

    shouldAutoRun: async (payload) => {
      // Tell Payload if it should run jobs or not. This function is optional and will return true by default.
      // This function will be invoked each time Payload goes to pick up and run jobs.
      // If this function ever returns false, the cron schedule will be stopped.
      return true
    },
  },
})
```

In this example:

  • processPayment jobs are queued manually (e.g., from API endpoints using payload.jobs.queue())
  • generateReport jobs are auto-queued at midnight via the schedule property
  • The first autoRun entry processes manually-queued jobs from the 'default' queue every 5 minutes
  • The second autoRun entry processes scheduled jobs from the 'nightly' queue every minute
<Banner type="warning"> autoRun is intended for use with a dedicated server that is always running, and should not be used on serverless platforms like Vercel. For dedicated servers, consider using bin scripts instead, as they run in a separate process and are easier to deploy and scale. </Banner>

Endpoint

You can execute jobs by making a fetch request to the /api/payload-jobs/run endpoint:

```ts
// Here, we're saying we want to run only 100 jobs for this invocation
// and we want to pull jobs from the `nightly` queue:
await fetch('/api/payload-jobs/run?limit=100&queue=nightly', {
  method: 'GET',
  headers: {
    Authorization: `Bearer ${token}`,
  },
})
```

This endpoint is automatically mounted for you and is helpful in conjunction with serverless platforms like Vercel, where you might want to use Vercel Cron to invoke a serverless function that executes your jobs.

Query Parameters

  • limit: The maximum number of jobs to run in this invocation (default: 10).
  • queue: The name of the queue to run jobs from. If not specified, jobs will be run from the default queue.
  • allQueues: If set to true, all jobs from all queues will be run. This will ignore the queue parameter.
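For example, combining these parameters in one request (assuming the same `token` as the earlier example):

```ts
// Run up to 50 jobs across every queue in a single invocation;
// `allQueues=true` makes the `queue` parameter unnecessary
await fetch('/api/payload-jobs/run?allQueues=true&limit=50', {
  method: 'GET',
  headers: {
    Authorization: `Bearer ${token}`,
  },
})
```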

Vercel Cron Example

If you're deploying on Vercel, you can add a vercel.json file in the root of your project that configures Vercel Cron to invoke the run endpoint on a cron schedule.

Here's an example of what this file will look like:

```json
{
  "crons": [
    {
      "path": "/api/payload-jobs/run",
      "schedule": "*/5 * * * *"
    }
  ]
}
```

The configuration above schedules the endpoint /api/payload-jobs/run to be invoked every 5 minutes.
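If you're unsure how to read the cron expression, its five fields are minute, hour, day of month, month, and day of week. The following is a tiny sketch (not a full cron parser, and not how Vercel implements it) of how the `*/5` step syntax in the minute field behaves:

```ts
// Minimal illustration of the "*/n" step syntax in a single cron field:
// "*/5" in the minute position matches minutes 0, 5, 10, 15, ...
function stepFieldMatches(field: string, value: number): boolean {
  if (field === '*') return true // wildcard matches any value
  const match = /^\*\/(\d+)$/.exec(field)
  return match !== null && value % Number(match[1]) === 0
}

console.log(stepFieldMatches('*/5', 10)) // true  -> fires at minute 10
console.log(stepFieldMatches('*/5', 7)) // false -> skipped at minute 7
```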

The last step is to secure your run endpoint so that only authorized callers can invoke the runner.

To do this, you can set an environment variable on your Vercel project called CRON_SECRET, which should be a random string—ideally 16 characters or longer.

Then, you can modify the access function for running jobs by ensuring that only Vercel can invoke your runner.

```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    access: {
      run: ({ req }: { req: PayloadRequest }): boolean => {
        // Allow logged in users to execute this endpoint (default)
        if (req.user) return true

        const secret = process.env.CRON_SECRET
        if (!secret) return false

        // If there is no logged in user, then check
        // for the Vercel Cron secret to be present as an
        // Authorization header:
        const authHeader = req.headers.get('authorization')
        return authHeader === `Bearer ${secret}`
      },
    },
    // Other job configurations...
  },
})
```

This works because Vercel automatically makes the CRON_SECRET environment variable available to the endpoint as the Authorization header when triggered by the Vercel Cron, ensuring that the jobs can be run securely.

After the project is deployed to Vercel, the Vercel Cron job will automatically trigger the /api/payload-jobs/run endpoint on the specified schedule, running the queued jobs in the background.

Local API (For Programmatic Control)

If you want to process jobs programmatically from your server-side code, you can use the Local API:

Run all jobs:

```ts
// Run all jobs from the `default` queue - default limit is 10
const results = await payload.jobs.run()

// You can customize the queue name and limit by passing them as arguments:
await payload.jobs.run({ queue: 'nightly', limit: 100 })

// Run all jobs from all queues:
await payload.jobs.run({ allQueues: true })

// You can provide a where clause to filter the jobs that should be run:
await payload.jobs.run({
  where: { 'input.message': { equals: 'secret' } },
})
```

Run a single job:

```ts
const results = await payload.jobs.runByID({
  id: myJobID,
})
```

Processing Order

By default, jobs are processed first in, first out (FIFO). This means that the first job added to the queue will be the first one processed. However, you can also configure the order in which jobs are processed.
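To illustrate the two orders with a plain in-memory sketch (not Payload internals): sorting by `createdAt` ascending picks the oldest job first (FIFO), while a `-createdAt` descending sort picks the newest first (LIFO).

```ts
type QueuedJob = { id: string; createdAt: number }

// Pick the next job under a sort string shaped like Payload's `processingOrder`
function nextJob(jobs: QueuedJob[], order: 'createdAt' | '-createdAt'): QueuedJob {
  const sorted = [...jobs].sort((a, b) =>
    order === '-createdAt' ? b.createdAt - a.createdAt : a.createdAt - b.createdAt,
  )
  return sorted[0]
}

const jobs: QueuedJob[] = [
  { id: 'oldest', createdAt: 1 },
  { id: 'middle', createdAt: 2 },
  { id: 'newest', createdAt: 3 },
]

console.log(nextJob(jobs, 'createdAt').id) // 'oldest' - FIFO
console.log(nextJob(jobs, '-createdAt').id) // 'newest' - LIFO
```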

Jobs Configuration

You can configure the order in which jobs are processed in the jobs configuration by passing the processingOrder property. This mimics the Payload sort property that's used for functionality such as payload.find().

```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: '-createdAt', // Process jobs in reverse order of creation = LIFO
  },
})
```

You can also set this on a queue-by-queue basis:

```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: {
      default: 'createdAt', // FIFO
      queues: {
        nightly: '-createdAt', // LIFO
        myQueue: '-createdAt', // LIFO
      },
    },
  },
})
```

If you need even more control over the processing order, you can pass a function that returns the processing order - this function will be called every time a queue starts processing jobs.

```ts
export default buildConfig({
  // Other configurations...
  jobs: {
    tasks: [
      // your tasks here
    ],
    processingOrder: ({ queue }) => {
      if (queue === 'myQueue') {
        return '-createdAt' // LIFO
      }
      return 'createdAt' // FIFO
    },
  },
})
```

Local API

You can configure the order in which jobs are processed in the payload.jobs.queue method by passing the processingOrder property.

```ts
const createdJob = await payload.jobs.queue({
  workflow: 'createPostAndUpdate',
  input: {
    title: 'my title',
  },
  processingOrder: '-createdAt', // Process jobs in reverse order of creation = LIFO
})
```

Common Queue Strategies

Here are typical patterns for organizing your queues:

Priority-Based Queues

Separate jobs by priority to ensure critical tasks run quickly:

```ts
export default buildConfig({
  jobs: {
    tasks: [
      /* ... */
    ],
    autoRun: [
      {
        cron: '* * * * *', // Every minute
        limit: 100,
        queue: 'critical',
      },
      {
        cron: '*/5 * * * *', // Every 5 minutes
        limit: 50,
        queue: 'default',
      },
      {
        cron: '0 2 * * *', // Daily at 2 AM
        limit: 1000,
        queue: 'batch',
      },
    ],
  },
})
```

Then queue jobs to appropriate queues:

```ts
// Critical: Password resets, payment confirmations
await payload.jobs.queue({
  task: 'sendPasswordReset',
  input: { userId: '123' },
  queue: 'critical',
})

// Default: Welcome emails, notifications
await payload.jobs.queue({
  task: 'sendWelcomeEmail',
  input: { userId: '123' },
  queue: 'default',
})

// Batch: Analytics, reports, cleanups
await payload.jobs.queue({
  task: 'generateAnalytics',
  input: { date: new Date() },
  queue: 'batch',
})
```

Environment-Based Execution

Only run jobs on specific servers:

```ts
export default buildConfig({
  jobs: {
    tasks: [
      /* ... */
    ],
    shouldAutoRun: async (payload) => {
      // Only run jobs if this env var is set
      return process.env.ENABLE_JOB_WORKERS === 'true'
    },
    autoRun: [
      {
        cron: '*/5 * * * *',
        limit: 50,
        queue: 'default',
      },
    ],
  },
})
```

Use cases:

  • Dedicate specific servers to job processing
  • Disable job processing during deployments
  • Scale job workers independently from API servers

Feature-Based Queues

Group jobs by feature or domain:

```ts
autoRun: [
  { cron: '*/2 * * * *', queue: 'emails', limit: 100 },
  { cron: '*/10 * * * *', queue: 'images', limit: 50 },
  { cron: '0 * * * *', queue: 'analytics', limit: 1000 },
]
```

This makes it easy to:

  • Monitor specific features
  • Scale individual features independently
  • Pause/resume specific types of work

Choosing an Execution Method

Here's a quick guide to help you choose:

| Method | Best For | Pros | Cons |
| --- | --- | --- | --- |
| Bin script | Dedicated servers (Recommended) | Separate process, easy to deploy/scale | Requires dedicated server |
| autoRun | Dedicated servers (Alternative) | Simple setup, automatic execution | Runs in Next.js process |
| Endpoint | Serverless platforms (Vercel, Netlify) | Works with serverless, easy to trigger | Requires external cron (Vercel Cron, etc.) |
| Local API | Custom scheduling, testing | Full control, good for tests | Must implement your own scheduling |

Recommendations:

  • Production (Dedicated Server): Use Bin script with --cron flag
  • Production (Serverless): Use Endpoint + Vercel Cron
  • Development: Use Bin script or Local API
  • Testing: Use Local API with payload.jobs.runByID()
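For the testing case, a sketch of what an integration test might do (assumes a test `payload` instance and a task with this hypothetical slug):

```ts
// Queue a single job, then execute exactly that job
const job = await payload.jobs.queue({
  task: 'sendWelcomeEmail', // hypothetical task slug
  input: { userId: '123' },
})

await payload.jobs.runByID({ id: job.id })
```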

Troubleshooting

Jobs aren't running (Most Common Issues)

If you configured autoRun but jobs aren't executing, check these common causes in order:

1. You only configured autoRun without queuing any jobs

Symptom: autoRun is configured but no jobs ever execute.

Diagnosis: autoRun runs jobs that are already in the queue. If nothing is adding jobs to the queue, there's nothing to run.

Solution: Queue jobs either:

  • Manually via payload.jobs.queue() in your code (e.g., in hooks, endpoints), OR
  • Automatically via the schedule property on tasks/workflows

Example of the problem:

```ts
// This alone won't do anything - no jobs are being queued!
jobs: {
  tasks: [{ slug: 'myTask', handler: async () => {} }],
  autoRun: [{ cron: '* * * * *', queue: 'default' }],
}
```

Solution:

```ts
// Option A: Queue manually in your code
await payload.jobs.queue({ task: 'myTask' })

// Option B: Add schedule property to auto-queue
jobs: {
  tasks: [
    {
      slug: 'myTask',
      schedule: [{ cron: '0 8 * * *', queue: 'default' }], // Now jobs will be queued
      handler: async () => {},
    },
  ]
}
```

2. Queue name mismatch

Symptom: Jobs appear in the payload-jobs collection but never execute.

Diagnosis: The queue name in your task's schedule doesn't match the queue name in autoRun.

Example of the problem:

```ts
// Task queues to 'reports'
schedule: [{ cron: '0 8 * * *', queue: 'reports' }]

// But autoRun processes 'default'
autoRun: [{ queue: 'default' }] // Won't pick up the job!
```

Solution: Make sure queue names match exactly:

```ts
// Task queues to 'reports'
schedule: [{ cron: '0 8 * * *', queue: 'reports' }]

// autoRun also processes 'reports'
autoRun: [{ cron: '* * * * *', queue: 'reports' }] // Now they match!
```

3. You're on a serverless platform (Vercel, Netlify, etc.)

Symptom: Jobs work locally but don't run in production on serverless platforms.

Diagnosis: autoRun requires a long-running server and won't work on serverless platforms where instances shut down between requests.

Solution: Use the endpoint method with Vercel Cron instead of autoRun.

4. Jobs are scheduled for the future

Symptom: Jobs exist in the payload-jobs collection with completedAt: null but aren't running.

Diagnosis: Jobs may have waitUntil set to a future date - they won't run until that time passes.

Solution: Check the payload-jobs collection:

```ts
const jobs = await payload.find({
  collection: 'payload-jobs',
  where: {
    completedAt: { exists: false },
  },
})

console.log(jobs.docs[0].waitUntil) // Check if this is in the future
```

5. Jobs stopped after hot module reload (development)

Symptom: Jobs work initially but stop running after you make code changes in development.

Diagnosis: Hot Module Reload (HMR) in Next.js disrupts cron schedules. This is expected behavior in development.

Solution: Restart your dev server after making changes to job configurations. This is not an issue in production.

Additional diagnostic checks

Is shouldAutoRun returning true?

```ts
jobs: {
  shouldAutoRun: async (payload) => {
    console.log('shouldAutoRun called') // Add logging
    return true
  },
}
```

Is autoRun configured correctly?

```ts
// invalid cron syntax
autoRun: [{ cron: 'every 5 minutes' }]

// valid cron syntax
autoRun: [{ cron: '*/5 * * * *' }]
```

Are jobs in the correct queue?

```ts
// Queuing to 'critical' queue
await payload.jobs.queue({ task: 'myTask', queue: 'critical' })

// But autoRun only processes 'default' queue
autoRun: [{ queue: 'default' }] // won't pick up the job
```

Check the jobs collection

Enable the jobs collection in admin:

```ts
jobsCollectionOverrides: ({ defaultJobsCollection }) => ({
  ...defaultJobsCollection,
  admin: {
    ...defaultJobsCollection.admin,
    hidden: false,
  },
})
```

Look for jobs with:

  • processing: true but stuck → Worker may have crashed
  • hasError: true → Check the log field for errors
  • completedAt: null → Job hasn't run yet

Jobs running but failing

Check the job logs in the payload-jobs collection:

```ts
const job = await payload.findByID({
  collection: 'payload-jobs',
  id: jobId,
})

console.log(job.log) // View execution log
console.log(job.processingErrors) // View errors
```

Jobs running too slowly

Increase limit

```ts
autoRun: [
  { cron: '*/5 * * * *', limit: 100 }, // Process more jobs per run
]
```

Run more frequently

```ts
autoRun: [
  { cron: '* * * * *', limit: 50 }, // Run every minute instead of every 5
]
```

Add more workers

Scale horizontally by running multiple servers with ENABLE_JOB_WORKERS=true.
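In a docker-compose-style deployment, that could look like the following (a hypothetical sketch; service names and replica counts are illustrative):

```yaml
services:
  app-worker:
    # Same Next.js app, but with workers enabled via the
    # shouldAutoRun check on ENABLE_JOB_WORKERS shown above
    command: pnpm start
    environment:
      - ENABLE_JOB_WORKERS=true
    deploy:
      replicas: 3 # scale this number to add workers

  app-api:
    # API-only instances: shouldAutoRun returns false here
    command: pnpm start
    environment:
      - ENABLE_JOB_WORKERS=false
```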