apps/docs/content/troubleshooting/edge-function-shutdown-reasons-explained.mdx
Learn about the different reasons why an Edge Function (worker) might shut down and how to handle each scenario effectively.
When an Edge Function stops executing, the runtime emits a ShutdownEvent with a specific reason that explains why the worker ended. Understanding these shutdown reasons helps you diagnose issues, optimize your functions, and build more resilient serverless applications.
These events are surfaced through logs and observability tools, allowing you to monitor function behavior and take appropriate action such as implementing retries, adjusting resource limits, or optimizing your code.
## EventLoopCompleted

**What it means:** The function's event loop finished naturally. There are no more pending tasks, timers, or microtasks; the worker completed all scheduled work successfully.

**When it happens:** This is the normal, graceful shutdown scenario. All synchronous code has executed, and all awaited promises have resolved.

**What to do:** Nothing special. This indicates successful completion, and no retry is needed.
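As a minimal sketch of this pattern, the handler below awaits every async step before returning, so nothing is left pending when the event loop drains (`fetchGreeting` and its body are hypothetical stand-ins):

```ts
// Every async step is awaited before the response is returned,
// so no pending work remains when the event loop drains.
async function fetchGreeting(name: string): Promise<string> {
  // Stand-in for a real async lookup.
  return `Hello, ${name}!`
}

Deno.serve(async (req) => {
  const { name } = await req.json()
  const greeting = await fetchGreeting(name)
  return new Response(greeting)
})
```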
## WallClockTime

**What it means:** The worker exceeded the configured wall-clock timeout. This measures the total elapsed real time from when the function started, including time spent waiting for I/O, external API calls, and sleeps.

**When it happens:** Your function is taking too long to complete, regardless of how much actual computing it does. The wall-clock limit is currently set at 400 seconds.

**What to do:**

- Reduce time spent waiting on I/O and external API calls (see the sketch below)
- Run independent async operations concurrently rather than sequentially
- Break long-running work into smaller functions

Related: See Edge Function 'wall clock time limit reached' for more details.
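A sketch of two wall-clock-friendly patterns, assuming the calls are independent (the URLs and the 5-second budget are placeholders): bound each external call with a timeout, and issue the calls concurrently instead of back to back.

```ts
// Each fetch is capped at 5 seconds, and both run concurrently,
// so the combined wait is bounded by the slowest call, not the sum.
const [usersRes, ordersRes] = await Promise.all([
  fetch('https://api.example.com/users', { signal: AbortSignal.timeout(5_000) }),
  fetch('https://api.example.com/orders', { signal: AbortSignal.timeout(5_000) }),
])
```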
## CPUTime

**What it means:** The worker consumed more CPU time than allowed. CPU time measures the actual processing cycles used by your code, excluding time spent waiting for I/O or sleeping. It is currently limited to 200 milliseconds.

**When it happens:** Your function is performing too much computation, such as complex calculations, data processing, encryption, or other CPU-intensive operations.

**What to do:**

- Optimize hot algorithms and data structures
- Cache results so repeated requests skip recomputation (see the sketch below)
- Move heavy compute off the request path, for example to background workers
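As one illustration of the caching idea, a module-level map can memoize an expensive pure computation so CPU cycles are spent once per distinct input while the worker stays warm (`expensiveTransform` and its body are placeholders):

```ts
// Module-level cache: survives across requests in a warm worker.
const cache = new Map<string, string>()

function expensiveTransform(input: string): string {
  const hit = cache.get(input)
  if (hit !== undefined) return hit // cache hit costs almost no CPU
  const result = input.split('').reverse().join('') // stand-in for real work
  cache.set(input, result)
  return result
}
```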
## Memory

**What it means:** The worker's memory usage exceeded the allowed limit. The ShutdownEvent includes detailed memory data showing total memory, heap usage, and external allocations.

**When it happens:** Your function is consuming too much RAM. This commonly happens when buffering large files, loading entire datasets into memory, or creating many objects without cleanup.

**What to do:**

- Stream data instead of buffering entire files or responses (see the sketch below)
- Process data in smaller chunks
- Check the `memory_used` fields in logs to identify whether heap or external memory is the issue
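A sketch of chunked processing (the URL is a placeholder): iterating the response body handles one chunk at a time, so the full payload never has to sit in memory the way `await res.text()` would require.

```ts
const res = await fetch('https://example.com/large-file')
let bytes = 0
// res.body is a ReadableStream; each chunk can be garbage-collected
// as soon as it has been processed.
for await (const chunk of res.body!) {
  bytes += chunk.length
}
console.log(`processed ${bytes} bytes without buffering`)
```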
## EarlyDrop

**What it means:** The runtime detected that the function has completed all its work and can be shut down early, before reaching any resource limits. This is actually the most common shutdown reason and typically indicates efficient function execution.
**When it happens:** Your function has finished processing the request, sent the response, and has no remaining async work (pending promises, timers, or callbacks). The runtime recognizes that the worker can be safely terminated without waiting for timeouts or other limits.

**Why this is good:** EarlyDrop means your function is running efficiently. It completed quickly, didn't exhaust resources, and the runtime could reclaim the worker for other requests. Most well-designed functions should end with EarlyDrop.

**What to do:** Nothing in most cases. If you saw EarlyDrop but expected more work to happen, check for:

- Promises that were started but never awaited
- Timers or callbacks scheduled after the response was sent
- Background work that isn't tied to the request lifecycle (see the sketch below)
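A sketch of the most common culprit, the unawaited promise (`logMetrics` is a hypothetical helper): without the `await`, the response is returned while the call is still pending, and the worker may be dropped before it finishes.

```ts
async function logMetrics(req: Request): Promise<void> {
  // Stand-in for a real metrics or audit call.
  console.log(`handled ${req.method} ${req.url}`)
}

Deno.serve(async (req) => {
  await logMetrics(req) // drop this await and the work may be cut short
  return new Response('ok')
})
```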
## TerminationRequested

**What it means:** An external request explicitly asked the runtime to terminate the worker. This could come from orchestration systems, manual intervention, platform updates, deployments, or a user-initiated cancellation.

**When it happens:** The platform needs to stop your function immediately, often during deployments or infrastructure maintenance.

**What to do:**

- Check platform scaling policies and review deployment logs for the source of the termination
- Make handlers idempotent so interrupted work can be retried safely
- Checkpoint progress so a retried invocation can resume where the last one stopped (see the best practices below)
## Diagnostic data in shutdown events

While not shutdown reasons themselves, the details attached to these events provide important context. Each ShutdownEvent includes valuable diagnostic information:
- `reason`: One of the shutdown reasons described above
- `cpu_time_used`: Amount of CPU time consumed (in milliseconds)
- `memory_used`: Memory snapshot at shutdown with a breakdown of total, heap, and external memory
- `execution_id`: Unique identifier for tracking a specific execution across logs and retries

## Best practices

### Make handlers idempotent

Make your functions safe to run multiple times with the same input. Use execution IDs from metadata to detect duplicate runs and avoid repeating side effects.
```ts
// Store execution_id to detect retries
const executionId = Deno.env.get('EXECUTION_ID')

// checkIfProcessed is your own lookup against durable storage
const alreadyProcessed = await checkIfProcessed(executionId)
if (alreadyProcessed) {
  return new Response('Already processed', { status: 200 })
}
```
### Checkpoint long-running work

Save progress frequently to durable storage so an interrupted invocation can resume instead of starting over.
```ts
// Save progress incrementally; processBatch and saveProgress are
// your own helpers backed by durable storage
for (const batch of dataBatches) {
  await processBatch(batch)
  await saveProgress(batch.id) // record the last completed batch
}
```
### Stream instead of buffering

Avoid loading entire files or responses into memory. Stream data to reduce your memory footprint.
```ts
// Stream responses instead of buffering
return new Response(readableStream, {
  headers: { 'Content-Type': 'application/json' },
})
```
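One way to obtain such a stream, as a sketch (the URL is a placeholder), is to pass an upstream response body straight through without ever buffering it:

```ts
// Proxy the upstream body as-is; no intermediate buffer is allocated.
const upstream = await fetch('https://example.com/large.json')
return new Response(upstream.body, {
  headers: { 'Content-Type': 'application/json' },
})
```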
### Monitor shutdown reasons

Track shutdown reasons in your observability system. Set up alerts for:

- Memory shutdowns: investigate memory usage patterns
- CPUTime shutdowns: optimize computational work
- WallClockTime shutdowns: reduce latency or break up work
- EarlyDrop or TerminationRequested: check platform scaling and deployment patterns

Access your function logs at Functions Logs.
### Implement best-effort cleanup

While you can't always rely on cleanup code running, implement it anyway for the cases where graceful shutdown is possible.
```ts
// Cleanup handler (may not always run)
addEventListener('unload', () => {
  // Close connections, flush buffers, etc.
  cleanup()
})
```
## Example shutdown logs

Normal completion:
```json
{
  "event": {
    "Shutdown": {
      "reason": "EventLoopCompleted",
      "cpu_time_used": 12,
      "memory_used": {
        "total": 1048576,
        "heap": 512000,
        "external": 1000
      }
    }
  },
  "metadata": {
    "execution_id": "4b6a4e2e-7c4d-4f8b-9e1a-2d3c4e5f6a7b"
  }
}
```
Wall-clock timeout:
```json
{
  "event": {
    "Shutdown": {
      "reason": "WallClockTime",
      "cpu_time_used": 50,
      "memory_used": {
        "total": 2097152,
        "heap": 1024000,
        "external": 5000
      }
    }
  },
  "metadata": {
    "execution_id": "5c7b5f3f-8d5e-4a9c-8f2b-3e4d5f6a7b8c"
  }
}
```
## Quick reference

Use this quick reference when investigating shutdown issues:

| Shutdown pattern | Primary action |
| --- | --- |
| Many Memory shutdowns | Switch to streaming; process data in chunks; investigate heap vs external allocations |
| Many CPUTime shutdowns | Optimize algorithms; cache results; move heavy compute to background workers |
| Many WallClockTime shutdowns | Reduce I/O waits; use async operations; break into smaller functions |
| Frequent EarlyDrop or TerminationRequested | Check platform scaling policies; review deployment logs; implement better checkpointing |