docs/compute-private-beta.mdx
MicroVM compute is a new runtime for your tasks, designed for fast cold starts via boot snapshots. For most tasks the migration is transparent - the same task code runs unchanged.
- micro and small-1x - there's no longer a cold-start penalty for picking a smaller machine.
- large-1x and large-2x no longer hard-fail. They're still not recommended - cold-start performance trails the smaller presets and we're ironing out reliability issues.

You can't opt in yourself. Once we've enabled the private beta for your org, the us-east-1-next region becomes available on the Regions page in the dashboard. If you'd like access, ping us on Slack.
MicroVM compute is exposed as a region: us-east-1-next. The region is tagged with a microVM badge on the Regions page.
The region option is part of TriggerOptions, accepted by most triggering functions.
```ts
import { yourTask } from "./trigger/your-task";

await yourTask.trigger(payload, { region: "us-east-1-next" });
```
This is the safest way to opt in during the beta - you control exactly which runs land on microVM compute.
To gate it on an environment variable so prod isn't affected, define one constant and reuse it at every call site:
```ts
const defaultRegion = process.env.USE_COMPUTE_BETA === "1" ? "us-east-1-next" : undefined;

await yourTask.trigger(payload, { region: defaultRegion });
```
Set USE_COMPUTE_BETA=1 in the staging environment of the app that calls trigger() (typically Vercel, or wherever you deploy your app).
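If several call sites need the same default, the gate above can be factored into one small helper. This is a minimal sketch, not part of the SDK - resolveRegion is a hypothetical name, and it assumes the USE_COMPUTE_BETA variable shown above:

```typescript
// Hypothetical helper: resolves the region to pass to trigger().
// Returns undefined when the beta flag is off, so the run falls back
// to the project's default region.
function resolveRegion(env: Record<string, string | undefined>): string | undefined {
  return env.USE_COMPUTE_BETA === "1" ? "us-east-1-next" : undefined;
}

// Gate on: routes the run to the microVM region.
console.log(resolveRegion({ USE_COMPUTE_BETA: "1" })); // "us-east-1-next"
// Gate off: no region override.
console.log(resolveRegion({})); // undefined
```

Passing the env object in (rather than reading process.env inside the helper) keeps the gate easy to unit-test.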
Open the Regions page in the dashboard and set us-east-1-next as the project-wide default. This applies to all environments in the project, so only do this on a project that serves staging traffic exclusively.
A few things to be aware of during the beta:
- small-1x is the default and what we've optimized for. Boot snapshots are precreated for small-1x only - other sizes generate the snapshot lazily on first run after a deploy.
- large-1x and large-2x aren't recommended yet. They run, but cold-start times trail the smaller presets and we're still ironing out reliability issues. Stick to other machine types for now and expect rough edges if you do try the large sizes.
- We'll continue shipping fixes, performance, and reliability improvements during the beta. Compute-specific prereleases will be announced on this page as we go, and we'll also reach out on Slack.
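If you want to stay on the optimized preset while trying the beta region, one option is to pin the machine preset alongside the region when triggering. A minimal sketch of the options object - note the machine field here is an assumption about your SDK version's TriggerOptions, so verify it before relying on it:

```typescript
// Sketch of trigger options for a beta run. The `machine` field is an
// assumption - confirm your SDK's TriggerOptions accepts a preset string.
const betaOptions = {
  region: "us-east-1-next",
  machine: "small-1x", // the only preset with precreated boot snapshots
} as const;

// await yourTask.trigger(payload, betaOptions);
console.log(betaOptions.region, betaOptions.machine);
```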
Send anything weird (errors, slow runs, restore failures, anything that surprises you) over Slack with the run ID. The more reproductions we get during the beta, the faster we can harden it for the public beta.