docs/compute-private-beta.mdx
MicroVM compute is a new runtime for your tasks, designed for fast cold starts via boot snapshots. For most tasks the migration is transparent - the same task code runs unchanged.
You can't opt in yourself. Once we've enabled the private beta for your org, the us-east-1-next region becomes available on the Regions page in the dashboard. If you'd like access, ping us on Slack.
MicroVM compute is exposed as a region: us-east-1-next. The region is tagged with a microVM badge on the Regions page.
The region option is part of TriggerOptions, accepted by most triggering functions.
```ts
import { yourTask } from "./trigger/your-task";

await yourTask.trigger(payload, { region: "us-east-1-next" });
```
This is the safest way to opt in during the beta - you control exactly which runs land on microVM compute.
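If you want to ramp up gradually rather than opt in on every call, one option is a small helper that routes a fraction of runs to the beta region. This is a sketch, not part of the SDK; `pickRegion` and `betaFraction` are illustrative names.

```ts
// Hypothetical helper: route a fraction of runs to the beta region.
const BETA_REGION = "us-east-1-next";

function pickRegion(
  betaFraction: number,
  roll: number = Math.random()
): string | undefined {
  // Returning undefined leaves the run on your project's default region.
  return roll < betaFraction ? BETA_REGION : undefined;
}

// e.g. send roughly 10% of runs to microVM compute:
// await yourTask.trigger(payload, { region: pickRegion(0.1) });
```

Because an undefined region falls through to the default, dialing `betaFraction` down to 0 instantly takes you off the beta without a deploy.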
To gate it on an environment variable so prod isn't affected, define one constant and reuse it at every call site:
```ts
const defaultRegion = process.env.USE_COMPUTE_BETA === "1" ? "us-east-1-next" : undefined;

await yourTask.trigger(payload, { region: defaultRegion });
```
Set USE_COMPUTE_BETA=1 in the staging environment of the app that calls trigger() (typically Vercel, or wherever you deploy your app).
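If you want a belt-and-braces guard so a stray flag can never route production runs, you could wrap the check in a helper. This is a sketch; the helper name and the `NODE_ENV` convention are assumptions about your setup.

```ts
// Hypothetical guard: only honor the beta flag outside production.
function betaRegion(env: Record<string, string | undefined>): string | undefined {
  if (env.USE_COMPUTE_BETA === "1" && env.NODE_ENV !== "production") {
    return "us-east-1-next";
  }
  // Flag off, or running in production: stay on the default region.
  return undefined;
}

// await yourTask.trigger(payload, { region: betaRegion(process.env) });
```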
Open the Regions page in the dashboard and set us-east-1-next as the project-wide default. This applies to all environments in the project, so only do this on a project that handles staging traffic exclusively.
A few things to be aware of during the beta:
- `small-1x` is the default and what we've optimized for. Boot snapshots are precreated for `small-1x` only; other sizes generate the snapshot lazily on first run after a deploy.
- `small-2x`, `medium-1x`, and `medium-2x` work, but the first run after each deploy is slower while the boot snapshot is generated. Subsequent runs use it.
- `large-1x` and `large-2x` aren't supported yet. Stick to `small-*` or `medium-*` for now.
- Avoid `micro` during the beta. Cold starts on `micro` are noticeably slower than on other sizes.

We'll be shipping CLI and SDK changes during the beta to make cold start times more consistent across machine sizes, lift the disk cap, and unlock the larger presets. Compute-specific prereleases will be announced on this page as we go, and we'll also reach out on Slack.
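If you pick machine sizes dynamically, a small pre-flight check can keep runs off the unsupported presets during the beta. The size groupings below come from the notes above; the helper itself is illustrative, not part of the SDK.

```ts
// Hypothetical pre-flight check for machine sizes during the beta.
const SNAPSHOT_READY = new Set(["small-1x"]); // boot snapshot precreated
const LAZY_SNAPSHOT = new Set(["small-2x", "medium-1x", "medium-2x"]); // built on first run

type BetaSupport = "snapshot-ready" | "lazy-snapshot" | "avoid";

function betaSupport(machine: string): BetaSupport {
  if (SNAPSHOT_READY.has(machine)) return "snapshot-ready";
  if (LAZY_SNAPSHOT.has(machine)) return "lazy-snapshot";
  // large-* presets aren't supported yet; micro has noticeably slower cold starts.
  return "avoid";
}
```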
Send anything weird (errors, slow runs, restore failures, anything that surprises you) over Slack with the run ID. The more reproductions we get during the beta, the faster we harden it for public beta.