docs/upload/storage-adapters.mdx
Payload offers additional storage adapters to handle file uploads. These adapters allow you to store files in different locations, such as Amazon S3, Vercel Blob Storage, Google Cloud Storage, and more.
| Service | Package |
|---|---|
| Vercel Blob | `@payloadcms/storage-vercel-blob` |
| AWS S3 | `@payloadcms/storage-s3` |
| Azure | `@payloadcms/storage-azure` |
| Google Cloud Storage | `@payloadcms/storage-gcs` |
| Uploadthing | `@payloadcms/storage-uploadthing` |
| R2 | `@payloadcms/storage-r2` |
## Vercel Blob Storage

`@payloadcms/storage-vercel-blob`
```sh
pnpm add @payloadcms/storage-vercel-blob
```
To use the adapter:

- Configure the `collections` object to specify which collections should use the Vercel Blob adapter. The slug must match one of your existing collection slugs.
- Ensure you have `BLOB_READ_WRITE_TOKEN` set in your Vercel environment variables. This is usually set by Vercel automatically after adding Blob storage to your project.
- This plugin automatically sets `disableLocalStorage` to `true` for each collection.
- Set `clientUploads` to `true` to do uploads directly on the client.

```ts
import { buildConfig } from 'payload'
import { vercelBlobStorage } from '@payloadcms/storage-vercel-blob'

import { Media } from './collections/Media'
import { MediaWithPrefix } from './collections/MediaWithPrefix'

export default buildConfig({
  collections: [Media, MediaWithPrefix],
  plugins: [
    vercelBlobStorage({
      enabled: true, // Optional, defaults to true
      // Specify which collections should use Vercel Blob
      collections: {
        media: true,
        'media-with-prefix': {
          prefix: 'my-prefix',
        },
      },
      // Token provided by Vercel once Blob storage is added to your Vercel project
      token: process.env.BLOB_READ_WRITE_TOKEN,
    }),
  ],
})
```
| Option | Description | Default |
|---|---|---|
| `enabled` | Whether or not to enable the plugin | `true` |
| `collections` | Collections to apply the Vercel Blob adapter to | |
| `addRandomSuffix` | Add a random suffix to the uploaded file name in Vercel Blob storage | `false` |
| `cacheControlMaxAge` | Cache-Control max-age in seconds | `365 * 24 * 60 * 60` (1 year) |
| `token` | Vercel Blob storage read/write token | `''` |
| `clientUploads` | Do uploads directly on the client to bypass limits on Vercel | |
| `useCompositePrefixes` | Combine the collection prefix with the document prefix instead of the document prefix overriding it | `false` |
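For instance, the naming, caching, and upload options from the table can be combined like so. This is a sketch with illustrative values, not recommendations:

```ts
import { vercelBlobStorage } from '@payloadcms/storage-vercel-blob'

// Illustrative values only; every option here comes from the table above
export const blobPlugin = vercelBlobStorage({
  collections: {
    media: true,
  },
  token: process.env.BLOB_READ_WRITE_TOKEN,
  // Avoid filename collisions by letting Vercel Blob append a random suffix
  addRandomSuffix: true,
  // Cache served files for 30 days instead of the one-year default
  cacheControlMaxAge: 30 * 24 * 60 * 60,
  // Upload from the browser to bypass Vercel's server upload limit
  clientUploads: true,
})
```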
## S3 Storage

`@payloadcms/storage-s3`

```sh
pnpm add @payloadcms/storage-s3
```
To use the adapter:

- Configure the `collections` object to specify which collections should use the S3 Storage adapter. The slug must match one of your existing collection slugs.
- The `config` object can be any `S3ClientConfig` object (from `@aws-sdk/client-s3`). This is highly dependent on your AWS setup. Check the AWS documentation for more information.
- This plugin automatically sets `disableLocalStorage` to `true` for each collection.
- Set `clientUploads` to `true` to do uploads directly on the client. You must allow the CORS `PUT` method for the bucket to your website (a setup sketch follows the options table below).
- Set `signedDownloads` (either globally or per-collection in `collections`) to use presigned URLs for file downloads. This can improve performance for large files (like videos) while still respecting your access control. Additionally, with `signedDownloads.shouldUseSignedURL` you can specify a condition for whether Payload should use a presigned URL, if you want to use this feature only for specific files.
- To conditionally enable the plugin, use the `enabled` option. For example, `enabled: Boolean(process.env.S3_BUCKET)` skips the plugin in local development when credentials are not set.

```ts
import { buildConfig } from 'payload'
import { s3Storage } from '@payloadcms/storage-s3'

import { Media } from './collections/Media'
import { MediaWithPrefix } from './collections/MediaWithPrefix'

export default buildConfig({
  collections: [Media, MediaWithPrefix],
  plugins: [
    s3Storage({
      collections: {
        media: true,
        'media-with-prefix': {
          prefix: 'my-prefix',
        },
        'media-with-presigned-downloads': {
          // Use presigned URLs only for mp4 files
          signedDownloads: {
            shouldUseSignedURL: ({ collection, filename, req }) => {
              return filename.endsWith('.mp4')
            },
          },
        },
      },
      bucket: process.env.S3_BUCKET,
      config: {
        credentials: {
          accessKeyId: process.env.S3_ACCESS_KEY_ID,
          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
        },
        region: process.env.S3_REGION,
        // ... Other S3 configuration
      },
    }),
  ],
})
```
| Option | Description | Default |
|---|---|---|
| `enabled` | Whether or not to enable the plugin | `true` |
| `collections` | Collections to apply the S3 adapter to | |
| `bucket` | The name of the S3 bucket | |
| `config` | `S3ClientConfig` object passed to the AWS SDK client | |
| `acl` | Access control list for uploaded files (e.g. `'public-read'`) | `undefined` |
| `clientUploads` | Do uploads directly on the client to bypass Vercel's 4.5MB server limit | |
| `signedDownloads` | Use presigned URLs for file downloads. Can be overridden per collection | |
| `useCompositePrefixes` | Combine the collection prefix with the document prefix instead of the document prefix overriding it | `false` |
For the full list of `S3ClientConfig` options, see the AWS SDK package and `S3ClientConfig` docs.
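When `clientUploads` is enabled, the bucket must accept cross-origin `PUT` requests from your site's origin. As a minimal sketch of one way to set that up with the AWS SDK (assuming your AWS credentials are already configured, and with `https://example.com` standing in for your domain):

```ts
import { PutBucketCorsCommand, S3Client } from '@aws-sdk/client-s3'

const client = new S3Client({ region: process.env.S3_REGION })

// One-off setup script: allow the browser to PUT files directly to the bucket.
// Replace https://example.com with your site's origin.
await client.send(
  new PutBucketCorsCommand({
    Bucket: process.env.S3_BUCKET,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedHeaders: ['*'],
          AllowedMethods: ['PUT'],
          AllowedOrigins: ['https://example.com'],
        },
      ],
    },
  }),
)
```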
### Using with Cloudflare R2 (via S3 API)

Cloudflare R2 exposes an S3-compatible API, so you can use `@payloadcms/storage-s3` to connect to it. This is the recommended approach when deploying to Vercel, Netlify, or any Node.js environment. (The `@payloadcms/storage-r2` adapter is for Cloudflare Workers only, where R2 is available as a native bucket binding.)
```ts
import { buildConfig } from 'payload'
import { s3Storage } from '@payloadcms/storage-s3'

import { Media } from './collections/Media'

export default buildConfig({
  collections: [Media],
  plugins: [
    s3Storage({
      // Skip the plugin when R2 credentials are not set (e.g. local dev)
      enabled: Boolean(process.env.R2_BUCKET),
      collections: {
        media: {
          disablePayloadAccessControl: true,
          generateFileURL: ({ filename, prefix }) => {
            const key = prefix ? `${prefix}/${filename}` : filename
            return `${process.env.R2_PUBLIC_URL}/${key}`
          },
        },
      },
      bucket: process.env.R2_BUCKET,
      config: {
        credentials: {
          accessKeyId: process.env.R2_ACCESS_KEY_ID,
          secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
        },
        region: 'auto',
        // R2 S3 API endpoint: for uploads only, not for serving files
        endpoint: process.env.R2_ENDPOINT,
        forcePathStyle: true,
      },
    }),
  ],
})
```
Required environment variables:

```sh
R2_BUCKET=my-bucket
R2_ACCESS_KEY_ID=...
R2_SECRET_ACCESS_KEY=...
R2_ENDPOINT=https://<accountId>.r2.cloudflarestorage.com
R2_PUBLIC_URL=https://media.yourdomain.com
```
Key settings:

- `region: 'auto'`: required by R2; standard AWS region values are not accepted
- `endpoint`: your R2 S3 API endpoint from the Cloudflare dashboard. This is for uploads only, not for serving files
- `forcePathStyle: true`: required for R2's path-style bucket addressing
- `R2_PUBLIC_URL`: your bucket's public URL (either the `*.r2.dev` subdomain or a custom domain you've connected in Cloudflare). This is separate from the S3 API endpoint
- `disablePayloadAccessControl: true`: bypasses Payload's file proxy so URLs point directly to your public R2 domain

## Azure Blob Storage

`@payloadcms/storage-azure`

```sh
pnpm add @payloadcms/storage-azure
```
To use the adapter:

- Configure the `collections` object to specify which collections should use the Azure Blob adapter. The slug must match one of your existing collection slugs.
- This plugin automatically sets `disableLocalStorage` to `true` for each collection.
- Set `clientUploads` to `true` to do uploads directly on the client. You must allow the CORS `PUT` method for your website.

```ts
import { buildConfig } from 'payload'
import { azureStorage } from '@payloadcms/storage-azure'

import { Media } from './collections/Media'
import { MediaWithPrefix } from './collections/MediaWithPrefix'

export default buildConfig({
  collections: [Media, MediaWithPrefix],
  plugins: [
    azureStorage({
      collections: {
        media: true,
        'media-with-prefix': {
          prefix: 'my-prefix',
        },
      },
      allowContainerCreate:
        process.env.AZURE_STORAGE_ALLOW_CONTAINER_CREATE === 'true',
      baseURL: process.env.AZURE_STORAGE_ACCOUNT_BASEURL,
      connectionString: process.env.AZURE_STORAGE_CONNECTION_STRING,
      containerName: process.env.AZURE_STORAGE_CONTAINER_NAME,
    }),
  ],
})
```
| Option | Description | Default |
|---|---|---|
| `enabled` | Whether or not to enable the plugin | `true` |
| `collections` | Collections to apply the Azure Blob adapter to | |
| `allowContainerCreate` | Whether or not to allow the container to be created if it does not exist | `false` |
| `baseURL` | Base URL for the Azure Blob storage account | |
| `connectionString` | Azure Blob storage connection string | |
| `containerName` | Azure Blob storage container name | |
| `clientUploads` | Do uploads directly on the client to bypass limits on Vercel | |
| `useCompositePrefixes` | Combine the collection prefix with the document prefix instead of the document prefix overriding it | `false` |
## Google Cloud Storage

`@payloadcms/storage-gcs`

```sh
pnpm add @payloadcms/storage-gcs
```
To use the adapter:

- Configure the `collections` object to specify which collections should use the Google Cloud Storage adapter. The slug must match one of your existing collection slugs.
- This plugin automatically sets `disableLocalStorage` to `true` for each collection.
- Set `clientUploads` to `true` to do uploads directly on the client. You must allow the CORS `PUT` method for the bucket to your website.

```ts
import { buildConfig } from 'payload'
import { gcsStorage } from '@payloadcms/storage-gcs'

import { Media } from './collections/Media'
import { MediaWithPrefix } from './collections/MediaWithPrefix'

export default buildConfig({
  collections: [Media, MediaWithPrefix],
  plugins: [
    gcsStorage({
      collections: {
        media: true,
        'media-with-prefix': {
          prefix: 'my-prefix',
        },
      },
      bucket: process.env.GCS_BUCKET,
      options: {
        apiEndpoint: process.env.GCS_ENDPOINT,
        projectId: process.env.GCS_PROJECT_ID,
      },
    }),
  ],
})
```
| Option | Description | Default |
|---|---|---|
| `enabled` | Whether or not to enable the plugin | `true` |
| `collections` | Collections to apply the storage to | |
| `bucket` | The name of the bucket to use | |
| `options` | Google Cloud Storage client configuration (see the `@google-cloud/storage` docs) | |
| `acl` | Access control list for files that are uploaded | `Private` |
| `clientUploads` | Do uploads directly on the client to bypass limits on Vercel | |
| `useCompositePrefixes` | Combine the collection prefix with the document prefix instead of the document prefix overriding it | `false` |
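When running outside Google Cloud (where credentials are usually picked up automatically), the client typically needs explicit credentials. A minimal sketch, assuming a hypothetical `GCS_KEY_FILE` environment variable pointing at a service-account JSON key:

```ts
import { gcsStorage } from '@payloadcms/storage-gcs'

export const gcsPlugin = gcsStorage({
  collections: {
    media: true,
  },
  bucket: process.env.GCS_BUCKET,
  options: {
    projectId: process.env.GCS_PROJECT_ID,
    // Path to a service-account JSON key. Alternatively, rely on the
    // GOOGLE_APPLICATION_CREDENTIALS env var or ambient credentials on GCP.
    keyFilename: process.env.GCS_KEY_FILE,
  },
})
```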
## Uploadthing Storage

`@payloadcms/storage-uploadthing`
```sh
pnpm add @payloadcms/storage-uploadthing
```
To use the adapter:

- Configure the `collections` object to specify which collections should use Uploadthing. The slug must match one of your existing collection slugs and be an `upload` type.
- Get a token from Uploadthing and set it as `token` in the `options` object.
- `acl` is optional and defaults to `public-read`.
- Set `clientUploads` to `true` to do uploads directly on the client.

```ts
import { buildConfig } from 'payload'
import { uploadthingStorage } from '@payloadcms/storage-uploadthing'

import { Media } from './collections/Media'

export default buildConfig({
  collections: [Media],
  plugins: [
    uploadthingStorage({
      collections: {
        media: true,
      },
      options: {
        token: process.env.UPLOADTHING_TOKEN,
        acl: 'public-read',
      },
    }),
  ],
})
```
| Option | Description | Default |
|---|---|---|
| `token` | Token from Uploadthing. Required. | |
| `acl` | Access control list for files that are uploaded | `public-read` |
| `logLevel` | Log level for Uploadthing | `info` |
| `fetch` | Custom fetch function | `fetch` |
| `defaultKeyType` | Default key type for file operations | `fileKey` |
| `clientUploads` | Do uploads directly on the client to bypass limits on Vercel | |
## R2 Storage

`@payloadcms/storage-r2`

Use this adapter to store uploads in a Cloudflare R2 bucket via the Cloudflare Workers environment. If you're connecting to R2 from a Node.js environment (Vercel, Netlify, etc.) using the S3-compatible API, see [Using with Cloudflare R2 (via S3 API)](#using-with-cloudflare-r2-via-s3-api) instead.
```sh
pnpm add @payloadcms/storage-r2
```
To use the adapter:

- Configure the `collections` object to specify which collections should use R2. The slug must match one of your existing collection slugs and be an `upload` type.
- Provide your R2 bucket binding via the `bucket` option; this should be done in the environment where Payload is running (e.g. a Cloudflare Worker).
- To conditionally enable/disable the plugin, use the `enabled` option.

```ts
import { buildConfig } from 'payload'
import { r2Storage } from '@payloadcms/storage-r2'

import { Media } from './collections/Media'

export default buildConfig({
  collections: [Media],
  plugins: [
    r2Storage({
      collections: {
        media: true,
      },
      // Your R2 bucket binding, provided by the Workers runtime environment
      bucket: cloudflare.env.R2,
    }),
  ],
})
```
## Custom Storage Adapters

If you need to create a custom storage adapter, you can use the `@payloadcms/plugin-cloud-storage` package. This package is used internally by the storage adapters mentioned above.
```sh
pnpm add @payloadcms/plugin-cloud-storage
```
Reference any of the existing storage adapters for guidance on how this should be structured. Create an adapter following the `GeneratedAdapter` interface. Then, pass the adapter to the `cloudStorage` plugin.
```ts
export interface GeneratedAdapter {
  /**
   * Additional fields to be injected into the base
   * collection and image sizes
   */
  fields?: Field[]
  /**
   * Generates the public URL for a file
   */
  generateURL?: GenerateURL
  handleDelete: HandleDelete
  handleUpload: HandleUpload
  name: string
  onInit?: () => void
  staticHandler: StaticHandler
}
```
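For illustration, a hypothetical in-memory adapter might look like the sketch below. This is simplified: a real adapter would call your storage service, and the exact handler signatures should be taken from the types exported by `@payloadcms/plugin-cloud-storage`.

```ts
import type { GeneratedAdapter } from '@payloadcms/plugin-cloud-storage/types'

// Hypothetical in-memory backing store, for illustration only
const store = new Map<string, { buffer: Buffer; mimeType: string }>()

// NOTE: handler argument shapes are simplified here; check the
// HandleUpload / HandleDelete / StaticHandler types exported by
// the package for the exact signatures.
export const inMemoryAdapter: GeneratedAdapter = {
  name: 'in-memory',
  handleUpload: async ({ file }) => {
    store.set(file.filename, { buffer: file.buffer, mimeType: file.mimeType })
  },
  handleDelete: async ({ filename }) => {
    store.delete(filename)
  },
  staticHandler: async (req, { params }) => {
    const stored = store.get(params.filename)
    if (!stored) {
      return new Response(null, { status: 404 })
    }
    return new Response(stored.buffer, {
      headers: { 'Content-Type': stored.mimeType },
    })
  },
}
```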
```ts
import { buildConfig } from 'payload'
import { cloudStoragePlugin } from '@payloadcms/plugin-cloud-storage'

export default buildConfig({
  plugins: [
    cloudStoragePlugin({
      collections: {
        'my-collection-slug': {
          adapter: theAdapterToUse, // see docs for the adapter you want to use
        },
      },
    }),
  ],
  // The rest of your config goes here
})
```
This plugin is configurable to work across many different Payload collections. A `*` denotes that the property is required.

### Plugin options
| Option | Type | Description |
|---|---|---|
| `alwaysInsertFields` | `boolean` | When enabled, fields (like the `prefix` field) will always be inserted into the collection schema regardless of whether the plugin is enabled. This will be enabled by default in Payload v4. Default: `false`. |
| `collections` * | `Record<string, CollectionOptions>` | Object with keys set to the slug of collections you want to enable the plugin for, and values set to collection-specific options. |
| `enabled` | `boolean` | To conditionally enable/disable the plugin. Default: `true`. |
### Collection-specific options

| Option | Type | Description |
|---|---|---|
| `adapter` * | `Adapter` | Pass in the adapter that you'd like to use for this collection. You can also set this field to `null` for local development if you'd like to bypass cloud storage in certain scenarios and use local storage. |
| `disableLocalStorage` | `boolean` | Choose to disable local storage on this collection. Defaults to `true`. |
| `disablePayloadAccessControl` | `true` | Set to `true` to disable Payload's Access Control. See [Payload Access Control](#payload-access-control) for more. |
| `generateFileURL` | `GenerateFileURL` | Override the generated file URL with one that you create. |
| `prefix` | `string` | Set to `media/images` to upload files inside the `media/images` folder in the bucket. |
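As a sketch of the `adapter: null` escape hatch from the table above (with a hypothetical `myAdapter` built elsewhere):

```ts
import { cloudStoragePlugin } from '@payloadcms/plugin-cloud-storage'

import { myAdapter } from './myAdapter' // hypothetical adapter module

export const storagePlugin = cloudStoragePlugin({
  collections: {
    'my-collection-slug': {
      // Use cloud storage in production; fall back to local storage in dev
      adapter: process.env.NODE_ENV === 'production' ? myAdapter : null,
    },
  },
})
```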
## Prefixes

Storage adapters support two types of prefixes:

- A collection prefix, set in the plugin's collection options (e.g. `prefix: 'media-folder'`)
- A document prefix, stored in the `prefix` field on the upload collection

By default, if a document has a prefix, it overrides the collection prefix entirely. With `useCompositePrefixes: true`, the prefixes are combined:
```
# Without useCompositePrefixes (default)
Collection prefix: media-folder
Document prefix: user-123
Result: user-123/image.jpg

# With useCompositePrefixes: true
Collection prefix: media-folder
Document prefix: user-123
Result: media-folder/user-123/image.jpg
```
This is useful when you want a base folder structure (collection prefix) while still allowing per-document organization (document prefix).
```ts
s3Storage({
  collections: {
    media: {
      prefix: 'uploads', // All files go under uploads/
    },
  },
  useCompositePrefixes: true, // Document prefixes append to the collection prefix
  bucket: process.env.S3_BUCKET,
  // ...
})
```
## Payload Access Control

Payload ships with Access Control that runs even on statically served files. The same `read` Access Control property on your upload-enabled collections is used, and it allows you to restrict who can request your uploaded files.
To preserve this feature, by default, this plugin keeps all file URLs exactly the same. Your file URLs won't be updated to point directly to your cloud storage source, because in that case Payload's Access Control would be completely bypassed and you would need public readability on your cloud-hosted files.

Instead, all uploads will still be served from the default `/collectionSlug/staticURL/filename` path. This plugin will "pass through" all files that are hosted on your third-party cloud service, with the added benefit of keeping your existing Access Control in place.

If this does not apply to you (your upload collection has `read: () => true` or similar), you can disable this functionality by setting `disablePayloadAccessControl` to `true`. When this setting is in place, this plugin will update your file URLs to point directly to your cloud host. For a concrete example, see [Using with Cloudflare R2 (via S3 API)](#using-with-cloudflare-r2-via-s3-api).
## Conditionally Enabling/Disabling

The proper way to conditionally enable/disable this plugin is to use the `enabled` property.
```ts
cloudStoragePlugin({
  enabled: process.env.MY_CONDITION === 'true',
  collections: {
    'my-collection-slug': {
      adapter: theAdapterToUse, // see docs for the adapter you want to use
    },
  },
}),
```