# The Kubo config file

The Kubo config file is a JSON document located at `$IPFS_PATH/config`. It
is read once at node instantiation, either for an offline command or when
starting the daemon. Commands that execute on a running daemon do not read the
config file at runtime.
- Addresses
- API
- AutoNAT
- AutoTLS
- AutoConf
- Bitswap
- Bootstrap
- Datastore
- Discovery
- Experimental
- Gateway
  - Gateway.NoFetch
  - Gateway.NoDNSLink
  - Gateway.DeserializedResponses
  - Gateway.AllowCodecConversion
  - Gateway.DisableHTMLErrors
  - Gateway.ExposeRoutingAPI
  - Gateway.RetrievalTimeout
  - Gateway.MaxRequestDuration
  - Gateway.MaxRangeRequestFileSize
  - Gateway.MaxConcurrentRequests
  - Gateway.HTTPHeaders
  - Gateway.RootRedirect
  - Gateway.DiagnosticServiceURL
  - Gateway.FastDirIndexThreshold
  - Gateway.Writable
  - Gateway.PathPrefixes
  - Gateway.PublicGateways
- Gateway recipes
- Identity
- Internal
  - Internal.Bitswap
  - Internal.UnixFSShardingSizeThreshold
- Ipns
- Migration
- Mounts
- Pinning
- Provide
- Provider
- Pubsub
- Peering
- Reprovider
- Routing
- Swarm
  - Swarm.AddrFilters
  - Swarm.DisableBandwidthMetrics
  - Swarm.DisableNatPortMap
  - Swarm.EnableHolePunching
  - Swarm.EnableAutoRelay
  - Swarm.RelayClient
  - Swarm.RelayService
    - Swarm.RelayService.Enabled
    - Swarm.RelayService.Limit
    - Swarm.RelayService.ReservationTTL
    - Swarm.RelayService.MaxReservations
    - Swarm.RelayService.MaxCircuits
    - Swarm.RelayService.BufferSize
    - Swarm.RelayService.MaxReservationsPerPeer
    - Swarm.RelayService.MaxReservationsPerIP
    - Swarm.RelayService.MaxReservationsPerASN
  - Swarm.EnableRelayHop
  - Swarm.DisableRelay
  - Swarm.EnableAutoNATService
  - Swarm.ConnMgr
  - Swarm.ResourceMgr
  - Swarm.Transports
    - Swarm.Transports.Network
    - Swarm.Transports.Security
    - Swarm.Transports.Multiplexers
      - Swarm.Transports.Multiplexers.Yamux
      - Swarm.Transports.Multiplexers.Mplex
- DNS
- HTTPRetrieval
- Import
  - Import.CidVersion
  - Import.UnixFSRawLeaves
  - Import.UnixFSChunker
  - Import.HashFunction
  - Import.FastProvideRoot
  - Import.FastProvideDAG
  - Import.FastProvideWait
  - Import.BatchMaxNodes
  - Import.BatchMaxSize
  - Import.UnixFSFileMaxLinks
  - Import.UnixFSDirectoryMaxLinks
  - Import.UnixFSHAMTDirectoryMaxFanout
  - Import.UnixFSHAMTDirectorySizeThreshold
  - Import.UnixFSHAMTDirectorySizeEstimation
  - Import.UnixFSDAGLayoutVersion
- Profiles
  - server profile
  - randomports profile
  - default-datastore profile
  - local-discovery profile
  - default-networking profile
  - autoconf-on profile
  - autoconf-off profile
  - flatfs profile
  - flatfs-measure profile
  - pebbleds profile
  - pebbleds-measure profile
  - badgerds profile
  - badgerds-measure profile
  - lowpower profile
  - announce-off profile
  - announce-on profile
  - unixfs-v0-2015 profile
  - legacy-cid-v0 profile
  - unixfs-v1-2025 profile

## Addresses

Contains information about various listener addresses to be used by this node.
### Addresses.API

Multiaddr or array of multiaddrs describing the addresses to serve
the local Kubo RPC API (`/api/v0`).

Supported Transports:

- `/ipN/.../tcp/...`
- `/unix/path/to/socket`

> [!CAUTION]
> NEVER EXPOSE UNPROTECTED ADMIN RPC TO LAN OR THE PUBLIC INTERNET
>
> The RPC API grants admin-level access to your Kubo IPFS node, including configuration and secret key management.
> By default, it is bound to localhost for security reasons. Exposing it to LAN or the public internet is highly risky, similar to exposing a SQL database or backend service without authentication middleware.
>
> - If you need secure access to a subset of RPC, secure it with `API.Authorizations` or custom auth middleware running in front of the localhost-only RPC port defined here.
> - If you are looking for an interface designed for browsers and the public internet, use the `Addresses.Gateway` port instead.
> - See the Security section for network exposure considerations.

Default: `/ip4/127.0.0.1/tcp/5001`

Type: `strings` (multiaddrs)
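For illustration, a hypothetical config fragment (the socket path is an example, not a default) that serves the RPC API on both the default localhost port and a Unix socket might look like this:

```json
{
  "Addresses": {
    "API": [
      "/ip4/127.0.0.1/tcp/5001",
      "/unix/var/run/kubo/api.sock"
    ]
  }
}
```

Both entries stay local-only, consistent with the caution above.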
### Addresses.Gateway

Multiaddr or array of multiaddrs describing the address to serve
the local HTTP gateway (`/ipfs`, `/ipns`) on.

Supported Transports:

- `/ipN/.../tcp/...`
- `/unix/path/to/socket`

> [!CAUTION]
> SECURITY CONSIDERATIONS FOR GATEWAY EXPOSURE
>
> By default, the gateway is bound to localhost for security. If you bind to `0.0.0.0` or a public IP, anyone with access can trigger retrieval of arbitrary CIDs, causing bandwidth usage and potential exposure to malicious content. Limit this with `Gateway.NoFetch`. Consider firewall rules, authentication, and `Gateway.PublicGateways` for public exposure. See the Security section for network exposure considerations.

Default: `/ip4/127.0.0.1/tcp/8080`

Type: `strings` (multiaddrs)
### Addresses.Swarm

An array of multiaddrs describing which addresses to listen on for p2p swarm connections.

Supported Transports:

- `/ipN/.../tcp/...`
- `/ipN/.../tcp/.../ws`
- `/ipN/.../udp/.../quic-v1` (can share the same two-tuple with `/quic-v1/webtransport`)
- `/ipN/.../udp/.../quic-v1/webtransport` (can share the same two-tuple with `/quic-v1`)

> [!IMPORTANT]
> Make sure your firewall rules allow incoming connections on both TCP and UDP ports defined here. See the Security section for network exposure considerations.

Note that quic (Draft-29) used to be supported with the format `/ipN/.../udp/.../quic`, but has since been removed.

Default:

```json
[
  "/ip4/0.0.0.0/tcp/4001",
  "/ip6/::/tcp/4001",
  "/ip4/0.0.0.0/udp/4001/quic-v1",
  "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
  "/ip6/::/udp/4001/quic-v1",
  "/ip6/::/udp/4001/quic-v1/webtransport"
]
```

Type: `array[string]` (multiaddrs)
### Addresses.Announce

If non-empty, this array specifies the swarm addresses to announce to the network. If empty, the daemon will announce inferred swarm addresses.

Default: `[]`

Type: `array[string]` (multiaddrs)
### Addresses.AppendAnnounce

Similar to `Addresses.Announce`, except that when non-empty it does not
override inferred swarm addresses; its entries are announced in addition to them.

Default: `[]`

Type: `array[string]` (multiaddrs)
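As a sketch, a node with a known stable public address (the IP below is a documentation-range placeholder) could advertise it alongside the inferred addresses:

```json
{
  "Addresses": {
    "AppendAnnounce": [
      "/ip4/203.0.113.10/tcp/4001",
      "/ip4/203.0.113.10/udp/4001/quic-v1"
    ]
  }
}
```

Use `Addresses.Announce` instead if the inferred addresses should be replaced rather than extended.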
### Addresses.NoAnnounce

An array of multiaddrs (exact matches or `/ipcidr/` netmasks). Kubo does not
announce these addresses and strips them from libp2p identify, the DHT
self-record, and the signed peer record. Matching entries in
`Addresses.Announce` and `Addresses.AppendAnnounce` are removed as well.

This is the publish-side filter: it controls what other peers learn about
this node's addresses. It does not affect what this node dials. For the
dial-side filter, see `Swarm.AddrFilters`. The
`server` profile typically populates both fields together
so that a range is neither advertised nor dialed.

> [!TIP]
> The `server` profile populates this field with a set of private, local-only, and non-globally-reachable prefixes (RFC 1918 private, RFC 6598 CGNAT, ULA, link-local, and others). See the `server` profile section for the full list and for optional entries operators may add manually.

Default: `[]`

Type: `array[string]` (multiaddrs)
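A minimal sketch pairing the publish-side and dial-side filters for a single RFC 1918 range (illustrative only; the `server` profile sets a much longer list):

```json
{
  "Addresses": {
    "NoAnnounce": ["/ip4/192.168.0.0/ipcidr/16"]
  },
  "Swarm": {
    "AddrFilters": ["/ip4/192.168.0.0/ipcidr/16"]
  }
}
```

With both entries, the range is neither advertised to other peers nor dialed by this node.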
## API

Contains information used by the Kubo RPC API.

### API.HTTPHeaders

Map of HTTP headers to set on responses from the RPC (`/api/v0`) HTTP server.

Example:

```json
{
  "Foo": ["bar"]
}
```

Default: `null`

Type: `object[string -> array[string]]` (header names -> array of header values)
### API.Authorizations

The `API.Authorizations` field defines user-based access restrictions for the
Kubo RPC API, which is exposed at `Addresses.API` under `/api/v0` paths.

By default, the admin-level RPC API is accessible without restrictions, as it is only
exposed on `127.0.0.1` and safeguarded by an Origin check and implicit
CORS headers that block random websites from accessing the RPC.

When entries are defined in `API.Authorizations`, RPC requests will be declined
unless a corresponding secret is present in the HTTP `Authorization` header
and the requested path is included in the `AllowedPaths` list for that specific
secret.

> [!CAUTION]
> NEVER EXPOSE UNPROTECTED ADMIN RPC TO LAN OR THE PUBLIC INTERNET
>
> The RPC API is vast. It grants admin-level access to your Kubo IPFS node, including configuration and secret key management.
>
> - If you need secure access to a subset of RPC, make sure you understand the risk, block everything by default, and allow basic auth access with `API.Authorizations` or custom auth middleware running in front of the localhost-only port defined in `Addresses.API`.
> - If you are looking for an interface designed for browsers and the public internet, use the `Addresses.Gateway` port instead.

Default: `null`

Type: `object[string -> object]` (user name -> authorization object, see below)
For example, to limit RPC access to Alice (access to `id` and MFS `files` commands with HTTP Basic Auth)
and Bob (full access with Bearer token):

```json
{
  "API": {
    "Authorizations": {
      "Alice": {
        "AuthSecret": "basic:alice:password123",
        "AllowedPaths": ["/api/v0/id", "/api/v0/files"]
      },
      "Bob": {
        "AuthSecret": "bearer:secret-token123",
        "AllowedPaths": ["/api/v0"]
      }
    }
  }
}
```
#### API.Authorizations: AuthSecret

The `AuthSecret` field denotes the secret used by a user to authenticate,
usually via the HTTP `Authorization` header.

The field format is `type:value`, and the following types are supported:

- `bearer`: For secret Bearer tokens, set as `bearer:token`. If no `type:` prefix is present, `bearer:` is assumed.
- `basic`: For HTTP Basic Auth introduced in RFC 7617. Value can be:
  - `basic:user:pass`
  - `basic:base64EncodedBasicAuth`

One can use the config value for authentication via the command line:

```console
$ ipfs id --api-auth basic:user:pass
```

Type: `string`
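The two `basic:` forms are equivalent. A short Python sketch shows how the base64 variant is derived from hypothetical `user:pass` credentials (the standard HTTP Basic Auth encoding):

```python
import base64

# Hypothetical credentials for illustration only.
user, password = "user", "pass"

# Form 1: plain credentials.
plain_form = f"basic:{user}:{password}"

# Form 2: base64 encoding of "user:pass", as in the Authorization header.
encoded = base64.b64encode(f"{user}:{password}".encode()).decode()
b64_form = f"basic:{encoded}"

print(plain_form)  # basic:user:pass
print(b64_form)    # basic:dXNlcjpwYXNz
```

Either string is valid as the `AuthSecret` value for the same user.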
#### API.Authorizations: AllowedPaths

The `AllowedPaths` field is an array of strings containing allowed RPC path
prefixes. Users authorized with the related `AuthSecret` will only be able to
access paths prefixed by the specified prefixes.

For instance:

- With `["/api/v0"]`, the user will have access to the complete RPC API.
- With `["/api/v0/id", "/api/v0/files"]`, the user will only have access
  to the `id` command and all MFS commands under `files`.

Note that `/api/v0/version` is always permitted, to allow a version check
for ensuring compatibility.

Default: `[]`

Type: `array[string]`
## AutoNAT

Contains the configuration options for libp2p's AutoNAT service. The AutoNAT service helps other nodes on the network determine if they're publicly reachable from the rest of the internet.

### AutoNAT.ServiceMode

When unset (default), the AutoNAT service defaults to enabled. Otherwise, this field can take one of three values:

- `enabled`: Enable the V1+V2 service (unless the node determines that it, itself, isn't reachable by the public internet).
- `legacy-v1`: DEPRECATED. Same as `enabled`, but only the V1 service is enabled. Used for testing during the few releases of the transition to V2; it will be removed in the future.
- `disabled`: Disable the service.

Additional modes may be added in the future.

> [!IMPORTANT]
> We are in the process of rolling out AutoNAT V2. Right now, by default, a publicly dialable Kubo provides both V1 and V2 service to other peers, and V1 is still used by Kubo for the AutoRelay feature. In a future release we will remove V1 and switch all features to use V2.

Default: `enabled`

Type: `optionalString`
### AutoNAT.Throttle

When set, this option configures the AutoNAT service's throttling behavior. By default, Kubo will rate-limit the number of NAT checks performed for other nodes to 30 per minute, and 3 per peer.

#### AutoNAT.Throttle.GlobalLimit

Configures how many AutoNAT requests to service per `AutoNAT.Throttle.Interval`.

Default: `30`

Type: `integer` (non-negative, 0 means unlimited)

#### AutoNAT.Throttle.PeerLimit

Configures how many AutoNAT requests per peer to service per `AutoNAT.Throttle.Interval`.

Default: `3`

Type: `integer` (non-negative, 0 means unlimited)

#### AutoNAT.Throttle.Interval

Configures the interval for the above limits.

Default: `1 Minute`

Type: `duration` (when 0/unset, the default value is used)
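Putting the three knobs together, a hypothetical relaxed throttle (values arbitrary, chosen only for illustration) might look like:

```json
{
  "AutoNAT": {
    "Throttle": {
      "GlobalLimit": 60,
      "PeerLimit": 5,
      "Interval": "1m"
    }
  }
}
```

This would allow up to 60 AutoNAT checks per minute overall, and at most 5 per peer in that interval.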
## AutoConf

The AutoConf feature enables Kubo nodes to automatically fetch and apply network configuration from a remote JSON endpoint. This system allows dynamic configuration updates for bootstrap peers, DNS resolvers, delegated routing, and IPNS publishing endpoints without requiring manual updates to each node's local config.

AutoConf works by using special "auto" placeholder values in configuration fields. When Kubo encounters these placeholders, it fetches the latest configuration from the specified URL and resolves the placeholders with the appropriate values at runtime. The original configuration file remains unchanged: "auto" values are preserved in the JSON and only resolved in memory during node operation.

To debug AutoConf resolution, run the daemon with `GOLOG_LOG_LEVEL="error,autoconf=debug"`.

AutoConf can resolve "auto" placeholders in the following configuration fields:

- `Bootstrap`: Bootstrap peer addresses
- `DNS.Resolvers`: DNS-over-HTTPS resolver endpoints
- `Routing.DelegatedRouters`: Delegated routing HTTP API endpoints
- `Ipns.DelegatedPublishers`: IPNS delegated publishing HTTP API endpoints

Example:

```json
{
  "AutoConf": {
    "URL": "https://example.com/autoconf.json",
    "Enabled": true,
    "RefreshInterval": "24h"
  },
  "Bootstrap": ["auto"],
  "DNS": {
    "Resolvers": {
      ".": ["auto"],
      "eth.": ["auto"],
      "custom.": ["https://dns.example.com/dns-query"]
    }
  },
  "Routing": {
    "DelegatedRouters": ["auto", "https://router.example.org/routing/v1"]
  }
}
```
Notes:

- Configuration fields can mix "auto" and static values.
- If AutoConf is disabled while "auto" values exist, daemon startup will fail with validation errors.
- Fetched autoconf data is cached under `$IPFS_PATH/autoconf/`, with up to 3 versions retained.

AutoConf supports path-based routing URLs that automatically enable specific routing operations based on the URL path. This allows precise control over which HTTP Routing V1 endpoints are used for different operations.

Supported paths:

- `/routing/v1/providers`: Enables provider record lookups only
- `/routing/v1/peers`: Enables peer routing lookups only
- `/routing/v1/ipns`: Enables IPNS record operations only

AutoConf JSON structure with path-based routing:

```json
{
  "DelegatedRouters": {
    "mainnet-for-nodes-with-dht": [
      "https://cid.contact/routing/v1/providers"
    ],
    "mainnet-for-nodes-without-dht": [
      "https://delegated-ipfs.dev/routing/v1/providers",
      "https://delegated-ipfs.dev/routing/v1/peers",
      "https://delegated-ipfs.dev/routing/v1/ipns"
    ]
  },
  "DelegatedPublishers": {
    "mainnet-for-ipns-publishers-with-http": [
      "https://delegated-ipfs.dev/routing/v1/ipns"
    ]
  }
}
```
Node type categories:

- `mainnet-for-nodes-with-dht`: Mainnet nodes with DHT enabled (typically only need additional provider lookups)
- `mainnet-for-nodes-without-dht`: Mainnet nodes without DHT (need comprehensive routing services)
- `mainnet-for-ipns-publishers-with-http`: Mainnet nodes that publish IPNS records via HTTP

This design enables efficient, selective routing where each endpoint URL automatically determines its capabilities based on the path, while maintaining semantic grouping by node configuration type.

Default: `{}`

Type: `object`
### AutoConf.Enabled

Controls whether the AutoConf system is active. When enabled, Kubo will fetch configuration from the specified URL and resolve "auto" placeholders at runtime. When disabled, any "auto" values in the configuration will cause daemon startup to fail with validation errors.

This provides a safety mechanism to ensure nodes don't start with unresolved placeholders when AutoConf is intentionally disabled.

Default: `true`

Type: `flag`
### AutoConf.URL

Specifies the HTTP(S) URL from which to fetch the autoconf JSON. The endpoint should return a JSON document containing bootstrap peers, DNS resolvers, delegated routing endpoints, and IPNS publishing endpoints that will replace "auto" placeholders in the local configuration.

The URL must serve a JSON document matching the AutoConf schema. Kubo validates all multiaddr and URL values before caching to ensure they are properly formatted.

When not specified in the configuration, the default mainnet URL is used automatically.

> [!NOTE]
> The public good autoconf manifest at `conf.ipfs-mainnet.org` is provided by the team at Shipyard.

Default: `"https://conf.ipfs-mainnet.org/autoconf.json"` (when not specified)

Type: `optionalString`
### AutoConf.RefreshInterval

Specifies how frequently Kubo should refresh autoconf data. This controls both how often cached autoconf data is considered fresh and how frequently the background service checks for new configuration updates.

When a new configuration version is detected during background updates, Kubo logs an ERROR message informing the user that a node restart is required to apply the changes to any "auto" entries in their configuration.

Default: `24h`

Type: `optionalDuration`
### AutoConf.TLSInsecureSkipVerify

FOR TESTING ONLY. Allows skipping TLS certificate verification when fetching autoconf from HTTPS URLs. This should never be enabled in production, as it makes configuration fetching vulnerable to man-in-the-middle attacks.

Default: `false`

Type: `flag`
## AutoTLS

The AutoTLS feature enables publicly reachable Kubo nodes (those dialable from the public
internet) to automatically obtain a wildcard TLS certificate for a DNS name
unique to their PeerID at `*.[PeerID].libp2p.direct`. This enables direct
libp2p connections and retrieval of IPFS content from a browser's Secure Context
using transports such as Secure WebSockets,
without requiring the user to do any manual domain registration or certificate configuration.

Under the hood, the p2p-forge client uses the public utility service at `libp2p.direct` as an ACME DNS-01 Challenge
broker, enabling a peer to obtain a wildcard TLS certificate tied to the public key of their PeerID.

By default, the certificates are requested from Let's Encrypt. The origin and rationale for this project can be found in the community.letsencrypt.org discussion.

> [!NOTE]
> The public good DNS and p2p-forge infrastructure at `libp2p.direct` is run by the team at Interplanetary Shipyard.

Default: `{}`

Type: `object`
### AutoTLS.Enabled

Enables the AutoTLS feature to provide DNS and TLS support for libp2p Secure WebSockets over a `/tcp` port,
to allow JS clients running in a web browser's Secure Context to connect to Kubo directly.

When activated, together with `AutoTLS.AutoWSS` (default) or by manually including a `/tcp/{port}/tls/sni/*.libp2p.direct/ws` multiaddr in `Addresses.Swarm`
(with an SNI suffix matching `AutoTLS.DomainSuffix`), Kubo retrieves a trusted PKI TLS certificate for `*.{peerid}.libp2p.direct` and configures the `/ws` listener to use it.

Notes:

- Public reachability is needed: manual port forwarding or UPnP (`Swarm.DisableNatPortMap=false`) is required.
- There may be a delay of up to `AutoTLS.RegistrationDelay` before the `/ws` listener is added. Be patient.
- If `Addresses.Swarm` contains no `/ws` listener, `AutoTLS.AutoWSS=true` should automatically add a `/ws` listener to existing, firewall-forwarded `/tcp` ports.
- Use `GOLOG_LOG_LEVEL="error,autotls=debug"` for detailed logs, or `GOLOG_LOG_LEVEL="error,autotls=info"` for quieter output.
- Certificates are cached in `$IPFS_PATH/p2p-forge-certs`; deleting this directory and restarting the daemon forces a certificate rotation.
- This feature only provides TLS for `/ws` libp2p WebSocket connections, not the HTTP Gateway, which still needs a separate reverse proxy TLS setup with a custom domain.

Default: `true`

Type: `flag`
### AutoTLS.AutoWSS

Optional. Controls whether Kubo should add a `/tls/sni/*.libp2p.direct/ws` listener to every pre-existing `/tcp` port, IFF no explicit `/ws` is already defined in `Addresses.Swarm`.

Default: `true` (if `AutoTLS.Enabled`)

Type: `flag`
### AutoTLS.ShortAddrs

Optional. Controls whether final AutoTLS listeners are announced under shorter `/dnsX/A-B-C-D.peerid.libp2p.direct/tcp/4001/tls/ws` addresses instead of the fully resolved `/ip4/A.B.C.D/tcp/4001/tls/sni/A-B-C-D.peerid.libp2p.direct/ws`.

The main use for AutoTLS is allowing connectivity from a Secure Context in a web browser, and a DNS lookup needs to happen there anyway, making `/dnsX` a more compact, more interoperable option without an obvious downside.

Default: `true`

Type: `flag`
### AutoTLS.SkipDNSLookup

Optional. Controls whether to skip network DNS lookups for p2p-forge domains like `*.libp2p.direct`.

This applies to DNS resolution performed via `DNS.Resolvers`, including `/dns*` multiaddrs resolved by go-libp2p (e.g., peer addresses from DHT or delegated routing).

When enabled (default), A/AAAA queries for hostnames matching `AutoTLS.DomainSuffix` are resolved locally by parsing the IP address directly from the hostname (e.g., `1-2-3-4.peerID.libp2p.direct` resolves to `1.2.3.4` without network I/O). This avoids unnecessary DNS queries since the IP is already encoded in the hostname.

If the hostname format is invalid (wrong peerID, malformed IP encoding), the resolver falls back to network DNS, ensuring forward compatibility with potential future DNS record types.

Set to `false` to always use network DNS for these domains. This is primarily useful for debugging or if you need to override resolution behavior via `DNS.Resolvers`.

Default: `true`

Type: `flag`
### AutoTLS.DomainSuffix

Optional override of the parent domain suffix that will be used in DNS+TLS+WebSockets multiaddrs generated by the p2p-forge client. Do not change this unless you self-host p2p-forge.

Default: `libp2p.direct` (public good run by Interplanetary Shipyard)

Type: `optionalString`
### AutoTLS.RegistrationEndpoint

Optional override of the p2p-forge HTTP registration API. Do not change this unless you self-host p2p-forge under your own domain.

> [!IMPORTANT]
> The default endpoint performs libp2p Peer ID Authentication over HTTP (proving ownership of the PeerID) and probes whether your Kubo node can correctly answer a libp2p Identify query. This ensures only a correctly configured, publicly dialable Kubo can initiate an ACME DNS-01 challenge for `peerid.libp2p.direct`.

Default: `https://registration.libp2p.direct` (public good run by Interplanetary Shipyard)

Type: `optionalString`
### AutoTLS.RegistrationToken

Optional value for the `Forge-Authorization` token sent with requests to the `RegistrationEndpoint`
(useful for private/self-hosted/test instances of p2p-forge; unset by default).

Default: `""`

Type: `optionalString`
### AutoTLS.RegistrationDelay

An additional delay applied before sending a request to the `RegistrationEndpoint`.

The default delay is bypassed if the user explicitly set `AutoTLS.Enabled=true` in the JSON configuration file.
This ensures that ephemeral nodes using the default configuration do not spam the `AutoTLS.CAEndpoint` with unnecessary ACME requests.

Default: `1h` (or `0` if explicitly `AutoTLS.Enabled=true`)

Type: `optionalDuration`
### AutoTLS.CAEndpoint

Optional override of the CA ACME API used by the p2p-forge system. Do not change this unless you self-host p2p-forge under your own domain.

> [!IMPORTANT]
> The CAA DNS record at `libp2p.direct` limits the CA choice to Let's Encrypt. If you want to use a different CA, use your own domain.

Default: `certmagic.LetsEncryptProductionCA` (see the community.letsencrypt.org discussion)

Type: `optionalString`
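Putting the self-hosting overrides together, a hypothetical private p2p-forge deployment under `forge.example.net` (all values illustrative; the CA endpoint shown is Let's Encrypt's public staging directory, often used for testing) might be configured as:

```json
{
  "AutoTLS": {
    "Enabled": true,
    "DomainSuffix": "forge.example.net",
    "RegistrationEndpoint": "https://registration.forge.example.net",
    "RegistrationToken": "test-token",
    "CAEndpoint": "https://acme-staging-v02.api.letsencrypt.org/directory"
  }
}
```

With the default public `libp2p.direct` infrastructure, none of these overrides are needed.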
## Bitswap

High-level client and server configuration of the Bitswap protocol over libp2p.

For internal configuration, see `Internal.Bitswap`. For the HTTP version, see `HTTPRetrieval`.
### Bitswap.Libp2pEnabled

Determines whether Kubo will use Bitswap over libp2p.

Disabling this will remove `/ipfs/bitswap/*` protocol support from libp2p identify responses, effectively shutting down both the Bitswap libp2p client and server.

> [!WARNING]
> Bitswap over libp2p is a core component of Kubo and the oldest way of exchanging blocks. Disabling it completely may cause unpredictable outcomes, such as retrieval failures, if the only providers were libp2p ones. Treat this as experimental and use it solely for testing purposes with `HTTPRetrieval.Enabled`.

Default: `true`

Type: `flag`
### Bitswap.ServerEnabled

Determines whether Kubo functions as a Bitswap server to host and respond to block requests.

Disabling the server retains client and protocol support in libp2p identify responses, but causes Kubo to reply with "don't have" to all block requests.

Default: `true` (requires `Bitswap.Libp2pEnabled`)

Type: `flag`
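As the warning above suggests, disabling Bitswap over libp2p only makes sense in experiments paired with HTTP retrieval. A sketch of such a test-only configuration:

```json
{
  "Bitswap": {
    "Libp2pEnabled": false,
    "ServerEnabled": false
  },
  "HTTPRetrieval": {
    "Enabled": true
  }
}
```

This node would neither serve nor fetch blocks over libp2p, relying on HTTP providers for retrieval.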
## Bootstrap

Bootstrap peers help your node discover and connect to the IPFS network when starting up. This array contains multiaddrs of trusted nodes that your node contacts first to find other peers and content.

The special value `"auto"` automatically uses curated, up-to-date bootstrap peers from AutoConf, ensuring your node can always connect to the healthy network without manual maintenance.

Default: `["auto"]`

Type: `array[string]` (multiaddrs or `"auto"`)
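Static entries can be combined with the `"auto"` placeholder; the multiaddr below is a truncated placeholder for illustration, not a real bootstrapper:

```json
{
  "Bootstrap": [
    "auto",
    "/ip4/203.0.113.1/tcp/4001/p2p/12D3KooW..."
  ]
}
```

This keeps the curated AutoConf peers while always also contacting your own trusted node.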
## Datastore

Contains information related to the construction and operation of the on-disk storage system.

### Datastore.StorageMax

A soft upper limit for the size of the ipfs repository's datastore. Together with `StorageGCWatermark`, it is used to calculate whether to trigger a GC run (only if the `--enable-gc` flag is set).

> [!NOTE]
> This only controls when automatic GC of raw blocks is triggered. It is not a hard limit on total disk usage. The metadata stored alongside blocks (pins, MFS, provider system state, pubsub message ID tracking, and other internal data) is not counted against this limit. Always include extra headroom to account for metadata overhead. See datastores.md for details on how different datastore backends handle disk space reclamation.

Default: `"10GB"`

Type: `string` (size)
### Datastore.StorageGCWatermark

The percentage of the `StorageMax` value at which a garbage collection will be
triggered automatically, if the daemon was run with automatic GC enabled (that
option currently defaults to false).

Default: `90`

Type: `integer` (0-100%)
### Datastore.GCPeriod

A time duration specifying how frequently to run garbage collection. Only used if automatic GC is enabled.

Default: `1h`

Type: `duration` (an empty string means the default value)
### Datastore.HashOnRead

A boolean value. If set to `true`, all block reads from disk will be hashed and verified. This will cause increased CPU utilization.

Default: `false`

Type: `bool`
### Datastore.BloomFilterSize

A number representing the size in bytes of the blockstore's bloom filter. A value of zero means the feature is disabled.

This site generates useful graphs for various bloom filter values:
https://hur.st/bloomfilter/?n=1e6&p=0.01&m=&k=7 You may use it to find a
preferred optimal value, where `m` is `BloomFilterSize` in bits. Remember to
convert the value `m` from bits into bytes for use as `BloomFilterSize` in the
config file. For example, for 1,000,000 blocks, expecting a 1% false-positive
rate, you'd end up with a filter size of 9592955 bits, so for `BloomFilterSize`
we'd want to use 1199120 bytes. As of writing, 7 hash functions
are used, so the constant `k` is 7 in the formula.

Enabling the bloom filter can provide performance improvements, especially when responding to many requests for nonexistent blocks. It however requires a full sweep of all the datastore keys on daemon start. On very large datastores this can be a very taxing operation, particularly if the datastore does not support querying existing keys without reading their values at the same time (blocks).

Default: `0` (disabled)

Type: `integer` (non-negative, bytes)
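The bits-to-bytes arithmetic above can be checked with a short Python sketch using the fixed-`k` bloom filter formula (`k = 7`, as the text notes; this reproduces the hur.st numbers up to rounding):

```python
import math

k = 7            # hash functions used by Kubo's bloom filter
n = 1_000_000    # expected number of blocks
p = 0.01         # target false-positive rate

# Solve p = (1 - e^(-k*n/m))^k for the filter size m, in bits.
m_bits = math.ceil(k * n / -math.log(1 - p ** (1 / k)))

# BloomFilterSize is configured in bytes, so convert bits to bytes.
m_bytes = math.ceil(m_bits / 8)

print(m_bits, m_bytes)  # roughly 9.59 million bits, about 1.2 MB
```

For the documented 9592955-bit filter, ceiling division by 8 yields the 1199120-byte value to put in the config.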
### Datastore.WriteThrough

This option controls whether a block that already exists in the datastore
should be written to it again. When set to `false`, a `Has()` call is performed
against the datastore prior to writing every block. If the block is already
stored, the write is skipped. This check happens on both the Blockservice and
the Blockstore layers, and this setting affects both.

When set to `true`, no checks are performed and blocks are written to the
datastore, which, depending on the implementation, may perform its own checks.

This option can affect performance, and the chosen strategy should be considered in
conjunction with `BlockKeyCacheSize` and `BloomFilterSize`.

Default: `true`

Type: `bool`
### Datastore.BlockKeyCacheSize

A number representing the maximum size in bytes of the blockstore's Two-Queue
cache, which caches block CIDs and their block sizes. Use `0` to disable.

This cache, once primed, can greatly speed up operations like `ipfs repo stat`,
as there is no need to read full blocks to know their sizes. The size should be
adjusted depending on the number of CIDs on disk (`NumObjects` in `ipfs repo stat`).

Default: `65536` (64KiB)

Type: `optionalInteger` (non-negative, bytes)
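The three tuning knobs discussed above can be set together. A hedged sketch for a large, read-heavy repo (the cache size is illustrative; the bloom filter value is the 1,000,000-block example from the `BloomFilterSize` section):

```json
{
  "Datastore": {
    "WriteThrough": true,
    "BlockKeyCacheSize": 1048576,
    "BloomFilterSize": 1199120
  }
}
```

Smaller repos may prefer the defaults, since the bloom filter sweep at daemon start has its own cost.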
### Datastore.Spec

Spec defines the structure of the ipfs datastore. It is a composable structure, where each datastore is represented by a JSON object. Datastores can wrap other datastores to provide extra functionality (e.g. metrics, logging, or caching).

> [!NOTE]
> For more information on possible values for this configuration option, see kubo/docs/datastores.md.

Default:

```json
{
  "mounts": [
    {
      "mountpoint": "/blocks",
      "path": "blocks",
      "prefix": "flatfs.datastore",
      "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
      "sync": false,
      "type": "flatfs"
    },
    {
      "compression": "none",
      "mountpoint": "/",
      "path": "datastore",
      "prefix": "leveldb.datastore",
      "type": "levelds"
    }
  ],
  "type": "mount"
}
```
With the `flatfs-measure` profile:

```json
{
  "mounts": [
    {
      "child": {
        "path": "blocks",
        "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
        "sync": true,
        "type": "flatfs"
      },
      "mountpoint": "/blocks",
      "prefix": "flatfs.datastore",
      "type": "measure"
    },
    {
      "child": {
        "compression": "none",
        "path": "datastore",
        "type": "levelds"
      },
      "mountpoint": "/",
      "prefix": "leveldb.datastore",
      "type": "measure"
    }
  ],
  "type": "mount"
}
```

Type: `object`
## Discovery

Contains options for configuring IPFS node discovery mechanisms.

### Discovery.MDNS

Options for ZeroConf Multicast DNS-SD peer discovery.

#### Discovery.MDNS.Enabled

A boolean value to activate or deactivate Multicast DNS-SD.

Default: `true`

Type: `bool`

#### Discovery.MDNS.Interval

REMOVED: this is not configurable anymore in the new mDNS implementation.
## Experimental

Toggle and configure experimental features of Kubo. Experimental features are listed here.

### Experimental.Libp2pStreamMounting

Enables the `ipfs p2p` commands for tunneling TCP connections through libp2p
streams, similar to SSH port forwarding.

See docs/p2p-tunnels.md for usage examples.

Default: `false`

Type: `bool`
## Gateway

Options for the HTTP gateway.

> [!IMPORTANT]
> By default, Kubo's gateway is configured for local use at `127.0.0.1` and `localhost`. To run a public gateway, configure your domain names in `Gateway.PublicGateways`. For production deployment considerations (reverse proxy, timeouts, rate limiting, CDN), see Running in Production.

### Gateway.NoFetch

When set to `true`, the gateway will only serve content already in the local repo and will not fetch files from the network.

Default: `false`

Type: `bool`
### Gateway.NoDNSLink

A boolean to configure whether a DNSLink lookup for the value in the `Host` HTTP header
should be performed. If DNSLink is present, the content path stored in the DNS TXT
record is served under `/` and the respective payload is returned to the client.

Default: `false`

Type: `bool`
### Gateway.DeserializedResponses

An optional flag to explicitly configure whether this gateway responds to deserialized requests or not. By default, it is enabled. When this option is disabled, the gateway operates as a Trustless Gateway only: https://specs.ipfs.tech/http-gateways/trustless-gateway/.

Default: `true`

Type: `flag`
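Combining the flags above, a sketch of a locked-down, local-only trustless gateway (a common hardening for self-hosted nodes; adjust to your needs):

```json
{
  "Gateway": {
    "NoFetch": true,
    "NoDNSLink": true,
    "DeserializedResponses": false
  }
}
```

Such a gateway serves only blocks already in the local repo, in verifiable response formats, and performs no DNSLink lookups.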
### Gateway.AllowCodecConversion

An optional flag to enable automatic conversion between codecs when the requested format differs from the block's native codec (e.g., converting dag-pb or dag-cbor to dag-json).

When disabled (the default), the gateway returns `406 Not Acceptable` for
codec mismatches, following behavior specified in IPIP-524.

Most users should keep this disabled unless legacy
IPLD Logical Format
support is needed as a stop-gap while switching clients to `?format=raw`
and converting client-side.

Instead of relying on gateway-side conversion, fetch the raw block using
`?format=raw` (`application/vnd.ipld.raw`) and convert client-side.

Default: `false`

Type: `flag`
### Gateway.DisableHTMLErrors

An optional flag to disable the gateway's pretty HTML error pages. Instead,
a `text/plain` page will be returned with the raw error message from Kubo.

It is useful for whitelabel or middleware deployments that wish to avoid
`text/html` responses with IPFS branding and links on error pages in browsers.

Default: `false`

Type: `flag`
### Gateway.ExposeRoutingAPI

An optional flag to expose the Kubo routing system on the gateway port
as an HTTP `/routing/v1` endpoint on `127.0.0.1`.
Use a reverse proxy to expose it on a different hostname.

This endpoint can be used by other Kubo instances, as illustrated in
delegated_routing_v1_http_proxy_test.go.

Kubo will filter out routing results which are not actionable; for example, all
graphsync providers will be skipped. If you need a generic pass-through, see the
standalone router implementation named someguy.

Default: `true`

Type: `flag`
### Gateway.RetrievalTimeout

Maximum duration Kubo will wait for content retrieval (new bytes to arrive) before timing out a gateway request.

Truncation handling: when the timeout occurs after HTTP 200 headers have already been sent (e.g., during CAR streams), the gateway terminates the response and records the timeout metric with a `truncated=true` flag.

Monitoring: track `ipfs_http_gw_retrieval_timeouts_total` by status code and truncation status.

Tuning guidance: correlate timeout metrics (`ipfs_http_gw_retrieval_timeouts_total`) with success rates (`ipfs_http_gw_responses_total{status="200"}`). Timeouts with `truncated=true` indicate retrieval stalled mid-file, with no new bytes arriving for the timeout duration.

A value of `0` disables this timeout.

Default: `30s`

Type: `optionalDuration`
### Gateway.MaxRequestDuration

An absolute deadline for the entire gateway request. Unlike `RetrievalTimeout` (which resets on each data write and catches stalled transfers), this is a hard limit on the total time a request can take.

Returns `504 Gateway Timeout` when exceeded. This protects the gateway from edge cases and slow-client attacks.

Default: `1h`

Type: `optionalDuration`
### Gateway.MaxRangeRequestFileSize

Maximum file size for HTTP range requests on deserialized responses. Range requests for files larger than this limit return 501 Not Implemented.
Why this exists:
Some CDNs like Cloudflare intercept HTTP range requests and convert them to full file downloads when files exceed their cache bucket limits. Cloudflare's default plan only caches range requests for files up to 5GiB. Files larger than this receive HTTP 200 with the entire file instead of HTTP 206 with the requested byte range. A client requesting 1MB from a 40GiB file would unknowingly download all 40GiB, causing bandwidth overcharges for the gateway operator, unexpected data costs for the client, and potential browser crashes.
This only affects deserialized responses. Clients fetching verifiable blocks as application/vnd.ipld.raw are not impacted because they work with small chunks that stay well below CDN cache limits.
How to use:
Set this to your CDN's range request cache limit (e.g., "5GiB" for Cloudflare's default plan). The gateway returns 501 Not Implemented for range requests over files larger than this limit, with an error message suggesting verifiable block requests as an alternative.
[!NOTE] Cloudflare users running an open gateway hosting deserialized responses should deploy additional protection via Cloudflare Snippets (requires Enterprise plan). The Kubo configuration alone is not sufficient because Cloudflare has already intercepted and cached the response by the time it reaches your origin. See boxo#856 for a snippet that aborts HTTP 200 responses when Content-Length exceeds the limit.
Default: 0 (no limit)
Type: optionalBytes
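For example, to match the Cloudflare default-plan cache limit mentioned above:

```shell
# Return 501 for range requests over files larger than 5GiB
# (5GiB is Cloudflare's default-plan range-request cache limit, per the docs above)
ipfs config --json Gateway.MaxRangeRequestFileSize '"5GiB"'
```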
### Gateway.MaxConcurrentRequests

Limits concurrent HTTP requests. Requests beyond the limit receive 429 Too Many Requests.
Protects nodes from traffic spikes and resource exhaustion, especially behind reverse proxies without rate-limiting. Default (4096) aligns with common reverse proxy configurations (e.g., nginx: 8 workers × 1024 connections).
Monitoring: ipfs_http_gw_concurrent_requests tracks current requests in flight.
Tuning guidance:
- Watch the ipfs_http_gw_concurrent_requests gauge for usage patterns.
- Compare rejections (ipfs_http_gw_responses_total{status="429"}) and the success rate ({status="200"}).

A value of 0 disables the limit.
Default: 4096
Type: optionalInteger
### Gateway.HTTPHeaders

Headers to set on gateway responses.
Default: {} + implicit CORS headers from boxo/gateway#AddAccessControlHeaders and ipfs/specs#423
Type: object[string -> array[string]]
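A minimal sketch of setting a custom response header (the header choice is illustrative; the implicit CORS headers noted above are added regardless):

```shell
# Send an extra header on every gateway response.
# Note: this sets the whole Gateway.HTTPHeaders map at once.
ipfs config --json Gateway.HTTPHeaders '{
  "X-Frame-Options": ["DENY"]
}'
```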
### Gateway.RootRedirect

A URL to redirect requests for / to.
Default: ""
Type: string (url)
### Gateway.DiagnosticServiceURL

URL for a service to diagnose CID retrievability issues. When the gateway returns a 504 Gateway Timeout error, an "Inspect retrievability of CID" button will be shown that links to this service with the CID appended as ?cid=<CID-to-diagnose>.
Set to empty string to disable the button.
Default: "https://check.ipfs.network"
Type: optionalString (url)
### Gateway.FastDirIndexThreshold

REMOVED: this option is no longer necessary. Ignored since Kubo 0.18.
### Gateway.Writable

REMOVED: this option is no longer available as of Kubo 0.20.
We are working on developing a modern replacement. To support our efforts, please leave a comment describing your use case in ipfs/specs#375.
### Gateway.PathPrefixes

REMOVED: see go-ipfs#7702
### Gateway.PublicGateways

[!IMPORTANT] This configuration is NOT for the HTTP client, it is for the HTTP server – use this ONLY if you want to run your own IPFS gateway.

PublicGateways is a configuration map for customizing gateway behavior
on specified hostnames that point at your Kubo instance.
It is useful when you want to run Path gateway on example.com/ipfs/cid,
and Subdomain gateway on cid.ipfs.example.org,
or limit verifiable.example.net to the response types defined in the Trustless Gateway specification.
[!CAUTION] Keys (Hostnames) MUST be unique. Do not use the same parent domain for multiple gateway types, it will break origin isolation.
Hostnames can optionally be defined with one or more wildcards.
Examples:
- *.example.com will match requests to http://foo.example.com/ipfs/* or http://{cid}.ipfs.bar.example.com/*
- foo-*.example.com will match requests to http://foo-bar.example.com/ipfs/* or http://{cid}.ipfs.foo-xyz.example.com/*

[!IMPORTANT] Reverse Proxy: If running behind nginx or another reverse proxy, ensure
Host and X-Forwarded-* headers are forwarded correctly. See Reverse Proxy Caveats in gateway documentation.
#### Gateway.PublicGateways: Paths

An array of paths that should be exposed on the hostname.
Example:
```json
{
  "Gateway": {
    "PublicGateways": {
      "example.com": {
        "Paths": ["/ipfs"]
      }
    }
  }
}
```
The above enables http://example.com/ipfs/* but not http://example.com/ipns/*.
Default: []
Type: array[string]
#### Gateway.PublicGateways: UseSubdomains

A boolean to configure whether the gateway at the hostname should be a Subdomain Gateway and provide Origin isolation between content roots.
true - enables subdomain gateway at http://*.{hostname}/
Requires whitelist: make sure respective Paths are set.
For example, Paths: ["/ipfs", "/ipns"] are required for http://{cid}.ipfs.{hostname} and http://{foo}.ipns.{hostname} to work:
```json
"Gateway": {
"PublicGateways": {
"dweb.link": {
"UseSubdomains": true,
"Paths": ["/ipfs", "/ipns"]
}
}
}
```
Backward-compatible: requests for content paths such as http://{hostname}/ipfs/{cid} produce redirect to http://{cid}.ipfs.{hostname}
false - enables path gateway at http://{hostname}/*
Example:
"Gateway": {
"PublicGateways": {
"ipfs.io": {
"UseSubdomains": false,
"Paths": ["/ipfs", "/ipns"]
}
}
}
Default: false
Type: bool
[!IMPORTANT] See Reverse Proxy Caveats if running behind nginx or another reverse proxy.
#### Gateway.PublicGateways: NoDNSLink

A boolean to configure whether DNSLink for the hostname present in the Host
HTTP header should be resolved. Overrides the global setting.
If Paths are defined, they take priority over DNSLink.
Default: false (DNSLink lookup enabled by default for every defined hostname)
Type: bool
[!IMPORTANT] See Reverse Proxy Caveats if running behind nginx or another reverse proxy.
#### Gateway.PublicGateways: InlineDNSLink

An optional flag to explicitly configure whether the subdomain gateway's redirects
(enabled by UseSubdomains: true) should always inline a DNSLink name (FQDN)
into a single DNS label (specification):
//example.com/ipns/example.net → HTTP 301 → //example-net.ipns.example.com
DNSLink name inlining allows for HTTPS on public subdomain gateways with single
label wildcard TLS certs (also enabled when passing X-Forwarded-Proto: https),
and provides a disjoint Origin per root CID when special rules like
https://publicsuffix.org, or custom localhost logic in browsers like Brave,
have to be applied.
Default: false
Type: flag
#### Gateway.PublicGateways: DeserializedResponses

An optional flag to explicitly configure whether this gateway responds to deserialized requests. By default, it is enabled.
When disabled, the gateway operates strictly as a Trustless Gateway.
[!TIP] Disabling deserialized responses will protect you from acting as free web hosting, while still allowing trustless clients like @helia/verified-fetch to utilize it for trustless, verifiable data retrieval.
Default: same as global Gateway.DeserializedResponses
Type: flag
Default entries for the localhost hostname and loopback IPs are always present.
If additional config is provided for those hostnames, it will be merged on top of implicit values:
```json
{
  "Gateway": {
    "PublicGateways": {
      "localhost": {
        "Paths": ["/ipfs", "/ipns"],
        "UseSubdomains": true
      }
    }
  }
}
```
It is also possible to remove a default by setting it to null.
For example, to disable subdomain gateway on localhost
and make that hostname act the same as 127.0.0.1:
```shell
ipfs config --json Gateway.PublicGateways '{"localhost": null }'
```
## Gateway recipes

Below is a list of the most common gateway setups.
[!IMPORTANT] See Reverse Proxy Caveats if running behind nginx or another reverse proxy.
Public subdomain gateway at http://{cid}.ipfs.dweb.link (each content root gets its own Origin)
```shell
$ ipfs config --json Gateway.PublicGateways '{
  "dweb.link": {
    "UseSubdomains": true,
    "Paths": ["/ipfs", "/ipns"]
  }
}'
```
Performance: Consider enabling Routing.AcceleratedDHTClient=true to improve content routing lookups. Separately, gateway operators should decide if the gateway node should also co-host and provide (announce) fetched content to the DHT. If providing content, enable Provide.DHT.SweepEnabled=true for efficient announcements. If announcements are still not fast enough, adjust Provide.DHT.MaxWorkers. For a read-only gateway that doesn't announce content, use Provide.Enabled=false.
Backward-compatible: this feature enables automatic redirects from content paths to subdomains:
http://dweb.link/ipfs/{cid} → http://{cid}.ipfs.dweb.link
X-Forwarded-Proto: if you run Kubo behind a reverse proxy that provides TLS, make it add a X-Forwarded-Proto: https HTTP header to ensure users are redirected to https://, not http://. It will also ensure DNSLink names are inlined to fit in a single DNS label, so they work fine with a wildcard TLS cert (details). The NGINX directive is proxy_set_header X-Forwarded-Proto "https";:
http://dweb.link/ipfs/{cid} → https://{cid}.ipfs.dweb.link
http://dweb.link/ipns/your-dnslink.site.example.com → https://your--dnslink-site-example-com.ipfs.dweb.link
X-Forwarded-Host: we also support X-Forwarded-Host: example.com if you want to override subdomain gateway host from the original request:
http://dweb.link/ipfs/{cid} → http://{cid}.ipfs.example.com
Public path gateway at http://ipfs.io/ipfs/{cid} (no Origin separation)
```shell
$ ipfs config --json Gateway.PublicGateways '{
  "ipfs.io": {
    "UseSubdomains": false,
    "Paths": ["/ipfs", "/ipns"]
  }
}'
```
Performance: Consider enabling Routing.AcceleratedDHTClient=true to improve content routing lookups. When running an open, recursive gateway, decide if the gateway should also co-host and provide (announce) fetched content to the DHT. If providing content, enable Provide.DHT.SweepEnabled=true for efficient announcements. If announcements are still not fast enough, adjust Provide.DHT.MaxWorkers. For a read-only gateway that doesn't announce content, use Provide.Enabled=false.

Public DNSLink gateway resolving every hostname passed in the Host header.
```shell
ipfs config --json Gateway.NoDNSLink false
```
NoDNSLink: false is the default (it works out of the box unless set to true manually).

Hardened, site-specific DNSLink gateway.
Disable fetching of remote data (NoFetch: true) and resolving DNSLink at unknown hostnames (NoDNSLink: true).
Then, enable DNSLink gateway only for the specific hostname (for which data
is already present on the node), without exposing any content-addressing Paths:
```shell
$ ipfs config --json Gateway.NoFetch true
$ ipfs config --json Gateway.NoDNSLink true
$ ipfs config --json Gateway.PublicGateways '{
  "en.wikipedia-on-ipfs.org": {
    "NoDNSLink": false,
    "Paths": []
  }
}'
```
## Identity

### Identity.PeerID

The unique PKI identity label for this config's peer. Set on init and never read; it's merely here for convenience. IPFS will always generate the peer ID from its keypair at runtime.
Type: string (peer ID)
### Identity.PrivKey

The base64-encoded protobuf describing (and containing) the node's private key.
Type: string (base64 encoded)
## Internal

This section includes internal knobs for various subsystems to allow advanced users with big or private infrastructures to fine-tune some behaviors without the need to recompile Kubo.

Be aware that making informed changes here requires in-depth knowledge and most users should leave these untouched. All knobs listed here are subject to breaking changes between versions.
### Internal.Bitswap

Internal.Bitswap contains knobs for tuning bitswap resource utilization.
[!TIP] For high level configuration see
Bitswap.
The knobs (below) document how their values should relate to each other.
Whether their values should be raised or lowered should be determined
based on the metrics ipfs_bitswap_active_tasks, ipfs_bitswap_pending_tasks,
ipfs_bitswap_pending_block_tasks and ipfs_bitswap_active_block_tasks
reported by bitswap.
These metrics can be accessed at the Prometheus endpoint at {Addresses.API}/debug/metrics/prometheus (default: http://127.0.0.1:5001/debug/metrics/prometheus).
The value of ipfs_bitswap_active_tasks is capped by EngineTaskWorkerCount.
The value of ipfs_bitswap_pending_tasks is generally capped by the knobs below,
however its exact maximum value is hard to predict as it depends on task sizes
as well as number of requesting peers. However, as a rule of thumb,
during healthy operation this value should oscillate around a "typical" low value
(without hitting a plateau continuously).
If ipfs_bitswap_pending_tasks is growing while ipfs_bitswap_active_tasks is at its maximum then
the node has reached its resource limits and new requests are unable to be processed as quickly as they are coming in.
Raising resource limits (using the knobs below) could help, assuming the hardware can support the new limits.
The value of ipfs_bitswap_active_block_tasks is capped by EngineBlockstoreWorkerCount.
The value of ipfs_bitswap_pending_block_tasks is indirectly capped by ipfs_bitswap_active_tasks, but can be hard to
predict as it depends on the number of blocks involved in a peer task which can vary.
If the value of ipfs_bitswap_pending_block_tasks is observed to grow,
while ipfs_bitswap_active_block_tasks is at its maximum, there is indication that the number of
available block tasks is creating a bottleneck (either due to high-latency block operations,
or due to high number of block operations per bitswap peer task).
In such cases, try increasing the EngineBlockstoreWorkerCount.
If this adjustment still does not increase the throughput of the node, there might
be hardware limitations like I/O or CPU.
#### Internal.Bitswap.TaskWorkerCount

Number of threads (goroutines) sending outgoing messages. Throttles the number of concurrent send operations.
Type: optionalInteger (thread count, null means default which is 8)
#### Internal.Bitswap.EngineBlockstoreWorkerCount

Number of threads for blockstore operations.
Used to throttle the number of concurrent requests to the block store.
The optimal value can be informed by the metrics ipfs_bitswap_pending_block_tasks and ipfs_bitswap_active_block_tasks.
This would be a number that depends on your hardware (I/O and CPU).
Type: optionalInteger (thread count, null means default which is 128)
#### Internal.Bitswap.EngineTaskWorkerCount

Number of worker threads used for preparing and packaging responses before they are sent out.
This number should generally be equal to TaskWorkerCount.
Type: optionalInteger (thread count, null means default which is 8)
#### Internal.Bitswap.MaxOutstandingBytesPerPeer

Maximum number of bytes (across all tasks) pending to be processed and sent to any individual peer. This number controls fairness and can vary from 250Kb (very fair) to 10Mb (less fair, with more work dedicated to peers who ask for more). Values below 250Kb could cause thrashing. Values above 10Mb open the potential for aggressively-wanting peers to consume all resources and deteriorate the quality provided to less aggressively-wanting peers.
Type: optionalInteger (byte count, null means default which is 1MB)
#### Internal.Bitswap.ProviderSearchDelay

This parameter determines how long to wait before looking for providers outside of bitswap. Other routing systems like the Amino DHT are able to provide results in less than a second, so lowering this number will allow faster peer lookups in some cases.
Type: optionalDuration (null means default which is 1s)
#### Internal.Bitswap.ProviderSearchMaxResults

Maximum number of providers the bitswap client should aim for before it stops searching for new ones. Setting to 0 means unlimited.
Type: optionalInteger (null means default which is 10)
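As a sketch, the worker-count knobs above can be raised together when the ipfs_bitswap_* metrics indicate a bottleneck; the values below are illustrative, not recommendations:

```shell
# Illustrative values: double the send/engine workers and the blockstore workers
# (documented defaults are 8 / 8 / 128) on hardware with spare CPU and fast I/O
ipfs config --json Internal.Bitswap.TaskWorkerCount 16
ipfs config --json Internal.Bitswap.EngineTaskWorkerCount 16
ipfs config --json Internal.Bitswap.EngineBlockstoreWorkerCount 256
```

Keep TaskWorkerCount and EngineTaskWorkerCount equal, as noted above.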
#### Internal.Bitswap.BroadcastControl

Internal.Bitswap.BroadcastControl contains settings for the bitswap client's broadcast control functionality.

Broadcast control tries to reduce the number of bitswap broadcast messages sent to peers by choosing a subset of the peers to send to. Peers are chosen based on whether they have previously responded indicating they have wanted blocks, as well as other configurable criteria. The settings here change how peers are selected as broadcast targets. Broadcast control can also be completely disabled to return bitswap to its previous behavior before broadcast control was introduced.
Enabling broadcast control should generally reduce the number of broadcasts significantly without significantly degrading the ability to discover which peers have wanted blocks. However, if block discovery on your network relies sufficiently on broadcasts to discover peers that have wanted blocks, then adjusting the broadcast control configuration or disabling it altogether, may be helpful.
#### Internal.Bitswap.BroadcastControl.Enable

Enables or disables broadcast control functionality. Setting this to false disables broadcast reduction logic and restores the previous (Kubo < 0.36) broadcast behavior of sending broadcasts to all peers. When disabled, all other Bitswap.BroadcastControl configuration items are ignored.
Default: true (Enabled)
Type: flag
#### Internal.Bitswap.BroadcastControl.MaxPeers

Sets a hard limit on the number of peers to send broadcasts to. A value of 0 means no broadcasts are sent. A value of -1 means there is no limit.

Default: -1 (no limit)

Type: optionalInteger (-1 means no limit)
#### Internal.Bitswap.BroadcastControl.LocalPeers

Enables or disables broadcast control for peers on the local network. Peers that have private or loopback addresses are considered to be on the local network. If this setting is false, then broadcasts are always sent to peers on the local network. If true, broadcast control is applied to local peers.
Default: false (Always broadcast to peers on local network)
Type: flag
#### Internal.Bitswap.BroadcastControl.PeeredPeers

Enables or disables broadcast reduction for peers configured for peering. If false, then broadcasts are always sent to peers configured for peering. If true, broadcast reduction is applied to peered peers.
Default: false (Always broadcast to peers configured for peering)
Type: flag
#### Internal.Bitswap.BroadcastControl.MaxRandomPeers

Sets the number of peers to broadcast to anyway, even though broadcast control logic has determined that they are not broadcast targets. Setting this to a non-zero value ensures at least this number of random peers receives a broadcast. This may be helpful in cases where peers that are not receiving broadcasts may have wanted blocks.

Default: 0 (do not send broadcasts to peers not already targeted by broadcast control)
Type: optionalInteger (non-negative, 0 means do not broadcast to any random peers)
#### Internal.Bitswap.BroadcastControl.SendToPendingPeers

Enables or disables sending broadcasts to any peers to which there is a pending message to send. When enabled, this sends broadcasts to many more peers, but does so in a way that does not increase the number of separate broadcast messages. There is still the increased cost of the recipients having to process and respond to the broadcasts.
Default: false (Do not send broadcasts to all peers for which there are pending messages)
Type: flag
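If block discovery on your network relies heavily on broadcasts, the whole subsystem can be turned off, as described above:

```shell
# Restore the pre-0.36 behavior of broadcasting to all peers
# (all other BroadcastControl settings are then ignored)
ipfs config --json Internal.Bitswap.BroadcastControl.Enable false
```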
### Internal.UnixFSShardingSizeThreshold

MOVED: see Import.UnixFSHAMTDirectorySizeThreshold
### Internal.MFSNoFlushLimit

Controls the maximum number of consecutive MFS operations allowed with --flush=false
before requiring a manual flush. This prevents unbounded memory growth and ensures
data consistency when using deferred flushing with ipfs files commands.
When the limit is reached, further operations will fail with an error message
instructing the user to run ipfs files flush, use --flush=true, or increase
this limit in the configuration.
Why operations fail instead of auto-flushing: Automatic flushing once the limit is reached was considered but rejected, because flushing without the user's knowledge can lead to data corruption issues that are difficult to debug.

By failing explicitly, users maintain control over when their data is persisted.
If you expect automatic flushing behavior, simply use the default --flush=true
(or omit the flag entirely) instead of --flush=false.
⚠️ WARNING: Increasing this limit or disabling it (setting to 0) can lead to unbounded memory growth and data-consistency issues.
Default: 256
Type: optionalInteger (0 disables the limit, strongly discouraged)
Note: This is an EXPERIMENTAL feature and may change or be removed in future releases. See #10842 for more information.
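A sketch of raising the limit for a workload with long --flush=false batches (the value is illustrative; as warned above, higher limits increase memory use):

```shell
# Allow up to 1024 consecutive unflushed MFS operations (default: 256)
ipfs config --json Internal.MFSNoFlushLimit 1024
```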
## Ipns

### Ipns.RepublishPeriod

A time duration specifying how frequently to republish IPNS records to ensure they stay fresh on the network.
Default: 4 hours.
Type: interval or an empty string for the default.
### Ipns.RecordLifetime

A time duration specifying the value to set on IPNS records for their validity lifetime.
Default: 48 hours.
Type: interval or an empty string for the default.
### Ipns.ResolveCacheSize

The number of entries to store in an LRU cache of resolved IPNS entries. Entries will be kept cached until their lifetime is expired.
Default: 128
Type: integer (non-negative, 0 means the default)
### Ipns.MaxCacheTTL

Maximum duration for which entries are valid in the name system cache. Applied
to everything under /ipns/ namespace, allows you to cap
the Time-To-Live (TTL) of
IPNS Records
AND also DNSLink TXT records (when DoH-specific DNS.MaxCacheTTL
is not set to a lower value).
When Ipns.MaxCacheTTL is set, it defines the upper bound limit of how long a
IPNS Name lookup result
will be cached and read from cache before checking for updates.
Examples:

- "1m": IPNS results are cached 1m or less (a good compromise for systems where
  faster updates are desired).
- "0s": IPNS caching is effectively turned off (useful for testing, bad for production use).
A value of "0s" will turn off TTL-based caching entirely.
This is discouraged in production environments. It will make IPNS websites
artificially slow because IPNS resolution results will expire as soon as
they are retrieved, forcing an expensive IPNS lookup to happen on every
request. If you want near-real-time IPNS, set it to a low, but still
sensible value, such as 1m.

Default: No upper bound; the TTL from the IPNS Record (see ipfs name publish --help) is always respected.
Type: optionalDuration
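Using the "1m" compromise value from the examples above:

```shell
# Cap IPNS cache entries at 1 minute for faster update propagation
ipfs config --json Ipns.MaxCacheTTL '"1m"'
```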
### Ipns.UsePubsub

Enables IPNS over PubSub for publishing and resolving IPNS records in real time.
EXPERIMENTAL: read about current limitations at experimental-features.md#ipns-pubsub.
Default: disabled
Type: flag
### Ipns.DelegatedPublishers

HTTP endpoints for delegated IPNS publishing operations. These endpoints must support the IPNS API from the Delegated Routing V1 HTTP specification.
The special value "auto" loads delegated publishers from AutoConf when enabled.
Publishing behavior depends on routing configuration:
- Routing.Type=auto (default): uses the DHT for publishing; "auto" resolves to an empty list
- Routing.Type=delegated: uses HTTP delegated publishers only; "auto" resolves to the configured endpoints

When using "auto", inspect the effective publishers with: ipfs config Ipns.DelegatedPublishers --expand-auto
Command flags override publishing behavior:
- --allow-offline - publishes to the local datastore without requiring network connectivity
- --allow-delegated - uses the local datastore and HTTP delegated publishers only (no DHT connectivity required)

For self-hosting, you can run your own /routing/v1/ipns endpoint using someguy.
Default: ["auto"]
Type: array[string] (URLs or "auto")
## Migration

[!WARNING] DEPRECATED: Only applies to legacy migrations (repo versions <16). Modern repos (v16+) use embedded migrations. This section is optional and will not appear in new configurations.
### Migration.DownloadSources

DEPRECATED: Download sources for legacy migrations. Only "HTTPS" is supported.
Type: array[string] (optional)
Default: ["HTTPS"]
### Migration.Keep

DEPRECATED: Controls retention of legacy migration binaries. Options: "cache" (default), "discard", "keep".
Type: string (optional)
Default: "cache"
## Mounts

[!CAUTION] EXPERIMENTAL: This feature is disabled by default and requires an explicit opt-in with
ipfs mount or ipfs daemon --mount. See fuse.md for setup instructions and platform-specific notes.
FUSE mount point configuration options.
All mounts expose the ipfs.cid extended attribute on files and directories, returning the CID of the underlying DAG node:
```shell
$ getfattr -n ipfs.cid /ipfs/bafybeiaysi4s6lnjev27ln5icwm6tueaw2vdykrtjkwiphwekaywqhcjze/wiki/Cat
# file: ipfs/bafybeiaysi4s6lnjev27ln5icwm6tueaw2vdykrtjkwiphwekaywqhcjze/wiki/Cat
ipfs.cid="bafybeihxislsmn7b2drh6m3vqz3ctcfae46al7ax3543umeso4f5jgij5e"
```
### Mounts.IPFS

Mountpoint for /ipfs/.
Default: /ipfs
Type: string (filesystem path)
### Mounts.IPNS

Mountpoint for /ipns/.
Default: /ipns
Type: string (filesystem path)
### Mounts.MFS

Mountpoint for the Mutable File System (MFS) behind the ipfs files API.
[!CAUTION]
- Write support is highly experimental and not recommended for mission-critical deployments.
- Avoid storing lazy-loaded datasets in MFS. Exposing a partially local, lazy-loaded DAG risks operating system search indexers crawling it, which may trigger unintended network prefetching of non-local DAG components.
Default: /mfs
Type: string (filesystem path)
### Mounts.FuseAllowOther

Sets the FUSE allow_other mount option, letting users other than the mounter access the mounted filesystem.
Default: false
Type: flag
### Mounts.StoreMtime

When true, writable mounts (/ipns and /mfs) store the current time as mtime in UnixFS metadata when creating a file or opening it for writing. Setting mtime explicitly via touch works on both files and directories. This changes the resulting CID even when the file content is identical, because mtime is stored in the root block of the UnixFS DAG.
Most data on IPFS does not include mtime. When mtime is present in the UnixFS metadata, it is always shown in stat responses on all mounts, regardless of this flag. When absent, mtime is reported as zero (epoch).
Default: false
Type: flag
### Mounts.StoreMode

When true, writable mounts (/ipns and /mfs) accept chmod requests on both files and directories and persist POSIX permission bits in UnixFS metadata. This changes the resulting CID because mode is stored in the root block of the UnixFS DAG.
Most data on IPFS does not include mode. When mode is present in the UnixFS metadata, it is always shown in stat responses on all mounts, regardless of this flag. When absent, a default mode is used (files: 0644 on writable mounts, 0444 on /ipfs; directories: 0755 on writable mounts, 0555 on /ipfs).
Default: false
Type: flag
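A sketch combining the options above, keeping in mind the CID-changing caveats they describe:

```shell
# Persist mtime and POSIX mode bits on writable mounts (changes resulting CIDs),
# then start the daemon with FUSE mounts (see fuse.md for prerequisites)
ipfs config --json Mounts.StoreMtime true
ipfs config --json Mounts.StoreMode true
ipfs daemon --mount
```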
## Pinning

Pinning configures the options available for pinning content (i.e. keeping content longer-term instead of as temporarily cached storage).
### Pinning.RemoteServices

RemoteServices maps a name for a remote pinning service to its configuration.
A remote pinning service is a remote service that exposes an API for managing that service's interest in long-term data storage.
The exposed API conforms to the specification defined at https://ipfs.github.io/pinning-services-api-spec/
#### Pinning.RemoteServices: API

Contains information relevant to utilizing the remote pinning service.
Example:
```json
{
  "Pinning": {
    "RemoteServices": {
      "myPinningService": {
        "API": {
          "Endpoint": "https://pinningservice.tld:1234/my/api/path",
          "Key": "someOpaqueKey"
        }
      }
    }
  }
}
```
#### Pinning.RemoteServices: API.Endpoint

The HTTP(S) endpoint through which to access the pinning service.
Example: "https://pinningservice.tld:1234/my/api/path"
Type: string
#### Pinning.RemoteServices: API.Key

The key through which access to the pinning service is granted.
Type: string
#### Pinning.RemoteServices: Policies

Contains additional opt-in policies for the remote pinning service.

#### Pinning.RemoteServices: Policies.MFS

When this policy is enabled, it follows changes to MFS and updates the pin for the MFS root on the configured remote service.
A pin request to the remote service is sent only when MFS root CID has changed
and enough time has passed since the previous request (determined by RepinInterval).
One can observe MFS pinning details by enabling debug via ipfs log level remotepinning/mfs debug and switching back to error when done.
#### Pinning.RemoteServices: Policies.MFS.Enabled

Controls if this policy is active.
Default: false
Type: bool
#### Pinning.RemoteServices: Policies.MFS.PinName

Optional name to use for a remote pin that represents the MFS root CID. When left empty, a default name will be generated.
Default: "policy/{PeerID}/mfs", e.g. "policy/12.../mfs"
Type: string
#### Pinning.RemoteServices: Policies.MFS.RepinInterval

Defines how often (at most) the pin request should be sent to the remote service.
If left empty, the default interval will be used. Values lower than 1m will be ignored.
Default: "5m"
Type: duration
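Building on the API example above, a config fragment enabling the MFS policy might look like the following sketch. The service name is illustrative, and key names follow the option headings above; verify them against your Kubo version:

```json
{
  "Pinning": {
    "RemoteServices": {
      "myPinningService": {
        "API": {
          "Endpoint": "https://pinningservice.tld:1234/my/api/path",
          "Key": "someOpaqueKey"
        },
        "Policies": {
          "MFS": {
            "Enabled": true,
            "PinName": "",
            "RepinInterval": "5m"
          }
        }
      }
    }
  }
}
```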
## Provide

Configures how your node advertises content to make it discoverable by other peers.
What is providing? When your node stores content, it publishes provider records to the routing system announcing "I have this content". These records map CIDs to your peer ID, enabling content discovery across the network.
While designed to support multiple routing systems in the future, the current default configuration only supports providing to the Amino DHT.
### Provide.Enabled

Controls whether the Kubo provide and reprovide systems are enabled.
[!CAUTION] Disabling this will prevent other nodes from discovering your content. Your node will stop announcing data to the routing system, making it inaccessible unless peers connect to you directly.
Default: true
Type: flag
### Provide.Strategy

Controls which CIDs are announced to the content routing system. Valid strategies are:

- "all" - announce all CIDs of stored blocks
- "pinned" - only announce recursively pinned CIDs (ipfs pin add -r, both roots and child blocks)
- "roots" - only announce the top-level root CID of explicitly pinned DAGs (ipfs pin add).
  The roots strategy will not announce child blocks.
  It makes sense only for use cases where the entire DAG is fetched in full,
  and a graceful resume does not have to be guaranteed: the lack of child
  announcements means an interrupted retrieval won't be able to find
  providers for the missing block in the middle of a file, unless the peer
  happens to already be connected to a provider and asks for the child CID over
  bitswap. Does not traverse the DAG to discover sub-entity roots
  (files within directories, HAMT shards, etc.). If you want that, use
  "pinned+entities" instead.
- "mfs" - announce only the local CIDs that are part of the MFS (ipfs files)
- "pinned+mfs" - a combination of the pinned and mfs strategies:
  pinned and then the locally available part of mfs.

+unique and +entities

Append +unique or +entities to pinned, mfs, or pinned+mfs to optimize the reprovide cycle. Neither works with "all" or "roots".
- +unique: uses a bloom filter to deduplicate CIDs across recursive
  pins that share sub-DAGs. Without it, a node with 1000 pins sharing 99%
  of their content re-traverses the shared blocks for every pin. With +unique,
  shared subtrees are skipped, cutting traversal from
  O(pins * total_blocks) to O(unique_blocks). This also cuts the number of
  CIDs sent to the routing system when similar datasets are pinned multiple
  times.
- +entities: announces only entity roots (file roots, directory roots,
  HAMT shard nodes) instead of every block. Internal file chunks are not
  announced. This significantly reduces the number of provider records for
  repositories with large files while keeping all files and directories
  discoverable. Implies +unique. Non-UnixFS content (e.g. dag-cbor) is
  still fully announced.
Suggested configurations:

- "pinned+mfs+unique": safe default for nodes with GC enabled, or desktop
  users who don't want to announce all blocks cached in the local repository.
  Handles pins of similar DAGs efficiently (e.g. versioned datasets where pins
  are added and removed over time).
- "pinned+mfs+entities": same as above, but also skips internal file chunks
  for even fewer provider records. Use when the +entities trade-off (no
  chunk-level discoverability) is acceptable.

Reproviding larger pinsets using the mfs, pinned, pinned+mfs or roots strategies requires additional memory, with an estimated ~1 GiB of RAM per 20 million CIDs. This is because the pinner snapshots the pin index into memory at the start of each reprovide cycle so that pin/unpin are not blocked while the DHT reprovider works over the snapshot.
With +unique or +entities, a bloom filter replaces the in-memory CID set, significantly reducing memory usage:
+unique bloom filter)+unique bloom filter)+unique bloom filter)The bloom auto-scales: the first cycle starts small and grows as needed; subsequent cycles size correctly from the previous cycle's count.
Strategy changes automatically clear the provide queue. When you change Provide.Strategy and restart Kubo, the provide queue is automatically cleared to ensure only content matching your new strategy is announced. You can also manually clear the queue using ipfs provide clear.
Default: "all"
Type: optionalString (unset for the default)
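As an illustrative sketch of the suggested default described above, the corresponding fragment of $IPFS_PATH/config would look like this (any of the strategy strings listed in this section can be substituted):

```json
{
  "Provide": {
    "Strategy": "pinned+mfs+unique"
  }
}
```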
### Provide.DHT

Configuration for providing data to Amino DHT peers.
Provider record lifecycle: On the Amino DHT, provider records expire after
amino.DefaultProvideValidity.
Your node must re-announce (reprovide) content periodically to keep it
discoverable. The Provide.DHT.Interval setting
controls this timing, with the default ensuring records refresh well before
expiration or negative churn effects kick in.
Two provider systems:
- Sweep provider: Divides the DHT keyspace into regions and systematically sweeps through them over the reprovide interval. This batches CIDs allocated to the same DHT servers, dramatically reducing the number of DHT lookups and PUTs needed. Spreads work evenly over time with predictable resource usage.
- Legacy provider: Processes each CID individually with separate DHT lookups. Works well for small content collections but struggles to complete reprovide cycles when managing thousands of CIDs.
Quick command-line monitoring: Use ipfs provide stat to view the current
state of the provider system. For real-time monitoring, run
watch ipfs provide stat --all --compact to see detailed statistics refreshed
continuously in a 2-column layout.
Long-term monitoring: For in-depth or long-term monitoring, metrics are
exposed at the Prometheus endpoint: {Addresses.API}/debug/metrics/prometheus
(default: http://127.0.0.1:5001/debug/metrics/prometheus). Different metrics
are available depending on whether you use legacy mode (SweepEnabled=false) or
sweep mode (SweepEnabled=true). See Provide metrics documentation
for details.
Debug logging: For troubleshooting, enable detailed logging by setting:
GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug
- provider=debug enables generic logging (legacy provider and any non-DHT operations)
- dht/provider=debug enables logging for the sweep provider

#### Provide.DHT.Interval

Sets how often to re-announce content to the DHT. Provider records on the Amino DHT
expire after amino.DefaultProvideValidity.
Why this matters: The interval must be shorter than the expiration window to
ensure provider records refresh before they expire. The default value is
approximately half of amino.DefaultProvideValidity,
which accounts for network churn and ensures records stay alive without
overwhelming the network with unnecessary announcements.
With sweep mode enabled
(Provide.DHT.SweepEnabled): The system spreads
reprovide operations smoothly across this entire interval. Each keyspace region
is reprovided at scheduled times throughout the period, ensuring each region's
announcements complete before records expire.
With legacy mode: The system attempts to reprovide all CIDs as quickly as possible at the start of each interval. If reproviding takes longer than this interval (common with large datasets), the next cycle is skipped and provider records may expire.
Setting this to "0" disables content reproviding to the DHT.

[!CAUTION] Disabling this will prevent other nodes from discovering your content via the DHT. Your node will stop announcing data to the DHT, making it inaccessible unless peers connect to you directly. Since provider records expire after
amino.DefaultProvideValidity, your content will become undiscoverable after this period.
Default: 22h
Type: optionalDuration (unset for the default)
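As a sketch, lowering the interval looks like the fragment below (the "12h" value is purely illustrative; any Go duration string shorter than the record expiry works, and 22h is the default):

```json
{
  "Provide": {
    "DHT": {
      "Interval": "12h"
    }
  }
}
```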
#### Provide.DHT.MaxWorkers

Sets the maximum number of concurrent DHT provide operations.

When Provide.DHT.SweepEnabled is false (legacy mode):

- 0 allows unlimited provide workers

When Provide.DHT.SweepEnabled is true:

- MaxWorkers is the total worker pool, shared with
  DedicatedPeriodicWorkers and DedicatedBurstWorkers for task allocation

If the accelerated DHT client is enabled, each provide operation opens ~20 connections in parallel. With the standard DHT client (accelerated disabled), each provide opens between 20 and 60 connections, with at most 10 active at once. Provides complete more quickly when using the accelerated client. Be mindful of how many simultaneous connections this setting can generate.
[!CAUTION] For nodes without strict connection limits that need to provide large volumes of content, we recommend first trying
`Provide.DHT.SweepEnabled=true` for efficient announcements. If announcements are still not fast enough, adjust `Provide.DHT.MaxWorkers`. As a last resort, consider enabling `Routing.AcceleratedDHTClient=true`, but be aware that it is very resource hungry. At the same time, mind that raising this value too high may lead to increased load. Proceed with caution, and ensure proper hardware and networking are in place.
[!TIP] When
`SweepEnabled` is true: Users providing millions of CIDs or more should increase the worker count accordingly. Underprovisioning can lead to slow provides (burst workers) and an inability to keep up with content reproviding (periodic workers). For nodes with sufficient resources (CPU, bandwidth, number of connections), dedicating 1024 periodic workers, 512 burst workers, and 2048 max workers should be adequate even for the largest users. The system only uses workers as needed - unused resources won't be consumed. Ensure you adjust the swarm connection manager and resource manager configuration accordingly. See Capacity Planning for more details.
Default: 16
Type: optionalInteger (non-negative; 0 means unlimited number of workers)
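Putting the large-node numbers from the tip above into config form, a sketch for a well-provisioned node providing millions of CIDs might look like this (tune the values to your own hardware and connection limits):

```json
{
  "Provide": {
    "DHT": {
      "MaxWorkers": 2048,
      "DedicatedPeriodicWorkers": 1024,
      "DedicatedBurstWorkers": 512
    }
  }
}
```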
#### Provide.DHT.SweepEnabled

Enables the sweep provider for efficient content announcements. When disabled,
the legacy boxo/provider is
used instead.
The legacy provider problem: The legacy system processes CIDs one at a
time, requiring a separate DHT lookup (10-20 seconds each) to find the 20
closest peers for each CID. This sequential approach typically handles fewer
than 10,000 CIDs over 22h (Provide.DHT.Interval). If
your node has more CIDs than can be reprovided within
Provide.DHT.Interval, provider records start expiring
after
amino.DefaultProvideValidity,
making content undiscoverable.
How sweep mode works: The sweep provider divides the DHT keyspace into
regions based on keyspace prefixes. It estimates the Amino DHT size, calculates
how many regions are needed (sized to contain at least 20 peers each), then
schedules region processing evenly across
Provide.DHT.Interval. When processing a region, it
discovers the peers in that region once, then sends all provider records for
CIDs allocated to those peers in a batch. This batching is the key efficiency:
instead of N lookups for N CIDs, the number of lookups is bounded by a constant
fraction of the Amino DHT size (e.g., ~3,000 lookups when there are ~10,000 DHT
servers), regardless of how many CIDs you're providing.
Efficiency gains: For a node providing 100,000 CIDs, sweep mode reduces lookups by 97% compared to legacy. The work spreads smoothly over time rather than completing in bursts, preventing resource spikes and duplicate announcements. Long-running nodes reprovide systematically just before records would expire, keeping content continuously discoverable without wasting bandwidth.
Implementation details: The sweep provider tracks CIDs in a persistent
keystore. New content added via StartProviding() enters the provide queue and
gets batched by keyspace region. The keystore is periodically refreshed at each
Provide.DHT.Interval with CIDs matching
Provide.Strategy to ensure only current content remains
scheduled. This handles cases where content is unpinned or removed.
Persistent reprovide cycle state: When Provide Sweep is enabled, the
reprovide cycle state is persisted to the datastore by default. On restart, Kubo
automatically resumes from where it left off. If the node was offline for an
extended period, all CIDs that haven't been reprovided within the configured
Provide.DHT.Interval are immediately queued for
reproviding. Additionally, the provide queue is persisted on shutdown and
restored on startup, ensuring no pending provide operations are lost. If you
don't want to keep the persisted provider state from a previous run, you can
disable this behavior by setting Provide.DHT.ResumeEnabled
to false.
<picture> <source media="(prefers-color-scheme: dark)" srcset="https://github.com/user-attachments/assets/f6e06b08-7fee-490c-a681-1bf440e16e27"> <source media="(prefers-color-scheme: light)" srcset="https://github.com/user-attachments/assets/e1662d7c-f1be-4275-a9ed-f2752fcdcabe"> </picture>

The diagram compares performance patterns:
- Legacy mode: Sequential processing, one lookup per CID, struggles with large datasets
- Sweep mode: Smooth distribution over time, batched lookups by keyspace region, predictable resource usage
- Accelerated DHT: Hourly network crawls creating traffic spikes, high resource usage
Sweep mode achieves similar effectiveness to the Accelerated DHT client but with steady resource consumption.
For background on the sweep provider design and motivations, see Shipyard's blogpost Provide Sweep: Solving the DHT Provide Bottleneck.
You can compare the effectiveness of sweep mode vs legacy mode by monitoring the appropriate metrics (see Monitoring Provide Operations above).
[!NOTE] This is the default provider system as of Kubo v0.39. To use the legacy provider instead, set
Provide.DHT.SweepEnabled=false.
[!NOTE] When DHT routing is unavailable (e.g.,
`Routing.Type=custom` with only HTTP routers), the provider automatically falls back to the legacy provider regardless of this setting.
Default: true
Type: flag
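If you need to opt back into the legacy provider (for example, to compare metrics between the two systems as described above), a minimal config sketch is:

```json
{
  "Provide": {
    "DHT": {
      "SweepEnabled": false
    }
  }
}
```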
#### Provide.DHT.ResumeEnabled

Controls whether the provider resumes from its previous state on restart. Only
applies when Provide.DHT.SweepEnabled is true.
When enabled (the default), the provider persists its reprovide cycle state and provide queue to the datastore, and restores them on restart. This ensures:

- the reprovide cycle resumes where it left off instead of restarting from scratch
- pending provide operations queued before shutdown are not lost
When disabled, the provider starts fresh on each restart, discarding any
previous reprovide cycle state and provide queue. On a fresh start, all CIDs
matching the Provide.Strategy will be provided ASAP (as
burst provides), and then keyspace regions are reprovided according to the
regular schedule starting from the beginning of the reprovide cycle.
[!NOTE] Disabling this option means the provider will provide all content matching your strategy on every restart (which can be resource-intensive for large datasets), then start from the beginning of the reprovide cycle. For nodes with large datasets or frequent restarts, keeping this enabled (the default) is recommended for better resource efficiency and more consistent reproviding behavior.
Default: true
Type: flag
#### Provide.DHT.DedicatedPeriodicWorkers

Number of workers dedicated to periodic keyspace region reprovides. Only
applies when Provide.DHT.SweepEnabled is true.
Among the Provide.DHT.MaxWorkers, this
number of workers will be dedicated to the periodic region reprovide only. The sum of
DedicatedPeriodicWorkers and DedicatedBurstWorkers should not exceed MaxWorkers.
Any remaining workers (MaxWorkers - DedicatedPeriodicWorkers - DedicatedBurstWorkers)
form a shared pool that can be used for either type of work as needed.
[!NOTE] If the provider system isn't able to keep up with reproviding all your content within the Provide.DHT.Interval, consider increasing this value.
Default: 2
Type: optionalInteger (0 means there are no dedicated workers, but the
operation can be performed by free non-dedicated workers)
#### Provide.DHT.DedicatedBurstWorkers

Number of workers dedicated to burst provides. Only applies when Provide.DHT.SweepEnabled is true.

Burst provides are triggered by:

- manual provide requests (ipfs routing provide)
- new content matching Provide.Strategy (blocks from ipfs add, bitswap, or trustless gateway requests)

Having dedicated burst workers ensures that bulk operations (like adding many CIDs or reconnecting to the network) don't delay regular periodic reprovides, and vice versa.
Among the Provide.DHT.MaxWorkers, this
number of workers will be dedicated to burst provides only. In addition to
these, if there are available workers in the pool, they can also be used for
burst provides.
[!NOTE] If CIDs aren't provided quickly enough for your needs, and you can afford more CPU and bandwidth, consider increasing this value.
Default: 1
Type: optionalInteger (0 means there are no dedicated workers, but the
operation can be performed by free non-dedicated workers)
#### Provide.DHT.MaxProvideConnsPerWorker

Maximum number of connections that a single worker can use to send provider records over the network.
When reproviding CIDs corresponding to a keyspace region, the reprovider must send a provider record to the 20 closest peers to the CID (in XOR distance) for each CID belonging to this keyspace region.
The reprovider opens a connection to a peer from that region and sends it all of its allocated provider records. Once done, it opens a connection to the next peer from that keyspace region, until all provider records have been sent.
This option defines how many such connections can be open concurrently by a single worker.
[!NOTE] Increasing this value can speed up the provide operation, at the cost of opening more simultaneous connections to DHT servers. A keyspace region typically contains fewer than 60 peers, so you may hit a performance ceiling beyond which increasing this value has no effect.
Default: 20
Type: optionalInteger (non-negative)
#### Provide.DHT.KeystoreBatchSize

During garbage collection, all keys stored in the Keystore are removed, and the keys are streamed from a channel to refill the Keystore with up-to-date keys. Since a high number of CIDs to reprovide can easily fill up memory, keys are read and written in batches to optimize memory usage.
This option defines how many multihashes should be contained within a batch. A multihash is usually represented by 34 bytes.
Default: 16384 (~544 KiB per batch)
Type: optionalInteger (non-negative)
#### Provide.DHT.OfflineDelay

The SweepingProvider has 3 states: ONLINE, DISCONNECTED and OFFLINE. It
starts OFFLINE, and as the node bootstraps, it changes its state to ONLINE.
When the provider loses connection to all DHT peers, it switches to the
DISCONNECTED state. In this state, new provides will be added to the provide
queue, and provided as soon as the node comes back online.
After a node has been DISCONNECTED for OfflineDelay, it goes to OFFLINE
state. When OFFLINE, the provider drops the provide queue, and returns errors
to new provide requests. However, when OFFLINE the provider still adds the
keys to its state, so keys will eventually be provided in the
Provide.DHT.Interval after the provider comes back
ONLINE.
Default: 2h
Type: optionalDuration
### Provide.BloomFPRate

Target false positive rate for the bloom filter used by the +unique and
+entities strategy modifiers and
the matching --fast-provide-dag walk. Expressed as 1/N (one false positive
per N lookups), so a higher value means a lower FP rate but more memory per
CID. Has no effect when Provide.Strategy does not include +unique or
+entities.
The bloom filter sizes itself from the previous reprovide cycle's CID count and the configured FP rate. The auto-scaling described in Memory during reprovide is unaffected; this setting only changes the bits-per-CID ratio of each bloom in the chain.
Memory tradeoff (approximate, before ipfs/bbloom's power-of-two rounding):
| Provide.BloomFPRate | Approx. FP rate | Bytes per CID |
|---|---|---|
| 1000000 | 1 in 1M | ~3 |
| 4750000 (default) | ~1 in 4.75M | ~4 |
| 10000000 | 1 in 10M | ~5 |
| 100000000 | 1 in 100M | ~6 |
A false positive causes the walker to skip a CID it has already been told
about; the skipped CID is provided in the next reprovide cycle (see
Provide.DHT.Interval). At the default rate, fewer
than ~21 CIDs per 100M are skipped per cycle.
The minimum accepted value is 1000000 (1 in 1M). Below that the bloom
filter becomes lossy enough to drop a meaningful fraction of CIDs from each
reprovide cycle.
Default: 4750000 (~1 false positive per 4.75M lookups, ~4 bytes per CID)
Type: optionalInteger
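As a sketch, trading a little memory for a lower false-positive rate (the 1-in-10M row from the table above) could look like this; note that BloomFPRate only takes effect when the strategy includes +unique or +entities:

```json
{
  "Provide": {
    "Strategy": "pinned+mfs+unique",
    "BloomFPRate": 10000000
  }
}
```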
## Provider

### Provider.Enabled

REMOVED: Replaced with Provide.Enabled.

### Provider.Strategy

REMOVED: This field was unused. Use Provide.Strategy instead.

### Provider.WorkerCount

REMOVED: Replaced with Provide.DHT.MaxWorkers.
## Pubsub

Pubsub configures Kubo's opt-in, opinionated libp2p pubsub instance.
To enable, set Pubsub.Enabled to true.
EXPERIMENTAL: This is an opt-in feature. Its primary use case is
IPNS over PubSub, which
enables real-time IPNS record propagation. See Ipns.UsePubsub
for details.
The ipfs pubsub commands can also be used for basic publish/subscribe
operations, but only if Kubo's built-in message validation (described below) is
acceptable for your use case.
Kubo's pubsub is optimized for IPNS. It uses opinionated message validation that may not fit all applications. If you need custom Message ID computation, different deduplication logic, or validation rules beyond what Kubo provides, consider building a dedicated pubsub node using go-libp2p-pubsub directly.
Kubo uses two layers of message deduplication to handle duplicate messages that may arrive via different network paths:
Layer 1: In-memory TimeCache (Message ID)
When a message arrives, Kubo computes its Message ID (hash of the message content) and checks an in-memory cache. If the ID was seen recently, the message is dropped. This cache is controlled by:
- Pubsub.SeenMessagesTTL - how long Message IDs are remembered (default: 120s)
- Pubsub.SeenMessagesStrategy - whether TTL resets on each sighting

This cache is fast but limited: it only works within the TTL window and is cleared on node restart.
Layer 2: Persistent Seqno Validator (per-peer)
For stronger deduplication, Kubo tracks the maximum sequence number seen from each peer and persists it to the datastore. Messages with sequence numbers lower than the recorded maximum are rejected. This prevents replay attacks and handles message cycles in large networks where messages may take longer than the TimeCache TTL to propagate.
This layer survives node restarts. The state can be inspected or cleared using
ipfs pubsub reset (for testing/recovery only).
### Pubsub.Enabled

Enables the pubsub system.
Default: false
Type: flag
### Pubsub.Router

Sets the default router used by pubsub to route messages to peers. This can be one of:

- "floodsub" - floodsub is a basic router that simply floods messages to all
  connected peers. This router is extremely inefficient but very reliable.
- "gossipsub" - gossipsub is a more advanced routing algorithm that will
  build an overlay mesh from a subset of the links in the network.

Default: "gossipsub"

Type: string (one of "floodsub", "gossipsub", or "" (apply default))
### Pubsub.DisableSigning

Disables message signing and signature verification.
FOR TESTING ONLY - DO NOT USE IN PRODUCTION
It is not safe to disable signing even if you don't care who sent the message because spoofed messages can be used to silence real messages by intentionally re-using the real message's message ID.
Default: false
Type: bool
### Pubsub.SeenMessagesTTL

Controls the time window for the in-memory Message ID cache (Layer 1 deduplication). Messages with the same ID seen within this window are dropped.
A smaller value reduces memory usage but may cause more duplicates in networks with slow nodes. A larger value uses more memory but provides better duplicate detection within the time window.
Default: see TimeCacheDuration from go-libp2p-pubsub
Type: optionalDuration
### Pubsub.SeenMessagesStrategy

Determines how the TTL countdown for the Message ID cache works.

- last-seen - Sliding window: TTL resets each time the message is seen again.
  Keeps frequently-seen messages in cache longer, preventing continued propagation.
- first-seen - Fixed window: TTL counts from first sighting only. Messages are
  purged after the TTL regardless of how many times they're seen.

Default: last-seen (see go-libp2p-pubsub)
Type: optionalString
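Putting the pubsub settings above together, a config sketch that enables pubsub and makes the documented defaults explicit would be:

```json
{
  "Pubsub": {
    "Enabled": true,
    "Router": "gossipsub",
    "SeenMessagesTTL": "120s",
    "SeenMessagesStrategy": "last-seen"
  }
}
```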
## Peering

Configures the peering subsystem. The peering subsystem configures Kubo to connect to, remain connected to, and reconnect to a set of nodes. Nodes should use this subsystem to create "sticky" links between frequently useful peers to improve reliability.
Use-cases:

- Keeping a gateway or cluster node reliably connected to the nodes that host the content it serves
- Maintaining stable links between a pinning service and its customers' nodes
When a node is added to the set of peered nodes, Kubo will:

- Protect connections to it from being trimmed by the connection manager
- Connect to it on startup
- Repeatedly try to reconnect (with backoff) if the connection is lost
Peering can be asymmetric or symmetric:

- Asymmetric: only one node lists the other in its peering configuration
- Symmetric: both nodes list each other, so both work to maintain the connection
### Peering.Peers

The set of peers with which to peer.
{
"Peering": {
"Peers": [
{
"ID": "QmPeerID1",
"Addrs": ["/ip4/18.1.1.1/tcp/4001"]
},
{
"ID": "QmPeerID2",
"Addrs": ["/ip4/18.1.1.2/tcp/4001", "/ip4/18.1.1.2/udp/4001/quic-v1"]
}
]
}
...
}
Where ID is the peer ID and Addrs is a set of known addresses for the peer. If no addresses are specified, the Amino DHT will be queried.
Additional fields may be added in the future.
Default: empty.
Type: array[peering]
## Reprovider

### Reprovider.Interval

REMOVED: Replaced with Provide.DHT.Interval.

### Reprovider.Strategy

REMOVED: Replaced with Provide.Strategy.
## Routing

Contains options for content, peer, and IPNS routing mechanisms.

### Routing.Type

Controls how your node discovers content and peers on the network.
Production options:
auto (default): Uses both the public IPFS DHT (Amino) and HTTP routers
from Routing.DelegatedRouters for faster lookups.
Your node starts as a DHT client and automatically switches to server mode
when reachable from the public internet.
autoclient: Same as auto, but never runs a DHT server.
Use this if your node is behind a firewall or NAT.
dht: Uses only the Amino DHT (no HTTP routers). Automatically switches
between client and server mode based on reachability.
dhtclient: DHT-only, always running as a client. Lower resource usage.
dhtserver: DHT-only, always running as a server.
Only use this if your node is reachable from the public internet.
none: Disables all routing. You must manually connect to peers.
About DHT client vs server mode:
When the DHT is enabled, your node can operate as either a client or server.
In server mode, it queries other peers and responds to their queries - this helps
the network but uses more resources. In client mode, it only queries others without
responding, which is less resource-intensive. With auto or dht, your node starts
as a client and switches to server when it detects public reachability.
[!CAUTION] Routing.Type experimental options: These modes are for research and testing only, not production use. They may change without notice between releases.

- delegated: Uses only HTTP routers from Routing.DelegatedRouters and IPNS publishers from Ipns.DelegatedPublishers, without initializing the DHT. Useful when peer-to-peer connectivity is unavailable. Note: cannot provide content to the network (no DHT means no provider records).
- custom: Disables all default routers. You define your own routing in Routing.Routers. See delegated-routing.md.
Default: auto
Type: optionalString (null/missing means the default)
### Routing.DelegatedRouters

An array of URLs for delegated routers to be queried in addition to the Amino DHT when Routing.Type is set to auto (default) or autoclient.
These endpoints must support the Delegated Routing V1 HTTP API.
The special value "auto" uses delegated routers from AutoConf when enabled.
You can combine "auto" with custom URLs (e.g., ["auto", "https://custom.example.com"]) to query both the default delegated routers and your own endpoints. The first "auto" entry gets substituted with autoconf values, and other URLs are preserved.
[!TIP] Delegated routing allows IPFS implementations to offload tasks like content routing, peer routing, and naming to a separate process or server while also benefiting from HTTP caching.
One can run their own delegated router either by implementing the Delegated Routing V1 HTTP API themselves, or by using Someguy, a turn-key implementation that proxies requests to other routing systems. A public utility instance of Someguy is hosted at
https://delegated-ipfs.dev.
Default: ["auto"]
Type: array[string] (URLs or "auto")
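Using the combination described above, a config sketch that keeps the autoconf defaults while adding a custom endpoint would be (the custom.example.com URL is a placeholder for your own Delegated Routing V1 endpoint):

```json
{
  "Routing": {
    "DelegatedRouters": ["auto", "https://custom.example.com"]
  }
}
```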
### Routing.AcceleratedDHTClient

This alternative Amino DHT client with a Full-Routing-Table strategy does a complete scan of the DHT every hour and records all nodes found. When a lookup is performed, instead of going through multiple Kademlia hops, it can find the 20 final nodes by consulting the in-memory recorded network table.
This means sustained higher memory to store the routing table and extra CPU and network bandwidth for each network scan. However the latency of individual read/write operations should be ~10x faster and provide throughput up to 6 million times faster on larger datasets!
This is not compatible with Routing.Type custom. If you are using composable routers
you can configure this individually on each router.
When it is enabled:

- Consider using Provide.DHT.SweepEnabled instead, which offers similar
  benefits without the hourly traffic spikes.
- ipfs stats dht will default to showing information about the accelerated DHT client

[!CAUTION] Routing.AcceleratedDHTClient caveats:
- Running the accelerated client likely will result in more resource consumption (connections, RAM, CPU, bandwidth)
- Users that are limited in the number of parallel connections their machines/networks can perform will be most affected
- The resource usage is not smooth as the client crawls the network in rounds and reproviding is similarly done in rounds
- Users who previously had a lot of content but were unable to advertise it on the network will see an increase in egress bandwidth as their nodes start to advertise all of their CIDs into the network. If you have lots of data entering your node that you don't want to advertise, consider using Provide.* configuration to control which CIDs are reprovided.
- Currently, the DHT is not usable for queries for the first 5-10 minutes of operation as the routing table is being prepared. This means operations like searching the DHT for particular peers or content will not work initially.
- You can see if the DHT has been initially populated by running ipfs stats dht
- Currently, the accelerated DHT client is not compatible with LAN-based DHTs and will not perform operations against them.
Default: false
Type: flag
### Routing.LoopbackAddressesOnLanDHT

EXPERIMENTAL: Routing.LoopbackAddressesOnLanDHT configuration may change in a future release.
Whether loopback addresses (e.g. 127.0.0.1) should not be ignored on the local LAN DHT.
Most users do not need this setting. It can be useful during testing, when multiple Kubo nodes run on the same machine but some of them do not have Discovery.MDNS.Enabled.
Default: false
Type: bool (missing means false)
### Routing.IgnoreProviders

An array of string-encoded PeerIDs. Any provider record associated with one of these peer IDs is ignored.

Apart from ignoring specific providers for reasons such as misbehaviour, this setting is useful for indicating preference when the same provider is found under different peer IDs (i.e. one for HTTP and one for Bitswap retrieval).
[!TIP] This denylist operates on PeerIDs. To deny a specific HTTP provider URL, use HTTPRetrieval.Denylist instead.
Default: []
Type: array[string]
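As a sketch, ignoring providers looks like the fragment below; "QmPeerID1" is a placeholder in the same style as the Peering example later in this document, to be replaced with the real string-encoded PeerID you want to ignore:

```json
{
  "Routing": {
    "IgnoreProviders": ["QmPeerID1"]
  }
}
```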
### Routing.Routers

Alternative configuration used when Routing.Type=custom.
[!CAUTION] EXPERIMENTAL: Routing.Routers is for research and testing only, not production use.

- The configuration format and behavior may change without notice between releases.
- Bugs and regressions may not be prioritized.
- HTTP-only configurations cannot reliably provide content. See delegated-routing.md.

Most users should use Routing.Type=auto or autoclient with Routing.DelegatedRouters.
Allows for replacing the default routing (Amino DHT) with alternative Router implementations.
The map key is a name of a Router, and the value is its configuration.
Default: {}
Type: object[string->object]
### Routing.Routers.[name].Type

⚠️ EXPERIMENTAL: For research and testing only. May change without notice.

Specifies the routing type that will be created.

Currently supported types:

- http: simple delegated routing based on the HTTP protocol from IPIP-337
- dht: provides decentralized routing based on libp2p's kad-dht
- parallel and sequential: helpers that can be used to run several routers sequentially or in parallel

Type: string
### Routing.Routers.[name].Parameters

⚠️ EXPERIMENTAL: For research and testing only. May change without notice.

Parameters needed to create the specified router. Supported params per router type:

HTTP:

- Endpoint (mandatory): URL that will be used to connect to the specified router.
- MaxProvideBatchSize: Determines the maximum number of CIDs sent per batch. Servers might not accept more than 100 elements per batch. 100 elements by default.
- MaxProvideConcurrency: Determines the number of threads used when providing content. GOMAXPROCS by default.

DHT:

- "Mode": Mode used by the Amino DHT. Possible values: "server", "client", "auto".
- "AcceleratedDHTClient": Set to true if you want to use the accelerated DHT client.
- "PublicIPNetwork": Set to true to create a WAN DHT. Set to false to create a LAN DHT.

Parallel:

- Routers: A list of routers that will be executed in parallel:
  - Name:string: Name of the router. It should be one of the routers previously added to the Routers list.
  - Timeout:duration: Local timeout. It accepts strings compatible with Go time.ParseDuration(string) (10s, 1m, 2h). Time starts counting when this specific router is called, and stops when the router returns or the specified timeout is reached.
  - ExecuteAfter:duration: Delays the execution of this router by the specified time. It accepts strings compatible with Go time.ParseDuration(string) (10s, 1m, 2h).
  - IgnoreErrors:bool: Specifies whether this router should be ignored if an error occurs.
- Timeout:duration: Global timeout. It accepts strings compatible with Go time.ParseDuration(string) (10s, 1m, 2h).

Sequential:

- Routers: A list of routers that will be executed in order:
  - Name:string: Name of the router. It should be one of the routers previously added to the Routers list.
  - Timeout:duration: Local timeout. It accepts strings compatible with Go time.ParseDuration(string). Time starts counting when this specific router is called, and stops when the router returns or the specified timeout is reached.
  - IgnoreErrors:bool: Specifies whether this router should be ignored if an error occurs.
- Timeout:duration: Global timeout. It accepts strings compatible with Go time.ParseDuration(string).

Default: {} (use the safe implicit defaults)

Type: object[string->string]
### Routing.Methods

The Methods map defines which routers will be executed per method used when Routing.Type=custom.

[!CAUTION] EXPERIMENTAL: Routing.Methods is for research and testing only, not production use.

- The configuration format and behavior may change without notice between releases.
- Bugs and regressions may not be prioritized.
- HTTP-only configurations cannot reliably provide content. See delegated-routing.md.

Most users should use Routing.Type=auto or autoclient with Routing.DelegatedRouters.

The key will be the name of the method: "provide", "find-providers", "find-peers", "put-ipns", "get-ipns". All methods must be added to the list.

The value will contain:

- RouterName:string: Name of the router. It should be one of the routers previously added to the Routing.Routers list.

Type: object[string->object]
Examples:
Complete example using 2 Routers, Amino DHT (LAN/WAN) and parallel.
```console
$ ipfs config Routing.Type --json '"custom"'
$ ipfs config Routing.Routers.WanDHT --json '{
  "Type": "dht",
  "Parameters": {
    "Mode": "auto",
    "PublicIPNetwork": true,
    "AcceleratedDHTClient": false
  }
}'
$ ipfs config Routing.Routers.LanDHT --json '{
  "Type": "dht",
  "Parameters": {
    "Mode": "auto",
    "PublicIPNetwork": false,
    "AcceleratedDHTClient": false
  }
}'
$ ipfs config Routing.Routers.ParallelHelper --json '{
  "Type": "parallel",
  "Parameters": {
    "Routers": [
      {
        "RouterName" : "LanDHT",
        "IgnoreErrors" : true,
        "Timeout": "3s"
      },
      {
        "RouterName" : "WanDHT",
        "IgnoreErrors" : false,
        "Timeout": "5m",
        "ExecuteAfter": "2s"
      }
    ]
  }
}'
$ ipfs config Routing.Methods --json '{
  "find-peers": {
    "RouterName": "ParallelHelper"
  },
  "find-providers": {
    "RouterName": "ParallelHelper"
  },
  "get-ipns": {
    "RouterName": "ParallelHelper"
  },
  "provide": {
    "RouterName": "ParallelHelper"
  },
  "put-ipns": {
    "RouterName": "ParallelHelper"
  }
}'
```
## Swarm

Options for configuring the swarm.
### `Swarm.AddrFilters`

An array of multiaddr netmasks. The libp2p connection gater refuses any connection (inbound or outbound) whose remote address matches an entry, before any handshake takes place.
By default Kubo advertises every interface address, so without this list a node may dial private or non-routable addresses learned from other peers. Some hosting providers treat such dials as netscan abuse.
This is the dial-side filter: it controls which peers this node connects
to or accepts connections from. It does not affect what this node advertises
about itself. For the publish-side filter see
Addresses.NoAnnounce. The
server profile typically populates both fields together
so that a range is neither advertised nor dialed.
> [!TIP]
> The `server` profile populates this field with a set of private, local-only, and non-globally-reachable prefixes (RFC 1918 private, RFC 6598 CGNAT, ULA, link-local, and others). See the `server` profile section for the full list and for optional entries operators may add manually.
Default: []
Type: array[string]
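For example, to refuse connections to a single extra range (the CGNAT range here is only an illustration; adjust to your network), set the array directly. Note that `ipfs config` replaces the whole list, so include every prefix you want filtered:

```console
$ ipfs config Swarm.AddrFilters --json '["/ip4/100.64.0.0/ipcidr/10"]'
```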
### `Swarm.DisableBandwidthMetrics`

A boolean value that, when set to true, causes ipfs to not keep track of bandwidth metrics. Disabling bandwidth metrics can lead to a slight performance improvement, as well as a reduction in memory usage.
Default: false
Type: bool
### `Swarm.DisableNatPortMap`

Disable automatic NAT port forwarding (turn off UPnP).

When not disabled (default), Kubo asks NAT devices (e.g., routers) to open up an external port and forward it to the port Kubo is running on. When this works (i.e., when your router supports NAT port forwarding), it makes the local Kubo node accessible from the public internet.
Default: false
Type: bool
### `Swarm.EnableHolePunching`

Enable hole punching for NAT traversal when port forwarding is not possible.
When enabled, Kubo will coordinate with the counterparty using
a relayed connection,
to upgrade to a direct connection
through a NAT/firewall whenever possible.
This feature requires Swarm.RelayClient.Enabled to be set to true.
Default: true
Type: flag
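A minimal sketch of enabling hole punching together with its prerequisite (both are already the defaults; the commands are shown only to illustrate the dependency stated above):

```console
$ ipfs config Swarm.RelayClient.Enabled --json true
$ ipfs config Swarm.EnableHolePunching --json true
```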
### `Swarm.EnableAutoRelay`

REMOVED: See `Swarm.RelayClient` instead.
### `Swarm.RelayClient`

Configuration options for the relay client to use relay services.
Default: {}
Type: object
### `Swarm.RelayClient.Enabled`

Enables "automatic relay user" mode for this node.

Your node will automatically use public relays from the network if it detects that it cannot be reached from the public internet (e.g., it's behind a firewall), and will get a `/p2p-circuit` address from a public relay.
Default: true
Type: flag
### `Swarm.RelayClient.StaticRelays`

Your node will use these statically configured relay servers instead of discovering public relays (Circuit Relay v2) from the network.
Default: []
Type: array[string]
### `Swarm.RelayService`

Configuration options for the relay service that can be provided to other peers on the network (Circuit Relay v2).
Default: {}
Type: object
### `Swarm.RelayService.Enabled`

Enables providing the `/p2p-circuit` v2 relay service to other peers on the network.
NOTE: This is the service/server part of the relay system.
Disabling this will prevent this node from running as a relay server.
Use Swarm.RelayClient.Enabled for turning your node into a relay user.
Default: true
Type: flag
### `Swarm.RelayService.Limit`

Limits are applied to every relayed connection.
Default: {}
Type: object[string -> string]
### `Swarm.RelayService.ConnectionDurationLimit`

Time limit before a relayed connection is reset.
Default: "2m"
Type: duration
### `Swarm.RelayService.ConnectionDataLimit`

Limit of data relayed (in each direction) before a relayed connection is reset.

Default: 131072 (128 KiB)
Type: optionalInteger
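An illustrative override of both limits for a more generous relay (the values here are arbitrary examples, not recommendations; omitted fields keep their implicit defaults):

```console
$ ipfs config Swarm.RelayService.Limit --json '{
  "ConnectionDurationLimit": "5m",
  "ConnectionDataLimit": 1048576
}'
```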
### `Swarm.RelayService.ReservationTTL`

Duration of a new or refreshed reservation.
Default: "1h"
Type: duration
### `Swarm.RelayService.MaxReservations`

Maximum number of active relay slots.
Default: 128
Type: optionalInteger
### `Swarm.RelayService.MaxCircuits`

Maximum number of open relay connections for each peer.
Default: 16
Type: optionalInteger
### `Swarm.RelayService.BufferSize`

Size of the relayed connection buffers.
Default: 2048
Type: optionalInteger
### `Swarm.RelayService.MaxReservationsPerPeer`

REMOVED in Kubo 0.32 due to go-libp2p#2974.
### `Swarm.RelayService.MaxReservationsPerIP`

Maximum number of reservations originating from the same IP address.
Default: 8
Type: optionalInteger
### `Swarm.RelayService.MaxReservationsPerASN`

Maximum number of reservations originating from the same ASN.
Default: 32
Type: optionalInteger
### `Swarm.EnableRelayHop`

REMOVED: Replaced with `Swarm.RelayService.Enabled`.
### `Swarm.DisableRelay`

REMOVED: Set `Swarm.Transports.Network.Relay` to `false` instead.
### `Swarm.EnableAutoNATService`

REMOVED: Please use `AutoNAT.ServiceMode`.
### `Swarm.ConnMgr`

The connection manager determines which and how many connections to keep. Kubo currently supports two connection managers:

- `none`: no connection management
- `basic`: the default manager, described below
By default, this section is empty and the implicit defaults defined below are used.
### `Swarm.ConnMgr.Type`

Sets the type of connection manager to use. Options are: `"none"` (no connection management) and `"basic"`.
Default: "basic".
Type: optionalString (default when unset or empty)
The basic connection manager uses a "high water", a "low water", and internal
scoring to periodically close connections to free up resources. When a node
using the basic connection manager reaches HighWater idle connections, it
will close the least useful ones until it reaches LowWater idle
connections. The process of closing connections happens every SilencePeriod.
The connection manager considers a connection idle if it is not protected by any subsystem and has been open for longer than the `GracePeriod`.

Example:
```json
{
  "Swarm": {
    "ConnMgr": {
      "Type": "basic",
      "LowWater": 100,
      "HighWater": 200,
      "GracePeriod": "30s",
      "SilencePeriod": "10s"
    }
  }
}
```
### `Swarm.ConnMgr.LowWater`

LowWater is the number of connections that the basic connection manager will trim down to.
Default: 32
Type: optionalInteger
### `Swarm.ConnMgr.HighWater`

HighWater is the number of connections that, when exceeded, will trigger a connection GC operation. Note: protected/recently formed connections don't count towards this limit.
Default: 96
Type: optionalInteger
### `Swarm.ConnMgr.GracePeriod`

GracePeriod is a time duration during which new connections are immune from being closed by the connection manager.
Default: "20s"
Type: optionalDuration
### `Swarm.ConnMgr.SilencePeriod`

SilencePeriod is the time duration between connection manager runs, when connections that are idle are closed.
Default: "10s"
Type: optionalDuration
### `Swarm.ResourceMgr`

Learn more about Kubo's usage of the libp2p Network Resource Manager in the dedicated resource management docs.
### `Swarm.ResourceMgr.Enabled`

Enables the libp2p Resource Manager using limits based on the defaults and/or other configuration as discussed in libp2p resource management.
Default: true
Type: flag
### `Swarm.ResourceMgr.MaxMemory`

This is the max amount of memory to allow go-libp2p to use.
libp2p's resource manager will prevent additional resource creation while this limit is reached. This value is also used to scale the limit on various resources at various scopes when the default limits (discussed in libp2p resource management) are used. For example, increasing this value will increase the default limit for incoming connections.
It is possible to inspect the runtime limits via ipfs swarm resources --help.
> [!IMPORTANT]
> `Swarm.ResourceMgr.MaxMemory` is the memory limit for the go-libp2p networking stack alone, not for all of Kubo or Bitswap.
>
> To set a memory limit for the entire Kubo process, use the `GOMEMLIMIT` environment variable, which all Go programs recognize, and then set `Swarm.ResourceMgr.MaxMemory` to less than your custom `GOMEMLIMIT`.
Default: [TOTAL_SYSTEM_MEMORY]/2
Type: optionalBytes
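A sketch of capping libp2p memory on a machine where most RAM is reserved for other services (the value is an assumption for illustration, not a recommendation):

```console
$ ipfs config Swarm.ResourceMgr.MaxMemory "4GB"
$ ipfs swarm resources   # inspect the resulting scaled limits on a running daemon
```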
### `Swarm.ResourceMgr.MaxFileDescriptors`

This is the maximum number of file descriptors to allow libp2p to use. libp2p's resource manager will prevent additional file descriptor consumption while this limit is reached.

This param is ignored on Windows.

Default: [TOTAL_SYSTEM_FILE_DESCRIPTORS]/2
Type: optionalInteger
### `Swarm.ResourceMgr.Allowlist`

A list of [multiaddrs][libp2p-multiaddrs] that can bypass normal system limits (but are still limited by the allowlist scope). Convenience config around go-libp2p-resource-manager#Allowlist.Add.
Default: []
Type: array[string] (multiaddrs)
### `Swarm.Transports`

Configuration section for libp2p transports. An empty configuration will apply the defaults.
### `Swarm.Transports.Network`

Configuration section for libp2p network transports. Transports enabled in
this section will be used for dialing. However, to receive connections on these
transports, multiaddrs for these transports must be added to Addresses.Swarm.
Supported transports are: QUIC, TCP, WS, Relay, WebTransport and WebRTCDirect.
> [!CAUTION]
> **SECURITY CONSIDERATIONS FOR NETWORK TRANSPORTS**
>
> Enabling network transports allows your node to accept connections from the internet. Ensure your firewall rules and `Addresses.Swarm` configuration align with your security requirements. See the Security section for network exposure considerations.
Each field in this section is a flag.
### `Swarm.Transports.Network.TCP`

TCP is a simple and widely deployed transport; it should be compatible with most implementations and network configurations. TCP doesn't directly support encryption and/or multiplexing, so libp2p will layer a security & multiplexing transport over it.
Default: Enabled
Type: flag
Listen Addresses:

- `/ip4/0.0.0.0/tcp/4001` (default)
- `/ip6/::/tcp/4001` (default)
### `Swarm.Transports.Network.Websocket`

Websocket is a transport usually used to connect to non-browser-based IPFS nodes from browser-based js-ipfs nodes.
While it's enabled by default for dialing, Kubo doesn't listen on this transport by default.
Default: Enabled
Type: flag
Listen Addresses:

- None by default (add a `/ws` listener to `Addresses.Swarm` to enable listening)
### `Swarm.Transports.Network.QUIC`

QUIC is the most widely used transport by Kubo nodes. It is a UDP-based transport with built-in encryption and multiplexing. The primary benefits over TCP are:

- faster connection establishment (fewer round trips before data can flow)
- native stream multiplexing without head-of-line blocking
Default: Enabled
Type: flag
Listen Addresses:
- `/ip4/0.0.0.0/udp/4001/quic-v1` (default)
- `/ip6/::/udp/4001/quic-v1` (default)

### `Swarm.Transports.Network.Relay`

Libp2p Relay proxy
transport that forms connections by hopping between multiple libp2p nodes.
Allows IPFS node to connect to other peers using their /p2p-circuit
[multiaddrs][libp2p-multiaddrs]. This transport is primarily useful for bypassing firewalls and
NATs.
See also:
- `Swarm.RelayClient.Enabled` for getting a public `/p2p-circuit` address when behind a firewall.
- `Swarm.EnableHolePunching` for direct connection upgrade through a relay.
- `Swarm.RelayService.Enabled` for becoming a limited relay for other peers.

Default: Enabled
Type: flag
Listen Addresses:
### `Swarm.Transports.Network.WebTransport`

The WebTransport transport is a newer feature of go-libp2p.

It is a spiritual descendant of WebSocket, but over HTTP/3. Since it runs on top of HTTP/3, it uses QUIC under the hood. We expect it to perform worse than QUIC because of the extra overhead; this transport is aimed at agents that cannot do TCP or QUIC (like browsers).

WebTransport is a new transport protocol currently under development by the IETF and the W3C, and already implemented by Chrome. Conceptually, it's like WebSocket run over QUIC instead of TCP. Most importantly, it allows browsers to establish (secure!) connections to WebTransport servers without the need for CA-signed certificates, thereby enabling any js-libp2p node running in a browser to connect to any Kubo node, with zero manual configuration involved.

The previous alternative is secure WebSocket, which requires installing a reverse proxy and TLS certificates manually.
Default: Enabled
Type: flag
Listen Addresses:
- `/ip4/0.0.0.0/udp/4001/quic-v1/webtransport` (default)
- `/ip6/::/udp/4001/quic-v1/webtransport` (default)

### `Swarm.Transports.Network.WebRTCDirect`

WebRTC Direct is a transport protocol that provides another way for browsers to connect to the rest of the libp2p network. WebRTC Direct allows browser nodes to connect to other nodes without special configuration, such as TLS certificates. This can be useful for browser nodes that do not yet support WebTransport, which is still relatively new and has known issues.
Enabling this transport allows Kubo node to act on /udp/4001/webrtc-direct
listeners defined in Addresses.Swarm, Addresses.Announce or
Addresses.AppendAnnounce.
> [!NOTE]
> WebRTC Direct is browser-to-node. It cannot be used to connect a browser node to a node that is behind a NAT or firewall (without UPnP port mapping). Browser-to-private connections require normal WebRTC, which is currently being worked on in go-libp2p#2009.
Default: Enabled
Type: flag
Listen Addresses:
- `/ip4/0.0.0.0/udp/4001/webrtc-direct` (default)
- `/ip6/::/udp/4001/webrtc-direct` (default)

### `Swarm.Transports.Security`

Configuration section for libp2p security transports. Transports enabled in this section will be used to secure unencrypted connections.
This does not apply to the QUIC transports, which use QUIC's built-in encryption.
Security transports are configured with the priority type.
When establishing an outbound connection, Kubo will try each security transport in priority order (lower first), until it finds a protocol that the receiver supports. When establishing an inbound connection, Kubo will let the initiator choose the protocol, but will refuse to use any of the disabled transports.
Supported transports are: TLS (priority 100) and Noise (priority 200).
No default priority will ever be less than 100. Lower values have precedence.
### `Swarm.Transports.Security.TLS`

TLS (1.3) is the default security transport as of Kubo 0.5.0. It's also the most scrutinized and trusted security transport.
Default: 100
Type: priority
### `Swarm.Transports.Security.SECIO`

REMOVED: Support for SECIO has been removed. Please remove this option from your config.
### `Swarm.Transports.Security.Noise`

Noise is slated to replace TLS as the cross-platform, default libp2p protocol due to ease of implementation. It is currently enabled by default but with low priority, as it's not yet widely supported.
Default: 200
Type: priority
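To illustrate the priority mechanism described above, the following hypothetical configuration makes Noise the preferred outbound choice by giving it a lower (higher-precedence) value than TLS:

```console
$ ipfs config Swarm.Transports.Security.Noise --json 1
$ ipfs config Swarm.Transports.Security.TLS --json 2
```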
### `Swarm.Transports.Multiplexers`

Configuration section for libp2p multiplexer transports. Transports enabled in this section will be used to multiplex duplex connections.

This does not apply to the QUIC transports, which use QUIC's built-in multiplexing.
Multiplexer transports are configured the same way security transports are, with
the priority type. Like with security transports, the initiator gets their
first choice.
The only supported multiplexer is Yamux (priority 100).
No default priority will ever be less than 100.
### `Swarm.Transports.Multiplexers.Yamux`

Yamux is the default multiplexer used when communicating between Kubo nodes.
Default: 100
Type: priority
### `Swarm.Transports.Multiplexers.Mplex`

REMOVED: See https://github.com/ipfs/kubo/issues/9958. Support for Mplex has been removed from Kubo and go-libp2p. Please remove this option from your config.
## DNS

Options for configuring DNS resolution for DNSLink and `/dns*` [multiaddrs][libp2p-multiaddrs] (including peer addresses discovered via DHT or delegated routing).
### `DNS.Resolvers`

Map of FQDNs to custom resolver URLs.
This allows for overriding the default DNS resolver provided by the operating system, and using different resolvers per domain or TLD (including ones from alternative, non-ICANN naming systems).
Example:
```json
{
  "DNS": {
    "Resolvers": {
      "eth.": "https://dns.eth.limo/dns-query",
      "crypto.": "https://resolver.unstoppable.io/dns-query",
      "libre.": "https://ns1.iriseden.fr/dns-query",
      ".": "https://cloudflare-dns.com/dns-query"
    }
  }
}
```
Be mindful that:

- Only `https://` URLs for DNS over HTTPS (DoH) endpoints are supported as values.
- The default catch-all resolver can be overridden by adding an entry for the DNS root indicated by `.`, as illustrated above.
- `"auto"` uses DNS resolvers from AutoConf when enabled. For example, `{".": "auto"}` uses any custom DoH resolver (global or per TLD) provided by the AutoConf system.
- When `AutoTLS.SkipDNSLookup` is enabled (default), domains matching `AutoTLS.DomainSuffix` (default: `libp2p.direct`) are resolved locally by parsing the IP directly from the hostname. Set `AutoTLS.SkipDNSLookup=false` to force network DNS lookups for these domains.

Default: `{".": "auto"}`
Type: object[string -> string]
### `DNS.MaxCacheTTL`

Maximum duration for which entries are valid in the DoH cache.

This allows you to cap the Time-To-Live suggested by the DNS response (RFC 2181).
If present, the upper bound is applied to DoH resolvers in DNS.Resolvers.
Note: this does NOT work with Go's default DNS resolver. To make this a global setting, add a . entry to DNS.Resolvers first.
Examples:
- `"1m"`: DNS entries are kept for 1 minute or less.
- `"0s"`: DNS entries expire as soon as they are retrieved.

Default: Respect DNS Response TTL
Type: optionalDuration
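Since the cap only applies to DoH resolvers listed in `DNS.Resolvers`, a global `.` entry is needed first. A sketch (the resolver choice is illustrative):

```console
$ ipfs config DNS.Resolvers --json '{".": "https://cloudflare-dns.com/dns-query"}'
$ ipfs config DNS.MaxCacheTTL "1m"
```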
## HTTPRetrieval

HTTPRetrieval is configuration for pure HTTP retrieval based on Trustless HTTP Gateways'
Block Responses (application/vnd.ipld.raw)
which can be used in addition to or instead of retrieving blocks with Bitswap over Libp2p.
Default: {}
Type: object
### `HTTPRetrieval.Enabled`

Controls whether HTTP-based block retrieval is enabled.
When enabled, Kubo will act on /tls/http (HTTP/2) providers (Trustless HTTP Gateways) returned by the Routing.DelegatedRouters
to perform pure HTTP block retrievals
(/ipfs/cid?format=raw, Accept: application/vnd.ipld.raw)
alongside Bitswap over Libp2p.
HTTP requests for application/vnd.ipld.raw will be made instead of Bitswap when a peer has a /tls/http multiaddr
and the HTTPS server returns HTTP 200 for the probe path.
> [!IMPORTANT]
> This feature is relatively new. Please report any issues via GitHub.
Important notes:

- TLS and HTTP/2 are required. For privacy reasons, and to maintain feature parity with browsers, unencrypted `http://` providers are ignored and not used.
- This feature works in the same way as Bitswap: connected HTTP peers receive optimistic block requests even for content that they are not announcing.
- For performance reasons, and to avoid loops, the HTTP client does not follow redirects. Providers should keep announcements up to date.
- The IPFS ecosystem is working towards supporting HTTP providers on the Amino DHT. Currently, HTTP providers are mostly limited to results from `Routing.DelegatedRouters` endpoints and require `Routing.Type=auto|autoclient`.
Default: true
Type: flag
### `HTTPRetrieval.Allowlist`

Optional list of hostnames for which HTTP retrieval is allowed. If this list is not empty, only hosts matching these entries will be allowed for HTTP retrieval.

> [!TIP]
> To limit HTTP retrieval to a provider at `/dns4/example.com/tcp/443/tls/http` (which would serve `HEAD|GET https://example.com/ipfs/cid?format=raw`), set this to `["example.com"]`.
Default: []
Type: array[string]
### `HTTPRetrieval.Denylist`

Optional list of hostnames for which HTTP retrieval is not allowed. Denylist entries take precedence over Allowlist entries.

> [!TIP]
> This denylist operates on HTTP endpoint hostnames. To deny a specific PeerID, use `Routing.IgnoreProviders` instead.
Default: []
Type: array[string]
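A sketch combining both lists (the hostnames are placeholders, not real providers):

```console
$ ipfs config HTTPRetrieval.Allowlist --json '["trusted-gateway.example.com"]'
$ ipfs config HTTPRetrieval.Denylist --json '["bad-gateway.example.net"]'
```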
### `HTTPRetrieval.NumWorkers`

The number of worker goroutines to use for concurrent HTTP retrieval operations. This setting controls the level of parallelism for HTTP-based block retrieval. Higher values can improve performance when retrieving many blocks but may increase resource usage.
Default: 16
Type: optionalInteger
### `HTTPRetrieval.MaxBlockSize`

Sets the maximum size of a block that the HTTP retrieval client will accept.

> [!NOTE]
> This setting is a security feature designed to protect Kubo from malicious providers who might send excessively large or invalid data. Increasing this value allows Kubo to retrieve larger blocks from compatible HTTP providers, but doing so reduces interoperability with Bitswap and increases potential security risks.
Learn more: Supporting Large IPLD Blocks: Why block limits?
Default: 2MiB (matching Bitswap size limit)
Type: optionalString
### `HTTPRetrieval.TLSInsecureSkipVerify`

Disables TLS certificate validation. Allows making HTTPS connections to HTTP/2 test servers with self-signed TLS certificates. Only for testing; do not use in production.
Default: false
Type: flag
## Import

Options to configure the default parameters used for ingesting data, in commands such as `ipfs add` or `ipfs block put`. All affected commands are detailed per option.
These options implement IPIP-499: UnixFS CID Profiles for reproducible CID generation across IPFS implementations. Instead of configuring individual options, you can apply a predefined profile with ipfs config profile apply <profile-name>. See Profiles for available options like unixfs-v1-2025.
Note that using CLI flags will override the options defined here.
### `Import.CidVersion`

The default CID version. Commands affected: `ipfs add`.
Must be either 0 or 1. CIDv0 uses SHA2-256 only, while CIDv1 supports multiple hash functions.
Default: 0
Type: optionalInteger
### `Import.UnixFSRawLeaves`

The default UnixFS raw leaves option. Commands affected: `ipfs add`, `ipfs files write`.
Default: false if CidVersion=0; true if CidVersion=1
Type: flag
### `Import.UnixFSChunker`

The default UnixFS chunker. Commands affected: `ipfs add`.

Valid formats:

- `size-<bytes>`: fixed-size chunker
- `rabin-<min>-<avg>-<max>`: Rabin fingerprint chunker
- `buzhash`: buzhash chunker

The maximum accepted value for `size-<bytes>` and the rabin `max` parameter is
2MiB - 256 bytes (2096896 bytes). The 256-byte overhead budget is reserved
for protobuf/UnixFS framing so that serialized blocks stay within the 2MiB
block size limit defined by the
bitswap spec.
The buzhash chunker uses a fixed internal maximum of 512KiB and is not
affected by this limit.
Only the fixed-size chunker (size-<bytes>) guarantees that the same data
will always produce the same CID. The rabin and buzhash chunkers may
change their internal parameters in a future release.
Default: size-262144
Type: optionalString
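For example, to switch to 1 MiB fixed-size chunks (a value chosen purely for illustration; changing the chunker changes the CIDs produced for newly added content):

```console
$ ipfs config Import.UnixFSChunker "size-1048576"
```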
### `Import.HashFunction`

The default hash function. Commands affected: `ipfs add`, `ipfs block put`, `ipfs dag put`.
Must be a valid multihash name (e.g., sha2-256, blake3) and must be allowed for use in IPFS according to security constraints.
Run ipfs cid hashes --supported to see the full list of allowed hash functions.
Default: sha2-256
Type: optionalString
### `Import.FastProvideRoot`

Immediately provide root CIDs to the routing system in addition to the regular provide queue.
This complements the reprovide system: fast-provide handles the urgent case (root CIDs that users share and reference), while the reprovide cycle provides all blocks according to the Provide.Strategy over time.
When disabled, only the reprovide cycle handles content announcement.
Applies to ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update. Can be overridden per-command with the --fast-provide-root flag.
Default: true
Type: flag
### `Import.FastProvideDAG`

Walk and provide the full DAG immediately after content is added or pinned, using the active `Provide.Strategy` to determine scope.
When enabled with +unique, the DAG walk deduplicates via a bloom filter. When enabled with +entities, only entity roots (files, directories, HAMT shards) are provided.
When disabled (default), only the root CID is provided immediately (via Import.FastProvideRoot) and child blocks are deferred to the reprovide cycle.
Applies to ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update. Can be overridden per-command with the --fast-provide-dag flag. Has no effect when Provide.Strategy=all (the blockstore already provides every block on write).
Default: false
Type: flag
### `Import.FastProvideWait`

Wait for the immediate provide to complete before returning.
When enabled, the command blocks until the provide completes, ensuring guaranteed discoverability before returning. When disabled (default), the provide happens asynchronously in the background without blocking the command. Applies to both Import.FastProvideRoot and Import.FastProvideDAG.
Use this when you need certainty that content is discoverable before the command returns (e.g., sharing a link immediately after adding).
Applies to ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update. Can be overridden per-command with the --fast-provide-wait flag.
Ignored when DHT is not available for routing (e.g., Routing.Type=none or delegated-only configurations).
Default: false
Type: flag
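A sketch enabling full-DAG fast-provide with blocking behavior, e.g. for a publishing pipeline that shares links immediately after `ipfs add`:

```console
$ ipfs config Import.FastProvideDAG --json true
$ ipfs config Import.FastProvideWait --json true
```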
### `Import.BatchMaxNodes`

The maximum number of nodes in a write-batch. The total size of the batch is limited by `BatchMaxNodes` and `BatchMaxSize`.
Increasing this will batch more items together when importing data with ipfs dag import, which can speed things up.
Must be positive (> 0). Setting to 0 would cause immediate batching after each node, which is inefficient.
Default: 128
Type: optionalInteger
### `Import.BatchMaxSize`

The maximum size of a single write-batch (computed as the sum of the sizes of the blocks). The total size of the batch is limited by `BatchMaxNodes` and `BatchMaxSize`.
Increasing this will batch more items together when importing data with ipfs dag import, which can speed things up.
Must be positive (> 0). Setting to 0 would cause immediate batching after any data, which is inefficient.
Default: 20971520 (20MiB)
Type: optionalInteger
### `Import.UnixFSFileMaxLinks`

The maximum number of links that a node that is part of a UnixFS file can have when building the DAG while importing.

This setting controls the fanout of files that are chunked into several blocks and grouped as a UnixFS (dag-pb) DAG.
Must be positive (> 0). Zero or negative values would break file DAG construction.
Default: 174
Type: optionalInteger
### `Import.UnixFSDirectoryMaxLinks`

The maximum number of links that a node that is part of a UnixFS basic directory can have when building the DAG while importing.

This setting controls the fanout for basic, non-HAMT directory nodes. It sets a limit after which directories are converted to a HAMT-based structure.
When unset (0), no limit exists for children. Directories will be converted to HAMTs based on their estimated size only.
This setting will cause basic directories to be converted to HAMTs when they
exceed the maximum number of children. This happens transparently during the
add process. The fanout of HAMT nodes is controlled by `Import.UnixFSHAMTDirectoryMaxFanout`.
Must be non-negative (>= 0). Zero means no limit, negative values are invalid.
Commands affected: ipfs add
Default: 0 (no limit, because `Import.UnixFSHAMTDirectorySizeThreshold` controls when to switch to HAMT sharding when a directory grows too big)
Type: optionalInteger
### `Import.UnixFSHAMTDirectoryMaxFanout`

The maximum number of children that a node that is part of a UnixFS HAMT directory (aka sharded directory) can have.
HAMT directories have unlimited children and are used when basic directories
become too big or reach MaxLinks. A HAMT is a structure made of UnixFS
nodes that store the list of elements in the folder. This option controls the
maximum number of children that the HAMT nodes can have.
According to the UnixFS specification, this value must be a power of 2, between 8 (for byte-aligned bitfields) and 1024 (to prevent denial-of-service attacks).
Commands affected: ipfs add, ipfs daemon (globally overrides boxo/ipld/unixfs/io.DefaultShardWidth)
Default: 256
Type: optionalInteger
### `Import.UnixFSHAMTDirectorySizeThreshold`

The sharding threshold used to decide whether a basic UnixFS directory should be sharded (converted into a HAMT directory) or not.

This value is not strictly related to the size of the UnixFS directory block, and any increase in the threshold should be made with care so that block sizes stay under 2MiB, in order for them to be reliably transferable through the networking stack. At the time of writing, IPFS peers on the public swarm tend to ignore requests for blocks bigger than 2MiB.
Uses the implementation from boxo/ipld/unixfs/io/directory, where the size is not the exact block size of the encoded directory but an estimate based on the byte length of the DAG-PB links' names and CIDs.
Setting to 1B is functionally equivalent to always using HAMT (useful in testing).
Commands affected: ipfs add, ipfs daemon (globally overrides boxo/ipld/unixfs/io.HAMTShardingSize)
Default: 256KiB (may change, inspect DefaultUnixFSHAMTDirectorySizeThreshold to confirm)
Type: optionalBytes
### `Import.UnixFSHAMTDirectorySizeEstimation`

Controls how directory size is estimated when deciding whether to switch from a basic UnixFS directory to HAMT sharding.

Accepted values:

- `links` (default): Legacy estimation using the sum of link names and CID byte lengths.
- `block`: Full serialized dag-pb block size, for accurate threshold decisions.
- `disabled`: Disable HAMT sharding entirely (directories always remain basic).

The `block` estimation is recommended for new profiles as it provides more
accurate threshold decisions and better cross-implementation consistency.
See IPIP-499 for more details.
Commands affected: ipfs add
Default: links
Type: optionalString
### `Import.UnixFSDAGLayout`

Controls the DAG layout used when chunking files.

Accepted values:

- `balanced` (default): Balanced DAG layout with uniform leaf depth.
- `trickle`: Trickle DAG layout optimized for streaming.

Commands affected: `ipfs add`
Default: balanced
Type: optionalString
## Version

Options to configure the agent version announced to the swarm, and to leverage other peers' versions to detect when it is time to update.
### `Version.AgentSuffix`

Optional suffix to the AgentVersion presented by `ipfs id` and exposed via the libp2p identify protocol.
The value from config takes precedence over value passed via ipfs daemon --agent-version-suffix.
> [!NOTE]
> Setting a custom version suffix helps with ecosystem analysis, such as the Amino DHT reports published at https://stats.ipfs.network
Default: "" (no suffix, or value from ipfs daemon --agent-version-suffix=)
Type: optionalString
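For example (the suffix value is arbitrary):

```console
$ ipfs config Version.AgentSuffix "my-cluster"
$ ipfs id -f "<aver>\n"   # the agent version string now includes the suffix
```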
### `Version.SwarmCheckEnabled`

Observe the AgentVersion of swarm peers and log a warning when `SwarmCheckPercentThreshold` of peers run a version higher than this node's.
Default: true
Type: flag
### `Version.SwarmCheckPercentThreshold`

Controls the percentage of `kubo/` peers running a newer version that is required to trigger the update warning.
Default: 5
Type: optionalInteger (1-100)
## Profiles

Configuration profiles allow you to tweak configuration quickly. Profiles can be applied with the `--profile` flag to `ipfs init` or with the `ipfs config profile apply` command. When a profile is applied, a backup of the configuration file will be created in `$IPFS_PATH`.
Configuration profiles can be applied additively. For example, both the unixfs-v1-2025 and lowpower profiles can be applied one after the other.
The available configuration profiles are listed below. You can also find them
documented in ipfs config profile --help.
### `server` profile

The `server` profile hardens a node for public-internet operation. Recommended on machines with public IPv4 addresses (no NAT, no UPnP) at providers that interpret local IPFS discovery and traffic as netscan abuse (example).
Applying it:

- disables `Discovery.MDNS`, and
- populates `Addresses.NoAnnounce` (do not advertise) and `Swarm.AddrFilters` (do not dial or accept) with the prefix list below.

The prefix list comes from the IANA IPv4 and IPv6 Special-Purpose Address Registries per RFC 6890, covering entries marked "Not Globally Reachable."
The filters apply only at the libp2p swarm layer. The HTTP
Addresses.API and Addresses.Gateway
listeners keep working over loopback.
#### `server` profile IPv4 filters

| Multiaddr | Description | Reference |
|---|---|---|
| `/ip4/10.0.0.0/ipcidr/8` | Private-use | RFC 1918 |
| `/ip4/100.64.0.0/ipcidr/10` | Shared address space (CGNAT) | RFC 6598 |
| `/ip4/127.0.0.0/ipcidr/8` | Loopback | RFC 1122 §3.2.1.3 |
| `/ip4/169.254.0.0/ipcidr/16` | Link-local | RFC 3927 |
| `/ip4/172.16.0.0/ipcidr/12` | Private-use | RFC 1918 |
| `/ip4/192.0.0.0/ipcidr/24` | IETF protocol assignments | RFC 6890 |
| `/ip4/192.0.2.0/ipcidr/24` | TEST-NET-1 (documentation) | RFC 5737 |
| `/ip4/192.168.0.0/ipcidr/16` | Private-use | RFC 1918 |
| `/ip4/198.18.0.0/ipcidr/15` | Benchmarking | RFC 2544 |
| `/ip4/198.51.100.0/ipcidr/24` | TEST-NET-2 (documentation) | RFC 5737 |
| `/ip4/203.0.113.0/ipcidr/24` | TEST-NET-3 (documentation) | RFC 5737 |
| `/ip4/240.0.0.0/ipcidr/4` | Reserved (covers broadcast 255.255.255.255/32) | RFC 1112 §4 |
IPv6 addresses filtered by the server profile:

| Multiaddr | Description | Reference |
|---|---|---|
| `/ip6/::/ipcidr/3` | IANA-reserved 0000::/3 (catches unallocated leaks like 1e::/16) | RFC 4291 §2.4 |
| `/ip6/::1/ipcidr/128` | Loopback | RFC 4291 §2.4 |
| `/ip6/100::/ipcidr/64` | Discard-only | RFC 6666 |
| `/ip6/2001:2::/ipcidr/48` | Benchmarking | RFC 5180 |
| `/ip6/2001:db8::/ipcidr/32` | Documentation | RFC 3849 |
| `/ip6/fc00::/ipcidr/7` | Unique local addresses (ULA) | RFC 4193 |
| `/ip6/fe80::/ipcidr/10` | Link-local unicast | RFC 4291 |
If you need peering over one of the prefixes above, remove that entry from `Swarm.AddrFilters` and `Addresses.NoAnnounce` after applying the profile. Or skip the profile and populate those fields manually.
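For instance, to re-enable LAN peering over 192.168.0.0/16, edit the config so the `/ip4/192.168.0.0/ipcidr/16` entry is gone from both lists. A trimmed sketch (the real lists contain every prefix from the tables above):

```json
{
  "Addresses": {
    "NoAnnounce": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10"
    ]
  },
  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10"
    ]
  }
}
```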
| Scenario | Remove |
|---|---|
| LAN peering over 10.0.0.0/8 | `/ip4/10.0.0.0/ipcidr/8` |
| LAN peering over 172.16.0.0/12 | `/ip4/172.16.0.0/ipcidr/12` |
| LAN peering over 192.168.0.0/16 | `/ip4/192.168.0.0/ipcidr/16` |
| Tailscale or other CGNAT overlay (100.64.0.0/10) | `/ip4/100.64.0.0/ipcidr/10` |
| IPv6 ULA overlay (WireGuard, Tailscale, Nebula, ZeroTier, cjdns) | `/ip6/fc00::/ipcidr/7` |
| Link-local IPv6 peering | `/ip6/fe80::/ipcidr/10` |
| Multiple daemons peering over 127.0.0.1 | `/ip4/127.0.0.0/ipcidr/8` |
| Multiple daemons peering over IPv6 loopback ::1 | `/ip6/::1/ipcidr/128` and `/ip6/::/ipcidr/3` |
| Yggdrasil mesh peering (200::/8, 300::/8) | `/ip6/::/ipcidr/3` |
| NAT64 (64:ff9b::/96) reachability | `/ip6/::/ipcidr/3` |
About `/ip6/::/ipcidr/3`: this entry was added after bogus IPv6 prefixes such as 1e::/16 (unallocated space inside 0000::/3) started leaking into DHT self-records from public Kubo nodes with go-libp2p v0.47. See go-libp2p#3460.
Most overlay networks (WireGuard, Tailscale, Nebula, ZeroTier,
cjdns) use ULA fc00::/7 and are blocked by the separate
/ip6/fc00::/ipcidr/7 entry, not by this one. The notable exception is
Yggdrasil, which uses 0200::/7 inside 0000::/3.
NAT64 translators rarely emit 64:ff9b:: (RFC 6052) or
64:ff9b:1::/48 (RFC 8215) as a source address, so the rule's
announce-side impact on NAT64 deployments is typically none. Removal is
warranted only if a 64:ff9b:: address is bound directly to a node
interface.
### randomports profile

Uses a random port number for incoming swarm connections. Used for testing.
### default-datastore profile

Configures the node to use the default datastore (flatfs). Read the "flatfs" profile description for more information on this datastore.

This profile may only be applied when first initializing the node.
### local-discovery profile

Enables local `Discovery.MDNS` (enabled by default). Useful to re-enable local discovery after it's disabled by another profile (e.g., the server profile).
### test profile

Reduces external interference with the IPFS daemon; useful when using the daemon in test environments.
### default-networking profile

Restores default network settings. Inverse profile of the test profile.
### autoconf-on profile

Safe default for joining the public IPFS Mainnet swarm with automatic configuration. Can also be used with a custom `AutoConf.URL` for other networks.
### autoconf-off profile

Disables AutoConf and clears all networking fields for manual configuration. Use this for private networks or when you want explicit control over all endpoints.
### flatfs profile

Configures the node to use the flatfs datastore. Flatfs is the default, most battle-tested and reliable datastore.

You should use this datastore if:

- You need a simple and very reliable datastore, and you trust your filesystem.
- You want to minimize memory usage.
- You are OK with the default speed of data import, or prefer to use `--nocopy`.

> [!WARNING]
> This profile may only be applied when first initializing the node via `ipfs init --profile flatfs`
> [!NOTE]
> See caveats and configuration options at datastores.md#flatfs
### flatfs-measure profile

Configures the node to use the flatfs datastore with metrics. This is the same as the flatfs profile with the addition of the measure datastore wrapper.
### pebbleds profile

Configures the node to use the pebble high-performance datastore. Pebble is a LevelDB/RocksDB-inspired key-value store focused on performance and internal usage by CockroachDB. Consider this datastore if raw datastore performance matters more to you than the simplicity and maturity of flatfs.

> [!WARNING]
> This profile may only be applied when first initializing the node via `ipfs init --profile pebbleds`
> [!NOTE]
> See other caveats and configuration options at datastores.md#pebbleds
### pebbleds-measure profile

Configures the node to use the pebble datastore with metrics. This is the same as the pebbleds profile with the addition of the measure datastore wrapper.
### badgerds profile

Configures the node to use the legacy badgerv1 datastore.

> [!CAUTION]
> Badger v1 datastore is deprecated and will be removed in a future Kubo release. It is based on a very old badger 1.x, which has not been maintained by its upstream maintainers for years and has known bugs (startup timeouts, shutdown hangs, file descriptor exhaustion, and more). Do not use it for new deployments.
To migrate: create a new `IPFS_PATH` with `flatfs` (`ipfs init --profile=flatfs`), move pinned data via `ipfs dag export`/`import` or `ipfs pin ls -t recursive|add`, and decommission the old badger-based node. When it comes to block storage, use experimental `pebbleds` only if you are sure modern `flatfs` does not serve your use case (most users will be perfectly fine with `flatfs`; it is also possible to keep `flatfs` for blocks and replace `leveldb` with `pebble` if preferred over `leveldb`).
Also, be aware that if you run IPFS with `--enable-gc`, you plan on storing very little data in your IPFS node, and disk usage is more critical than performance, you should consider using `flatfs`.

> [!WARNING]
> This profile may only be applied when first initializing the node via `ipfs init --profile badgerds`
> [!NOTE]
> See other caveats and configuration options at datastores.md#badgerds
### badgerds-measure profile

Configures the node to use the legacy badgerv1 datastore with metrics. This is the same as the badgerds profile with the addition of the measure datastore wrapper. This profile will be removed in a future Kubo release.
### lowpower profile

Reduces daemon overhead on the system by disabling optional swarm services.

- `Routing.Type` set to `autoclient` (no DHT server, only client).
- `Swarm.ConnMgr` set to maintain a minimum number of p2p connections at a time.
- Disables `AutoNAT`.
- Disables `Swarm.RelayService`.

> [!NOTE]
> This profile is provided for legacy reasons. With modern Kubo, setting the above should not be necessary.
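Roughly, applying the profile results in a config along these lines. This is an illustrative sketch with assumed connection-manager watermarks, not the exact values Kubo writes; see `config/profile.go` for the source of truth:

```json
{
  "Routing": { "Type": "autoclient" },
  "Swarm": {
    "ConnMgr": { "Type": "basic", "LowWater": 20, "HighWater": 40 },
    "RelayService": { "Enabled": false }
  },
  "AutoNAT": { "ServiceMode": "disabled" }
}
```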
### announce-off profile

Disables the Provide system (and announcing to the Amino DHT).

> [!CAUTION]
> The main use case for this is setups with manual `Peering.Peers` config. Data from this node will not be announced on the DHT. This will make DHT-based routing and data retrieval impossible if this node is the only one hosting it, and other peers are not already connected to it.
### announce-on profile

(Re-)enables the Provide system (reverts the announce-off profile).
### unixfs-v0-2015 profile

Legacy UnixFS import profile for backward-compatible CID generation. Produces CIDv0 with no raw leaves, sha2-256, 256 KiB chunks, and link-based HAMT size estimation.

See https://github.com/ipfs/kubo/blob/master/config/profile.go for exact `Import.*` settings.

> [!NOTE]
> Use only when legacy CIDs are required. For new projects, use `unixfs-v1-2025`. See IPIP-499 for more details.
### legacy-cid-v0 profile

Alias for the unixfs-v0-2015 profile.
### unixfs-v1-2025 profile

Recommended UnixFS import profile for cross-implementation CID determinism. Uses CIDv1, raw leaves, sha2-256, 1 MiB chunks, 1024 links per file node, 256 HAMT fanout, and block-based size estimation for the HAMT threshold.

See https://github.com/ipfs/kubo/blob/master/config/profile.go for exact `Import.*` settings.

> [!NOTE]
> This profile ensures CID consistency across different IPFS implementations. See IPIP-499 for more details.
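Based on the values stated above, the equivalent explicit `Import` settings look roughly like this (a sketch derived from the prose; consult `config/profile.go` for the authoritative and complete list):

```json
{
  "Import": {
    "CidVersion": 1,
    "UnixFSRawLeaves": true,
    "HashFunction": "sha2-256",
    "UnixFSChunker": "size-1048576",
    "UnixFSFileMaxLinks": 1024,
    "UnixFSHAMTDirectoryMaxFanout": 256
  }
}
```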
This section provides an overview of security considerations for configurations that expose network services.

Several configuration options expose TCP or UDP ports that can make your Kubo node accessible from the network:

- `Addresses.API` - Exposes the admin RPC API (default: localhost:5001)
- `Addresses.Gateway` - Exposes the HTTP gateway (default: localhost:8080)
- `Addresses.Swarm` - Exposes P2P connectivity (default: 0.0.0.0:4001, both UDP and TCP)
- `Swarm.Transports.Network` - Controls which P2P transport protocols are enabled over TCP and UDP

Best practices:

- Keep the RPC API (`Addresses.API`) bound to localhost unless authentication (`API.Authorizations`) is configured.
- Enable `Gateway.NoFetch` to prevent arbitrary CID retrieval if Kubo is acting as a public gateway available to anyone.
- `Addresses.Swarm` is special - all incoming traffic to swarm ports should be allowed to ensure proper P2P connectivity.
- Control which addresses are announced to other peers via `Addresses.NoAnnounce`, `Addresses.Announce`, and `Addresses.AppendAnnounce`.
- Apply the server profile for production deployments.

This document refers to the standard JSON types (e.g., `null`, `string`, `number`, etc.), as well as a few custom types, described below.
### flag

Flags allow enabling and disabling features. However, unlike simple booleans, they can also be `null` (or omitted) to indicate that the default value should be chosen. This makes it easier for Kubo to change the defaults in the future unless the user explicitly sets the flag to either `true` (enabled) or `false` (disabled). Flags have three possible states:

- `null` or missing (apply the default value)
- `true` (enabled)
- `false` (disabled)

### priority

Priorities allow specifying the priority of a feature/protocol and disabling the feature/protocol. Priorities can take one of the following values:
- `null`/missing (apply the default priority, same as with flags)
- `false` (disabled)
- `1 - 2^63` (priority, lower is preferred)

### strings

Strings is a special type for conveniently specifying a single string, an array of strings, or null:
- `null`
- `"a single string"`
- `["an", "array", "of", "strings"]`

### duration

Duration is a type for describing lengths of time, using the same format Go does (e.g., `"1d2h4m40.01s"`).
### optionalInteger

Optional integers allow specifying some numerical value which has an implicit default when missing from the config file:

- `null`/missing will apply the default value defined in Kubo sources (`.WithDefault(value)`)
- an integer between `-2^63` and `2^63-1` (i.e. `-9223372036854775808` to `9223372036854775807`)
null/missing will apply the default value defined in Kubo sources (.WithDefault(value))-2^63 and 2^63-1 (i.e. -9223372036854775808 to 9223372036854775807)optionalBytesOptional Bytes allow specifying some number of bytes which has an implicit default when missing from the config file:
null/missing (apply the default value defined in Kubo sources)1048576 for 1MiB)optionalStringOptional strings allow specifying some string value which has an implicit default when missing from the config file:
### optionalDuration

Optional durations allow specifying some duration value which has an implicit default when missing from the config file:

- `null`/missing will apply the default value defined in Kubo sources (`.WithDefault("1h2m3s")`)
- a string with a valid duration (e.g., `"1d2h4m40.01s"`)