# Kubo changelog v0.35
This release was brought to you by the [Shipyard](http://ipshipyard.com/) team.
## Overview

This release brings significant UX and performance improvements to data onboarding, provisioning, and retrieval systems.
New configuration options let you customize the shape of UnixFS DAGs generated during data import, control the scope of DAGs announced on the Amino DHT, select which delegated routing endpoints are queried, and choose whether to enable HTTP retrieval alongside Bitswap over Libp2p.
Continue reading for more details.
### Experimental HTTP retrieval client

This release adds experimental support for retrieving blocks directly over HTTPS (HTTP/2), complementing the existing Bitswap over Libp2p.
The opt-in client enables Kubo to use delegated routing results with /tls/http multiaddrs, connecting to HTTPS servers that support Trustless HTTP Gateway's Block Responses (?format=raw, application/vnd.ipld.raw). Fetching blocks via HTTPS (HTTP/2) simplifies infrastructure and reduces costs for storage providers by leveraging HTTP caching and CDNs.
To enable this feature for testing and feedback, set:
```console
$ ipfs config --json HTTPRetrieval.Enabled true
```
See HTTPRetrieval for more details.
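As an illustration of what the client consumes, a Trustless Gateway block response can be fetched by hand with `curl`; the provider hostname and `$CID` below are placeholders, not real endpoints:

```console
$ # Request a single raw block (application/vnd.ipld.raw) from an HTTPS provider.
$ # Replace the host and $CID with a real provider and block CID.
$ curl -sH "Accept: application/vnd.ipld.raw" \
    "https://provider.example.net/ipfs/$CID" -o block.bin
```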
### Dedicated `Reprovider.Strategy` for MFS

The Mutable File System (MFS) in Kubo is a UnixFS filesystem managed with `ipfs files` commands. It supports familiar file operations like `cp` and `mv` within a folder-tree structure, automatically updating a MerkleDAG and a "root CID" that reflects the current MFS state. Files in MFS are protected from garbage collection, offering a simpler alternative to `ipfs pin`. This makes it a popular choice for tools like IPFS Desktop and the WebUI.
Previously, the `pinned` reprovider strategy required manual pin management: each dataset update meant pinning the new version and unpinning the old one. Now, two new strategies, `mfs` and `pinned+mfs`, let users limit announcements to data explicitly placed in MFS. This simplifies updating datasets and announcing only the latest version to the Amino DHT.

Users relying on the `pinned` strategy can switch to `pinned+mfs` and use MFS alone to manage updates and announcements, eliminating the need for manual pinning and unpinning. We hope this makes it easier to publish just the data that matters to you.
See Reprovider.Strategy for more details.
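For example, assuming a hypothetical dataset CID, switching from manual pin management to MFS-managed announcements could look like:

```console
$ ipfs config Reprovider.Strategy pinned+mfs
$ # Put the latest dataset version into MFS; on the next (re)provide run,
$ # only pinned content and content under the MFS root is announced.
$ ipfs files cp /ipfs/<dataset-cid> /my-dataset
```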
### Experimental support for MFS as a FUSE mount point

The MFS root (the filesystem behind the `ipfs files` API) is now available as a read/write FUSE mount point at `Mounts.MFS`. This filesystem is mounted in the same way as `Mounts.IPFS` and `Mounts.IPNS` when running `ipfs mount` or `ipfs daemon --mount`.
Note that the operations supported by the MFS FUSE mountpoint are limited, since MFS doesn't store file attributes.
See Mounts and docs/fuse.md for more details.
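A minimal sketch of trying the new mount point, assuming FUSE is installed and the default mount paths (including `/mfs` for the MFS root) are in use:

```console
$ ipfs daemon --mount &
$ echo "hello" | ipfs files write --create /hello.txt
$ cat /mfs/hello.txt   # the MFS root is now browsable as a regular directory
```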
### Grid view in WebUI

The WebUI, accessible at http://127.0.0.1:5001/webui/, now supports a grid view on the Files screen.
### Enhanced DAG-Shaping controls

This release advances CIDv1 support by introducing fine-grained control over UnixFS DAG shaping during data ingestion with the `ipfs add` command.
Wider DAG trees (more links per node, higher fanout, larger thresholds) are beneficial for large files and directories with many files, reducing tree depth and lookup latency in high-latency networks, but they increase node size, straining memory and CPU on resource-constrained devices. Narrower trees (lower link count, lower fanout, smaller thresholds) are preferable for smaller directories, frequent updates, or low-power clients, minimizing overhead and ensuring compatibility, though they may increase traversal steps for very large datasets.
Kubo now allows users to act on these tradeoffs and customize the width of the DAGs created by the `ipfs add` command.
#### New `ipfs add` options

Three new options allow you to override default settings for specific import operations:

- `--max-file-links`: Sets the maximum number of child links for a single file chunk.
- `--max-directory-links`: Defines the maximum number of child entries in a "basic" (single-chunk) directory. Directories exceeding this limit or `Import.UnixFSHAMTDirectorySizeThreshold` are converted to HAMT-based (sharded across multiple blocks) structures.
- `--max-hamt-fanout`: Specifies the maximum number of child nodes for HAMT internal structures.

#### Updated `Import.*` configuration

You can set default values for these options using the following configuration settings:

- `Import.UnixFSFileMaxLinks`
- `Import.UnixFSDirectoryMaxLinks`
- `Import.UnixFSHAMTDirectoryMaxFanout`
- `Import.UnixFSHAMTDirectorySizeThreshold`

#### Updated `Import` profiles

The release updated configuration profiles to incorporate these new `Import.*` settings:
- `test-cid-v1` now includes current defaults as explicit `Import.UnixFSFileMaxLinks=174`, `Import.UnixFSDirectoryMaxLinks=0`, `Import.UnixFSHAMTDirectoryMaxFanout=256`, and `Import.UnixFSHAMTDirectorySizeThreshold=256KiB`.
- `test-cid-v1-wide` adopts experimental directory DAG-shaping defaults, increasing the maximum file DAG width from 174 to 1024, HAMT fanout from 256 to 1024, and raising the HAMT directory sharding threshold from 256KiB to 1MiB, aligning with 1MiB file chunks.
> [!TIP]
> Apply one of the CIDv1 test profiles with `ipfs config profile apply test-cid-v1[-wide]`.
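As an illustration, the new width controls can be applied via a profile or overridden per import; the flag values below mirror the `test-cid-v1-wide` defaults, and `./dataset` is a placeholder path:

```console
$ ipfs config profile apply test-cid-v1-wide
$ # Or override DAG width for a single import operation:
$ ipfs add --cid-version 1 --max-file-links 1024 --max-hamt-fanout 1024 -r ./dataset
```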
### Datastore Metrics Now Opt-In

To reduce overhead in the default configuration, datastore metrics are no longer enabled by default when initializing a Kubo repository with `ipfs init`.
Metrics prefixed with <dsname>_datastore (e.g., flatfs_datastore_..., leveldb_datastore_...) are not exposed unless explicitly enabled. For a complete list of affected default metrics, refer to prometheus_metrics_added_by_measure_profile.
Convenience opt-in profiles can be enabled at initialization time with `ipfs init --profile`: `flatfs-measure`, `pebbleds-measure`, `badgerds-measure`.
It is also possible to manually add the `measure` wrapper. See examples in the `Datastore.Spec` documentation.
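As a sketch of what a manually wrapped entry looks like, here is a `flatfs` child wrapped in a `measure` datastore inside `Datastore.Spec`; the path and shard function shown are the Kubo defaults:

```json
{
  "type": "measure",
  "prefix": "flatfs.datastore",
  "child": {
    "type": "flatfs",
    "path": "blocks",
    "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
    "sync": true
  }
}
```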
### Improved performance of data onboarding

This Kubo release significantly improves both the speed of ingesting data via `ipfs add` and of announcing newly produced CIDs to the Amino DHT.
#### Fast `ipfs add` in online mode

Previously, adding a large directory of data while the `ipfs daemon` was running in online mode took a long time. A significant amount of this time was spent writing to and reading from the persisted provider queue. Due to this, many users had to shut down the daemon and perform data import in offline mode. This release fixes this known limitation, significantly improving the speed of `ipfs add`.
> [!IMPORTANT]
> Performing `ipfs add` of a 10GiB file used to take about 30 minutes. Now it takes close to 30 seconds.
Kubo v0.34:

```console
$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-100M > /dev/null
 100.00 MiB / 100.00 MiB [=====================================================================] 100.00%
real    0m6.464s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-1G > /dev/null
 1000.00 MiB / 1000.00 MiB [===================================================================] 100.00%
real    1m10.542s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-10G > /dev/null
 10.00 GiB / 10.00 GiB [=======================================================================] 100.00%
real    24m5.744s
```

Kubo v0.35:

```console
$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-100M > /dev/null
 100.00 MiB / 100.00 MiB [=====================================================================] 100.00%
real    0m0.326s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-1G > /dev/null
 1.00 GiB / 1.00 GiB [=========================================================================] 100.00%
real    0m2.819s

$ time kubo/cmd/ipfs/ipfs add -r /tmp/testfiles-10G > /dev/null
 10.00 GiB / 10.00 GiB [=======================================================================] 100.00%
real    0m28.405s
```
#### Dedicated queues for provides and reprovides

Since Kubo v0.33.0, Bitswap no longer advertises newly added and received blocks to the DHT; boxo/provider has since been responsible for both the first-time provide and the recurring reprovide logic. Prior to v0.35.0, provides and reprovides were handled together in batches, leading to delays in initial advertisements (provides).

Provides and reprovides now have separate queues, allowing for immediate provide of new CIDs and optimized batching of reprovides.
#### New Provider configuration options

This change introduces new configuration options:

- `Provider.Enabled` is a global flag for disabling both the Provider and Reprovider systems (announcing new/old CIDs to the Amino DHT).
- `Provider.WorkerCount` limits the number of concurrent provide operations, allowing fine-tuning of the trade-off between announcement speed and system load when announcing new CIDs.
- `Experimental.StrategicProviding` has been superseded by `Provider.Enabled`, `Reprovider.Interval`, and `Reprovider.Strategy`.

> [!TIP]
> Users who need to provide large volumes of content immediately should consider setting `Routing.AcceleratedDHTClient` to `true`. If that is not enough, consider adjusting `Provider.WorkerCount` to a higher value.
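For example, a well-provisioned node that wants faster announcements might combine both settings; `128` is an arbitrary illustrative value, not a recommendation:

```console
$ ipfs config --json Routing.AcceleratedDHTClient true
$ ipfs config --json Provider.WorkerCount 128
```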
#### Deprecated `ipfs stats provider`

Since the `ipfs stats provider` command was displaying statistics for both provides and reprovides, this command isn't relevant anymore after separating the two queues.

The successor command is `ipfs stats reprovide`, showing the same statistics, but for reprovides only.
> [!NOTE]
> `ipfs stats provider` still works, but is marked as deprecated and will be removed in a future release. Be mindful that the command provides only statistics about reprovides (similar to `ipfs stats reprovide`) and not the new provide queue (this will be fixed as part of a wider refactor planned for a future release).
#### New Bitswap configuration options

- `Bitswap.Libp2pEnabled` determines whether Kubo will use Bitswap over libp2p (both client and server).
- `Bitswap.ServerEnabled` controls whether Kubo functions as a Bitswap server to host and respond to block requests.
- `Internal.Bitswap.ProviderSearchMaxResults` adjusts the maximum number of providers the Bitswap client should aim at before it stops searching for new ones.

#### New Routing configuration options

- `Routing.IgnoreProviders` allows ignoring specific peer IDs when returned by the content routing system as providers of content. This can be useful with `HTTPRetrieval.Enabled` in setups where Bitswap over Libp2p and HTTP retrieval are served under different PeerIDs.
- `Routing.DelegatedRouters` allows customizing HTTP routers used by Kubo when `Routing.Type` is set to `auto` or `autoclient`.
> [!TIP]
> For example, to use Pinata's routing endpoint in addition to IPNI at cid.contact:
>
> ```console
> $ ipfs config --json Routing.DelegatedRouters '["https://cid.contact","https://indexer.pinata.cloud"]'
> ```
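Similarly, a sketch of the other new options: disabling the Bitswap server while keeping the client, and ignoring a provider by peer ID (the peer ID below is a placeholder):

```console
$ ipfs config --json Bitswap.ServerEnabled false
$ ipfs config --json Routing.IgnoreProviders '["<peer-id>"]'
```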
### More control over Pebble's `FormatMajorVersion`

This Kubo release provides node operators with more control over Pebble's `FormatMajorVersion`. This allows testing a new Kubo release without automatically migrating Pebble datastores, keeping the ability to switch back to an older Kubo.
When IPFS is initialized to use the pebbleds datastore (opt-in via `ipfs init --profile=pebbleds`), the latest pebble database format is configured in the pebble datastore config as `"formatMajorVersion"`. Setting this in the datastore config prevents automatically upgrading to the latest available version when Kubo is upgraded. If a later version becomes available, the Kubo daemon prints a startup message to indicate this. The user can then update the config to use the latest format when they are certain a downgrade will not be necessary.
Without `"formatMajorVersion"` in the pebble datastore config, the database format is automatically upgraded to the latest version. If this happens, a downgrade back to the previous version of Kubo may not work if the new format is not compatible with the pebble datastore in the previous version of Kubo.

When installing a new version of Kubo while `"formatMajorVersion"` is configured, automatic repository migration (`ipfs daemon` with `--migrate=true`) does not upgrade it to the latest available version. This is done because a user may have reasons not to upgrade the pebble database format, and may want to be able to downgrade Kubo if something else is not working in the new version. If the pebble database format configured in the old Kubo is not supported in the new Kubo, then the configured version must be updated and the old Kubo run, before installing the new Kubo.
See other caveats and configuration options at `kubo/docs/datastores.md#pebbleds`.
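For reference, the pinned format version lives in the pebble entry of `Datastore.Spec`; the version number below is illustrative only, keep whatever your `ipfs init` wrote:

```json
{
  "type": "pebbleds",
  "path": "pebbleds",
  "formatMajorVersion": 16
}
```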
### New environment variables

The `docs/environment-variables.md` documentation was extended with two new features:

#### `GOLOG_OUTPUT`

When `stderr` and/or `stdout` outputs are configured or specified via the `GOLOG_OUTPUT` environment variable, Kubo logs only to the output(s) specified. For example:

- `GOLOG_OUTPUT="stderr"` logs only to stderr
- `GOLOG_OUTPUT="stdout"` logs only to stdout
- `GOLOG_OUTPUT="stderr+stdout"` logs to both stderr and stdout

#### `IPFS_WAIT_REPO_LOCK`

The environment variable `IPFS_WAIT_REPO_LOCK` specifies the amount of time to wait for the repo lock. Set the value of this variable to a string that can be parsed as a Go `time.Duration`. For example:

```console
IPFS_WAIT_REPO_LOCK="15s"
```

If the lock cannot be acquired because someone else has the lock, and `IPFS_WAIT_REPO_LOCK` is set to a valid value, then acquiring the lock is retried every second until the lock is acquired or the specified wait time has elapsed.
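Both features can be combined on the command line when launching the daemon:

```console
$ GOLOG_OUTPUT="stderr" IPFS_WAIT_REPO_LOCK="15s" ipfs daemon
```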
### Important dependency updates

- `boxo` to v0.30.0
- `ipfs-webui` to v4.7.0
- `go-ds-pebble` to v0.5.0
- `pebble` to v2.0.3
- `go-libp2p-pubsub` to v0.13.1
- `go-libp2p-kad-dht` to v0.33.1 (incl. v0.33.0, v0.32.0, v0.31.0)
- `go-log` to v2.6.0
- `p2p-forge/client` to v0.5.1

### Changelog

- `Provider.Enabled` flag (#10804) (ipfs/kubo#10804)
- `FormatMajorVersion` (#10789) (ipfs/kubo#10789)
- `Provider.WorkerCount` and `stats reprovide` (#10779) (ipfs/kubo#10779)
- `ipfs add` and `Import` options for controlling UnixFS DAG width (#10774) (ipfs/kubo#10774)

### Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| Hector Sanjuan | 16 | +2662/-590 | 71 |
| Guillaume Michel | 27 | +1339/-714 | 69 |
| Andrew Gillis | 22 | +1056/-377 | 54 |
| Sergey Gorbunov | 1 | +962/-42 | 26 |
| Marcin Rataj | 19 | +714/-133 | 47 |
| IGP | 2 | +419/-35 | 11 |
| GITSRC | 1 | +90/-1 | 3 |
| guillaumemichel | 1 | +21/-43 | 1 |
| blockchainluffy | 1 | +27/-26 | 8 |
| web3-bot | 9 | +21/-22 | 13 |
| VersaliX | 1 | +31/-2 | 4 |
| gammazero | 5 | +18/-5 | 5 |
| Hlib Kanunnikov | 1 | +14/-4 | 1 |
| diogo464 | 1 | +6/-7 | 1 |
| Asutorufa | 2 | +7/-1 | 2 |
| Russell Dempsey | 1 | +6/-1 | 1 |
| Steven Allen | 1 | +1/-5 | 1 |
| Michael Vorburger | 2 | +3/-3 | 2 |
| Aayush Rajasekaran | 1 | +2/-2 | 1 |
| sukun | 1 | +1/-1 | 1 |