# Experimental features of Kubo
This document contains a list of experimental features in Kubo. These features, commands, and APIs aren't mature, and you shouldn't rely on them. Once a feature reaches maturity, it will be mentioned in the changelog and release notes. If a feature never reaches maturity, that will also be noted there, and its code will be removed.
Subscribe to https://github.com/ipfs/kubo/issues/3397 to get updates.
When you add a new experimental feature to Kubo or change an existing one, you MUST make a PR updating this document, and link the PR in the issue above.
## Raw Leaves for unixfs files

Allows files to be added with no formatting in the leaf nodes of the graph.

### State

Stable but not used by default.

### In Version

0.4.5

### How to enable

Use the `--raw-leaves` flag when calling `ipfs add`. This will save some space when adding files.

### Road to being a real feature

Enabling this feature by default will change the CIDs (hashes) of all newly imported files and will prevent newly imported files from deduplicating against previously imported files. While we do intend to enable this by default, we plan to do so once we have a large batch of "hash-changing" features we can enable all at once.
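For example, the difference can be seen by importing the same file both ways (the file name below is illustrative; the resulting CIDs will differ between the two imports):

```shell
# Create a small test file
echo "hello raw leaves" > example.txt

# Add with raw leaves explicitly
ipfs add --raw-leaves example.txt

# Note: adding with --cid-version 1 enables raw leaves implicitly
ipfs add --cid-version 1 example.txt
```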
## ipfs filestore

Allows files to be added without duplicating the space they take up on disk.

### State

Experimental.

### In Version

0.4.7

### How to enable

> [!WARNING]
> SECURITY CONSIDERATION
>
> This feature provides the IPFS `add` command with access to the local filesystem. Consequently, any user with access to the CLI or the HTTP `/api/v0/add` RPC API can read files from the local filesystem with the same permissions as the Kubo daemon. If you enable this, secure your RPC API using `API.Authorizations` or custom auth middleware.

Modify your ipfs config:

ipfs config --json Experimental.FilestoreEnabled true

Then restart your IPFS node to reload your config.

Finally, when adding files with `ipfs add`, pass the `--nocopy` flag to use the filestore instead of copying the files into your local IPFS repo.
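A minimal filestore session might look like this (the file name is illustrative):

```shell
# Add a file by reference instead of copying it into the repo
ipfs add --nocopy mydata.bin

# List blocks that are backed by files on disk
ipfs filestore ls

# Check that the referenced files still match their blocks
ipfs filestore verify
```

Because blocks reference the original file on disk, moving or modifying that file will break retrieval; `ipfs filestore verify` reports such problems.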
## ipfs urlstore

Allows ipfs to retrieve block contents via a URL instead of storing them in the datastore.

### State

Experimental.

### In Version

v0.4.17

### How to enable

> [!WARNING]
> SECURITY CONSIDERATION
>
> This feature provides the IPFS `add` CLI command with access to the local filesystem. Consequently, any user with access to the CLI or the HTTP `/api/v0/add` RPC API can read files from the local filesystem with the same permissions as the Kubo daemon. If you enable this, secure your RPC API using `API.Authorizations` or custom auth middleware.

Modify your ipfs config:

ipfs config --json Experimental.UrlstoreEnabled true

Then add a file at a specific URL using `ipfs urlstore add <url>`.
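For example (the URL is illustrative; the remote server generally needs to support HTTP range requests so blocks can be read at offsets):

```shell
# Track the URL's content by CID without storing the bytes locally
ipfs urlstore add https://example.com/data.bin

# The returned CID can then be read back; block data is fetched from the URL
ipfs cat <cid-from-previous-command>
```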
## Private Networks

Allows ipfs to connect only to other peers who have a shared secret key.

### State

Stable but not quite ready for prime-time.

> [!WARNING]
> Limited to the TCP transport and comes with the overhead of double encryption. See details below.

### In Version

0.4.7

### How to enable

Generate a pre-shared key using ipfs-swarm-key-gen:

go install github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen@latest
ipfs-swarm-key-gen > ~/.ipfs/swarm.key

To join a given private network, get the key file from someone in the network and save it to `~/.ipfs/swarm.key` (if you are using a custom `$IPFS_PATH`, put it there instead).
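For reference, a generated `swarm.key` is a small text file in the libp2p pre-shared-key V1 format (the key below is truncated for illustration; never share a real key publicly):

```
/key/swarm/psk/1.0.0/
/base16/
3f5b8a…<64 hex characters in total>
```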
When using this feature, you will not be able to connect to the default bootstrap nodes (since they are not part of your private network), so you will need to set up your own bootstrap nodes.
First, to prevent your node from even trying to connect to the default bootstrap nodes, run:
ipfs bootstrap rm --all
Then add your own bootstrap peers with:
ipfs bootstrap add <multiaddr>
For example:
ipfs bootstrap add /ip4/104.236.76.40/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64
Bootstrap nodes are no different from all other nodes in the network apart from the function they serve.
To be extra cautious, you can also set the LIBP2P_FORCE_PNET environment
variable to 1 to force the use of private networks. If no private network is
configured, the daemon will fail to start.
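For example:

```shell
# Refuse to start unless a swarm.key is present and in use
export LIBP2P_FORCE_PNET=1
ipfs daemon
```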
## ipfs p2p

Allows tunneling of TCP connections through libp2p streams, similar to SSH port forwarding (`ssh -L`).

### State

Experimental, will be stabilized in 0.6.0.

### In Version

0.4.10

### How to enable

> [!WARNING]
> SECURITY CONSIDERATION
>
> This feature provides CLI and HTTP RPC users with the ability to set up port forwarding for all localhost and LAN ports. If you enable this and plan to expose the CLI or HTTP RPC to other users or machines, secure the RPC API using `API.Authorizations` or custom auth middleware.

> ipfs config --json Experimental.Libp2pStreamMounting true

See docs/p2p-tunnels.md for usage examples, foreground mode, and systemd integration.
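A minimal forwarding session looks like this (the protocol name `/x/myapp` and the ports are illustrative):

```shell
# On the node exposing a service: forward incoming /x/myapp streams
# to a local TCP service on port 8080
ipfs p2p listen /x/myapp /ip4/127.0.0.1/tcp/8080

# On the dialing node: expose the remote service on local port 9090
ipfs p2p forward /x/myapp /ip4/127.0.0.1/tcp/9090 /p2p/$REMOTE_PEER_ID

# List active listeners and forwards
ipfs p2p ls
```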
### Road to being a real feature

- `ipfs p2p forward` mode and `Peering.Peers`, see kubo#5460.

## p2p http proxy

Allows proxying of HTTP requests over p2p streams. This allows serving any standard HTTP app over p2p streams.
### State

Experimental.

### In Version

0.4.19

### How to enable

> [!WARNING]
> SECURITY CONSIDERATION
>
> This feature provides CLI and HTTP RPC users with the ability to set up HTTP forwarding for all localhost and LAN ports. If you enable this and plan to expose the CLI or HTTP RPC to other users or machines, secure the RPC API using `API.Authorizations` or custom auth middleware.

The p2p command needs to be enabled in the config:

> ipfs config --json Experimental.Libp2pStreamMounting true

On the client, the p2p HTTP proxy needs to be enabled in the config:

> ipfs config --json Experimental.P2pHttpProxy true
Netcat example:
First, pick a protocol name for your application. Think of the protocol name as
a port number, just significantly more user-friendly. In this example, we're
going to use /http.
Setup:

- A "server" node with peer ID $SERVER_ID
- A "client" node

On the "server" node:
First, start your application and have it listen for TCP connections on
port $APP_PORT.
Then, configure the p2p listener by running:
> ipfs p2p listen --allow-custom-protocol /http /ip4/127.0.0.1/tcp/$APP_PORT
This will configure IPFS to forward all incoming /http streams to
127.0.0.1:$APP_PORT (opening a new connection to 127.0.0.1:$APP_PORT per incoming stream).
On the "client" node:
Next, have your application make an HTTP request to 127.0.0.1:8080/p2p/$SERVER_ID/http/$FORWARDED_PATH. This
connection will be forwarded to the service running on 127.0.0.1:$APP_PORT on
the remote machine (which needs to be an HTTP server!) with path $FORWARDED_PATH. You can test it with netcat:
On "server" node:
> echo -e "HTTP/1.1 200\nContent-length: 11\n\nIPFS rocks!" | nc -l -p $APP_PORT
On "client" node:
> curl http://localhost:8080/p2p/$SERVER_ID/http/
You should now see the resulting HTTP response: IPFS rocks!
We also support the use of protocol names of the form /x/$NAME/http, where $NAME doesn't contain any "/" characters.
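For instance, a custom-named proxy endpoint could be set up like this (the name `myapp` is illustrative):

```shell
# On the "server" node: protocols under /x/ don't need --allow-custom-protocol
ipfs p2p listen /x/myapp/http /ip4/127.0.0.1/tcp/$APP_PORT

# On the "client" node:
curl http://localhost:8080/p2p/$SERVER_ID/x/myapp/http/
```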
## FUSE

FUSE makes it possible to mount /ipfs, /ipns and /mfs namespaces in your OS,
allowing arbitrary apps access to IPFS using standard filesystem operations.

It is considered EXPERIMENTAL due to limited support on some platforms.

See fuse.md for setup instructions and details.

## Plugins

### In Version

0.4.11

### State

Experimental

Plugins allow adding functionality without the need to recompile the daemon.

See Plugin docs.
## Directory Sharding / HAMT

### In Version

- 0.4.8: `Experimental.ShardingEnabled`, which enabled sharding globally.
- 0.11.0: `Experimental.ShardingEnabled` replaced by autosharding.

### State

Replaced by autosharding.

The `Experimental.ShardingEnabled` config field is no longer used; please remove it from your configs.

Kubo now automatically shards when a directory block is bigger than 256KiB, ensuring every block is small enough to be exchanged with other peers.
## IPNS pubsub

Specification: IPNS PubSub Router

### In Version

- 0.4.14:
- 0.5.0:
- 0.11.0: `Ipns.UsePubsub` flag in config
- 0.40.0:

### State

Experimental, default-disabled.

Utilizes pubsub for publishing IPNS records in real time.
When it is enabled:

- Both the publisher and the resolver nodes need to have the feature enabled for it to work effectively.
### How to enable

Run your daemon with the `--enable-namesys-pubsub` flag, or modify your ipfs config and restart the daemon:

ipfs config --json Ipns.UsePubsub true
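With pubsub routing enabled on both sides, a publish/resolve round trip looks like this ($CID and $PEER_ID are placeholders for your own values):

```shell
# On the publishing node: create or update the IPNS record
ipfs name publish /ipfs/$CID

# On a resolving node (also running with IPNS pubsub enabled):
ipfs name resolve /ipns/$PEER_ID
```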
NOTE: the `--enable-namesys-pubsub` CLI flag overrides the `Ipns.UsePubsub` config.

## AutoRelay

### State

Experimental, disabled by default.
Automatically discovers relays and advertises relay addresses when the node is behind an impenetrable NAT.

### How to enable

Modify your ipfs config:

ipfs config --json Swarm.RelayClient.Enabled true
## Strategic Providing

`Experimental.StrategicProviding` was removed in Kubo v0.35. Replaced by `Provide.Enabled` and `Provide.Strategy`.

### State

Removed, no plans to reintegrate either as an experimental or stable feature.
Trustless Gateway over Libp2p should be easier to use for UnixFS use cases and support basic wildcard CAR streams for non-UnixFS data.

See https://github.com/ipfs/kubo/pull/9747 for more information.
## Noise

### State

Stable, enabled by default.

Noise libp2p transport based on the Noise Protocol Framework. While TLS remains the default transport in Kubo, Noise is easier to implement and is thus the "interop" transport between IPFS and libp2p implementations.
## Optimistic Provide

### In Version

0.20.0

### State

Experimental, disabled by default.
When the Amino DHT client tries to store a provider in the DHT, it typically searches for the 20 peers that are closest to the target key. However, this process can be time-consuming, as the search terminates only after no closer peers are found among the three currently (during the query) known closest ones. In cases where these closest peers are slow to respond (which often happens if they are located at the edge of the DHT network), the query gets blocked by the slowest peer.
To address this issue, the OptimisticProvide feature can be enabled. This feature allows the client to estimate the
network size and determine how close a peer likely needs to be to the target key to be within the 20 closest peers.
While searching for the closest peers in the DHT, the client will optimistically store the provider record with peers
and abort the query completely when the set of currently known 20 closest peers are also likely the actual 20 closest
ones. This heuristic approach can significantly speed up the process, resulting in a speed improvement of 2x to >10x.
When it is enabled:

- DHT provide operations (e.g. via `ipfs routing provide`) use the optimistic strategy described above.

### Tradeoffs
There are now the classic client, the accelerated DHT client, and optimistic provide, all of which improve the provide process, with different trade-offs. The accelerated DHT client is still faster at providing large amounts of provider records, at the cost of high resource requirements. Optimistic provide doesn't have the high resource requirements, but it might not choose optimal peers and is not as fast as the accelerated client, though still much faster than the classic client.
Caveats:

- Background provide requests are governed by the `OptimisticProvideJobsPoolSize` setting. Currently, this is set to 60, which means that at most 60 parallel background requests are allowed to be in-flight. If this limit is exceeded, optimistic provide will block until all 20 provider records are written. This is still 2x faster than the classic approach, but not as fast as returning early, which yields >10x speed-ups.

For more information, see:
To enable:
ipfs config --json Experimental.OptimisticProvide true
If you want to change the `OptimisticProvideJobsPoolSize` setting from its default of 60:
ipfs config --json Experimental.OptimisticProvideJobsPoolSize 120
## HTTP Gateway over Libp2p

### In Version

0.23.0

### State

Experimental, disabled by default.

Enables serving a subset of the IPFS HTTP Gateway semantics over the libp2p /http/1.1 protocol.

Notes:

- Only operates on /ipfs resources (no /ipns atm).
- Only supports responses as application/vnd.ipld.raw and application/vnd.ipld.car (from the Trustless Gateway Specification, where data integrity can be verified).
- Only serves data that is already local to the node (i.e. equivalent to Gateway.NoFetch).
- The gateway is currently mounted at the root (/) of the libp2p /http/1.1 protocol; that is subject to change.
  - The way to discover where a given protocol is mounted on a libp2p endpoint is via the .well-known/libp2p resource specified in the http+libp2p specification.
  - The protocol name for the /http/1.1 listener is /ipfs/gateway, as noted in ipfs/specs#434.

### How to enable

Modify your ipfs config:

ipfs config --json Experimental.GatewayOverLibp2p true
## Accelerated DHT Client

This feature now lives at Routing.AcceleratedDHTClient.