# deploy/README.md
This guide walks through a manual deployment of a six-node 3FS cluster with the cluster ID `stage`.
| Node | OS | IP | Memory | SSD | RDMA |
|---|---|---|---|---|---|
| meta | Ubuntu 22.04 | 192.168.1.1 | 128GB | - | RoCE |
| storage1 | Ubuntu 22.04 | 192.168.1.2 | 512GB | 14TB × 16 | RoCE |
| storage2 | Ubuntu 22.04 | 192.168.1.3 | 512GB | 14TB × 16 | RoCE |
| storage3 | Ubuntu 22.04 | 192.168.1.4 | 512GB | 14TB × 16 | RoCE |
| storage4 | Ubuntu 22.04 | 192.168.1.5 | 512GB | 14TB × 16 | RoCE |
| storage5 | Ubuntu 22.04 | 192.168.1.6 | 512GB | 14TB × 16 | RoCE |
## RDMA configuration

- Assign IP addresses to RDMA NICs. Multiple RDMA NICs (InfiniBand or RoCE) are supported on each node.
- Check RDMA connectivity between nodes using `ib_write_bw`.
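The connectivity check can be scripted. The sketch below does not run any benchmark itself; it only prints the `ib_write_bw` command pairs to run for a full-mesh check (server side first, then client side), assuming the node IPs from the table above:

```shell
# Print the ib_write_bw command pairs for a full-mesh connectivity check.
# IPs are taken from the hardware table above; adjust for your network.
nodes=(192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.4 192.168.1.5 192.168.1.6)
cmds=()
for server in "${nodes[@]}"; do
  for client in "${nodes[@]}"; do
    [ "$server" = "$client" ] && continue
    cmds+=("on $server: ib_write_bw")           # start the server side first
    cmds+=("on $client: ib_write_bw $server")   # then run the client side
  done
done
printf '%s\n' "${cmds[@]}"
```

Each ordered pair of nodes is checked once in each direction, so the six nodes yield 30 server/client pairs.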
In a production environment, it is recommended to install FoundationDB and ClickHouse on dedicated nodes. In this guide, both run on the meta node:
| Service | Node |
|---|---|
| ClickHouse | meta |
| FoundationDB | meta |
## FoundationDB

- Ensure that the version of the FoundationDB client matches the server version, or copy the corresponding version of `libfdb_c.so` to maintain compatibility.
- Find the `fdb.cluster` file and `libfdb_c.so` at `/etc/foundationdb/fdb.cluster` and `/usr/lib/libfdb_c.so` on nodes with FoundationDB installed.
Follow the build instructions to build 3FS. Binaries can be found in `build/bin`.

The following steps show how to install 3FS services in `/opt/3fs/bin` and their config files in `/opt/3fs/etc`.
| Service | Binary | Config files | NodeID | Node |
|---|---|---|---|---|
| monitor | monitor_collector_main | monitor_collector_main.toml | - | meta |
| admin_cli | admin_cli | admin_cli.toml<br>fdb.cluster | - | meta<br>storage1<br>storage2<br>storage3<br>storage4<br>storage5 |
| mgmtd | mgmtd_main | mgmtd_main_launcher.toml<br>mgmtd_main.toml<br>mgmtd_main_app.toml<br>fdb.cluster | 1 | meta |
| meta | meta_main | meta_main_launcher.toml<br>meta_main.toml<br>meta_main_app.toml<br>fdb.cluster | 100 | meta |
| storage | storage_main | storage_main_launcher.toml<br>storage_main.toml<br>storage_main_app.toml | 10001~10005 | storage1<br>storage2<br>storage3<br>storage4<br>storage5 |
| client | hf3fs_fuse_main | hf3fs_fuse_main_launcher.toml<br>hf3fs_fuse_main.toml | - | meta |
Import the SQL file into ClickHouse:
```bash
clickhouse-client -n < ~/3fs/deploy/sql/3fs-monitor.sql
```
Install monitor_collector service on the meta node. Copy `monitor_collector_main` to `/opt/3fs/bin` and config files to `/opt/3fs/etc`, and create the log directory `/var/log/3fs`:
```bash
mkdir -p /opt/3fs/{bin,etc}
mkdir -p /var/log/3fs
cp ~/3fs/build/bin/monitor_collector_main /opt/3fs/bin
cp ~/3fs/configs/monitor_collector_main.toml /opt/3fs/etc
```
Update `monitor_collector_main.toml` to add a ClickHouse connection:
```toml
[server.monitor_collector.reporter]
type = 'clickhouse'

[server.monitor_collector.reporter.clickhouse]
db = '3fs'
host = '<CH_HOST>'
passwd = '<CH_PASSWD>'
port = '<CH_PORT>'
user = '<CH_USER>'
```
Start the monitor service:

```bash
cp ~/3fs/deploy/systemd/monitor_collector_main.service /usr/lib/systemd/system
systemctl start monitor_collector_main
```
Note that:

- Multiple instances of the monitor service can be deployed behind a virtual IP address to share the traffic.
- Other services communicate with the monitor service over a TCP connection.
Install admin_cli on all nodes.
Copy `admin_cli` to `/opt/3fs/bin` and config files to `/opt/3fs/etc`:
```bash
mkdir -p /opt/3fs/{bin,etc}
rsync -avz meta:~/3fs/build/bin/admin_cli /opt/3fs/bin
rsync -avz meta:~/3fs/configs/admin_cli.toml /opt/3fs/etc
rsync -avz meta:/etc/foundationdb/fdb.cluster /opt/3fs/etc
```
Update `admin_cli.toml` to set `cluster_id` and `clusterFile`:
```toml
cluster_id = "stage"

[fdb]
clusterFile = '/opt/3fs/etc/fdb.cluster'
```
The full help documentation for admin_cli can be displayed by running the following command:
```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml help
```
Install mgmtd service on the meta node. Copy `mgmtd_main` to `/opt/3fs/bin` and config files to `/opt/3fs/etc`:
```bash
cp ~/3fs/build/bin/mgmtd_main /opt/3fs/bin
cp ~/3fs/configs/{mgmtd_main.toml,mgmtd_main_launcher.toml,mgmtd_main_app.toml} /opt/3fs/etc
```
Update config files:

- Set `node_id = 1` in `mgmtd_main_app.toml`.
- Update `mgmtd_main_launcher.toml` to set the `cluster_id` and `clusterFile`:

```toml
cluster_id = "stage"

[fdb]
clusterFile = '/opt/3fs/etc/fdb.cluster'
```

- Set the monitor address in `mgmtd_main.toml`:

```toml
[common.monitor.reporters.monitor_collector]
remote_ip = "192.168.1.1:10000"
```
Initialize the cluster:
```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml "init-cluster --mgmtd /opt/3fs/etc/mgmtd_main.toml 1 1048576 16"
```
The positional parameters of `init-cluster`:

- `1`: the chain table ID
- `1048576`: the chunk size in bytes
- `16`: the file stripe size

Run `help init-cluster` for full documentation.
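A quick arithmetic sketch of what the 1048576-byte chunk size means, assuming files are split into fixed-size chunks (the 1 GiB file is a made-up example):

```shell
# 1048576 bytes is exactly 1 MiB.
chunk_size=$((1024 * 1024))          # matches the init-cluster parameter above
file_size=$((1024 * 1024 * 1024))    # a hypothetical 1 GiB file
num_chunks=$((file_size / chunk_size))
echo "chunk size: $chunk_size bytes, chunks in a 1 GiB file: $num_chunks"
```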
Start mgmtd service:
```bash
cp ~/3fs/deploy/systemd/mgmtd_main.service /usr/lib/systemd/system
systemctl start mgmtd_main
```
Run the `list-nodes` command to check whether the cluster has been successfully initialized:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "list-nodes"
```
If multiple instances of the mgmtd service are deployed, one is elected as the primary and the others become secondaries. Automatic failover occurs when the primary fails.
Install meta service on the meta node. Copy `meta_main` to `/opt/3fs/bin` and config files to `/opt/3fs/etc`:
```bash
cp ~/3fs/build/bin/meta_main /opt/3fs/bin
cp ~/3fs/configs/{meta_main_launcher.toml,meta_main.toml,meta_main_app.toml} /opt/3fs/etc
```
Update config files:

- Set `node_id = 100` in `meta_main_app.toml`.
- Set `cluster_id`, `clusterFile` and the mgmtd address in `meta_main_launcher.toml`:

```toml
cluster_id = "stage"

[mgmtd_client]
mgmtd_server_addresses = ["RDMA://192.168.1.1:8000"]
```

- Set the mgmtd and monitor addresses and the FoundationDB cluster file in `meta_main.toml`:

```toml
[server.mgmtd_client]
mgmtd_server_addresses = ["RDMA://192.168.1.1:8000"]

[common.monitor.reporters.monitor_collector]
remote_ip = "192.168.1.1:10000"

[server.fdb]
clusterFile = '/opt/3fs/etc/fdb.cluster'
```
Run `admin_cli` to upload the config file to mgmtd:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "set-config --type META --file /opt/3fs/etc/meta_main.toml"
```
Start meta service:

```bash
cp ~/3fs/deploy/systemd/meta_main.service /usr/lib/systemd/system
systemctl start meta_main
```
Run the `list-nodes` command to check whether the meta service has joined the cluster:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "list-nodes"
```
If multiple instances of the meta service are deployed, meta requests are evenly distributed across all instances.
Install storage service on each storage node. Format and mount 16 SSDs at `/storage/data{1..16}`, then create data directories `/storage/data{1..16}/3fs` and the log directory `/var/log/3fs`:
```bash
mkdir -p /storage/data{1..16}
mkdir -p /var/log/3fs
for i in {1..16}; do
  mkfs.xfs -L data${i} -s size=4096 /dev/nvme${i}n1
  mount -o noatime,nodiratime -L data${i} /storage/data${i}
done
mkdir -p /storage/data{1..16}/3fs
```
Increase the maximum number of asynchronous I/O requests:

```bash
sysctl -w fs.aio-max-nr=67108864
```
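If you prefer to raise the limit only when it is actually below the target, a small sketch that reads the current value from `/proc` and prints the `sysctl` command to run (applying it requires root):

```shell
# Raise fs.aio-max-nr only if the current limit is below the target.
target=67108864
current=$(cat /proc/sys/fs/aio-max-nr)
if [ "$current" -lt "$target" ]; then
  echo "run as root: sysctl -w fs.aio-max-nr=$target"
else
  echo "fs.aio-max-nr already at $current"
fi
```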
Copy `storage_main` to `/opt/3fs/bin` and config files to `/opt/3fs/etc`:
```bash
rsync -avz meta:~/3fs/build/bin/storage_main /opt/3fs/bin
rsync -avz meta:~/3fs/configs/{storage_main_launcher.toml,storage_main.toml,storage_main_app.toml} /opt/3fs/etc
```
Update config files:

- Set `node_id` in `storage_main_app.toml`. Each storage service is assigned a unique ID between 10001 and 10005.
- Set `cluster_id` and the mgmtd address in `storage_main_launcher.toml`:

```toml
cluster_id = "stage"

[mgmtd_client]
mgmtd_server_addresses = ["RDMA://192.168.1.1:8000"]
```
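Assigning a distinct `node_id` on each of the five storage nodes can be scripted. The sketch below rewrites the `node_id` line with `sed`, demonstrated on a minimal stand-in template in `/tmp` (the real `storage_main_app.toml` contains more keys; verify the key name against your file):

```shell
# Generate a per-node storage_main_app.toml with a unique node_id.
template=$(mktemp)
echo 'node_id = 0' > "$template"   # stand-in for the real storage_main_app.toml
for n in 1 2 3 4 5; do
  node_id=$((10000 + n))
  # Rewrite the node_id line for this node.
  sed "s/^node_id = .*/node_id = ${node_id}/" "$template" > "/tmp/storage_main_app.node${n}.toml"
done
cat /tmp/storage_main_app.node5.toml   # node_id = 10005
```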
- Set the mgmtd and monitor addresses and the storage target paths in `storage_main.toml`:

```toml
[server.mgmtd]
mgmtd_server_addresses = ["RDMA://192.168.1.1:8000"]

[common.monitor.reporters.monitor_collector]
remote_ip = "192.168.1.1:10000"

[server.targets]
target_paths = [
  "/storage/data1/3fs", "/storage/data2/3fs", "/storage/data3/3fs", "/storage/data4/3fs",
  "/storage/data5/3fs", "/storage/data6/3fs", "/storage/data7/3fs", "/storage/data8/3fs",
  "/storage/data9/3fs", "/storage/data10/3fs", "/storage/data11/3fs", "/storage/data12/3fs",
  "/storage/data13/3fs", "/storage/data14/3fs", "/storage/data15/3fs", "/storage/data16/3fs",
]
```
Run `admin_cli` to upload the config file to mgmtd:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "set-config --type STORAGE --file /opt/3fs/etc/storage_main.toml"
```
Start storage service:

```bash
rsync -avz meta:~/3fs/deploy/systemd/storage_main.service /usr/lib/systemd/system
systemctl start storage_main
```
Run the `list-nodes` command to check whether the storage services have joined the cluster:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "list-nodes"
```
Create an admin user:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "user-add --root --admin 0 root"
```

Save the admin token printed by the command to `/opt/3fs/etc/token.txt`.

Generate `admin_cli` commands to create storage targets on the 5 storage nodes (16 SSDs per node, 6 targets per SSD):
```bash
pip install -r ~/3fs/deploy/data_placement/requirements.txt
python ~/3fs/deploy/data_placement/src/model/data_placement.py \
   -ql -relax -type CR --num_nodes 5 --replication_factor 3 --min_targets_per_disk 6
python ~/3fs/deploy/data_placement/src/setup/gen_chain_table.py \
   --chain_table_type CR --node_id_begin 10001 --node_id_end 10005 \
   --num_disks_per_node 16 --num_targets_per_disk 6 \
   --target_id_prefix 1 --chain_id_prefix 9 \
   --incidence_matrix_path output/DataPlacementModel-v_5-b_10-r_6-k_3-λ_2-lb_1-ub_1/incidence_matrix.pickle
```
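The parameters above imply fixed target and chain counts; a simple arithmetic sanity check, under the assumption that every chain consumes `replication_factor` targets:

```shell
# Sanity-check the target/chain counts implied by the parameters above.
num_nodes=5; disks_per_node=16; targets_per_disk=6; replication_factor=3
total_targets=$((num_nodes * disks_per_node * targets_per_disk))
num_chains=$((total_targets / replication_factor))
echo "total targets: $total_targets"   # 480
echo "chains: $num_chains"             # 160
```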
Three files will be generated in the `output` directory: `create_target_cmd.txt`, `generated_chains.csv`, and `generated_chain_table.csv`. Create the storage targets:

```bash
/opt/3fs/bin/admin_cli --cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' --config.user_info.token $(<"/opt/3fs/etc/token.txt") < output/create_target_cmd.txt
```

Upload the chains and chain table to mgmtd:

```bash
/opt/3fs/bin/admin_cli --cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' --config.user_info.token $(<"/opt/3fs/etc/token.txt") "upload-chains output/generated_chains.csv"
/opt/3fs/bin/admin_cli --cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' --config.user_info.token $(<"/opt/3fs/etc/token.txt") "upload-chain-table --desc stage 1 output/generated_chain_table.csv"
```
Run `list-chains` and `list-chain-tables` to check that the chains and chain table have been created:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "list-chains"
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "list-chain-tables"
```
For simplicity, the FUSE client is deployed on the meta node in this guide. However, we strongly advise against deploying clients on service nodes in a production environment.

Copy `hf3fs_fuse_main` to `/opt/3fs/bin` and config files to `/opt/3fs/etc`:
```bash
cp ~/3fs/build/bin/hf3fs_fuse_main /opt/3fs/bin
cp ~/3fs/configs/{hf3fs_fuse_main_launcher.toml,hf3fs_fuse_main.toml,hf3fs_fuse_main_app.toml} /opt/3fs/etc
```
Create the mount point:

```bash
mkdir -p /3fs/stage
```
Set `cluster_id`, `mountpoint`, `token_file` and the mgmtd address in `hf3fs_fuse_main_launcher.toml`:

```toml
cluster_id = "stage"
mountpoint = '/3fs/stage'
token_file = '/opt/3fs/etc/token.txt'

[mgmtd_client]
mgmtd_server_addresses = ["RDMA://192.168.1.1:8000"]
```
Set the mgmtd and monitor addresses in `hf3fs_fuse_main.toml`:

```toml
[mgmtd]
mgmtd_server_addresses = ["RDMA://192.168.1.1:8000"]

[common.monitor.reporters.monitor_collector]
remote_ip = "192.168.1.1:10000"
```
Run `admin_cli` to upload the config file to mgmtd:

```bash
/opt/3fs/bin/admin_cli -cfg /opt/3fs/etc/admin_cli.toml --config.mgmtd_client.mgmtd_server_addresses '["RDMA://192.168.1.1:8000"]' "set-config --type FUSE --file /opt/3fs/etc/hf3fs_fuse_main.toml"
```
Start the FUSE client:

```bash
cp ~/3fs/deploy/systemd/hf3fs_fuse_main.service /usr/lib/systemd/system
systemctl start hf3fs_fuse_main
```
Check that 3FS is mounted at `/3fs/stage`:

```bash
mount | grep '/3fs/stage'
```
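Beyond checking the mount table, a small read/write smoke test confirms the filesystem is usable. The `smoke_test` helper below is a made-up name for illustration; it is demonstrated on `/tmp` so it runs anywhere, but on the cluster you would point it at `/3fs/stage`:

```shell
# Write a file through a mount point and read it back.
smoke_test() {
  local mnt="$1"
  local f="$mnt/.3fs_smoke_test.$$"
  echo "hello 3fs" > "$f" || return 1          # write through the mount
  [ "$(cat "$f")" = "hello 3fs" ] || return 1  # read back and compare
  rm -f "$f"
  echo "smoke test passed on $mnt"
}
smoke_test /tmp   # on the cluster: smoke_test /3fs/stage
```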
## Troubleshooting

If mgmtd fails to start after running `init-cluster`, the most likely cause is an error in `mgmtd_main.toml`. Any changes to this file require clearing all FoundationDB data and re-running `init-cluster`.
A minimum of two storage services is required for data replication. If `--num_nodes=1` is set, the `gen_chain_table.py` script will fail. In a test environment, this limitation can be bypassed by deploying multiple storage services on a single machine.
All config files are managed by mgmtd. If any `*_main.toml` is updated, such as `storage_main.toml`, the modified file should be uploaded using `admin_cli set-config`.
When encountering any error during deployment:

- Check service stdout/stderr using `journalctl`, especially during service startup.
- Check logs at `/var/log/3fs/` on service and client nodes.
- Ensure `/var/log/3fs/` exists before starting any service.