# Command Reference

import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';

<!-- Special note: Since there are many common options for mount, gateway and webdav commands, in order to simplify document maintenance, we have unified these common options in the "_common_options.mdx" file. If you need to update related content, please check this file. -->

import CommonOptions from './_common_options.mdx';

Run `juicefs` by itself to print all available commands. In addition, you can add the `-h/--help` flag after any command to get more information, e.g., `juicefs format -h`.

```
NAME:
   juicefs - A POSIX file system built on Redis and object storage.

USAGE:
   juicefs [global options] command [command options] [arguments...]

VERSION:
   1.2.0

COMMANDS:
   ADMIN:
     format   Format a volume
     config   Change configuration of a volume
     quota    Manage directory quotas
     destroy  Destroy an existing volume
     gc       Garbage collector of objects in data storage
     fsck     Check consistency of a volume
     restore  restore files from trash
     dump     Dump metadata into a JSON file
     load     Load metadata from a previously dumped JSON file
     version  Show version
   INSPECTOR:
     status   Show status of a volume
     stats    Show real time performance statistics of JuiceFS
     profile  Show profiling of operations completed in JuiceFS
     info     Show internal information of a path or inode
     debug    Collect and display system static and runtime information
     summary  Show tree summary of a directory
   SERVICE:
     mount    Mount a volume
     umount   Unmount a volume
     gateway  Start an S3-compatible gateway
     webdav   Start a WebDAV server
   TOOL:
     bench     Run benchmarks on a path
     objbench  Run benchmarks on an object storage
     warmup    Build cache for target directories/files
     rmr       Remove directories recursively
     sync      Sync between two storages
     clone     clone a file or directory without copying the underlying data
     compact   Trigger compaction of chunks

GLOBAL OPTIONS:
   --verbose, --debug, -v  enable debug log (default: false)
   --quiet, -q             show warning and errors only (default: false)
   --trace                 enable trace log (default: false)
   --log-id value          append the given log id in log, use "random" to use random uuid
   --no-agent              disable pprof (:6060) agent (default: false)
   --pyroscope value       pyroscope address
   --no-color              disable colors (default: false)
   --help, -h              show help (default: false)
   --version, -V           print version only (default: false)

COPYRIGHT:
   Apache License 2.0
```

## Auto completion {#auto-completion}

To enable command completion, simply source the corresponding script in the `hack/autocomplete` directory. For example:

<Tabs groupId="juicefs-cli-autocomplete">
  <TabItem value="bash" label="Bash">

```shell
source hack/autocomplete/bash_autocomplete
```

  </TabItem>
  <TabItem value="zsh" label="Zsh">

```shell
source hack/autocomplete/zsh_autocomplete
```

  </TabItem>
</Tabs>

Please note that auto-completion is only enabled for the current session. To apply it to all new sessions, add the `source` command to `.bashrc` or `.zshrc`:

<Tabs groupId="juicefs-cli-autocomplete">
  <TabItem value="bash" label="Bash">

```shell
echo "source path/to/bash_autocomplete" >> ~/.bashrc
```

  </TabItem>
  <TabItem value="zsh" label="Zsh">

```shell
echo "source path/to/zsh_autocomplete" >> ~/.zshrc
```

  </TabItem>
</Tabs>

Alternatively, if you are using bash on a Linux system, you may just copy the script to `/etc/bash_completion.d` and rename it to `juicefs`:

```shell
cp hack/autocomplete/bash_autocomplete /etc/bash_completion.d/juicefs
source /etc/bash_completion.d/juicefs
```

## Global options {#global-options}

| Items | Description |
|---|---|
| `-v, --verbose, --debug` | Enable debug logs. |
| `-q, --quiet` | Show only warning and error logs. |
| `--trace` | Enable more detailed debug logs than the `--debug` option. |
| `--no-agent` | Disable the pprof agent. |
| `--pyroscope` | Configure the Pyroscope address, e.g. `http://localhost:4040`. |
| `--no-color` | Disable log color. |

## Admin {#admin}

### juicefs format {#format}

Create and format a file system. If a volume already exists at the same `META-URL`, this command will skip the format step. To adjust configurations of an existing volume, use `juicefs config`.

#### Synopsis

```shell
juicefs format [command options] META-URL NAME

# Create a simple test volume (data will be stored in a local directory)
juicefs format sqlite3://myjfs.db myjfs

# Create a volume with Redis and S3
juicefs format redis://localhost myjfs --storage=s3 --bucket=https://mybucket.s3.us-east-2.amazonaws.com

# Create a volume with password protected MySQL
juicefs format mysql://jfs:mypassword@(127.0.0.1:3306)/juicefs myjfs
# A safer alternative
META_PASSWORD=mypassword juicefs format mysql://jfs:@(127.0.0.1:3306)/juicefs myjfs
# Provide password from file
META_PASSWORD_FILE=/secret/mypassword.txt juicefs format mysql://jfs:@(127.0.0.1:3306)/juicefs myjfs

# Create a volume with quota enabled
juicefs format sqlite3://myjfs.db myjfs --inodes=1000000 --capacity=102400

# Create a volume with trash disabled
juicefs format sqlite3://myjfs.db myjfs --trash-days=0
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
| `NAME` | Name of the file system. |
| `--force` | Overwrite existing format (default: false). |
| `--no-update` | Don't update existing volume (default: false). |

#### Data storage options {#format-data-storage-options}

| Items | Description |
|---|---|
| `--storage=file` | Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `file`, refer to documentation for all supported object storage types). |
| `--bucket=/var/jfs` | A bucket URL to store data (default: `$HOME/.juicefs/local` or `/var/jfs`). |
| `--access-key=value` | Access Key for object storage (can also be set via the environment variable `ACCESS_KEY`), see How to Set Up Object Storage for more. |
| `--secret-key value` | Secret Key for object storage (can also be set via the environment variable `SECRET_KEY`), see How to Set Up Object Storage for more. |
| `--session-token=value` | Session token for object storage, see How to Set Up Object Storage for more. |
| `--storage-class value` <VersionAdd>1.1</VersionAdd> | The default storage class. |

#### Data format options {#format-data-format-options}

| Items | Description |
|---|---|
| `--block-size=4M` | Size of block in KiB (default: 4M). 4M is usually a better default value because many object storage services use 4M as their internal block size, thus using the same block size in JuiceFS usually yields better performance. |
| `--compress=none` | Compression algorithm, choose from `lz4`, `zstd`, `none` (default). Enabling compression will inevitably affect performance. Among the two supported algorithms, `lz4` offers better performance, while `zstd` comes with a higher compression ratio; Google for their detailed comparison. |
| `--encrypt-rsa-key=value` | A path to the RSA private key (PEM). |
| `--encrypt-algo=aes256gcm-rsa` | Encrypt algorithm (`aes256gcm-rsa`, `chacha20-rsa`) (default: `aes256gcm-rsa`). |
| `--hash-prefix` | For most object storages, if object storage blocks are sequentially named, they will also be stored close together in the underlying physical regions. When loaded with intensive concurrent consecutive reads, this can cause hotspots and hinder object storage performance.<br/><br/>Enabling `--hash-prefix` will add a hash prefix to the name of the blocks (slice ID mod 256, see internal implementation), which distributes data blocks evenly across actual object storage regions, offering more consistent performance. Obviously, this option dictates the object naming pattern, so it must be specified when a file system is created and cannot be changed on the fly.<br/><br/>Currently, AWS S3 has already made improvements and no longer requires application-side optimization, but for other types of object storages this option is still recommended for large-scale scenarios. |
| `--shards=0` | If your object storage limits speed at the bucket level (or you're using a self-hosted object storage with limited performance), you can store the blocks into N buckets by hash of key (default: 0). When N is greater than 0, the bucket should be in the form of `%d`, e.g. `--bucket "juicefs-%d"`. `--shards` cannot be changed afterwards and must be planned carefully ahead. |
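The prefix arithmetic behind `--hash-prefix` can be sketched in plain shell (illustrative only: the prefix is the slice ID mod 256 per the description above, but the exact object-name formatting is an internal implementation detail, and the two-hex-digit rendering here is an assumption):

```shell
# Illustrative only: compute a hash prefix as slice ID mod 256,
# rendered as two hex digits (the rendering is an assumption, not JuiceFS's actual format)
slice_id=1234
printf '%02X\n' $((slice_id % 256))
```

Consecutive slice IDs thus scatter across 256 distinct prefixes instead of clustering in one physical region.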

#### Management options {#format-management-options}

| Items | Description |
|---|---|
| `--capacity=0` | Storage space limit in GiB, default to 0 which means no limit. Capacity will include trash files, if trash is enabled. |
| `--inodes=0` | Limit the number of inodes, default to 0 which means no limit. |
| `--trash-days=1` | By default, deleted files are put into trash; this option controls the number of days before trash files are expired. Default to 1; set to 0 to disable trash. |
| `--enable-acl=true` <VersionAdd>1.2</VersionAdd> | Enable POSIX ACL; this is irreversible. |

### juicefs config {#config}

Change the configuration of a volume. Note that after updating some settings, clients may not pick up the changes immediately; they need to wait for a certain period of time, which is controlled by the `--heartbeat` option.

#### Synopsis

```shell
juicefs config [command options] META-URL

# Show the current configurations
juicefs config redis://localhost

# Change volume "quota"
juicefs config redis://localhost --inodes 10000000 --capacity 1048576

# Change maximum days before files in trash are deleted
juicefs config redis://localhost --trash-days 7

# Limit client version that is allowed to connect
juicefs config redis://localhost --min-client-version 1.0.0 --max-client-version 1.1.0
```

#### Options

| Items | Description |
|---|---|
| `--yes, -y` | Automatically answer 'yes' to all prompts and run non-interactively (default: false). |
| `--force` | Skip sanity check and force update the configurations (default: false). |

#### Data storage options {#config-data-storage-options}

| Items | Description |
|---|---|
| `--storage=file` <VersionAdd>1.1</VersionAdd> | Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `file`, refer to documentation for all supported object storage types). |
| `--bucket=/var/jfs` | A bucket URL to store data (default: `$HOME/.juicefs/local` or `/var/jfs`). |
| `--access-key=value` | Access Key for object storage (can also be set via the environment variable `ACCESS_KEY`), see How to Set Up Object Storage for more. |
| `--secret-key value` | Secret Key for object storage (can also be set via the environment variable `SECRET_KEY`), see How to Set Up Object Storage for more. |
| `--session-token=value` | Session token for object storage, see How to Set Up Object Storage for more. |
| `--storage-class value` <VersionAdd>1.1</VersionAdd> | The default storage class. |
| `--upload-limit=0` | Bandwidth limit for upload in Mbps (default: 0). |
| `--download-limit=0` | Bandwidth limit for download in Mbps (default: 0). |

#### Management options {#config-management-options}

| Items | Description |
|---|---|
| `--capacity value` | Limit for space in GiB. |
| `--inodes value` | Limit for number of inodes. |
| `--trash-days value` | Number of days after which removed files will be permanently deleted. |
| `--enable-acl` <VersionAdd>1.2</VersionAdd> | Enable POSIX ACL (irreversible); at the same time, the minimum client version allowed to connect will be upgraded to v1.2. |
| `--encrypt-secret` | Encrypt the secret key if it was previously stored in plain format (default: false). |
| `--min-client-version value` <VersionAdd>1.1</VersionAdd> | Minimum client version allowed to connect. |
| `--max-client-version value` <VersionAdd>1.1</VersionAdd> | Maximum client version allowed to connect. |
| `--dir-stats` <VersionAdd>1.1</VersionAdd> | Enable dir stats, which is necessary for fast summary and dir quota (default: false). |

### juicefs quota <VersionAdd>1.1</VersionAdd> {#quota}

Manage directory quotas.

#### Synopsis

```shell
juicefs quota command [command options] META-URL

# Set quota to a directory
juicefs quota set redis://localhost --path /dir1 --capacity 1 --inodes 100

# Get quota of a directory
juicefs quota get redis://localhost --path /dir1

# List all directory quotas
juicefs quota list redis://localhost

# Delete quota of a directory
juicefs quota delete redis://localhost --path /dir1

# Check quota consistency of a directory
juicefs quota check redis://localhost
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see "JuiceFS supported metadata engines" for details. |
| `--path value` | Full path of the directory within the volume. |
| `--capacity value` | Hard quota of the directory, limiting its usage of space in GiB (default: 0). |
| `--inodes value` | Hard quota of the directory, limiting its number of inodes (default: 0). |
| `--repair` | Repair inconsistent quota (default: false). |
| `--strict` | Calculate total usage of the directory in strict mode (NOTE: may be slow for huge directories) (default: false). |

### juicefs destroy {#destroy}

Destroy an existing volume, deleting all relevant data in the metadata engine and object storage. See How to destroy a file system.

#### Synopsis

```shell
juicefs destroy [command options] META-URL UUID

juicefs destroy redis://localhost e94d66a8-2339-4abd-b8d8-6812df737892
```

#### Options

| Items | Description |
|---|---|
| `--yes, -y` <VersionAdd>1.1</VersionAdd> | Automatically answer 'yes' to all prompts and run non-interactively (default: false). |
| `--force` | Skip sanity check and force destroy the volume (default: false). |

### juicefs gc {#gc}

If for some reason an object storage block escapes JuiceFS management completely, i.e. the metadata is gone but the block still persists in the object storage and cannot be released, this is called an "object leak". If this happens without any special file system manipulation, it could well indicate a bug within JuiceFS; file a GitHub Issue to let us know.

Meanwhile, you can run this command to deal with leaked objects. It also deletes stale slices produced by file overwrites. See Status Check & Maintenance.

#### Synopsis

```shell
juicefs gc [command options] META-URL

# Check only, no writable change
juicefs gc redis://localhost

# Trigger compaction of all slices
juicefs gc redis://localhost --compact

# Delete leaked objects
juicefs gc redis://localhost --delete
```

#### Options

| Items | Description |
|---|---|
| `--compact` | Compact all chunks with more than one slice (default: false). |
| `--delete` | Delete leaked objects (default: false). |
| `--threads=10` | Number of threads to delete leaked objects (default: 10). |

### juicefs fsck {#fsck}

Check the consistency of a file system.

#### Synopsis

```shell
juicefs fsck [command options] META-URL

juicefs fsck redis://localhost
```

#### Options

| Items | Description |
|---|---|
| `--path value` <VersionAdd>1.1</VersionAdd> | Absolute path within JuiceFS to check. |
| `--repair` <VersionAdd>1.1</VersionAdd> | Repair the specified path if it's broken (default: false). |
| `--recursive, -r` <VersionAdd>1.1</VersionAdd> | Recursively check or repair (default: false). |
| `--sync-dir-stat` <VersionAdd>1.1</VersionAdd> | Sync stat of all directories, even if they exist and are not broken (NOTE: this may take a long time for huge trees) (default: false). |

### juicefs restore <VersionAdd>1.1</VersionAdd> {#restore}

Rebuild the tree structure for trash files, and put them back to original directories.

#### Synopsis

```shell
juicefs restore [command options] META HOUR ...

juicefs restore redis://localhost/1 2023-05-10-01
```

#### Options

| Items | Description |
|---|---|
| `--put-back value` | Move the recovered files into their original directory (default: false). |
| `--threads value` | Number of threads (default: 10). |

### juicefs dump {#dump}

Dump metadata into a JSON file. Refer to "Metadata backup" for more information.

#### Synopsis

```shell
juicefs dump [command options] META-URL [FILE]

# Export metadata to meta-dump.json
juicefs dump redis://localhost meta-dump.json

# Export metadata for only one subdirectory of the file system
juicefs dump redis://localhost sub-meta-dump.json --subdir /dir/in/jfs
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
| `FILE` | Export file path; if not specified, metadata is exported to standard output. If the filename ends with `.gz`, it will be automatically compressed. |
| `--subdir=path` | Only export metadata for the specified subdirectory. |
| `--keep-secret-key` <VersionAdd>1.1</VersionAdd> | Export object storage authentication information; the default is false. Since it is exported in plain text, pay attention to data security when using it. If the exported file does not contain object storage authentication information, you need to use `juicefs config` to reconfigure it after the subsequent import is completed. |
| `--threads=10` <VersionAdd>1.2</VersionAdd> | Number of threads to dump metadata (default: 10). |
| `--fast` <VersionAdd>1.2</VersionAdd> | Use more memory to speed up the dump. |
| `--skip-trash` <VersionAdd>1.2</VersionAdd> | Skip files and directories in trash. |
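Because a `FILE` ending in `.gz` is compressed automatically, the resulting dump is ordinary gzip data and can be inspected with standard tools. A minimal sketch (the JSON content below is hypothetical, standing in for a real dump):

```shell
# A .gz metadata dump is plain gzip-compressed JSON (content here is hypothetical)
echo '{"Setting":{"Name":"myjfs"}}' | gzip > /tmp/meta-dump.json.gz
gzip -dc /tmp/meta-dump.json.gz   # prints the JSON back
```

The same applies to `juicefs load`: a `.gz` input file is decompressed transparently.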

### juicefs load {#load}

Load metadata from a previously dumped JSON file. Read "Metadata recovery and migration" to learn more.

#### Synopsis

```shell
juicefs load [command options] META-URL [FILE]

# Import the metadata backup file meta-dump.json to the database
juicefs load redis://127.0.0.1:6379/1 meta-dump.json
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
| `FILE` | Import file path; if not specified, metadata is imported from standard input. If the filename ends with `.gz`, it will be automatically decompressed. |
| `--encrypt-rsa-key=path` <VersionAdd>1.0.4</VersionAdd> | The path to the RSA private key file used for encryption. |
| `--encrypt-alg=aes256gcm-rsa` <VersionAdd>1.0.4</VersionAdd> | Encryption algorithm, the default is `aes256gcm-rsa`. |

## Inspector {#inspector}

### juicefs status {#status}

Show status of JuiceFS.

#### Synopsis

```shell
juicefs status [command options] META-URL

juicefs status redis://localhost
```

#### Options

| Items | Description |
|---|---|
| `--session=0, -s 0` | Show detailed information (sustained inodes, locks) of the specified session (SID) (default: 0). |
| `--more, -m` <VersionAdd>1.1</VersionAdd> | Show more statistics, which may take a long time (default: false). |

### juicefs stats {#stats}

Show runtime statistics; read Real-time performance monitoring for more.

#### Synopsis

```shell
juicefs stats [command options] MOUNTPOINT

juicefs stats /mnt/jfs

# More metrics
juicefs stats /mnt/jfs -l 1
```

#### Options

| Items | Description |
|---|---|
| `--schema=ufmco` | Schema string that controls the output sections (`u`: usage, `f`: FUSE, `m`: metadata, `c`: block cache, `o`: object storage, `g`: Go) (default: `ufmco`). |
| `--interval=1` | Interval in seconds between each update (default: 1). |
| `--verbosity=0` | Verbosity level; 0 or 1 is enough for most cases (default: 0). |

### juicefs profile {#profile}

Show profiling of operations completed in JuiceFS, based on the access log. Read Real-time performance monitoring for more.

#### Synopsis

```shell
juicefs profile [command options] MOUNTPOINT/LOGFILE

# Monitor real time operations
juicefs profile /mnt/jfs

# Replay an access log
cat /mnt/jfs/.accesslog > /tmp/jfs.alog
# Press Ctrl-C to stop the "cat" command after some time
juicefs profile /tmp/jfs.alog

# Analyze an access log and print the total statistics immediately
juicefs profile /tmp/jfs.alog --interval 0
```

#### Options

| Items | Description |
|---|---|
| `--uid=value, -u value` | Only track specified UIDs (separated by comma). |
| `--gid=value, -g value` | Only track specified GIDs (separated by comma). |
| `--pid=value, -p value` | Only track specified PIDs (separated by comma). |
| `--interval=2` | Flush interval in seconds; set it to 0 when replaying a log file to get an immediate result (default: 2). |

### juicefs info {#info}

Show internal information for given paths or inodes.

#### Synopsis

```shell
juicefs info [command options] PATH or INODE

# Check a path
juicefs info /mnt/jfs/foo

# Check an inode
cd /mnt/jfs
juicefs info -i 100
```

#### Options

| Items | Description |
|---|---|
| `--inode, -i` | Use inode instead of path (current dir should be inside JuiceFS) (default: false). |
| `--recursive, -r` | Get summary of directories recursively (NOTE: it may take a long time for huge trees) (default: false). |
| `--strict` <VersionAdd>1.1</VersionAdd> | Get accurate summary of directories (NOTE: it may take a long time for huge trees) (default: false). |
| `--raw` | Show internal raw information (default: false). |

### juicefs debug <VersionAdd>1.1</VersionAdd> {#debug}

Collect and display information from multiple dimensions, such as the operating environment and system logs, to help locate errors.

#### Synopsis

```shell
juicefs debug [command options] MOUNTPOINT

# Collect and display information about the mount point /mnt/jfs
juicefs debug /mnt/jfs

# Specify the output directory as /var/log
juicefs debug --out-dir=/var/log /mnt/jfs

# Get the last up to 1000 log entries
juicefs debug --out-dir=/var/log --limit=1000 /mnt/jfs
```

#### Options

| Items | Description |
|---|---|
| `--out-dir=./debug/` | The output directory for results; automatically created if it does not exist (default: `./debug/`). |
| `--limit=value` | The number of log entries collected, from newest to oldest; if not specified, all entries will be collected. |
| `--stats-sec=5` | The number of seconds to sample the `.stats` file (default: 5). |
| `--trace-sec=5` | The number of seconds to sample trace metrics (default: 5). |
| `--profile-sec=30` | The number of seconds to sample profile metrics (default: 30). |

### juicefs summary <VersionAdd>1.1</VersionAdd> {#summary}

Show a tree summary of the target directory.

#### Synopsis

```shell
juicefs summary [command options] PATH

# Show with path
juicefs summary /mnt/jfs/foo

# Show max depth of 5
juicefs summary --depth 5 /mnt/jfs/foo

# Show top 20 entries
juicefs summary --entries 20 /mnt/jfs/foo

# Show accurate result
juicefs summary --strict /mnt/jfs/foo
```

#### Options

| Items | Description |
|---|---|
| `--depth value, -d value` | Depth of tree to show (zero means only show root) (default: 2). |
| `--entries value, -e value` | Show top N entries (sorted by size) (default: 10). |
| `--strict` | Show accurate summary, including directories and files (may be slow) (default: false). |
| `--csv` | Print summary in CSV format (default: false). |

## Service {#service}

### juicefs mount {#mount}

Mount a volume. The volume must be formatted in advance.

JuiceFS can be mounted by root or a normal user, but due to their privilege differences, the cache directory and log path will differ; read the descriptions below for more.

#### Synopsis

```shell
juicefs mount [command options] META-URL MOUNTPOINT

# Mount in foreground
juicefs mount redis://localhost /mnt/jfs

# Mount in background with password protected Redis
juicefs mount redis://:mypassword@localhost /mnt/jfs -d
# A safer alternative
META_PASSWORD=mypassword juicefs mount redis://localhost /mnt/jfs -d

# Mount with a sub-directory as root
juicefs mount redis://localhost /mnt/jfs --subdir /dir/in/jfs

# Enable "writeback" mode, which improves performance at the risk of losing objects
juicefs mount redis://localhost /mnt/jfs -d --writeback

# Enable "read-only" mode
juicefs mount redis://localhost /mnt/jfs -d --read-only

# Disable metadata backup
juicefs mount redis://localhost /mnt/jfs --backup-meta 0
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
| `MOUNTPOINT` | File system mount point, e.g. `/mnt/jfs`, `Z:`. |
| `-d, --background` | Run in background (default: false). |
| `--no-syslog` | Disable syslog (default: false). |
| `--log=path` | Path of log file when running in background (default: `$HOME/.juicefs/juicefs.log` or `/var/log/juicefs.log`). |
| `--force` | Force to mount even if the mount point is already mounted by the same file system. |
| `--update-fstab` <VersionAdd>1.1</VersionAdd> | Add / update entry in `/etc/fstab`; will create a symlink from `/sbin/mount.juicefs` to the JuiceFS executable if it does not exist (default: false). |
| `--disable-transparent-hugepage` <VersionAdd>1.3</VersionAdd> | Disable the kernel's Transparent Huge Page (THP). In situations like memory pressure, keeping THP enabled may cause processes to hang. (default: false) |

#### FUSE related options {#mount-fuse-options}

| Items | Description |
|---|---|
| `--enable-xattr` | Enable extended attributes (xattr) (default: false). |
| `--enable-cap` <VersionAdd>1.3</VersionAdd> | Enable `security.capability` xattr (default: false). |
| `--enable-selinux` <VersionAdd>1.3</VersionAdd> | Enable `security.selinux` xattr (default: false). |
| `--enable-ioctl` <VersionAdd>1.1</VersionAdd> | Enable ioctl (supports GETFLAGS/SETFLAGS only) (default: false). |
| `--root-squash value` <VersionAdd>1.1</VersionAdd> | Map the local root user (UID = 0) to another one specified as UID:GID. |
| `--all-squash value` <VersionAdd>1.3</VersionAdd> | Map all users to another one specified as UID:GID. |
| `--umask value` <VersionAdd>1.3</VersionAdd> | Umask for new files and directories in octal. |
| `--prefix-internal` <VersionAdd>1.1</VersionAdd> | Add the '.jfs' prefix to all internal files (default: false). |
| `--max-fuse-io=128K` <VersionAdd>1.3</VersionAdd> | Maximum size for FUSE requests (default: 128K). |
| `-o value` | Other FUSE options, see FUSE Mount Options. |
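The `--umask` option follows standard POSIX umask arithmetic (generic POSIX behavior, not JuiceFS-specific): bits set in the umask are cleared from the requested creation mode. For example, a file requested with mode 0666 under umask 022 is created as 0644:

```shell
# Standard POSIX umask arithmetic: requested mode 0666 with umask 022 yields 0644
printf '%04o\n' $(( 0666 & ~0022 ))
```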
<CommonOptions />

<!-- Note: The purpose of the following HTML is only to avoid reporting errors when checking for broken links (because these headers are in the "_common_options.mdx" file), and will not be displayed on the actual page. Please do not delete or move it (must be placed below the "<CommonOptions />" line). -->

<div style={{ display: 'none' }}>

{#mount-metadata-options}

{#mount-metadata-cache-options}

{#mount-data-storage-options}

{#mount-data-cache-options}

{#mount-metrics-options}

</div>

### juicefs umount {#umount}

Unmount a volume.

#### Synopsis

```shell
juicefs umount [command options] MOUNTPOINT

juicefs umount /mnt/jfs
```

#### Options

| Items | Description |
|---|---|
| `-f, --force` | Force unmount a busy mount point (default: false). |
| `--flush` <VersionAdd>1.1</VersionAdd> | Wait for all staging chunks to be flushed (default: false). |

### juicefs gateway {#gateway}

Start an S3-compatible gateway, read Deploy JuiceFS S3 Gateway for more.

#### Synopsis

```shell
juicefs gateway [command options] META-URL ADDRESS

export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678
juicefs gateway redis://localhost localhost:9000
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
| `ADDRESS` | S3 gateway address and listening port, for example: `localhost:9000`. |
| `--log value` <VersionAdd>1.2</VersionAdd> | Path for gateway log. |
| `--access-log=path` | Path for JuiceFS access log. |
| `--background, -d` <VersionAdd>1.2</VersionAdd> | Run in background (default: false). |
| `--no-banner` | Disable MinIO startup information (default: false). |
| `--multi-buckets` | Use top level of directories as buckets (default: false). |
| `--keep-etag` | Save the ETag for uploaded objects (default: false). |
| `--umask=022` | Umask for new files and directories in octal (default: 022). |
| `--object-tag` <VersionAdd>1.2</VersionAdd> | Enable object tagging API. |
| `--domain value` <VersionAdd>1.2</VersionAdd> | Domain for virtual-host-style requests. |
| `--refresh-iam-interval=5m` <VersionAdd>1.2</VersionAdd> | Interval to reload gateway IAM from configuration (default: 5m). |
<CommonOptions />

### juicefs webdav {#webdav}

Start a WebDAV server, refer to Deploy WebDAV Server for more.

#### Synopsis

```shell
juicefs webdav [command options] META-URL ADDRESS

juicefs webdav redis://localhost localhost:9007
```

#### Options

| Items | Description |
|---|---|
| `META-URL` | Database URL for metadata storage, see JuiceFS supported metadata engines for details. |
| `ADDRESS` | WebDAV address and listening port, for example: `localhost:9007`. |
| `--cert-file` <VersionAdd>1.1</VersionAdd> | Certificate file for HTTPS. |
| `--key-file` <VersionAdd>1.1</VersionAdd> | Key file for HTTPS. |
| `--gzip` | Compress served files via gzip (default: false). |
| `--disallowList` | Disallow listing a directory (default: false). |
| `--enable-proppatch` <VersionAdd>1.3</VersionAdd> | Enable PROPPATCH method support. |
| `--log value` <VersionAdd>1.2</VersionAdd> | Path for WebDAV log. |
| `--access-log=path` | Path for JuiceFS access log. |
| `--background, -d` <VersionAdd>1.2</VersionAdd> | Run in background (default: false). |
| `--threads=50, -p 50` <VersionAdd>1.3</VersionAdd> | Number of threads for delete jobs (max 255). |
<CommonOptions />

## Tool {#tool}

### juicefs bench {#bench}

Run benchmarks, including read/write/stat tests for big and small files. For a detailed introduction to the bench subcommand, refer to the documentation.

#### Synopsis

```shell
juicefs bench [command options] PATH

# Run benchmarks with 4 threads
juicefs bench /mnt/jfs -p 4

# Run benchmarks of only small files
juicefs bench /mnt/jfs --big-file-size 0
```

#### Options

| Items | Description |
|---|---|
| `--block-size=1` | Block size in MiB (default: 1). |
| `--big-file-size=1024` | Size of big file in MiB (default: 1024). |
| `--small-file-size=128` | Size of small file in KiB (default: 128). |
| `--small-file-count=100` | Number of small files (default: 100). |
| `--threads=1, -p 1` | Number of concurrent threads (default: 1). |

### juicefs objbench {#objbench}

Run basic benchmarks on the target object storage to test if it works as expected. Read the documentation for more.

#### Synopsis

```shell
juicefs objbench [command options] BUCKET

# Run benchmarks on S3
ACCESS_KEY=myAccessKey SECRET_KEY=mySecretKey juicefs objbench --storage=s3 https://mybucket.s3.us-east-2.amazonaws.com -p 6
```

#### Options

| Items | Description |
|---|---|
| `--storage=file` | Object storage type (e.g. `s3`, `gs`, `oss`, `cos`) (default: `file`, refer to documentation for all supported object storage types). |
| `--access-key=value` | Access Key for object storage (can also be set via the environment variable `ACCESS_KEY`), see How to Set Up Object Storage for more. |
| `--secret-key value` | Secret Key for object storage (can also be set via the environment variable `SECRET_KEY`), see How to Set Up Object Storage for more. |
| `--session-token value` <VersionAdd>1.0</VersionAdd> | Session token for object storage. |
| `--shards` <VersionAdd>1.3</VersionAdd> | If your object storage limits speed at the bucket level (or you're using a self-hosted object storage with limited performance), you can store the blocks into N buckets by hash of key (default: 0). When N is greater than 0, the bucket should be in the form of `%d`, e.g. `--bucket "juicefs-%d"`. `--shards` cannot be changed afterwards and must be planned carefully ahead. |
| `--block-size=4096` | Size of each IO block in KiB (default: 4096). |
| `--big-object-size=1024` | Size of each big object in MiB (default: 1024). |
| `--small-object-size=128` | Size of each small object in KiB (default: 128). |
| `--small-objects=100` | Number of small objects (default: 100). |
| `--skip-functional-tests` | Skip functional tests (default: false). |
| `--threads=4, -p 4` | Number of concurrent threads (default: 4). |

### juicefs warmup {#warmup}

Download data to the local cache in advance, to achieve better performance on the application's first read. You can specify a mount point path to recursively warm up all files under it, or use the `--file` option to pass a file listing only the files to warm up.

If the files to be warmed up reside in many different directories, list their names in a text file and pass it to the warmup command with the `--file` option. This allows `juicefs warmup` to download concurrently, which is significantly faster than calling `juicefs warmup` multiple times with a single file each.

#### Synopsis

```shell
juicefs warmup [command options] [PATH ...]

# Warm up all files in datadir
juicefs warmup /mnt/jfs/datadir

# Warm up selected files
echo '/jfs/f1
/jfs/f2
/jfs/f3' > /tmp/filelist.txt
juicefs warmup -f /tmp/filelist.txt
```

#### Options

| Items | Description |
|---|---|
| `--file=path, -f path` | File containing a list of paths (each line is a file path). |
| `--threads=50, -p 50` | Number of concurrent workers, default to 50. Reduce this number in low bandwidth environments to avoid download timeouts. |
| `--background, -b` | Run in background (default: false). |
| `--evict` <VersionAdd>1.2</VersionAdd> | Evict cached blocks. |
| `--check` <VersionAdd>1.2</VersionAdd> | Check whether the data blocks are cached or not. |

### juicefs rmr {#rmr}

Remove all files and subdirectories, similar to `rm -rf`, except this command deals with metadata directly (bypassing the kernel) and is thus much faster.

If trash is enabled, deleted files are moved into trash. Read more at Trash.

#### Synopsis

```shell
juicefs rmr PATH ...

juicefs rmr /mnt/jfs/foo
```

#### Options

| Items | Description |
|---|---|
| `--skip-trash` <VersionAdd>1.3</VersionAdd> | Skip trash and delete files directly (requires root). |
| `--threads=50, -p 50` <VersionAdd>1.3</VersionAdd> | Number of threads for delete jobs (max 255). |

### juicefs sync {#sync}

Sync between two storage systems, read Data migration for more.

#### Synopsis

```shell
juicefs sync [command options] SRC DST

# Sync object from OSS to S3
juicefs sync oss://mybucket.oss-cn-shanghai.aliyuncs.com s3://mybucket.s3.us-east-2.amazonaws.com

# Sync objects from S3 to JuiceFS
juicefs sync s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

# SRC: a1/b1,a2/b2,aaa/b1   DST: empty   sync result: aaa/b1
juicefs sync --exclude='a?/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

# SRC: a1/b1,a2/b2,aaa/b1   DST: empty   sync result: a1/b1,aaa/b1
juicefs sync --include='a1/b1' --exclude='a[1-9]/b*' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/

# SRC: a1/b1,a2/b2,aaa/b1,b1,b2  DST: empty   sync result: b2
juicefs sync --include='a1/b1' --exclude='a*' --include='b2' --exclude='b?' s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
```

As shown in the examples, the format of both source (SRC) and destination (DST) paths is:

```
[NAME://][ACCESS_KEY:SECRET_KEY[:TOKEN]@]BUCKET[.ENDPOINT][/PREFIX]
```

In which:

- `NAME`: JuiceFS supported data storage types like `s3`, `oss`; refer to this document for a full list.
- `ACCESS_KEY` and `SECRET_KEY`: The credentials required to access the data storage. Special characters need to be URL encoded, e.g. `/` must be substituted with `%2F`. If you are not familiar with AKSK management, refer to this document.
- `TOKEN`: The token used to access the object storage, as some object storage services support temporary tokens that grant permission for a limited time.
- `BUCKET[.ENDPOINT]`: The access address of the data storage service. The format may differ between storage types; refer to the document.
- `[/PREFIX]`: Optional. A prefix for the source and destination paths, which can be used to limit synchronization to data under certain paths.
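As a sketch of the URL-encoding rule above, a secret key containing `/` can be encoded with Python's standard library before being embedded in the address (the credentials and bucket below are hypothetical):

```shell
# URL-encode a hypothetical secret key containing '/'
ENCODED=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'wJalrXUtnFEMI/K7MDENG')
echo "$ENCODED"   # wJalrXUtnFEMI%2FK7MDENG

# Embed the encoded key in the source address
juicefs sync "s3://AKIAIOSFODNN7EXAMPLE:${ENCODED}@mybucket.s3.us-east-2.amazonaws.com/" jfs://META-URL/
```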

### Selection related options {#sync-selection-related-options}

|Items|Description|
|-|-|
|`--files-from` <VersionAdd>1.3</VersionAdd>|Only synchronize the objects recorded in the given file, where each line is the relative path of an object to be synchronized. If the object is a directory, it is recommended to end the line with `/`.|
|`--start=KEY, -s KEY, --end=KEY, -e KEY`|Provide object storage key range for syncing.|
|`--exclude=PATTERN`|Exclude keys matching PATTERN. Refer to the "Filtering" document to learn how to use it.|
|`--include=PATTERN`|Include keys matching PATTERN, needs to be used with `--exclude`. Refer to the "Filtering" document to learn how to use it.|
|`--match-full-path` <VersionAdd>1.2</VersionAdd>|Use "Full path filtering mode", default is false. Refer to the "Filtering modes" document to learn how to use it.|
|`--max-size=SIZE` <VersionAdd>1.2</VersionAdd>|Skip files larger than SIZE.|
|`--min-size=SIZE` <VersionAdd>1.2</VersionAdd>|Skip files smaller than SIZE.|
|`--max-age=DURATION` <VersionAdd>1.2</VersionAdd>|Skip files whose last modification time exceeds DURATION, in seconds. For example, `--max-age=3600` means to synchronize only files that have been modified within 1 hour.|
|`--min-age=DURATION` <VersionAdd>1.2</VersionAdd>|Skip files whose last modification time is no more than DURATION, in seconds. For example, `--min-age=3600` means to synchronize only files whose last modification time is more than 1 hour before the current time.|
|`--start-time` <VersionAdd>1.3</VersionAdd>|Skip files modified before start-time, e.g. `2006-01-02 15:04:05`.|
|`--end-time` <VersionAdd>1.3</VersionAdd>|Skip files modified after end-time, e.g. `2006-01-02 15:04:05`.|
|`--limit=-1`|Limit the number of objects that will be processed, default to -1 which means unlimited.|
|`--update, -u`|Update existing files if the source files' mtime is newer, default to false.|
|`--force-update, -f`|Always update existing files, default to false.|
|`--existing, --ignore-non-existing` <VersionAdd>1.1</VersionAdd>|Skip creating new files on destination, default to false.|
|`--ignore-existing` <VersionAdd>1.1</VersionAdd>|Skip updating files that already exist on destination, default to false.|
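For example, `--files-from` takes a plain text file of relative object paths, one per line, with directory entries ending in `/` (the object names below are hypothetical):

```shell
# One relative object path per line; directory entries end with /
printf 'a1/b1\nreports/\n' > /tmp/objlist.txt
juicefs sync --files-from=/tmp/objlist.txt \
  s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
```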

### Action related options {#sync-action-related-options}

|Items|Description|
|-|-|
|`--dirs`|Sync empty directories as well.|
|`--perms`|Preserve permissions, default to false.|
|`--links, -l`|Copy symlinks as symlinks, default to false.|
|`--inplace` <VersionAdd>1.2</VersionAdd>|When a file in the source path is modified, directly modify the file with the same name in the destination path, instead of first writing a temporary file in the destination path and then atomically renaming it to the real file name. This option only makes sense when `--update` is enabled and the storage system of the destination path supports in-place modification of files (such as JuiceFS, HDFS, NFS); if the destination is object storage, enabling this option has no effect. (default: false)|
|`--delete-src, --deleteSrc`|Delete objects from the source that already exist in the destination. Different from rsync, files won't be deleted on the first run; instead they will be deleted on the next run, after the files have been successfully copied to the destination.|
|`--delete-dst, --deleteDst`|Delete extraneous objects from destination.|
|`--check-all`|Verify the integrity of all files in source and destination, default to false. Comparison is done on byte streams, which comes at a performance cost.|
|`--check-new`|Verify the integrity of newly copied files, default to false. Comparison is done on byte streams, which comes at a performance cost.|
|`--check-change` <VersionAdd>1.3</VersionAdd>|Verify whether the data has changed before and after synchronization, default to false. The comparison is based on file size and mtime, which is lightweight.|
|`--max-failure` <VersionAdd>1.3</VersionAdd>|Max number of allowed failed files (-1 for unlimited).|
|`--dry`|Don't actually copy any file.|

### Storage related options {#sync-storage-related-options}

|Items|Description|
|-|-|
|`--threads=10, -p 10`|Number of concurrent threads, default to 10.|
|`--list-threads=1` <VersionAdd>1.1</VersionAdd>|Number of list threads, default to 1. Read concurrent list to learn its usage.|
|`--list-depth=1` <VersionAdd>1.1</VersionAdd>|Depth of concurrent list operation, default to 1. Read concurrent list to learn its usage.|
|`--no-https`|Do not use HTTPS, default to false.|
|`--storage-class value` <VersionAdd>1.1</VersionAdd>|The storage class for the destination.|
|`--bwlimit=0`|Limit bandwidth in Mbps, default to 0 which means unlimited.|

### Cluster related options {#sync-cluster-related-options}

|Items|Description|
|-|-|
|`--manager-addr=ADDR`|The listening address of the Manager node in distributed synchronization mode, in the format `<IP>:[port]`. If this option is omitted, it listens on a random local IPv4 address and a random port.|
|`--worker=ADDR,ADDR`|Worker node addresses used in distributed syncing, comma separated.|
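As a minimal sketch, a distributed sync run lists its worker nodes in `--worker`, assuming the manager can reach them over passwordless SSH (the addresses below are hypothetical):

```shell
# The manager dispatches sync jobs to the listed worker nodes;
# addresses are hypothetical examples
WORKERS=user@192.168.1.20,user@192.168.8.10
juicefs sync --worker "$WORKERS" --threads=10 \
  s3://mybucket.s3.us-east-2.amazonaws.com/ jfs://META-URL/
```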

### Metrics related options {#sync-metircs-related-options}

|Items|Description|
|-|-|
|`--metrics value` <VersionAdd>1.2</VersionAdd>|Address to export metrics (default: "127.0.0.1:9567").|
|`--consul value` <VersionAdd>1.2</VersionAdd>|Consul address to register (default: "127.0.0.1:8500").|

## juicefs clone <VersionAdd>1.1</VersionAdd> {#clone}

Quickly clone directories or files within a single JuiceFS mount point. The cloning process involves copying only the metadata without copying the data blocks, making it extremely fast. Read Clone Files or Directories for more.

### Synopsis

```shell
juicefs clone [command options] SRC DST

# Clone a file
juicefs clone /mnt/jfs/file1 /mnt/jfs/file2

# Clone a directory
juicefs clone /mnt/jfs/dir1 /mnt/jfs/dir2

# Preserve the UID, GID, and mode of the file
juicefs clone -p /mnt/jfs/file1 /mnt/jfs/file2
```

### Options

|Items|Description|
|-|-|
|`--preserve, -p`|By default, the executor's UID and GID are used for the clone result, and the mode is recalculated based on the user's umask. Use this option to preserve the UID, GID, and mode of the original file.|

## juicefs compact <VersionAdd>1.2</VersionAdd> {#compact}

Performs fragmentation optimization: merges or cleans up non-contiguous slices in the given directory to improve read performance. For detailed information, refer to Status Check and Maintenance.

### Synopsis

```shell
juicefs compact [command options] PATH

# Perform fragmentation optimization on the specified directory
juicefs compact /mnt/jfs
```

### Options

|Items|Description|
|-|-|
|`--threads, -p`|Number of threads to concurrently execute tasks (default: 10)|