docs/en/faq.md
If you can't find an answer in the documentation, try the "Ask AI" feature (in the bottom right corner). Whether the AI assistant's answer helps you or turns out to be wrong, feel free to leave feedback on the response. Alternatively, use the document search feature (in the top right corner) and try different keywords.
If these methods still do not resolve your question, you can join the JuiceFS Community for further assistance.
See "Comparison with Others" for more information.
First unmount the JuiceFS volume, then re-mount it with the newer version of the client.
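For the community edition, the flow can be sketched as follows; the mount point /mnt/jfs and the Redis metadata URL below are placeholders for your actual setup:

```shell
# 1. Stop applications using the mount point, then unmount the volume
juicefs umount /mnt/jfs

# 2. Install the new client binary (e.g. replace /usr/local/bin/juicefs),
#    then re-mount with the new version
juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs
```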
Different types of JuiceFS clients obtain logs in different ways. For details, please refer to the "Client log" document.
JuiceFS cannot directly read files that already exist in object storage. Although JuiceFS typically uses object storage as the data storage layer, it is not a tool for accessing object storage in the traditional sense. You can refer to the technical architecture documentation for more details.
If you want to migrate existing data in an object storage bucket to JuiceFS, you can use JuiceFS Sync.
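As a sketch, assuming an S3 bucket and an already-mounted JuiceFS volume at /mnt/jfs (bucket name, region, credentials, and paths are all placeholders), existing objects can be copied in with juicefs sync:

```shell
# Copy existing objects from an S3 bucket into a mounted JuiceFS volume.
# Trailing slashes mean "sync the contents of the directory".
juicefs sync \
  s3://ACCESS_KEY:SECRET_KEY@mybucket.s3.us-east-1.amazonaws.com/data/ \
  /mnt/jfs/data/
```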
No. While JuiceFS supports using local disks or SFTP as the underlying storage, it does not manage the logical structure of that storage and cannot aggregate the storage space of multiple servers by itself. If you wish to consolidate storage space from multiple servers, you may consider using MinIO or Ceph to create an object storage cluster, and then create a JuiceFS file system on top of it.
Yes. There is also a best practice document on using Redis as the JuiceFS metadata engine for reference.
JuiceFS already supports many object storage services; please check the list first. If your object storage is S3-compatible, you can treat it as S3. Otherwise, try reporting an issue.
The first reason is that you may have the trash feature enabled. To ensure data security, trash is enabled by default: deleted files are moved into the trash rather than removed immediately, so the object storage usage does not change. The trash retention time can be specified with juicefs format or modified with juicefs config. Please refer to the "Trash" documentation for more information.
The second reason is that JuiceFS deletes data in the object storage asynchronously, so the space usage of the object storage shrinks more slowly. If you need to immediately clean up data pending deletion in the object storage, you can try running the juicefs gc command.
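The two cases can be inspected and handled from the command line; the Redis metadata URL below is a placeholder:

```shell
# Show the current settings of the file system, including trash-days
juicefs config redis://127.0.0.1:6379/1

# Disable trash so deletions take effect immediately (use with care)
juicefs config redis://127.0.0.1:6379/1 --trash-days 0

# Clean up objects pending deletion right away instead of waiting
# for the asynchronous background cleanup
juicefs gc --delete redis://127.0.0.1:6379/1
```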
When trash is disabled:

- If the deleted file is still open, it is marked as **"sustained"** and will be processed after the program closes the file.
- Otherwise, the file is marked as deleted (delfile) and the client attempts to place it into the deletion queue (limited by maxDeleting).

When trash is enabled:

- The deleted file is first moved into the trash (organized into hourly directories such as 2024-01-15-14). After the retention period expires, it is marked as deleted (delfile) and placed into the deletion queue (maxDeleting).

Deletion queue processing (asynchronous cleanup):

- Slices that are no longer referenced (**pending deleted slices**) are recorded (delfile) and cleaned using the same method as files in the deletion queue. Compaction (compact) decreases the reference count of related slices, while clone and copyFileRange increase it. This cleanup can also be triggered manually with juicefs gc --compact --delete.

In addition, if compression is enabled (the --compress parameter in the format command, disabled by default), object storage usage may be smaller than the actual file size (depending on the compression ratio of different types of files).

As for using a directory in object storage as the value of the --bucket option: as of the release of JuiceFS 1.0, this feature is not supported.
No. However, you can associate multiple buckets of the same object storage service with a file system when creating it, which works around per-bucket limits on the number of objects; for example, multiple S3 buckets can be associated with a single file system. Please refer to the --shards option for details.
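A sketch of creating a sharded file system; the endpoint, credentials, metadata URL, and file system name are placeholders:

```shell
# Spread the data of one file system over 4 buckets; the actual bucket
# names are derived from the --bucket value plus a numeric suffix.
juicefs format --storage s3 \
  --shards 4 \
  --bucket https://myjfs.s3.us-east-1.amazonaws.com \
  --access-key ACCESS_KEY --secret-key SECRET_KEY \
  redis://127.0.0.1:6379/1 myjfs
```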
JuiceFS is a distributed file system. Metadata latency is determined by 1 (read) or 2 (write) round trip(s) between the client and the metadata service (usually 1-3 ms), and time-to-first-byte latency is determined by the performance of the underlying object storage (usually 20-100 ms). Sequential read/write throughput can range from 50 MiB/s to 2800 MiB/s (see the fio benchmark for more information), depending on network bandwidth and how well the data compresses.
JuiceFS is built with multiple layers of caching (invalidated automatically). Once the caches are warmed up, the latency and throughput of JuiceFS can be close to that of a local file system (plus the FUSE overhead).
Yes, including reads and writes issued through mmap. Currently JuiceFS is optimized for sequential reads/writes; optimization for random reads/writes is a work in progress. If you want better random read performance, it is best to turn off compression (--compress none).
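Note that compression is chosen at format time, not at mount time; the metadata URL and file system name below are placeholders:

```shell
# Create the file system without compression (this is also the default)
juicefs format --compress none redis://127.0.0.1:6379/1 myjfs
```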
JuiceFS does not store the original file in the object storage as-is. Instead, it splits the file into fixed-size data blocks (4 MiB by default), uploads them to the object storage, and stores the IDs of the data blocks in the metadata engine. When a random write happens, the original metadata is marked stale, the JuiceFS client uploads the new data block to the object storage, and then updates the metadata accordingly.
When reading the overwritten range, the latest metadata points to the new data block uploaded during the random write, and the old data blocks may be automatically cleaned up by background garbage collection tasks. This shifts complexity from random writes to reads.
Read JuiceFS Internals and Data Processing Flow to learn more.
Use the --writeback option to write data to the local cache and then asynchronously upload it to the object storage backend. This is significantly faster than writing directly to object storage. See "Write Cache in Client" for more information.
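A minimal sketch; the metadata URL and mount point are placeholders:

```shell
# Mount with the client write cache enabled. Note: until the asynchronous
# upload finishes, the data exists only on the local disk.
juicefs mount -d --writeback redis://127.0.0.1:6379/1 /mnt/jfs
```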
Distributed cache is supported in our enterprise edition.
Yes, JuiceFS can be mounted with the juicefs binary without root privileges. The default directory for caching is $HOME/.juicefs/cache (macOS) or /var/jfsCache (Linux); the latter requires root, so you should change it to a directory you have write permission for.
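For example (metadata URL and paths are placeholders):

```shell
# Mount as a non-root user, pointing the cache at a writable directory
juicefs mount -d \
  --cache-dir ~/.juicefs/cache \
  redis://127.0.0.1:6379/1 ~/jfs
```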
See "Read Cache in Client" for more information.
In addition to mounting, the following methods are also supported:

- Kubernetes CSI Driver
- Docker volume plugin
- S3 gateway
- WebDAV server
- Hadoop Java SDK
Although a user may have the same username on both hosts (for example, alice on host X and host Y), their user ID (UID) or group ID (GID) can differ between them. File permissions in JuiceFS are based on these numeric IDs, not on the username.
To confirm this, run the id command on each host and compare the output:
```shell
$ id alice
uid=1201(alice) gid=500(staff) groups=500(staff)
```
See Sync Accounts between Multiple Hosts to resolve this issue.
The built-in gateway subcommand does not support advanced functions such as multi-user management; it provides only basic S3 gateway features. If you need these advanced features, please refer to the documentation.
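Starting the basic gateway can be sketched as follows; the credentials, metadata URL, and listen address are placeholders:

```shell
# The gateway reuses MinIO-style environment variables for its one
# built-in credential pair (the password must be at least 8 characters)
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=12345678

# Serve the file system over the S3 API on port 9000
juicefs gateway redis://127.0.0.1:6379/1 localhost:9000
```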
As of the release of JuiceFS 1.0, the community has two SDKs: a Java SDK officially maintained by Juicedata, which is highly compatible with the HDFS interface, and a Python SDK maintained by community users.