Create a Ceph file system
=========================
A Ceph file system requires at least two RADOS pools, one for data and one
for metadata. There are important considerations when planning these pools:

- Data loss in the metadata pool can render the entire file system
  inaccessible, so use a sufficiently redundant and reliable configuration
  for it.
- We recommend fast, low-latency storage (such as SSD or NVMe OSDs) for the
  metadata pool, as this directly affects the observed latency of client
  file system operations. See :ref:`device_classes` to configure pools this
  way.
- Additional erasure-coded data pools can be added (see also :ref:`ecpool`)
  that can be used on an entire hierarchy of directories and files (see also
  :ref:`file-layouts`).

Refer to :doc:`/rados/operations/pools` to learn more about managing pools.
For example, to create two pools with default settings for use with a file
system, you might run the following commands:
.. code:: bash

    $ ceph osd pool create cephfs_data
    $ ceph osd pool create cephfs_metadata
The metadata pool will typically hold at most a few gigabytes of data. For
this reason, a smaller PG count is usually recommended; 64 or 128 PGs are
commonly used in practice for large clusters.
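For instance, the metadata pool from the example above could instead be
created with an explicit PG count (64 here, one of the commonly used values;
adjust for your cluster size):

.. code:: bash

    $ ceph osd pool create cephfs_metadata 64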
.. note:: The names of the file systems, metadata pools, and data pools can
   only have characters in the set ``[a-zA-Z0-9_-.]``.
Once the pools are created, you may enable the file system using the
``fs new`` command:
.. code:: bash

    $ ceph fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay] [<fscid:int>] [--recover] [--yes-i-really-really-mean-it] [<set>...]
This command creates a new file system with the specified metadata and data
pools. The specified data pool is the default data pool and cannot be changed
once set. Each file system has its own set of MDS daemons assigned to ranks,
so ensure that you have sufficient standby daemons available to accommodate
the new file system.
.. note::

    ``--yes-i-really-really-mean-it`` may be used for some ``fs set`` commands
The ``--force`` option is used to achieve any of the following:

- To set an erasure-coded pool as the default data pool. Using an
  erasure-coded pool as the default data pool is discouraged; see
  `Creating pools`_ for details.
- To set a non-empty pool (a pool that already contains some objects) as the
  metadata pool.
- To create a file system with a specific file system's ID (fscid).

The ``--allow-dangerous-metadata-overlay`` option permits reuse of the
metadata and data pools if they are already in use. This should only be done
in emergencies, and only after careful reading of the documentation.
If the ``--fscid`` option is provided, this creates a file system with a
specific fscid. This can be used when an application expects the file
system's ID to be stable after it has been recovered, e.g., after Monitor
databases are lost and rebuilt. Consequently, file system IDs don't always
keep increasing with newer file systems.
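As a sketch of such an invocation (the fscid value ``27`` is purely
hypothetical, and ``--force`` is typically required when requesting a
specific fscid):

.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data --fscid 27 --force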
The ``--recover`` option sets the state of the file system's rank 0 to
existing but failed, so that when an MDS daemon eventually picks up rank 0,
the daemon reads the existing in-RADOS metadata and doesn't overwrite it. The
flag also prevents standby MDS daemons from joining the file system.
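In a disaster-recovery scenario, the flags above might be combined as
follows (pool names follow the earlier examples; this is a sketch of one
step, not a complete recovery procedure):

.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data --force --recover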
The ``set`` option allows setting multiple options supported by ``fs set``
atomically with the creation of the file system.
For example:
.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data set max_mds 2 allow_standby_replay true
    $ ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
Once a file system has been created, your MDS(s) will be able to enter an active state. For example, in a single MDS system:
.. code:: bash

    $ ceph mds stat
    cephfs-1/1/1 up {0=a=up:active}
Once the file system is created and the MDS is active, you are ready to mount
the file system. If you have created more than one file system, you will
choose which to use when mounting.

- :ref:`cephfs_mount_using_kernel_driver`
- :ref:`cephfs_mount_using_fuse`
- :ref:`ceph-dokan`

If you have created more than one file system, and a client does not specify
a file system when mounting, you can control which file system they will see
by using the ``ceph fs set-default`` command.
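For example, to make ``cephfs`` the file system that clients mount when they
do not specify one:

.. code:: bash

    $ ceph fs set-default cephfs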
For information on adding further data pools to an existing file system, see
:ref:`adding-data-pool-to-file-system`.
You may use Erasure Coded pools as CephFS data pools as long as they have
overwrites enabled, which is done as follows:

.. code:: bash

    ceph osd pool set my_ec_pool allow_ec_overwrites true
Note that EC overwrites are only supported when using OSDs with the BlueStore backend.
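Putting this together, a sketch of creating an erasure-coded pool and
attaching it to a file system as an additional data pool (the pool name
``my_ec_pool`` and file system name ``cephfs`` are assumptions carried over
from the surrounding examples):

.. code:: bash

    # Create an erasure-coded pool with the default EC profile
    ceph osd pool create my_ec_pool erasure
    # EC pools must have overwrites enabled before CephFS can use them
    ceph osd pool set my_ec_pool allow_ec_overwrites true
    # Attach the pool to the file system as a secondary data pool
    ceph fs add_data_pool cephfs my_ec_pool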
If you are storing lots of small files or are frequently modifying files, you
can improve performance by enabling EC optimizations, which is done as
follows:

.. code:: bash

    ceph osd pool set my_ec_pool allow_ec_optimizations true
You may not use Erasure Coded pools as CephFS metadata pools, because CephFS metadata is stored using RADOS OMAP data structures, which EC pools cannot store.