.. _mgr-smb:

SMB Module
==========

CephFS access can be provided to clients using the `SMB protocol`_ via the
`Samba suite`_ and samba-container_ images, managed by Ceph.

The smb manager module provides an interface for deploying and controlling
clusters of Samba services as well as managing SMB shares. In the smb
manager module a cluster is a logical management unit that may map to one or
more managed Samba services; by itself a cluster may or may not be using any
high-availability mechanisms.
If the module is not already enabled on your cluster you can enable it by
running:

.. prompt:: bash #

   ceph mgr module enable smb
There are two ways of interacting with the smb module. The :ref:`imperative
method <mgr-smb-imperative>` uses commands like ``ceph smb cluster create ...``
and ``ceph smb share rm ...`` and should be very familiar to those who have
used Ceph's nfs manager module on the command line. The :ref:`declarative
method <mgr-smb-declarative>` uses the command ``ceph smb apply`` to process
"resource descriptions" specified in YAML or JSON. This method should be
familiar to those who have used Ceph orchestration with cephadm, just using
SMB-specific resource types.
.. note:: Ceph managed Samba only supports SMB2 and SMB3 versions of the
   protocol. The SMB1 version of the protocol, sometimes known as CIFS, is
   not supported. Some systems, such as the Linux kernel, provide tooling for
   both SMB1 and SMB2+ under the CIFS moniker. Check the documentation of the
   software packages used to ensure they support SMB2+ regardless of how the
   tool is named.
.. note::
   At this time, the smb module requires cephadm orchestration. It
   does not function without orchestration.
.. _SMB protocol: https://en.wikipedia.org/wiki/Server_Message_Block
.. _Samba suite: https://samba.org
.. _samba-container: https://github.com/samba-in-kubernetes/samba-container
.. _mgr-smb-imperative:

Cluster Commands
----------------

Create Cluster
++++++++++++++

.. prompt:: bash #

   ceph smb cluster create <cluster_id> {user|active-directory} [--domain-realm=<domain_realm>] [--domain-join-user-pass=<domain_join_user_pass>] [--define-user-pass=<define_user_pass>] [--custom-dns=<custom_dns>] [--placement=<placement>] [--clustering=<clustering>] [--password-filter=<password_filter>] [--password-filter-out=<password_filter_out>]
Create a new logical cluster, identified by the cluster ID value. The cluster
create command must specify the authentication mode the cluster will use.
This may be one of:

* ``user``
* ``active-directory``

Options:
cluster_id
  A short string uniquely identifying the cluster

auth_mode
  One of ``user`` or ``active-directory``

domain_realm
  Required for ``active-directory`` clusters and ignored by ``user``
  clusters. The domain/realm value identifying the AD domain

domain_join_user_pass
  Required for ``active-directory`` clusters and ignored by ``user``
  clusters. A string in the form ``<username>%<password>`` that will be used
  to join Samba servers to the AD domain

define_user_pass
  Optional. Ignored by ``active-directory`` clusters.
  A string of the form ``<username>%<password>`` that will be used for
  authentication in ``user`` auth_mode. Can be specified multiple times to
  define more than one user

custom_dns
  Optional. Can be specified multiple times. One or more IP addresses that
  will be applied to the Samba containers to override the default DNS
  resolver(s). This option is intended to be used when the host Ceph node is
  not configured to resolve DNS entries within AD domain(s)
placement
  Optional. A Ceph orchestration :ref:`placement specifier
  <orchestrator-cli-placement-spec>`

clustering
  Optional. Control if a cluster abstraction actually uses Samba's clustering
  mechanism. The value may be one of ``default``, ``always``, or ``never``.
  A ``default`` value indicates that clustering should be enabled if the
  placement count value is any value other than 1. A value of ``always``
  enables clustering regardless of the placement count. A value of ``never``
  disables clustering regardless of the placement count. If unspecified,
  ``default`` is assumed
public_addrs
  Optional. A string in the form of
  ``<ipaddress/prefixlength>[%<destination address>]``.
  Supported only when using Samba's clustering. Assign "virtual" IP addresses
  that will be managed by the clustering subsystem and may automatically move
  between nodes running Samba containers. Can be specified multiple times to
  assign more than one public address to the SMB cluster

password_filter
  Optional. One of ``none`` or ``base64``. If the filter is ``none`` the
  password values on the command line are assumed to be plain text. If the
  filter is ``base64`` the password values are assumed to be obscured with
  base64 encoding of the string. If ``--password-filter-out`` is not
  specified this filter will also be applied to the output

password_filter_out
  Optional. One of ``none``, ``base64``, or ``hidden``. If the filter is
  ``none`` the password fields in the output are emitted as plain text. If
  the filter is ``base64`` password fields will be obscured by base64
  encoding the string. If the filter is ``hidden`` the password values will
  be replaced by an invalid generic replacement string containing only
  asterisks
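As a sketch of how a password value can be prepared for use with
``--password-filter=base64``, the standard ``base64`` tool can encode and
decode the string. The credential below is a placeholder; note that base64
only obscures the value, it does not encrypt it:

```shell
# Encode a placeholder credential string for use with --password-filter=base64.
printf '%s' 'bob%Passw0rd1' | base64

# Decoding recovers the original value; base64 is obscuring, not encryption.
printf '%s' 'Ym9iJVBhc3N3MHJkMQ==' | base64 -d
```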
Examples

Create a cluster with two locally defined users:

.. prompt:: bash #

   ceph smb cluster create bob user \
       --define-user-pass=bob%Passw0rd1 \
       --define-user-pass=carol%Passw0rd2
Create a cluster with a single user and an explicit placement value for
cephadm (hosts labeled ``smb``):

.. prompt:: bash #

   ceph smb cluster create test1 user \
       --define-user-pass=test%Passw0rd1 \
       --placement="label:smb"
Create a cluster connected to an active directory system. Use a custom DNS
server:

.. prompt:: bash #

   ceph smb cluster create test2 active-directory \
       --domain-realm=MYDOM.EXAMPLE.ORG \
       --domain-join-user-pass=Administrator%Ph0nyPassw0rd \
       --custom-dns=192.168.76.210
Create a cluster connected to an active directory system, similar to the
previous example. Set three CTDB public address values and a custom
placement:

.. prompt:: bash #

   ceph smb cluster create test3 active-directory \
       --domain-realm=MYDOM.EXAMPLE.ORG \
       --domain-join-user-pass=Administrator%Ph0nyPassw0rd \
       --custom-dns=192.168.76.210 \
       --public-address=192.168.76.110/24 \
       --public-address=192.168.76.111/24 \
       --public-address=192.168.76.112/24 \
       --placement="3 label:smb"
Remove Cluster
++++++++++++++
.. prompt:: bash #

   ceph smb cluster rm <cluster_id> [--password-filter=<password_filter>]

Remove a logical SMB cluster from the Ceph cluster.

Options:

cluster_id
  A ``cluster_id`` value identifying a cluster resource.

password_filter
  Optional. One of ``none``, ``base64``, or ``hidden``. If the filter is
  ``none`` the password fields in the output are emitted as plain text. If
  the filter is ``base64`` password fields will be obscured by base64
  encoding the string. If the filter is ``hidden`` the password values will
  be replaced by an invalid generic replacement string containing only
  asterisks.
List Clusters
++++++++++++++
.. prompt:: bash #

   ceph smb cluster ls [--format=<format>]

Print a listing of cluster ids. The output defaults to JSON; select YAML
encoding with the ``--format=yaml`` option.
Share Commands
--------------
Create Share
++++++++++++
.. prompt:: bash #

   ceph smb share create <cluster_id> <share_id> <cephfs_volume> <path> [--share-name=<share_name>] [--subvolume=<subvolume>] [--readonly]

Create a new SMB share, hosted by the named cluster, that maps to the given
CephFS volume and path.

Options:

cluster_id
  A short string uniquely identifying the cluster

share_id
  A short string uniquely identifying the share

cephfs_volume
  The name of the CephFS volume to be shared

path
  A path relative to the root of the volume and/or subvolume

share_name
  Optional. The public name of the share, visible to clients. If not
  provided the ``share_id`` will be used automatically

subvolume
  Optional. A subvolume name in the form ``[<subvolumegroup>/]<subvolume>``.
  The option causes the path to be relative to the CephFS subvolume
  specified.

readonly
  Creates a read-only share
Examples

Create a share using the subvolume ``photos`` in the subvolumegroup
``company``:

.. prompt:: bash #

   ceph smb share create test1 pics cephfs --subvolume=company/photos --path=/

Create a share similar to the example above with a customized name:

.. prompt:: bash #

   ceph smb share create test1 pics cephfs \
       --subvolume=company/photos --path=/ --share-name="Company Photos"

Create a share at the root of a CephFS volume (not generally recommended):

.. prompt:: bash #

   ceph smb share create test1 rootie cephfs --path=/

Create a read-only share at a custom path in the CephFS volume:

.. prompt:: bash #

   ceph smb share create test1 plans cephfs \
       --path=/qbranch/top/secret/plans --readonly
Update Share QoS
++++++++++++++++

.. prompt:: bash #

   ceph smb share update cephfs qos <cluster_id> <share_id> [--read-iops-limit=<int>] [--write-iops-limit=<int>] [--read-bw-limit=<str>] [--write-bw-limit=<str>] [--read-burst-mult=<int>] [--write-burst-mult=<int>]

Update Quality of Service (QoS) settings for a CephFS-backed share. This
allows administrators to apply per-share rate limits on SMB input/output
(I/O) operations, specifically limits on IOPS (Input/Output Operations per
Second) and bandwidth (in bytes per second) for both read and write
operations. Additionally, burst multipliers can be configured to allow
temporary bursts above the configured limits.
Options:

read_iops_limit
  Optional integer. Maximum number of read operations per second
  (0 = disabled). Valid range: 0 to 1,000,000. Values above this will be
  capped.

write_iops_limit
  Optional integer. Maximum number of write operations per second
  (0 = disabled). Valid range: 0 to 1,000,000. Values above this will be
  capped.

read_bw_limit
  Optional string. Maximum allowed bandwidth for read operations
  (0 = disabled). This can be specified as a plain integer representing
  bytes per second, or as a human-readable string with bytes per second as a
  unit. Example: ``"1M"`` = 1 MiB/s (1,048,576 bytes/s).
  Valid range: 0 to ``1 << 40`` (≈1 T). Numeric values above this will be
  capped.

write_bw_limit
  Optional string. Maximum allowed bandwidth for write operations
  (0 = disabled). This can be specified as a plain integer representing
  bytes per second, or as a human-readable string with bytes per second as a
  unit. Example: ``"1M"`` = 1 MiB/s (1,048,576 bytes/s).
  Valid range: 0 to ``1 << 40`` (≈1 T). Numeric values above this will be
  capped.

read_burst_mult
  Optional integer. Burst multiplier for read operations (value ÷ 10 =
  multiplier), allowing temporary bursts above the configured limit.
  Example: 20 = 2x the configured limit.
  Range: 10-100 (1x to 10x), default: 15 (1.5x).

write_burst_mult
  Optional integer. Burst multiplier for write operations (value ÷ 10 =
  multiplier), allowing temporary bursts above the configured limit.
  Example: 20 = 2x the configured limit.
  Range: 10-100 (1x to 10x), default: 15 (1.5x).
Behavior:

* Setting a limit to 0 disables that specific QoS limit.
* Setting all limits to 0 completely removes the QoS configuration.
* Bandwidth limits accept human-readable strings (for example ``"10M"`` or
  ``"5G"``).
* Burst multipliers are expressed in tenths (15 = 1.5x, 20 = 2x, 30 = 3x).

The burst multiplier allows short-term I/O bursts above your configured
limits. For example, if you set ``read_iops_limit = 1000`` and
``read_burst_mult = 20``, your share can handle bursts up to 2000 read
operations per second for short periods, while maintaining an average of
1000 IOPS over time.

This is useful for workloads that have occasional spikes in activity. The
appropriate burst multiplier depends on your workload; higher values allow
larger bursts but may temporarily consume more resources.

.. note:: The burst multiplier only affects short-term spikes. The long-term
   average throughput remains limited by your configured IOPS and bandwidth
   limits.
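The burst arithmetic described above can be checked with plain shell
arithmetic; the numbers reuse the ``read_iops_limit = 1000`` and
``read_burst_mult = 20`` values from the example (the variable names are
illustrative only):

```shell
# Burst multipliers are in tenths: effective burst ceiling = limit * mult / 10.
read_iops_limit=1000
read_burst_mult=20
echo $(( read_iops_limit * read_burst_mult / 10 ))   # prints 2000

# Bandwidth strings use MiB-based units: "1M" = 1048576 bytes per second.
echo $(( 1024 * 1024 ))                              # prints 1048576
```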
Examples:

Set QoS limits with burst multipliers for a share:

.. prompt:: bash #

   ceph smb share update cephfs qos foo bar \
       --read-iops-limit=100 \
       --write-iops-limit=200 \
       --read-bw-limit="10M" \
       --write-bw-limit="20M" \
       --read-burst-mult=20 \
       --write-burst-mult=15

In this example reads are limited to 100 IOPS and 10 MiB/s with bursts of up
to 2x those limits, while writes are limited to 200 IOPS and 20 MiB/s with
bursts of up to 1.5x those limits.
Disable QoS for a share:

.. prompt:: bash #

   ceph smb share update cephfs qos foo bar \
       --read-iops-limit=0 \
       --write-iops-limit=0 \
       --read-bw-limit=0 \
       --write-bw-limit=0
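As a quick sanity check of the bandwidth strings used in the earlier example,
and assuming the MiB-based units described for ``read_bw_limit``, the
underlying bytes-per-second values can be computed in the shell:

```shell
# "10M" and "20M" expressed as bytes per second (1M = 1048576 bytes).
echo $(( 10 * 1024 * 1024 ))   # bytes/s behind --read-bw-limit="10M"
echo $(( 20 * 1024 * 1024 ))   # bytes/s behind --write-bw-limit="20M"
```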
Remove Share
++++++++++++

.. prompt:: bash #

   ceph smb share rm <cluster_id> <share_id>

Remove an SMB share from the cluster.
List Shares
+++++++++++

.. prompt:: bash #

   ceph smb share ls <cluster_id> [--format=<format>]

Print a listing of share ids. The output defaults to JSON; select YAML
encoding with the ``--format=yaml`` option.
.. _mgr-smb-declarative:

Declarative Management
----------------------

In addition to the basic imperative management commands, the smb manager
module supports configuration using declarative resource specifications.
Resource specifications can be written in either JSON or YAML. These resource
specifications can be applied to the cluster using the ``ceph smb apply``
command, for example:

.. prompt:: bash #

   ceph smb apply -i /path/to/resources.yaml
In addition to the resource specification, the ``apply`` sub-command accepts
options that control how the input and output of the command behave:

.. prompt:: bash #

   ceph smb apply [--format=<format>] [--password-filter=<password_filter>] [--password-filter-out=<password_filter_out>] -i <input>
Options:

format
  One of ``json`` (the default) or ``yaml``. The output format can be
  selected independent of the input format.

password_filter
  Optional. One of ``none`` or ``base64``. If the filter is ``none`` the
  password fields in the input are assumed to be plain text. If the filter
  is ``base64`` the password fields are assumed to be obscured with base64
  encoding of the string. If ``--password-filter-out`` is not specified this
  filter will also be applied to the output.

password_filter_out
  Optional. One of ``none``, ``base64``, or ``hidden``. If the filter is
  ``none`` the password fields in the output are emitted as plain text. If
  the filter is ``base64`` password fields will be obscured by base64
  encoding the string. If the filter is ``hidden`` the password values will
  be replaced by an invalid generic replacement string containing only
  asterisks.

input
  A file name or ``-`` to use the standard input (aka stdin).
Resources that have already been applied to the Ceph cluster configuration
can be viewed using the ``ceph smb show`` command. For example:

.. prompt:: bash #

   ceph smb show ceph.smb.cluster.cluster1

The show command can show all resources, resources of a given type, or
specific resource items. Options can be provided that control the output of
the command.

.. prompt:: bash #

   ceph smb show [resource_name...] [--format=<format>] [--results=<results>] [--password-filter=<password_filter>]
Options:

resource_name
  One or more strings specifying a resource or resource type. See the
  description below.

format
  One of ``json`` (the default) or ``yaml``.

results
  One of ``collapsed`` (the default) or ``full``. When set to ``collapsed``
  the output of the command will show only the resource JSON/YAML of a
  single item if a single item is found. When set to ``full``, even if a
  single item is found the output will always include a wrapper object like
  (in pseudo-JSON): ``{"resources": [...Resource objects...]}``.

password_filter
  Optional. One of ``none``, ``base64``, or ``hidden``. If the filter is
  ``none`` the password fields in the output are emitted as plain text. If
  the filter is ``base64`` password fields will be obscured by base64
  encoding the string. If the filter is ``hidden`` the password values will
  be replaced by an invalid generic replacement string containing only
  asterisks.
``resource_name`` arguments can take the following forms:

* ``ceph.smb.cluster``: show all cluster resources
* ``ceph.smb.cluster.<cluster_id>``: show the specific cluster with the
  given cluster ID
* ``ceph.smb.share``: show all share resources
* ``ceph.smb.share.<cluster_id>``: show all share resources that are part of
  the given cluster
* ``ceph.smb.share.<cluster_id>.<share_id>``: show the specific share
  resource with the given cluster and share IDs
* ``ceph.smb.usersgroups``: show all Users & Groups resources
* ``ceph.smb.usersgroups.<users_groups_id>``: show a specific Users & Groups
  resource
* ``ceph.smb.join.auth``: show all join auth resources
* ``ceph.smb.join.auth.<auth_id>``: show a specific join auth resource

For example:
.. prompt:: bash #

   ceph smb show ceph.smb.cluster.bob ceph.smb.share.bob

This will show one cluster resource (if it exists) for the cluster "bob" as
well as all share resources associated with the cluster "bob".
.. note::
   The ``show`` subcommand prints out resources in the same form that the
   ``apply`` command accepts, making it possible to "round-trip" values
   between ``show`` and ``apply``.
A resource specification is made up of one or more Ceph SMB resource
descriptions written in either JSON or YAML formats. More than one resource
can be specified if the resources are contained within a JSON/YAML list,
or a JSON/YAML object containing the key ``resources`` with a corresponding
list value containing the resources. Additionally, a YAML specification
may consist of a series of YAML documents, each containing a resource.

An example YAML based simple list looks like the following:
.. code-block:: yaml

   - resource_type: ceph.smb.cluster
     cluster_id: rhumba
     # ... other fields skipped for brevity ...
   - resource_type: ceph.smb.cluster
     cluster_id: salsa
     # ... other fields skipped for brevity ...
   - resource_type: ceph.smb.share
     cluster_id: salsa
     share_id: foo
     # ... other fields skipped for brevity ...
An example JSON based simple list looks like the following:

.. code-block:: json

   [
     {"resource_type": "ceph.smb.cluster",
      "cluster_id": "rhumba",
      "...": "... other fields skipped for brevity ..."
     },
     {"resource_type": "ceph.smb.cluster",
      "cluster_id": "salsa",
      "...": "... other fields skipped for brevity ..."
     },
     {"resource_type": "ceph.smb.share",
      "cluster_id": "salsa",
      "share_id": "foo",
      "...": "... other fields skipped for brevity ..."
     }
   ]
An example YAML based resource list looks like the following:

.. code-block:: yaml

   resources:
     - resource_type: ceph.smb.cluster
       cluster_id: rhumba
       # ... other fields skipped for brevity ...
     - resource_type: ceph.smb.cluster
       cluster_id: salsa
       # ... other fields skipped for brevity ...
     - resource_type: ceph.smb.share
       cluster_id: salsa
       share_id: foo
       # ... other fields skipped for brevity ...
An example JSON based resource list looks like the following:

.. code-block:: json

   {
     "resources": [
       {"resource_type": "ceph.smb.cluster",
        "cluster_id": "rhumba",
        "...": "... other fields skipped for brevity ..."
       },
       {"resource_type": "ceph.smb.cluster",
        "cluster_id": "salsa",
        "...": "... other fields skipped for brevity ..."
       },
       {"resource_type": "ceph.smb.share",
        "cluster_id": "salsa",
        "share_id": "foo",
        "...": "... other fields skipped for brevity ..."
       }
     ]
   }
An example YAML resource list consisting of multiple documents looks like the
following:

.. code-block:: yaml

   ---
   resource_type: ceph.smb.cluster
   cluster_id: rhumba
   # ... other fields skipped for brevity ...
   ---
   resource_type: ceph.smb.cluster
   cluster_id: salsa
   # ... other fields skipped for brevity ...
   ---
   resource_type: ceph.smb.share
   cluster_id: salsa
   share_id: foo
   # ... other fields skipped for brevity ...
Each individual resource description must belong to one of the types
described below.

.. note:: For brevity, all following examples will use YAML only. Assume
   that the equivalent JSON forms are valid.
A cluster resource supports the following fields:

resource_type
  A literal string ``ceph.smb.cluster``

cluster_id
  A short string identifying the cluster

auth_mode
  One of ``user`` or ``active-directory``

intent
  One of ``present`` or ``removed``. If not provided, ``present`` is
  assumed. If ``removed``, all following fields are optional

domain_settings
  Object. Ignored/optional for ``user`` auth. Required for
  ``active-directory``. Fields:

  realm
    Required string. AD domain/realm name.

  join_sources
    Required list. Each element is an object with :ref:`join source fields
    <join-source-fields>`

user_group_settings
  List. Ignored/optional for ``active-directory``. Each element is an object
  with :ref:`user group source fields <user-group-source-fields>`
custom_dns
  Optional. List of IP addresses. The IP addresses will be used as DNS
  resolver(s) in Samba containers, allowing the containers to use domain DNS
  even if the Ceph host does not

custom_ports
  Optional. A mapping of service names to port numbers that will override
  the default ports used for those services. The service names are:
  ``smb``, ``smbmetrics``, ``ctdb``, and ``remote-control``. If a service
  name is not present in the mapping the default port will be used.
  For example, ``{"smb": 4455, "smbmetrics": 9009}`` will change the
  ports used by SMB for client access and the metrics exporter, but
  not change the port used by the CTDB clustering daemon.
  Note: not all SMB clients are able to use alternate port numbers.
bind_addrs
  Optional. A list of objects indicating what IP address or IP network the
  SMB and related services may bind to. The fields described for these
  objects are mutually exclusive, but at least one field is required.
  (The behavior of this option changes when used with clustering and
  ``public_addrs``. See the note below.)
  Fields:

  address
    Optional. A single IP address represented as a string. For example,
    ``192.168.7.50``.

  network
    Optional. A single IP network represented as a string. A network
    can be used to specify a range of many IP addresses. The network
    string always includes a "/" character before a prefix length.
    For example, ``192.168.7.0/24``.
placement
  Optional. A Ceph Orchestration :ref:`placement specifier
  <orchestrator-cli-placement-spec>`. Defaults to one host if not provided

clustering
  Optional. Control if a cluster abstraction actually uses Samba's
  clustering mechanism. The value may be one of ``default``, ``always``, or
  ``never``. A ``default`` value indicates that clustering should be enabled
  if the placement count value is any value other than 1. A value of
  ``always`` enables clustering regardless of the placement count. A value
  of ``never`` disables clustering regardless of the placement count. If
  unspecified, ``default`` is assumed
public_addrs
  List of objects; optional. Supported only when using Samba's clustering.
  Assign "virtual" IP addresses that will be managed by the clustering
  subsystem and may automatically move between nodes running Samba
  containers.
  (The behavior of this option changes when used with ``bind_addrs``. See
  the note below.)
  Fields:

  address
    Required string. An IP address with a required prefix length (example:
    ``192.168.4.51/24``). This address will be assigned to one of the
    host's network interfaces and managed automatically.

  destination
    Optional. String or list of strings. A ``destination`` defines where
    the system will assign the managed IPs. Each string value must be a
    network address (example ``192.168.4.0/24``). One or more destinations
    may be supplied. The typical case is to use exactly one destination and
    so the value may be supplied as a string, rather than a list with a
    single item. Each destination network will be mapped to an interface on
    a host. Run ``cephadm list-networks`` for an example of these mappings.
    If destination is not supplied the network is automatically determined
    using the address value supplied and taken as the destination.
remote_control
  Optional object. This object configures an SMB cluster to deploy an extra
  remote control service. This service provides a gRPC server that
  can be used to enumerate connected clients and disconnect clients from
  shares. This service uses mTLS for authentication. By default, this
  service uses port 54445. The port can be configured using the
  ``custom_ports`` parameter in the cluster resource. If the service is
  enabled and any of the ``cert``, ``key``, or ``ca_cert`` fields are not
  populated, mTLS will be disabled and the service will operate in a
  read-only mode. Running the service with mTLS disabled is not recommended.
  Fields:

  enabled
    Optional boolean. If explicitly set to ``true`` or ``false`` this
    field will enable or disable the remote control service. If left
    unset the TLS fields will be checked; if the TLS fields are filled
    the service is automatically enabled.

  cert
    Optional object. The fields are described in :ref:`tls source
    fields <tls-source-fields>`

  key
    Optional object. The fields are described in :ref:`tls source
    fields <tls-source-fields>`

  ca_cert
    Optional object. The fields are described in :ref:`tls source
    fields <tls-source-fields>`

  locally_enabled
    Optional boolean. If set to ``true`` this field will enable the
    remote control service local listener. The local listener lets
    processes on the Ceph cluster host communicate with the remote
    control service independently of the default TCP/mTLS listener.
    The TLS certificate configuration values do not apply to this
    unix socket based listener.
external_ceph_cluster
  Optional object. The fields are described in :ref:`external Ceph cluster
  source fields <external-ceph-cluster-source-fields>`. This is an
  advanced option and should be used with caution.

debug_level
  Optional object. Specify subsystem based default logging level values.
  Supported keys are ``samba`` and ``ctdb``. Supported values include
  numbers (typically 1 through 10) or level names such as ``INFO``
  or ``DEBUG``. The system will translate names to numbers (for samba)
  or vice-versa as needed. Example YAML snippet:
  ``debug_level: {samba: 8, ctdb: INFO}``.
custom_smb_global_options
  Optional mapping. Specify key-value pairs that will be directly added to
  the global ``smb.conf`` options (or equivalent) of a Samba server. Do
  not use this option unless you are prepared to debug the Samba instances
  yourself.

  This option is meant for developers, feature investigators, and other
  advanced users to take more direct control of a share's options without
  needing to make changes to the Ceph codebase. Entries in this map should
  match parameters in ``smb.conf`` and their values. A special key
  ``_allow_customization`` must appear somewhere in the mapping with the
  value of ``i-take-responsibility-for-all-samba-configuration-errors`` as
  an indicator that the user is aware that using this option can easily
  break things in ways that the Ceph team can not help with. This special
  key will automatically be removed from the list of options passed to
  Samba.
.. warning::
   Setting the ``clustering`` option allows an administrator to choose
   exactly when Samba's CTDB clustering will be used. By default, the use of
   Samba's clustering is derived from the placement count. If you choose to
   set ``clustering`` make sure you understand how clustering interacts with
   placement. In particular, be aware that running multiple instances of the
   same smb service without clustering enabled can cause unexpected
   behavior.
.. warning::
   The behavior of the system when combining ``bind_addrs`` and
   ``public_addrs`` on a cluster could lead to unexpected results. The smbd
   process can only dynamically add/remove public addresses when assigned to
   monitor a network interface (e.g. ``eth0``) versus a specific address. If
   the network interface is assigned multiple addresses and those addresses
   overlap with a different SMB cluster it is possible the services may fail
   to start. Currently, one must manually ensure that the interfaces used by
   an IP or network are exclusively used for that network to ensure SMB
   services start properly.
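To tie several of the optional cluster fields described above together, the
following is a sketch (not a tested or recommended configuration) of a
cluster resource combining ``custom_ports``, ``public_addrs``, and
``custom_smb_global_options``. The addresses, ports, and the
``server string`` smb.conf parameter are illustrative values only:

```yaml
resource_type: ceph.smb.cluster
cluster_id: tango
auth_mode: active-directory
# ... domain_settings and other fields skipped for brevity ...
clustering: always
custom_ports:
  smb: 4455
  smbmetrics: 9009
public_addrs:
  - address: 192.168.4.51/24
    destination: 192.168.4.0/24
custom_smb_global_options:
  _allow_customization: i-take-responsibility-for-all-samba-configuration-errors
  # illustrative smb.conf global parameter; entries here are passed to Samba as-is
  server string: Ceph managed Samba
```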
.. _join-source-fields:

A join source object supports the following fields:

source_type
  Optional. Must be ``resource`` if specified.

ref
  String. Required for ``source_type: resource``. Must refer to the ID of a
  ``ceph.smb.join.auth`` resource
.. _user-group-source-fields:

A user group source object supports the following fields:

source_type
  Optional. One of ``resource`` (the default) or ``empty``

ref
  String. Required for ``source_type: resource``. Must refer to the ID of a
  ``ceph.smb.usersgroups`` resource
.. _tls-source-fields:

A TLS source object supports the following fields:

source_type
  Optional. Must be ``resource`` if specified.

ref
  String. Required for ``source_type: resource``. Must refer to the ID of a
  ``ceph.smb.tls.credential`` resource
.. _external-ceph-cluster-source-fields:

An external Ceph cluster source object supports the following fields:

source_type
  Optional. Must be ``resource`` if specified.

ref
  String. Required for ``source_type: resource``. Must refer to the ID of
  a ``ceph.smb.ext.cluster`` resource

.. note::
   The ``source_type`` value ``empty`` is generally only for debugging and
   testing the module and should not be needed in production deployments.
The following is an example of a cluster configured for AD membership:

.. code-block:: yaml

   resource_type: ceph.smb.cluster
   cluster_id: tango
   auth_mode: active-directory
   domain_settings:
     realm: DOMAIN1.SINK.TEST
     join_sources:
       # this join source refers to a join auth resource with id "join1-admin"
       - source_type: resource
         ref: join1-admin
   custom_dns:
     - "192.168.76.204"
   placement:
     count: 1
The following is an example of a cluster configured for standalone operation:

.. code-block:: yaml

   resource_type: ceph.smb.cluster
   cluster_id: rhumba
   auth_mode: user
   user_group_settings:
     - source_type: resource
       ref: ug1
   placement:
     hosts:
       - node6.mycluster.sink.test
An example cluster resource with intent to remove:

.. code-block:: yaml

   resource_type: ceph.smb.cluster
   cluster_id: rhumba
   intent: removed
A share resource supports the following fields:

resource_type
  A literal string ``ceph.smb.share``

cluster_id
  A short string identifying the cluster

share_id
  A short string identifying the share. Must be unique within a cluster

intent
  One of ``present`` or ``removed``. If not provided, ``present`` is
  assumed. If ``removed``, all following fields are optional
name
  Optional string. A longer name capable of supporting spaces and other
  characters that will be presented to SMB clients

readonly
  Optional boolean, defaulting to false. If true, no clients are permitted
  to write to the share

browseable
  Optional boolean, defaulting to true. If true, the share will be included
  in share listings visible to clients

comment
  Optional string. A single line description used to provide human-readable
  explanation or notes about the share.

max_connections
  Optional integer. Specifies the maximum number of simultaneous client
  connections to a specific share. The default value is 0, which indicates
  that there is no limit on the number of connections
cephfs
  Required object. Fields:

  volume
    Required string. Name of the cephfs volume to use

  path
    Required string. Path within the volume or subvolume to share

  subvolumegroup
    Optional string. Name of a subvolumegroup to share

  subvolume
    Optional string. Name of a subvolume to share. If ``subvolumegroup`` is
    not set and this value contains exactly one ``/`` character, the
    subvolume field will automatically be split into
    ``<subvolumegroup>/<subvolume>`` parts for convenience
provider
  Optional. Selects how CephFS storage should be provided to the share.
  The value may be one of ``samba-vfs``, ``samba-vfs/classic``,
  ``samba-vfs/new``, or ``samba-vfs/proxied``. If unspecified,
  ``samba-vfs`` is assumed.
  Selecting ``samba-vfs/new`` selects the new Samba VFS plugin to connect to
  CephFS and ``samba-vfs/proxied`` uses the new VFS plugin but routes the
  connections through a proxy. Using the proxy allows for a greater number
  of simultaneous client connections to the share, but it comes at the cost
  of performance. ``samba-vfs/classic`` uses the older Samba VFS plugin to
  connect to CephFS. ``samba-vfs`` automatically selects the preferred
  VFS-based implementation, currently ``samba-vfs/proxied``. This option is
  suitable for the majority of use cases and can be left unspecified for
  most shares.
qos
Optional object. Quality of Service settings for the share. Fields:
read_iops_limit
Optional integer. Maximum number of read operations per second (0 = disabled).
Valid range: ``0`` to ``1,000,000``. Values above this will be capped.
write_iops_limit
Optional integer. Maximum number of write operations per second (0 = disabled).
Valid range: ``0`` to ``1,000,000``. Values above this will be capped.
read_bw_limit
Optional string. Maximum allowed bandwidth for read operations (0 = disabled).
This can be specified as a plain integer representing bytes per second, or as a
human-readable string with bytes per second as a unit.
Example: ``"1M"`` = 1 MiB/s (1,048,576 bytes/s).
Valid range: ``0`` to ``1 << 40`` (≈1 T). Numeric values above this will be capped.
write_bw_limit
Optional string. Maximum allowed bandwidth for write operations (0 = disabled).
This can be specified as a plain integer representing bytes per second, or as a
human-readable string with bytes per second as a unit.
Example: ``"1M"`` = 1 MiB/s (1,048,576 bytes/s).
Valid range: ``0`` to ``1 << 40`` (≈1 TiB/s). Numeric values above this will be capped.
read_burst_mult
Optional integer. Burst multiplier for read operations (value ÷ 10 = multiplier),
allowing temporary bursts above the configured limit. Example: ``20`` = 2× the configured limit.
Default: ``15`` (1.5×).
write_burst_mult
Optional integer. Burst multiplier for write operations (value ÷ 10 = multiplier),
allowing temporary bursts above the configured limit. Example: ``20`` = 2× the configured limit.
Default: ``15`` (1.5×).
restrict_access
Optional boolean, defaulting to ``false``. If ``true``, the share will only
permit access by users explicitly listed in ``login_control``.
login_control
Optional list of objects. Fields:
name
Required string. Name of the user or group.
category
Optional. One of ``user`` (default) or ``group``.
access
One of ``read`` (alias ``r``), ``read-write`` (alias ``rw``), ``none``,
or ``admin``. Specific access level to grant to the user or group when
logging into this share. The ``none`` value denies access to the share
regardless of the ``restrict_access`` value.
hosts_access
Optional list of objects. Items in the ``hosts_access`` list restrict use of
the share to specific client addresses. If any ``allow`` entries are present,
all hosts not explicitly allowed will be denied. Fields:
access
Required string. One of ``allow`` or ``deny``.
address
Optional string. Required if ``network`` field is not supplied. The
string value must be either an IPv4 address or an IPv6 address. The
specific host will be allowed or denied access to the share.
network
Optional string. Required if ``address`` field is not supplied. The
string value must be either an IPv4 network or an IPv6 network (for
example ``192.0.2.0/24``). If the client's IP address is found within
the specified network that host will be allowed or denied access to the
share.
custom_smb_share_options
Optional mapping. Specify key-value pairs that will be added directly to
the ``smb.conf`` (or equivalent) of a Samba server. Do not use this
option unless you are prepared to debug the Samba instances yourself.
This option is meant for developers, feature investigators, and other
advanced users to take more direct control of a share's options without
needing to make changes to the Ceph codebase. Entries in this map should
match parameters in ``smb.conf`` and their values. A special key
``_allow_customization`` must appear somewhere in the mapping with the
value of ``i-take-responsibility-for-all-samba-configuration-errors`` as an
indicator that the user is aware that using this option can easily break
things in ways that the Ceph team can not help with. This special key will
automatically be removed from the list of options passed to Samba.
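To illustrate the access control fields described above, here is a sketch of a
share combining ``restrict_access``, ``login_control``, and ``hosts_access``.
The share id, user, group, and network shown are illustrative only:
.. code-block:: yaml
resource_type: ceph.smb.share
cluster_id: tango
share_id: archive
cephfs:
volume: cephfs
path: /archive
restrict_access: true
login_control:
# only chuckx and members of the auditors group may log in
- name: chuckx
access: read-write
- name: auditors
category: group
access: read
hosts_access:
# allow clients from one network; all other hosts are denied
- access: allow
network: 192.0.2.0/24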
The following is an example of a share with QoS settings including burst multipliers and human-readable bandwidth limits:
.. code-block:: yaml
resource_type: ceph.smb.share
cluster_id: tango
share_id: sp1
name: "Staff Pics"
cephfs:
volume: cephfs
path: /pics
subvolumegroup: smbshares
subvolume: staff
qos:
read_iops_limit: 100
write_iops_limit: 50
read_bw_limit: "10M"
write_bw_limit: "5M"
read_burst_mult: 20
write_burst_mult: 15
Another example with plain byte values:
.. code-block:: yaml
resource_type: ceph.smb.share
cluster_id: tango
share_id: sp1
cephfs:
volume: cephfs
path: /pics
qos:
read_iops_limit: 100
write_iops_limit: 50
read_bw_limit: 10485760 # 10 MiB/s
write_bw_limit: 5242880 # 5 MiB/s
read_burst_mult: 10 # 1× burst
write_burst_mult: 20 # 2× burst
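The burst multiplier arithmetic used in the comments above can be sketched as a
small calculation. This illustrates the value ÷ 10 rule described earlier and
is not the smb module's actual implementation:
.. code-block:: python
def burst_ceiling(limit: int, burst_mult: int) -> float:
"""Effective burst ceiling: the configured limit times (burst_mult / 10)."""
return limit * (burst_mult / 10)
# read_bw_limit of 10 MiB/s with read_burst_mult of 20 allows 2x bursts
print(burst_ceiling(10 * 1024 * 1024, 20))  # 20971520.0 bytes/s
# the default multiplier of 15 allows 1.5x bursts
print(burst_ceiling(100, 15))  # 150.0 ops/s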
Another example, this time of a share with QoS disabled:
.. code-block:: yaml
resource_type: ceph.smb.share
cluster_id: tango
share_id: sp2
cephfs:
volume: cephfs
path: /data
qos:
read_iops_limit: 0
write_iops_limit: 0
read_bw_limit: 0
write_bw_limit: 0
# Note: burst multipliers are ignored when limits are disabled
And finally, a share with an intent to be removed:
.. code-block:: yaml
resource_type: ceph.smb.share
cluster_id: tango
share_id: sp2
intent: removed
A join auth resource supports the following fields:
resource_type
A literal string ceph.smb.join.auth
auth_id
A short string identifying the join auth resource
intent
One of ``present`` or ``removed``. If not provided, ``present`` is assumed.
If ``removed``, all following fields are optional.
auth
Required object. Fields:
username
Required string. User with ability to join a system to AD
password
Required string. The AD user's password
linked_to_cluster
Optional. A string containing a cluster ID. If set, the resource may only be
used with the linked cluster and will automatically be removed when the linked
cluster is removed.
Example:
.. code-block:: yaml
resource_type: ceph.smb.join.auth
auth_id: join1-admin
auth:
username: Administrator
password: Passw0rd
A users & groups resource supports the following fields:
resource_type
A literal string ceph.smb.usersgroups
users_groups_id
A short string identifying the users and groups resource
intent
One of ``present`` or ``removed``. If not provided, ``present`` is assumed.
If ``removed``, all following fields are optional.
values
Required object. Fields:
users
List of objects. Fields:
name
A user name
password
A password
groups
List of objects. Fields:
name
The name of the group
linked_to_cluster
Optional. A string containing a cluster ID. If set, the resource may only be
used with the linked cluster and will automatically be removed when the linked
cluster is removed.
Example:
.. code-block:: yaml
resource_type: ceph.smb.usersgroups
users_groups_id: ug1
values:
users:
- name: chuckx
password: 3xample101
- name: steves
password: F00Bar123
groups: []
TLS credential resources store copies of TLS files such as certificates, keys, or CA certificates. A TLS credential resource supports the following fields:
resource_type
A literal string ceph.smb.tls.credential
tls_credential_id
A short string identifying the TLS credential resource
intent
One of ``present`` or ``removed``. If not provided, ``present`` is assumed.
If ``removed``, all following fields are optional.
credential_type
Required string. The value may be one of ``cert``, ``key``, or ``ca-cert``.
This value indicates what type of TLS credential the ``value`` field holds.
value
A string containing the TLS certificate or key value in PEM encoding.
linked_to_cluster
Optional. A string containing a cluster ID. If set, the resource may only
be used with the linked cluster and will automatically be removed when the
linked cluster is removed.
Example:
.. code-block:: yaml
resource_type: ceph.smb.tls.credential
tls_credential_id: mycert1
credential_type: cert
# NOTE: The value below is truncated to make the documentation more
# concise. A real embedded certificate is expected to be valid and
# will be longer than this example.
value: |
-----BEGIN CERTIFICATE-----
MIIFDjCCA/agAwIBAgISBtFQfoXc4RmyVabbv28RClKdMA0GCSqGSIb3DQEBCwUA
MDMxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQwwCgYDVQQD
EwNSMTAwHhcNMjUwNTE5MTAyNzUyWhcNMjUwODE3MTAyNzUxWjASMRAwDgYDVQQD
EwdjZXBoLmlvMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx6fif6PQ
LOTdnO8d1JHcF7D+oB/mQlplFz4vwq/GB6Y4oWK3uCQ4PPz/qyvE4wyvc5EPhjfg
d8XNc4ajEBcSUoRj3UwWwiA4oht0SyoJIfwVGp/kF5jxHhVCLdoaaqAxv7nAghWM
6Dg=
-----END CERTIFICATE-----
This resource can be used to configure an SMB cluster hosted on one Ceph cluster to use CephFS volumes provided by an external Ceph cluster. The values provided below allow the SMB server to connect to a cluster other than the one it is running on.
.. warning:: This is an advanced feature that should be used with care. It allows SMB servers to contact CephFS on a different cluster. Because of that, many of the values provided below cannot be validated, and other validations that the smb mgr module normally performs are disabled. In addition, automatic subvolume-to-path mapping is disabled. Shares in SMB clusters making use of an external Ceph cluster must not specify a subvolume by name and must specify an absolute path to a subvolume.
An external Ceph cluster resource supports the following fields.
resource_type
A literal string ceph.smb.ext.cluster
external_ceph_cluster_id
A short string identifying the cluster
intent
One of ``present`` or ``removed``. If not provided, ``present`` is
assumed. If ``removed``, all following fields are optional.
fsid
String. The UUID/FSID of the external cluster
mon_host
String. The mon_host string (as sourced from a ceph.conf file)
cephfs_user
Object. Fields:
name
String. A ceph user name indicating the cephx user that will
access the CephFS volume(s) on the external cluster
key
String. The Base64 encoded key value corresponding to the cephx
user name provided
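A sketch of an external Ceph cluster resource follows. The id, fsid,
mon_host, user name, and key shown are placeholders to be replaced with
values taken from the external cluster:
.. code-block:: yaml
resource_type: ceph.smb.ext.cluster
external_ceph_cluster_id: extcluster1
fsid: 00000000-1111-2222-3333-444444444444
# mon_host as it appears in the external cluster's ceph.conf
mon_host: "[v2:192.0.2.10:3300/0,v1:192.0.2.10:6789/0]"
cephfs_user:
name: smbuser
key: "<base64-cephx-key>"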
Using the resource descriptions above, we can put together an example that creates a cluster and shares from scratch based on a resource configuration file. First, create a YAML file with the following contents:
.. code-block:: yaml
resources:
# Define an AD member server cluster
- resource_type: ceph.smb.cluster
cluster_id: tango
auth_mode: active-directory
domain_settings:
realm: DOMAIN1.SINK.TEST
join_sources:
- source_type: resource
ref: join1-admin
custom_dns:
- "192.168.76.204"
# deploy 1 set of samba containers on a host labeled "ilovesmb"
placement:
count: 1
label: ilovesmb
# Define a join auth that our cluster will use to join AD
# Warning: Typically you do not want to use the Administrator user
# to perform joins on a production AD
- resource_type: ceph.smb.join.auth
auth_id: join1-admin
auth:
username: Administrator
password: Passw0rd
# A share that uses the root of a subvolume
# The share name is the same as its id
- resource_type: ceph.smb.share
cluster_id: tango
share_id: cache
cephfs:
volume: cephfs
subvolumegroup: smb1
subvolume: cache
path: /
# A share that uses a sub-directory of a subvolume
# The share name is not the same as its id
- resource_type: ceph.smb.share
cluster_id: tango
share_id: sp1
name: "Staff Pics"
cephfs:
volume: cephfs
path: /pics
subvolumegroup: smb1
subvolume: staff
qos:
read_iops_limit: 100
write_iops_limit: 50
read_bw_limit: "10MiB"
write_bw_limit: "5MiB"
read_burst_mult: 20
write_burst_mult: 15
Save this text to a YAML file named ``resources.yaml`` and make it available
on a cluster admin host. Then run:
.. prompt:: bash #
ceph smb apply -i resources.yaml
The command will print a summary of the changes made and begin to automatically
deploy the needed resources. See `Accessing Shares`_ for more information
about how to test this example deployment.
Later, if these resources are no longer needed, they can be cleaned up in one
action with a new file, ``removed.yaml``, containing:
.. code-block:: yaml
resources:
- resource_type: ceph.smb.cluster
cluster_id: tango
intent: removed
- resource_type: ceph.smb.join.auth
auth_id: join1-admin
intent: removed
- resource_type: ceph.smb.share
cluster_id: tango
share_id: cache
intent: removed
- resource_type: ceph.smb.share
cluster_id: tango
share_id: sp1
intent: removed
Then run:
.. prompt:: bash #
ceph smb apply -i removed.yaml
The smb module will automatically deploy logical clusters on hosts using
cephadm orchestration. This orchestration is automatically triggered when a
cluster has been configured for at least one share. The placement field of
the cluster resource is passed to the orchestration layer and is used to
determine on what nodes of the Ceph cluster Samba containers will be run.
At this time, Samba services can only listen on port 445. Due to this restriction, only one Samba server, as part of one cluster, may run on a given Ceph node at a time. Ensure that the placement specs of different clusters do not overlap.
The smb clusters are fully isolated from each other. This means that, as
long as you have sufficient resources in your Ceph cluster, you can run multiple
independent clusters that may or may not join the same AD domains/forests.
However, you should not share a directory with multiple different clusters
that may have different authentication modes and/or identity mapping schemes.
.. note::
Future versions of the smb module may programmatically attempt to prevent
such conditions.
Once a cluster and its component Samba containers have been deployed and the
shares have been configured clients may connect to the servers. Microsoft
Windows systems have SMB support built in and using Windows Explorer a share
can be specified like so: ``\\<hostname>\<sharename>``. For example:
``\\ceph0.mycluster.sink.test\Staff Pics``. The Windows node should
automatically attempt to log into the share. If the cluster and Windows client
are both configured for the same AD Domain then a password-less single sign-on
login will automatically be performed. If the cluster is configured for
user auth, a username and password prompt should appear. Enter a user
name and password combination that was specified in the cluster and/or
``ceph.smb.usersgroups`` resource.
macOS systems and many Linux-based systems also support connecting to SMB shares. Consult the documentation for those operating systems and distributions for how to connect to SMB shares.
A Ceph cluster operator wanting to quickly test that a share is functioning may
want to install ``smbclient`` or use the Samba Client Container image available
from the samba-container_ project with the image
``quay.io/samba.org/samba-client:latest``. On a client or within the container
run ``smbclient -U <username> //<hostname>/<sharename>`` and enter the password
at the prompt. Refer to the `smbclient documentation`_ for more details.
.. _smbclient documentation: https://www.samba.org/samba/docs/current/man-html/smbclient.1.html