extras/cliutils/README.md
cliutils is a Python library which provides a wrapper around the `gluster system:: execute` command to extend the functionality of Gluster.
Example use cases: gluster-eventsapi, gluster-restapi, gluster-mountbroker etc.

If an executable file is present in the $GLUSTER_LIBEXEC directory on all
peer nodes, it can be executed by running the `gluster system:: execute`
command from any one peer node. The filename should start with `peer_`
and the file should exist in the $GLUSTER_LIBEXEC directory.

To understand the functionality, create an executable file `peer_hello`
under the $GLUSTER_LIBEXEC directory and copy it to all peer nodes.
```bash
#!/usr/bin/env bash
echo "Hello from $(gluster system:: uuid get)"
```
Now run the following command from any one Gluster node:

```
gluster system:: execute hello
```
Note: Gluster will not copy the executable script to all nodes; copy the
`peer_hello` script to all peer nodes yourself to use the `gluster system::
execute` infrastructure.
It runs the `peer_hello` executable on all peer nodes and shows the
output from each node (the example below shows the output from a
two-node cluster):

```
Hello from UUID: e7a3c5c8-e7ad-47ad-aa9c-c13907c4da84
Hello from UUID: c680fc0a-01f9-4c93-a062-df91cc02e40f
```
A Python wrapper around the `gluster system:: execute` command is created
to address the following issues:

- If a node is down, `system:: execute` just skips it and runs only on
  the nodes that are up.
- `gluster system:: execute` will fail to aggregate the output from other
  nodes.
- `system:: execute` commands are not user friendly.
- `gluster system:: execute mountbroker`, `gluster system:: execute
  gsec_create` and `gluster system:: add_secret_pub` suffer from poor
  error handling: these tools do not notify the user about failures
  during execution or about nodes that were down during execution.

Advantages of cliutils:

- The `execute_in_peers` utility function merges the `gluster system::
  execute` output with `gluster peer status` to identify offline nodes.
- The `node_output_ok` and `node_output_notok` utility functions return
  zero both in case of success and error, but emit JSON with `ok: true`
  or `ok: false` respectively.

Create a file `$LIBEXEC/glusterfs/peer_message.py` with the following
content.
```python
#!/usr/bin/python3
from gluster.cliutils import Cmd, runcli, execute_in_peers, node_output_ok


class NodeHello(Cmd):
    name = "node-hello"

    def run(self, args):
        node_output_ok("Hello")


class Hello(Cmd):
    name = "hello"

    def run(self, args):
        out = execute_in_peers("node-hello")
        for row in out:
            print("{0} from {1}".format(row.output, row.hostname))


if __name__ == "__main__":
    runcli()
```
When we run `python peer_message.py`, it will have two subcommands,
"node-hello" and "hello". This file should be copied to the
$LIBEXEC/glusterfs directory on all peer nodes. The user calls the
subcommand "hello" from any one peer node, which internally calls
`gluster system:: execute message.py node-hello` (this runs on all peer
nodes and collects the outputs).
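For illustration, the rows returned by `execute_in_peers` carry per-node fields such as `nodeid`, `hostname`, `node_up`, `ok`, `output` and `error` (these names are taken from the examples in this README; the real cliutils row type may differ). A minimal stand-in sketch:

```python
from collections import namedtuple

# Illustrative stand-in for a row returned by execute_in_peers.
# Field names come from the examples in this README; the real
# cliutils structure may differ.
NodeOutput = namedtuple(
    "NodeOutput",
    ["nodeid", "hostname", "node_up", "ok", "output", "error"])

row = NodeOutput(
    nodeid="e7a3c5c8-e7ad-47ad-aa9c-c13907c4da84",
    hostname="localhost",
    node_up=True,
    ok=True,
    output="Hello",
    error="")

# Same access pattern as in the "hello" subcommand above.
print("{0} from {1}".format(row.output, row.hostname))
```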
In the node component, do not print the output directly; use the
`node_output_ok` or `node_output_notok` functions. `node_output_ok`
additionally collects the node UUID and prints in JSON format. The
`execute_in_peers` function collects this output and merges it with the
peers list, so that we do not miss the node information even if that
node is offline.
As you may have observed, the `args` function is optional: if a
subcommand does not take arguments, there is no need to create that
function. When we run the file, we will have two subcommands. For
example:

```
python peer_message.py hello
python peer_message.py node-hello
```
The first subcommand calls the second subcommand on all peer nodes.
Basically, `execute_in_peers(NAME, ARGS)` is converted into:

```
CMD_NAME = FILENAME without "peer_"
gluster system:: execute <CMD_NAME> <SUBCOMMAND> <ARGS>
```

In our example:

```
filename = "peer_message.py"
cmd_name = "message.py"
gluster system:: execute ${cmd_name} node-hello
```
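The filename-to-command-name conversion described above can be sketched as a small helper (illustrative only; `build_execute_command` is not part of cliutils):

```python
def build_execute_command(script_filename, subcommand, args=None):
    # Derive CMD_NAME by stripping the "peer_" prefix from the
    # script's filename, then form the gluster command line that
    # execute_in_peers would run on the peers.
    cmd_name = script_filename
    if cmd_name.startswith("peer_"):
        cmd_name = cmd_name[len("peer_"):]
    return ["gluster", "system::", "execute",
            cmd_name, subcommand] + list(args or [])
```

For instance, `build_execute_command("peer_message.py", "node-hello")` corresponds to the command line `gluster system:: execute message.py node-hello`.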
Now create a symlink in the /usr/bin or /usr/sbin directory, depending
on the use case (optional step, for usability):

```
ln -s /usr/libexec/glusterfs/peer_message.py /usr/bin/gluster-message
```

Now users can run `gluster-message` instead of calling
/usr/libexec/glusterfs/peer_message.py:

```
gluster-message hello
```
The following example uses the prettytable library, which can be
installed using `pip install prettytable` or `dnf install
python-prettytable`.
```python
#!/usr/bin/python3
from prettytable import PrettyTable
from gluster.cliutils import Cmd, runcli, execute_in_peers, node_output_ok


class NodeHello(Cmd):
    name = "node-hello"

    def run(self, args):
        node_output_ok("Hello")


class Hello(Cmd):
    name = "hello"

    def run(self, args):
        out = execute_in_peers("node-hello")
        # Initialize the CLI table
        table = PrettyTable(["ID", "NODE", "NODE STATUS", "MESSAGE"])
        table.align["NODE STATUS"] = "r"
        for row in out:
            table.add_row([row.nodeid,
                           row.hostname,
                           "UP" if row.node_up else "DOWN",
                           row.output if row.ok else row.error])
        print(table)


if __name__ == "__main__":
    runcli()
```
Example output:

```
+--------------------------------------+-----------+-------------+---------+
| ID                                   | NODE      | NODE STATUS | MESSAGE |
+--------------------------------------+-----------+-------------+---------+
| e7a3c5c8-e7ad-47ad-aa9c-c13907c4da84 | localhost |          UP | Hello   |
| bb57a4c4-86eb-4af5-865d-932148c2759b | vm2       |          UP | Hello   |
| f69b918f-1ffa-4fe5-b554-ee10f051294e | vm3       |        DOWN | N/A     |
+--------------------------------------+-----------+-------------+---------+
```
If the project is created in $GLUSTER_SRC/tools/message, add "message"
to the SUBDIRS list in $GLUSTER_SRC/tools/Makefile.am, and then create a
Makefile.am in the $GLUSTER_SRC/tools/message directory with the
following content:

```make
EXTRA_DIST = peer_message.py
peertoolsdir = $(libexecdir)/glusterfs/
peertools_SCRIPTS = peer_message.py

install-exec-hook:
	$(mkdir_p) $(DESTDIR)$(bindir)
	rm -f $(DESTDIR)$(bindir)/gluster-message
	ln -s $(libexecdir)/glusterfs/peer_message.py \
		$(DESTDIR)$(bindir)/gluster-message

uninstall-hook:
	rm -f $(DESTDIR)$(bindir)/gluster-message
```
That's all. If packaging is required, add the following files to
glusterfs.spec.in (under the %files section):

```
%{_libexecdir}/glusterfs/peer_message.py*
%{_bindir}/gluster-message
```
Running `gluster-message` without any arguments, or with `--help`, will
show all subcommands, including the node subcommands.