Taskflow provides template methods that let users create reusable building blocks called modules. Users can connect modules together to build more complex parallel algorithms.
You need to include the header file, taskflow/algorithm/module.hpp, for creating a module task over a schedulable graph target.
#include <taskflow/algorithm/module.hpp>
Similar to Composable Tasking, but in a more general setting, the template function tf::make_module_task allows you to create a task over a Taskflow graph that can be executed by an executor. This provides a flexible mechanism to encapsulate and reuse complex task logic within your Taskflow applications. The following example demonstrates how to create and launch multiple Taskflow graphs in parallel using asynchronous tasking:
#include <taskflow/taskflow.hpp>
#include <taskflow/algorithm/module.hpp>

int main() {

  tf::Executor executor;

  tf::Taskflow A;
  tf::Taskflow B;
  tf::Taskflow C;
  tf::Taskflow D;

  A.emplace([](){ printf("Taskflow A\n"); });
  B.emplace([](){ printf("Taskflow B\n"); });
  C.emplace([](){ printf("Taskflow C\n"); });
  D.emplace([](){ printf("Taskflow D\n"); });

  // launch the four taskflows using asynchronous tasking
  executor.async(tf::make_module_task(A));
  executor.async(tf::make_module_task(B));
  executor.async(tf::make_module_task(C));
  executor.async(tf::make_module_task(D));
  executor.wait_for_all();

  return 0;
}
Since the four taskflows are launched asynchronously with no dependencies among them, their output messages may appear in any order:
# one possible output
Taskflow B
Taskflow C
Taskflow A
Taskflow D

# another possible output
Taskflow D
Taskflow A
Taskflow B
Taskflow C
If you need to enforce dependencies among these four taskflows, you can use dependent-async tasks. The example below launches the four taskflows sequentially, one after another:
tf::Executor executor;

tf::Taskflow A;
tf::Taskflow B;
tf::Taskflow C;
tf::Taskflow D;

A.emplace([](){ printf("Taskflow A\n"); });
B.emplace([](){ printf("Taskflow B\n"); });
C.emplace([](){ printf("Taskflow C\n"); });
D.emplace([](){ printf("Taskflow D\n"); });

auto TA = executor.silent_dependent_async(tf::make_module_task(A));
auto TB = executor.silent_dependent_async(tf::make_module_task(B), TA);
auto TC = executor.silent_dependent_async(tf::make_module_task(C), TB);
auto [TD, FD] = executor.dependent_async(tf::make_module_task(D), TC);

FD.get();
# dependent-async tasks enforce a sequential execution of the four taskflows
Taskflow A
Taskflow B
Taskflow C
Taskflow D
The module task maker, tf::make_module_task, operates similarly to tf::Taskflow::composed_of, but provides a more general interface that can be used beyond Taskflow. Specifically, the following two approaches achieve equivalent functionality:
// approach 1: composition using composed_of
tf::Task m1 = taskflow1.composed_of(taskflow2);

// approach 2: composition using make_module_task
tf::Task m1 = taskflow1.emplace(tf::make_module_task(taskflow2));
In addition to encapsulating taskflow graphs, you can create a module task to schedule a custom graph target. A schedulable target (of type T) must define the method T::graph() that returns a reference to the tf::Graph object managed by T. The following example defines a custom graph that can be scheduled through making module tasks:
struct CustomGraph {

  tf::Graph _graph;

  CustomGraph() {
    // use a flow builder to inherit all task creation methods in tf::Taskflow
    tf::FlowBuilder builder(_graph);
    // create a static task
    tf::Task task = builder.emplace([](){
      std::cout << "a task\n";
    });
  }

  // returns a reference to the graph for taskflow composition
  tf::Graph& graph() { return _graph; }
};

CustomGraph target;
executor.async(tf::make_module_task(target));