docs/classtf_1_1Runtime.html
class to create a runtime task
A runtime object provides an interface for interacting with the scheduling system from within a task (i.e., the parent task of this runtime). It enables operations such as spawning asynchronous tasks, executing tasks cooperatively, and implementing recursive parallelism. The runtime guarantees an implicit join at the end of its scope, so all spawned tasks will finish before the parent runtime task continues to its successors.
```cpp
tf::Executor executor(num_threads);
tf::Taskflow taskflow;

std::atomic<size_t> counter(0);

tf::Task A = taskflow.emplace([&](tf::Runtime& rt){
  // spawn 1000 asynchronous tasks from this runtime task
  for(size_t i=0; i<1000; i++) {
    rt.silent_async([&](){ counter.fetch_add(1, std::memory_order_relaxed); });
  }
  // implicit synchronization at the end of the runtime scope
});

tf::Task B = taskflow.emplace([&](){
  assert(counter.load(std::memory_order_relaxed) == 1000);
});

A.precede(B);
executor.run(taskflow).wait();
```
Member functions:

- `Executor& executor()`: obtains the running executor
- `Worker& worker()`: acquires a reference to the underlying worker
- `void schedule(Task task)`: schedules an active task immediately to the worker's queue
- `template<typename F> auto async(F&& f)`: runs the given callable asynchronously
- `template<typename P, typename F> auto async(P&& params, F&& f)`: runs the given callable asynchronously
- `template<typename F> void silent_async(F&& f)`: runs the given function asynchronously without returning any future object
- `template<typename P, typename F> void silent_async(P&& params, F&& f)`: runs the given function asynchronously without returning any future object
- `template<typename F, typename... Tasks> auto dependent_async(F&& func, Tasks&&... tasks)`: runs the given function asynchronously when the given predecessors finish
- `template<typename P, typename F, typename... Tasks> auto dependent_async(P&& params, F&& func, Tasks&&... tasks)`: runs the given function asynchronously when the given predecessors finish
- `template<typename F, typename I> auto dependent_async(F&& func, I first, I last)`: runs the given function asynchronously when the given range of predecessors finish
- `template<typename P, typename F, typename I> auto dependent_async(P&& params, F&& func, I first, I last)`: runs the given function asynchronously when the given range of predecessors finish
- `template<typename F, typename... Tasks> tf::AsyncTask silent_dependent_async(F&& func, Tasks&&... tasks)`: runs the given function asynchronously when the given predecessors finish
- `template<typename P, typename F, typename... Tasks> tf::AsyncTask silent_dependent_async(P&& params, F&& func, Tasks&&... tasks)`: runs the given function asynchronously when the given predecessors finish
- `template<typename F, typename I> tf::AsyncTask silent_dependent_async(F&& func, I first, I last)`: runs the given function asynchronously when the given range of predecessors finish
- `template<typename P, typename F, typename I> tf::AsyncTask silent_dependent_async(P&& params, F&& func, I first, I last)`: runs the given function asynchronously when the given range of predecessors finish
- `void corun()`: coruns all tasks spawned by this runtime with other workers
- `void corun_all()`: equivalent to tf::Runtime::corun; kept as an alias for legacy purposes
- `bool is_cancelled()`: queries whether this runtime task has been cancelled

(The overloads taking a pack of `tf::AsyncTask` handles versus an iterator range `[first, last)` are disambiguated via `std::enable_if_t` constraints, omitted above for brevity.)
obtains the running executor
The running executor of a runtime task is the executor that runs the parent taskflow of that runtime task.
```cpp
tf::Executor executor;
tf::Taskflow taskflow;

taskflow.emplace([&](tf::Runtime& rt){
  assert(&(rt.executor()) == &executor);
});

executor.run(taskflow).wait();
```
schedules an active task immediately to the worker's queue
| Parameters | |
|---|---|
| task | the active task to schedule |
This member function immediately schedules an active task to the task queue of the associated worker in the runtime task. An active task is a task in a running taskflow. The task may or may not be running, and scheduling that task will immediately put the task into the task queue of the worker that is running the runtime task. Consider the following example:
```cpp
tf::Task A, B, C, D;
std::tie(A, B, C, D) = taskflow.emplace(
  [] () { return 0; },
  [&C] (tf::Runtime& rt) {  // C must be captured by reference
    std::cout << "B\n";
    rt.schedule(C);
  },
  [] () { std::cout << "C\n"; },
  [] () { std::cout << "D\n"; }
);
A.precede(B, C, D);
executor.run(taskflow).wait();
```
The executor will first run the condition task A which returns 0 to inform the scheduler to go to the runtime task B. During the execution of B, it directly schedules task C without going through the normal taskflow graph scheduling process. At this moment, task C is active because its parent taskflow is running. When the taskflow finishes, we will see both B and C in the output.
runs the given callable asynchronously
| Template parameters | |
|---|---|
| F | callable type |

| Parameters | |
|---|---|
| f | callable object to run asynchronously |
This method creates an asynchronous task that executes the given function with the specified arguments. Unlike tf::Executor::async, the task created here is parented to the runtime object and is implicitly synchronized at the end of the runtime's scope. Applications may also call tf::Runtime::corun explicitly to wait for all asynchronous tasks spawned from the runtime to complete. For example:
```cpp
std::atomic<int> counter(0);

taskflow.emplace([&](tf::Runtime& rt){
  auto fu1 = rt.async([&](){ counter++; });
  auto fu2 = rt.async([&](){ counter++; });
  fu1.get();
  fu2.get();
  assert(counter == 2);

  // spawn 100 asynchronous tasks from the worker of the runtime
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }

  // corun until the 100 asynchronous tasks have completed
  rt.corun();
  assert(counter == 102);

  // do something else afterwards ...
});
```
runs the given callable asynchronously
| Template parameters | |
|---|---|
| P | task parameter type (e.g., tf::TaskParams) |
| F | callable type |

| Parameters | |
|---|---|
| params | parameters of the task |
| f | callable object to run asynchronously |
Similar to tf::Runtime::async, but takes a parameter of type tf::TaskParams to initialize the asynchronous task.
```cpp
taskflow.emplace([&](tf::Runtime& rt){
  auto future = rt.async("my task", [](){ return 10; });
  assert(future.get() == 10);
});
```
runs the given function asynchronously without returning any future object
| Template parameters | |
|---|---|
| F | callable type |

| Parameters | |
|---|---|
| f | callable object to run asynchronously |
This function is more efficient than tf::Runtime::async and is recommended when the result of the asynchronous task does not need to be accessed via a std::future.
```cpp
std::atomic<int> counter(0);

taskflow.emplace([&](tf::Runtime& rt){
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun();
  assert(counter == 100);
});
```
runs the given function asynchronously without returning any future object
| Template parameters | |
|---|---|
| P | task parameter type (e.g., tf::TaskParams) |
| F | callable type |

| Parameters | |
|---|---|
| params | parameters of the task |
| f | callable object to run asynchronously |
Similar to tf::Runtime::silent_async, but takes a parameter of type tf::TaskParams to initialize the created asynchronous task.
```cpp
taskflow.emplace([&](tf::Runtime& rt){
  rt.silent_async("my task", [](){});
});
```
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| Tasks | task types, each decaying to tf::AsyncTask |

| Parameters | |
|---|---|
| func | callable object to run asynchronously |
| tasks | asynchronous tasks on which the function depends |

Returns: a pair of a tf::AsyncTask handle and a std::future that holds the result of the execution.
The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually will hold the result of the execution.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async([](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async([](){ printf("B\n"); });
  auto [C, fuC] = rt.dependent_async(
    [](){ printf("C runs after A and B\n"); return 1; }, A, B
  );
  fuC.get();  // C finishes, which in turn means both A and B have finished
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.run(taskflow).wait();
```
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| P | task parameter type (e.g., tf::TaskParams) |
| F | callable type |
| Tasks | task types, each decaying to tf::AsyncTask |

| Parameters | |
|---|---|
| params | parameters of the task |
| func | callable object to run asynchronously |
| tasks | asynchronous tasks on which the function depends |

Returns: a pair of a tf::AsyncTask handle and a std::future that holds the result of the execution.
The example below creates three named asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually will hold the result of the execution. Assigned task names will appear in the observers of the executor.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async("A", [](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async("B", [](){ printf("B\n"); });
  auto [C, fuC] = rt.dependent_async(
    "C", [](){ printf("C runs after A and B\n"); return 1; }, A, B
  );
  assert(fuC.get() == 1);  // C finishes, which in turn means both A and B have finished
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.run(taskflow).wait();
```
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| I | iterator type to a range of tf::AsyncTask objects |

| Parameters | |
|---|---|
| func | callable object to run asynchronously |
| first | iterator to the beginning of the predecessor range |
| last | iterator to the end of the predecessor range |

Returns: a pair of a tf::AsyncTask handle and a std::future that holds the result of the execution.
The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually will hold the result of the execution.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async([](){ printf("A\n"); }),
    rt.silent_dependent_async([](){ printf("B\n"); })
  };
  auto [C, fuC] = rt.dependent_async(
    [](){ printf("C runs after A and B\n"); return 1; },
    array.begin(), array.end()
  );
  assert(fuC.get() == 1);  // C finishes, which in turn means both A and B have finished
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.run(taskflow).wait();
```
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| P | task parameter type (e.g., tf::TaskParams) |
| F | callable type |
| I | iterator type to a range of tf::AsyncTask objects |

| Parameters | |
|---|---|
| params | parameters of the task |
| func | callable object to run asynchronously |
| first | iterator to the beginning of the predecessor range |
| last | iterator to the end of the predecessor range |

Returns: a pair of a tf::AsyncTask handle and a std::future that holds the result of the execution.
The example below creates three named asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Task C returns a pair of its tf::AsyncTask handle and a std::future<int> that eventually will hold the result of the execution. Assigned task names will appear in the observers of the executor.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async("A", [](){ printf("A\n"); }),
    rt.silent_dependent_async("B", [](){ printf("B\n"); })
  };
  auto [C, fuC] = rt.dependent_async(
    "C", [](){ printf("C runs after A and B\n"); return 1; },
    array.begin(), array.end()
  );
  assert(fuC.get() == 1);  // C finishes, which in turn means both A and B have finished
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.run(taskflow).wait();
```
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| Tasks | task types, each decaying to tf::AsyncTask |

| Parameters | |
|---|---|
| func | callable object to run asynchronously |
| tasks | asynchronous tasks on which the function depends |

Returns: a tf::AsyncTask handle to the created asynchronous task.
This member function is more efficient than tf::Runtime::dependent_async and is recommended when you do not need a std::future to acquire the result or synchronize the execution. The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async([](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async([](){ printf("B\n"); });
  rt.silent_dependent_async([](){ printf("C runs after A and B\n"); }, A, B);
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.wait_for_all();
```
runs the given function asynchronously when the given predecessors finish
| Template parameters | |
|---|---|
| P | task parameter type (e.g., tf::TaskParams) |
| F | callable type |
| Tasks | task types, each decaying to tf::AsyncTask |

| Parameters | |
|---|---|
| params | parameters of the task |
| func | callable object to run asynchronously |
| tasks | asynchronous tasks on which the function depends |

Returns: a tf::AsyncTask handle to the created asynchronous task.
This member function is more efficient than tf::Runtime::dependent_async and is recommended when you do not need a std::future to acquire the result or synchronize the execution. The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Assigned task names will appear in the observers of the executor.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  tf::AsyncTask A = rt.silent_dependent_async("A", [](){ printf("A\n"); });
  tf::AsyncTask B = rt.silent_dependent_async("B", [](){ printf("B\n"); });
  rt.silent_dependent_async("C", [](){ printf("C runs after A and B\n"); }, A, B);
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.wait_for_all();
```
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| F | callable type |
| I | iterator type to a range of tf::AsyncTask objects |

| Parameters | |
|---|---|
| func | callable object to run asynchronously |
| first | iterator to the beginning of the predecessor range |
| last | iterator to the end of the predecessor range |

Returns: a tf::AsyncTask handle to the created asynchronous task.
This member function is more efficient than tf::Runtime::dependent_async and is recommended when you do not need a std::future to acquire the result or synchronize the execution. The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B.
```cpp
taskflow.emplace([&](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async([](){ printf("A\n"); }),
    rt.silent_dependent_async([](){ printf("B\n"); })
  };
  rt.silent_dependent_async(
    [](){ printf("C runs after A and B\n"); }, array.begin(), array.end()
  );
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.wait_for_all();
```
runs the given function asynchronously when the given range of predecessors finish
| Template parameters | |
|---|---|
| P | task parameter type (e.g., tf::TaskParams) |
| F | callable type |
| I | iterator type to a range of tf::AsyncTask objects |

| Parameters | |
|---|---|
| params | parameters of the task |
| func | callable object to run asynchronously |
| first | iterator to the beginning of the predecessor range |
| last | iterator to the end of the predecessor range |

Returns: a tf::AsyncTask handle to the created asynchronous task.
This member function is more efficient than tf::Runtime::dependent_async and is recommended when you do not need a std::future to acquire the result or synchronize the execution. The example below creates three asynchronous tasks, A, B, and C, in which task C runs after task A and task B. Assigned task names will appear in the observers of the executor.
```cpp
taskflow.emplace([](tf::Runtime& rt){
  std::array<tf::AsyncTask, 2> array {
    rt.silent_dependent_async("A", [](){ printf("A\n"); }),
    rt.silent_dependent_async("B", [](){ printf("B\n"); })
  };
  rt.silent_dependent_async(
    "C", [](){ printf("C runs after A and B\n"); }, array.begin(), array.end()
  );
});
// implicit synchronization of all tasks at the end of the runtime's scope
executor.run(taskflow).wait();
```
corun all tasks spawned by this runtime with other workers
Coruns all tasks spawned by this runtime cooperatively with other workers in the same executor until all these tasks finish. Under cooperative execution, a worker is not preempted. Instead, it continues participating in the work-stealing loop, executing available tasks alongside other workers.
```cpp
std::atomic<size_t> counter{0};

taskflow.emplace([&](tf::Runtime& rt){
  // spawn 100 async tasks and wait
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun();
  assert(counter == 100);

  // spawn another 100 async tasks and wait
  for(int i=0; i<100; i++) {
    rt.silent_async([&](){ counter++; });
  }
  rt.corun();
  assert(counter == 200);
});
```
Only the parent worker of this runtime is allowed to call corun.