/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

/** \file
 * \ingroup bli
 */

#ifdef WITH_TBB
/* Quiet top level deprecation message, unrelated to API usage here. */
#  if defined(WIN32) && !defined(NOMINMAX)
/* TBB includes Windows.h which will define min/max macros causing issues
 * when we try to use std::min and std::max later on. */
#    define NOMINMAX
#    define TBB_MIN_MAX_CLEANUP
#  endif
#  include <tbb/blocked_range.h>
#  include <tbb/parallel_for.h>
#  include <tbb/parallel_for_each.h>
#  include <tbb/parallel_invoke.h>
#  include <tbb/parallel_reduce.h>
#  include <tbb/task_arena.h>
#  ifdef WIN32
/* We cannot keep this defined, since other parts of the code deal with this on their own, leading
 * to multiple define warnings unless we un-define this; however, we can only undefine this if we
 * were the ones that made the definition earlier. */
#    ifdef TBB_MIN_MAX_CLEANUP
#      undef NOMINMAX
#    endif
#  endif
#endif

#include "BLI_function_ref.hh"
#include "BLI_index_range.hh"
#include "BLI_lazy_threading.hh"
#include "BLI_span.hh"
#include "BLI_utildefines.h"

namespace blender {

/**
 * Wrapper type around an integer to differentiate it from other parameters in a function call.
 */
struct GrainSize {
  int64_t value;

  explicit constexpr GrainSize(const int64_t grain_size) : value(grain_size) {}
};
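
/* Usage sketch (illustrative): the wrapper makes a grain size explicit at the call site and keeps
 * it from being confused with other integer arguments. A hypothetical algorithm taking a grain
 * size could be called as:
 *
 *   some_parallel_algorithm(GrainSize(512), values);
 *
 * instead of with a bare `512` whose meaning would be unclear. `some_parallel_algorithm` and
 * `values` are stand-ins, not actual names in this library. */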

}  // namespace blender

namespace blender::threading {

template<typename Range, typename Function>
inline void parallel_for_each(Range &&range, const Function &function)
{
#ifdef WITH_TBB
  tbb::parallel_for_each(range, function);
#else
  for (auto &value : range) {
    function(value);
  }
#endif
}
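
/* Usage sketch (illustrative; `meshes` and `process_mesh` are stand-ins): process every element
 * of a container, potentially in parallel. Unlike #parallel_for there is no grain size, so each
 * element becomes its own task and the per-element work should be non-trivial.
 *
 *   threading::parallel_for_each(meshes, [](Mesh *mesh) { process_mesh(mesh); });
 */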

namespace detail {
void parallel_for_impl(IndexRange range,
                       int64_t grain_size,
                       FunctionRef<void(IndexRange)> function);
void parallel_for_weighted_impl(IndexRange range,
                                int64_t grain_size,
                                FunctionRef<void(IndexRange)> function,
                                FunctionRef<void(IndexRange, MutableSpan<int64_t>)> task_sizes_fn);
void memory_bandwidth_bound_task_impl(FunctionRef<void()> function);
}  // namespace detail

template<typename Function>
inline void parallel_for(IndexRange range, int64_t grain_size, const Function &function)
{
  if (range.is_empty()) {
    return;
  }
  if (range.size() <= grain_size) {
    function(range);
    return;
  }
  detail::parallel_for_impl(range, grain_size, function);
}
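
/* Usage sketch (illustrative, assuming `BLI_array.hh` is included): fill an array in parallel.
 * Each invocation of the callback receives a contiguous #IndexRange of roughly `grain_size`
 * indices, so the inner loop stays simple and auto-vectorizable.
 *
 *   Array<float> squares(1000000);
 *   threading::parallel_for(squares.index_range(), 4096, [&](const IndexRange sub_range) {
 *     for (const int64_t i : sub_range) {
 *       squares[i] = float(i) * float(i);
 *     }
 *   });
 */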

/**
 * Almost like `parallel_for` but allows passing in a function that estimates the amount of work
 * per index. This allows distributing work to threads more evenly.
 *
 * Using this function makes sense when the work load for each index can differ significantly, so
 * that it is impossible to determine a good constant grain size.
 *
 * This function has a bit more overhead than the unweighted #parallel_for. Whether that is
 * noticeable depends highly on the use-case, so the overhead should be measured when trying to
 * use this function for cases where all tasks may be very small.
 *
 * \param task_size_fn: Gets the task index as input and computes that task's size.
 * \param grain_size: Determines approximately how large a combined task should be. For example,
 * if the grain size is 100, then 5 tasks of size 20 fit into it.
 */
template<typename Function, typename TaskSizeFn>
inline void parallel_for_weighted(IndexRange range,
                                  int64_t grain_size,
                                  const Function &function,
                                  const TaskSizeFn &task_size_fn)
{
  if (range.is_empty()) {
    return;
  }
  detail::parallel_for_weighted_impl(
      range, grain_size, function, [&](const IndexRange sub_range, MutableSpan<int64_t> r_sizes) {
        for (const int64_t i : sub_range.index_range()) {
          const int64_t task_size = task_size_fn(sub_range[i]);
          BLI_assert(task_size >= 0);
          r_sizes[i] = task_size;
        }
      });
}
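
/* Usage sketch (illustrative; `curves_range`, `process_curves` and `points_by_curve` are
 * stand-ins): distribute work over curves whose point counts differ a lot, using the per-curve
 * point count as the task size so that threads receive approximately equal amounts of work.
 *
 *   threading::parallel_for_weighted(
 *       curves_range,
 *       512,
 *       [&](const IndexRange sub_range) { process_curves(sub_range); },
 *       [&](const int64_t curve_i) { return points_by_curve[curve_i].size(); });
 */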

/**
 * Move the sub-range boundaries down to the next aligned index. The "global" begin and end
 * remain fixed though.
 */
inline IndexRange align_sub_range(const IndexRange unaligned_range,
                                  const int64_t alignment,
                                  const IndexRange global_range)
{
  const int64_t global_begin = global_range.start();
  const int64_t global_end = global_range.one_after_last();
  const int64_t alignment_mask = ~(alignment - 1);

  const int64_t unaligned_begin = unaligned_range.start();
  const int64_t unaligned_end = unaligned_range.one_after_last();
  const int64_t aligned_begin = std::max(global_begin, unaligned_begin & alignment_mask);
  const int64_t aligned_end = unaligned_end == global_end ?
                                  unaligned_end :
                                  std::max(global_begin, unaligned_end & alignment_mask);
  const IndexRange aligned_range = IndexRange::from_begin_end(aligned_begin, aligned_end);
  return aligned_range;
}
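
/* Worked example: with `global_range = [0, 100)`, `alignment = 8` (so the mask is ~7) and
 * sub-ranges [0, 13), [13, 37) and [37, 100), both boundaries of each sub-range are rounded down
 * to multiples of 8, except the global begin and end, which stay fixed. The result is [0, 8),
 * [8, 32) and [32, 100): still a partition of the global range, but with interior boundaries on
 * aligned indices. */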

/**
 * Same as #parallel_for but tries to make the sub-range sizes multiples of the given alignment.
 * This can improve performance when the range is processed using vectorized and/or unrolled
 * loops, because the fallback loop that processes remaining values is used less often. A
 * disadvantage of using this instead of #parallel_for is that the size differences between
 * sub-ranges can be larger, which means that work is distributed less evenly.
 */
template<typename Function>
inline void parallel_for_aligned(const IndexRange range,
                                 const int64_t grain_size,
                                 const int64_t alignment,
                                 const Function &function)
{
  parallel_for(range, grain_size, [&](const IndexRange unaligned_range) {
    const IndexRange aligned_range = align_sub_range(unaligned_range, alignment, range);
    function(aligned_range);
  });
}
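
/* Usage sketch (illustrative; `range` is a stand-in): assuming the global range starts at an
 * aligned index, every sub-range except the last has a size that is a multiple of the alignment,
 * so an 8-wide SIMD or unrolled inner loop only needs its scalar tail once, at the very end of
 * the global range.
 *
 *   threading::parallel_for_aligned(range, 2048, 8, [&](const IndexRange sub_range) {
 *     // 8-wide vectorized loop over `sub_range`, with a scalar tail for the remainder.
 *   });
 */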

template<typename Value, typename Function, typename Reduction>
inline Value parallel_reduce(IndexRange range,
                             int64_t grain_size,
                             const Value &identity,
                             const Function &function,
                             const Reduction &reduction)
{
#ifdef WITH_TBB
  if (range.size() >= grain_size) {
    lazy_threading::send_hint();
    return tbb::parallel_reduce(
        tbb::blocked_range<int64_t>(range.first(), range.one_after_last(), grain_size),
        identity,
        [&](const tbb::blocked_range<int64_t> &subrange, const Value &ident) {
          return function(IndexRange(subrange.begin(), subrange.size()), ident);
        },
        reduction);
  }
#else
  UNUSED_VARS(grain_size, reduction);
#endif
  return function(range, identity);
}
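
/* Usage sketch (illustrative; `values` is a stand-in span of floats): a parallel sum. `identity`
 * must be the neutral element of the reduction (0 for addition), because it seeds every task.
 *
 *   const float total = threading::parallel_reduce(
 *       values.index_range(),
 *       4096,
 *       0.0f,
 *       [&](const IndexRange sub_range, float sum) {
 *         for (const int64_t i : sub_range) {
 *           sum += values[i];
 *         }
 *         return sum;
 *       },
 *       [](const float a, const float b) { return a + b; });
 */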

template<typename Value, typename Function, typename Reduction>
inline Value parallel_reduce_aligned(const IndexRange range,
                                     const int64_t grain_size,
                                     const int64_t alignment,
                                     const Value &identity,
                                     const Function &function,
                                     const Reduction &reduction)
{
  return parallel_reduce(
      range,
      grain_size,
      identity,
      [&](const IndexRange unaligned_range, const Value &ident) {
        const IndexRange aligned_range = align_sub_range(unaligned_range, alignment, range);
        return function(aligned_range, ident);
      },
      reduction);
}

/**
 * Execute all of the provided functions. The functions might be executed in parallel or in serial
 * or some combination of both.
 */
template<typename... Functions> inline void parallel_invoke(Functions &&...functions)
{
#ifdef WITH_TBB
  tbb::parallel_invoke(std::forward<Functions>(functions)...);
#else
  (functions(), ...);
#endif
}
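
/* Usage sketch (illustrative; `compute_a` and `compute_b` are stand-ins): run two independent
 * computations, possibly concurrently. The functions must not rely on each other's side effects,
 * since the execution order is unspecified.
 *
 *   threading::parallel_invoke([&]() { result_a = compute_a(); },
 *                              [&]() { result_b = compute_b(); });
 */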

/**
 * Same as #parallel_invoke, but allows disabling threading dynamically. This is useful because,
 * when the individual functions do very little work, there is a lot of overhead from starting
 * parallel tasks.
 */
template<typename... Functions>
inline void parallel_invoke(const bool use_threading, Functions &&...functions)
{
  if (use_threading) {
    lazy_threading::send_hint();
    parallel_invoke(std::forward<Functions>(functions)...);
  }
  else {
    (functions(), ...);
  }
}

/** See #BLI_task_isolate for a description of what isolating a task means. */
template<typename Function> inline void isolate_task(const Function &function)
{
#ifdef WITH_TBB
  lazy_threading::ReceiverIsolation isolation;
  tbb::this_task_arena::isolate(function);
#else
  function();
#endif
}
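
/* Usage sketch (illustrative; `mutex`, `range` and `do_work` are stand-ins): isolation matters
 * when nested parallel work is spawned while a lock is held. Without it, the waiting thread may
 * pick up an unrelated task that tries to acquire the same lock, which can deadlock.
 *
 *   std::lock_guard lock{mutex};
 *   threading::isolate_task([&]() { threading::parallel_for(range, 512, do_work); });
 */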

/**
 * Should surround parallel code that is highly bandwidth intensive, e.g. code that just fills a
 * buffer with no or only a few additional operations. If the buffers are large, it's beneficial
 * to limit the number of threads doing the work, because using more threads just creates more
 * overhead at the hardware level and doesn't provide a notable performance benefit beyond a
 * certain point.
 */
template<typename Function>
inline void memory_bandwidth_bound_task(const int64_t approximate_bytes_touched,
                                        const Function &function)
{
  /* Don't limit threading when all touched memory can stay in the CPU cache, because much higher
   * memory bandwidth is available there compared to accessing RAM. This value is supposed to be
   * on the order of the L3 cache size. Accessing that value is not quite straightforward and
   * even if it was, it's not clear if using the exact cache size would be beneficial, because
   * there is often other work going on on the CPU at the same time. */
  if (approximate_bytes_touched <= 8 * 1024 * 1024) {
    function();
    return;
  }
  detail::memory_bandwidth_bound_task_impl(function);
}
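
/* Usage sketch (illustrative; `src` and `dst` are stand-in spans of equal size): wrap a large
 * copy, passing an estimate of the total bytes read and written so the implementation can decide
 * whether to restrict the number of threads.
 *
 *   threading::memory_bandwidth_bound_task(2 * src.size_in_bytes(), [&]() {
 *     threading::parallel_for(src.index_range(), 4096, [&](const IndexRange sub_range) {
 *       dst.slice(sub_range).copy_from(src.slice(sub_range));
 *     });
 *   });
 */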

}  // namespace blender::threading