Goals of this refactor:

* Reduce memory consumption of `IndexMask`. The old `IndexMask` uses an `int64_t` for each index, which is more than necessary in pretty much all practical cases currently. Using `int32_t` might still become limiting in the future in case we use this to index e.g. byte buffers larger than a few gigabytes. We also don't want to template `IndexMask`, because that would cause a split in the "ecosystem", or everything would have to be implemented twice or templated.
* Allow for more multi-threading. The old `IndexMask` contains a single array. This is generally good but makes it hard to fill from multiple threads when the final size is not known from the beginning. This is commonly the case when e.g. converting an array of bool to an index mask. Previously, this kind of code could only run on a single thread.
* Allow for efficient set operations like join, intersect and difference. It should be possible to multi-thread those operations.
* Allow for very efficient iteration over an `IndexMask`. The most important part of that is to avoid all memory access when iterating over contiguous ranges. For some core nodes (e.g. math nodes), we generate optimized code for the cases of irregular index masks and simple index ranges.

To achieve these goals, a few compromises had to be made:

* Slicing of the mask (at specific indices) and random element access is `O(log #indices)` now, but with a low constant factor. It should be possible to split a mask into n approximately equally sized parts in `O(n)` though, making the time per split `O(1)`.
* Range-based for loops do not work well when iterating over a nested data structure like the new `IndexMask`. Therefore, `foreach_*` functions with callbacks have to be used. To avoid extra code complexity at the call site, the `foreach_*` methods support multi-threading out of the box.

The new data structure splits an `IndexMask` into an arbitrary number of ordered segments (`IndexMaskSegment`). Each segment can contain at most `2^14 = 16384` indices. The indices within a segment are stored as `int16_t`. Each segment has an additional `int64_t` offset, which allows storing arbitrary `int64_t` indices. The main benefit of this approach is that segments can be processed/constructed individually on multiple threads without a serial bottleneck. It also reduces the memory requirements significantly. For more details see the comments in `BLI_index_mask.hh`; a conceptual sketch of the layout is shown after the benchmark numbers below.

I did a few tests to verify that the data structure generally improves performance and does not cause regressions:

* Our field evaluation benchmarks take about as long as before. This is to be expected, because we already made sure that e.g. add-node evaluation is vectorized. The important check here is that the new way of iterating over the indices still allows for auto-vectorization.
* Memory usage by a mask is about 1/4 of what it was before in the average case. That's mainly caused by the switch from `int64_t` to `int16_t` for indices. In the worst case, the memory requirements can be larger when the indices are very spread out, but widely spread indices usually imply that there aren't many indices in total. In common cases, memory usage can be far below 1/4 of before, because contiguous sub-ranges use static memory.
* For some more specific numbers, I benchmarked `IndexMask::from_bools` in `index_mask_from_selection` on 10,000,000 elements at various probabilities for `true` at every index:

```
Probability     Old        New
0                4.6 ms    0.8 ms
0.001            5.1 ms    1.3 ms
0.2              8.4 ms    1.8 ms
0.5             15.3 ms    3.0 ms
0.8             20.1 ms    3.0 ms
0.999           25.1 ms    1.7 ms
1               13.5 ms    1.1 ms
```

Pull Request: https://projects.blender.org/blender/blender/pulls/104629
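The segmented layout described above can be pictured roughly like this. This is a simplified sketch for illustration only; the real definitions live in `BLI_index_mask.hh` and all names here are made up:

```cpp
#include <cstdint>

/* Illustrative sketch, not the actual Blender types. */
struct IndexMaskSegmentSketch {
  /* Offset added to each stored index; this is how arbitrary `int64_t`
   * indices are represented even though only 16 bits are stored per index. */
  int64_t offset;
  /* Pointer to at most 2^14 = 16384 sorted local indices in [0, 2^14). */
  const int16_t *indices;
  int64_t indices_num;
};

/* The global index at position i of a segment is `offset + indices[i]`.
 * A mask is an ordered sequence of such segments, so threads can construct
 * or process different segments independently, without a serial bottleneck.
 * Segments that cover a contiguous range can all point into one static array
 * of 0, 1, 2, ..., which is why sub-ranges add no per-mask memory. */
```

And a hedged sketch of the benchmarked conversion itself (the wrapper function is illustrative; the `from_bools`/`IndexMaskMemory` signatures may differ slightly between Blender versions):

```cpp
#include "BLI_index_mask.hh"

namespace blender {

/* Build a mask containing the index of every `true` in a selection array.
 * `IndexMaskMemory` owns allocations made for the mask. Each segment of the
 * result can be computed on a different thread. */
static IndexMask selection_to_mask(const Span<bool> selection, IndexMaskMemory &memory)
{
  return IndexMask::from_bools(selection, memory);
}

}  // namespace blender
```

The listing below shows Blender's threading utilities header (`BLI_task.hh`), which provides the parallelism primitives used by code like the above.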
```cpp
/* SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

/** \file
 * \ingroup bli
 */

#ifdef WITH_TBB
/* Quiet top level deprecation message, unrelated to API usage here. */
#  if defined(WIN32) && !defined(NOMINMAX)
/* TBB includes Windows.h which will define min/max macros causing issues
 * when we try to use std::min and std::max later on. */
#    define NOMINMAX
#    define TBB_MIN_MAX_CLEANUP
#  endif
#  include <tbb/blocked_range.h>
#  include <tbb/parallel_for.h>
#  include <tbb/parallel_for_each.h>
#  include <tbb/parallel_invoke.h>
#  include <tbb/parallel_reduce.h>
#  include <tbb/task_arena.h>
#  ifdef WIN32
/* We cannot keep this defined, since other parts of the code deal with this on their own, leading
 * to multiple define warnings unless we un-define this, however we can only undefine this if we
 * were the ones that made the definition earlier. */
#    ifdef TBB_MIN_MAX_CLEANUP
#      undef NOMINMAX
#    endif
#  endif
#endif

#include <algorithm> /* For std::max below. */
#include <utility>   /* For std::forward below. */

#include "BLI_function_ref.hh"
#include "BLI_index_range.hh"
#include "BLI_lazy_threading.hh"
#include "BLI_utildefines.h"

namespace blender {

/**
 * Wrapper type around an integer to differentiate it from other parameters in a function call.
 */
struct GrainSize {
  int64_t value;

  explicit constexpr GrainSize(const int64_t grain_size) : value(grain_size) {}
};

}  // namespace blender
```
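A hedged example of `GrainSize` at a call site; the `foreach_index` overload is assumed from the new `IndexMask` API, and `process_element` is a placeholder:

```cpp
/* The explicit wrapper keeps the grain size from being mistaken for any
 * other integer argument at the call site. */
mask.foreach_index(GrainSize(2048), [&](const int64_t i) {
  process_element(i); /* May run multi-threaded across index segments. */
});
```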
```cpp
namespace blender::threading {

template<typename Range, typename Function>
inline void parallel_for_each(Range &&range, const Function &function)
{
#ifdef WITH_TBB
  tbb::parallel_for_each(range, function);
#else
  for (auto &value : range) {
    function(value);
  }
#endif
}

namespace detail {
void parallel_for_impl(IndexRange range,
                       int64_t grain_size,
                       FunctionRef<void(IndexRange)> function);
}  // namespace detail

template<typename Function>
inline void parallel_for(IndexRange range, int64_t grain_size, const Function &function)
{
  if (range.is_empty()) {
    return;
  }
  /* Don't start parallel tasks when the whole range fits into a single grain. */
  if (range.size() <= grain_size) {
    function(range);
    return;
  }
  detail::parallel_for_impl(range, grain_size, function);
}
```
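A usage sketch for the `parallel_for` declared above (using `blender::Array` for the buffer; the fill computation is illustrative):

```cpp
#include "BLI_array.hh"
#include "BLI_task.hh"

using namespace blender;

void fill_squares(Array<float> &values)
{
  /* Each lambda invocation receives a contiguous sub-range whose size is on
   * the order of the grain size; small inputs are processed serially. */
  threading::parallel_for(values.index_range(), 4096, [&](const IndexRange sub_range) {
    for (const int64_t i : sub_range) {
      values[i] = float(i) * float(i);
    }
  });
}
```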
```cpp
/**
 * Move the sub-range boundaries down to the next aligned index. The "global" begin and end
 * remain fixed though.
 */
inline IndexRange align_sub_range(const IndexRange unaligned_range,
                                  const int64_t alignment,
                                  const IndexRange global_range)
{
  const int64_t global_begin = global_range.start();
  const int64_t global_end = global_range.one_after_last();
  const int64_t alignment_mask = ~(alignment - 1);

  const int64_t unaligned_begin = unaligned_range.start();
  const int64_t unaligned_end = unaligned_range.one_after_last();
  const int64_t aligned_begin = std::max(global_begin, unaligned_begin & alignment_mask);
  const int64_t aligned_end = unaligned_end == global_end ?
                                  unaligned_end :
                                  std::max(global_begin, unaligned_end & alignment_mask);
  const IndexRange aligned_range{aligned_begin, aligned_end - aligned_begin};
  return aligned_range;
}
```
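To make the bit arithmetic concrete, a worked example (note that the mask trick `~(alignment - 1)` assumes `alignment` is a power of two):

```cpp
/* global_range = [0, 1000), alignment = 8:
 *
 * align_sub_range(IndexRange(130, 130), 8, IndexRange(1000))
 *   begin: max(0, 130 & ~7) = 128
 *   end:   260 != 1000, so max(0, 260 & ~7) = 256
 *   -> [128, 256)
 *
 * align_sub_range(IndexRange(900, 100), 8, IndexRange(1000))
 *   begin: max(0, 900 & ~7) = 896
 *   end:   1000 == global end, so it stays 1000 (never rounded down)
 *   -> [896, 1000)
 */
```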
```cpp
/**
 * Same as #parallel_for but tries to make the sub-range sizes multiples of the given alignment.
 * This can improve performance when the range is processed using vectorized and/or unrolled loops,
 * because the fallback loop that processes remaining values is used less often. A disadvantage of
 * using this instead of #parallel_for is that the size differences between sub-ranges can be
 * larger, which means that work is distributed less evenly.
 */
template<typename Function>
inline void parallel_for_aligned(const IndexRange range,
                                 const int64_t grain_size,
                                 const int64_t alignment,
                                 const Function &function)
{
  parallel_for(range, grain_size, [&](const IndexRange unaligned_range) {
    const IndexRange aligned_range = align_sub_range(unaligned_range, alignment, range);
    function(aligned_range);
  });
}

template<typename Value, typename Function, typename Reduction>
inline Value parallel_reduce(IndexRange range,
                             int64_t grain_size,
                             const Value &identity,
                             const Function &function,
                             const Reduction &reduction)
{
#ifdef WITH_TBB
  if (range.size() >= grain_size) {
    lazy_threading::send_hint();
    return tbb::parallel_reduce(
        tbb::blocked_range<int64_t>(range.first(), range.one_after_last(), grain_size),
        identity,
        [&](const tbb::blocked_range<int64_t> &subrange, const Value &ident) {
          return function(IndexRange(subrange.begin(), subrange.size()), ident);
        },
        reduction);
  }
#else
  UNUSED_VARS(grain_size, reduction);
#endif
  return function(range, identity);
}
```
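A usage sketch for `parallel_reduce` as declared above, summing an array (the values are illustrative):

```cpp
#include "BLI_array.hh"
#include "BLI_task.hh"

using namespace blender;

float sum_values(const Array<float> &values)
{
  /* `function` folds a sub-range into a thread-local partial sum starting
   * from the identity; `reduction` combines partial sums from different
   * threads. */
  return threading::parallel_reduce(
      values.index_range(),
      4096,
      0.0f,
      [&](const IndexRange sub_range, float partial_sum) {
        for (const int64_t i : sub_range) {
          partial_sum += values[i];
        }
        return partial_sum;
      },
      [](const float a, const float b) { return a + b; });
}
```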
```cpp
template<typename Value, typename Function, typename Reduction>
inline Value parallel_reduce_aligned(const IndexRange range,
                                     const int64_t grain_size,
                                     const int64_t alignment,
                                     const Value &identity,
                                     const Function &function,
                                     const Reduction &reduction)
{
  return parallel_reduce(
      range,
      grain_size,
      identity,
      [&](const IndexRange unaligned_range, const Value &ident) {
        const IndexRange aligned_range = align_sub_range(unaligned_range, alignment, range);
        return function(aligned_range, ident);
      },
      reduction);
}

/**
 * Execute all of the provided functions. The functions might be executed in parallel or in serial
 * or some combination of both.
 */
template<typename... Functions> inline void parallel_invoke(Functions &&...functions)
{
#ifdef WITH_TBB
  tbb::parallel_invoke(std::forward<Functions>(functions)...);
#else
  (functions(), ...);
#endif
}
```
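A short sketch of `parallel_invoke`; the two computations are placeholders:

```cpp
#include "BLI_task.hh"

using namespace blender;

extern int compute_left_sum();  /* Placeholder. */
extern int compute_right_sum(); /* Placeholder. */

void compute_both(int &left_sum, int &right_sum)
{
  /* With TBB the two lambdas may run concurrently; without it they run
   * one after the other in order. */
  threading::parallel_invoke([&]() { left_sum = compute_left_sum(); },
                             [&]() { right_sum = compute_right_sum(); });
}
```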
```cpp
/**
 * Same as #parallel_invoke, but allows disabling threading dynamically. This is useful because
 * when the individual functions do very little work, there is a lot of overhead from starting
 * parallel tasks.
 */
template<typename... Functions>
inline void parallel_invoke(const bool use_threading, Functions &&...functions)
{
  if (use_threading) {
    lazy_threading::send_hint();
    parallel_invoke(std::forward<Functions>(functions)...);
  }
  else {
    (functions(), ...);
  }
}

/** See #BLI_task_isolate for a description of what isolating a task means. */
template<typename Function> inline void isolate_task(const Function &function)
{
#ifdef WITH_TBB
  lazy_threading::ReceiverIsolation isolation;
  tbb::this_task_arena::isolate(function);
#else
  function();
#endif
}

}  // namespace blender::threading
```
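Finally, the classic reason to isolate a task: if a thread holds a lock and then waits inside a parallel loop, work-stealing could otherwise let it pick up an unrelated task that tries to acquire the same lock, which can deadlock. A hedged sketch, where `mutex` and `compute_chunk` are placeholders:

```cpp
#include <mutex>

#include "BLI_task.hh"

using namespace blender;

static std::mutex mutex;                         /* Placeholder shared lock. */
extern void compute_chunk(IndexRange sub_range); /* Placeholder work. */

void locked_parallel_work(const int64_t size)
{
  std::lock_guard lock{mutex};
  threading::isolate_task([&]() {
    /* While waiting for the loop below, this thread cannot steal unrelated
     * tasks from outside the isolated region that might lock `mutex` again. */
    threading::parallel_for(IndexRange(size), 512, [&](const IndexRange sub_range) {
      compute_chunk(sub_range);
    });
  });
}
```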