/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

#include <tuple>

/** \file
 * \ingroup bli
 *
 * In geometry nodes, many functions accept fields as inputs. For the implementation that means
 * that the inputs are virtual arrays. Usually those are backed by actual arrays or single values,
 * but sometimes virtual arrays are used to compute values on demand or to convert between data
 * formats.
 *
 * Using virtual arrays has the downside that individual elements are accessed through a virtual
 * method call, which has some overhead compared to normal array access. Whether this overhead is
 * negligible depends on the context. For very small functions (e.g. a single addition), the
 * overhead can make the function many times slower. Furthermore, it prevents the compiler from
 * doing some optimizations (e.g. loop unrolling and inserting SIMD instructions).
 *
 * The solution is to "devirtualize" the virtual arrays in cases when the overhead cannot be
 * ignored. That means that the function is instantiated multiple times at compile time for the
 * different cases. For example, there can be an optimized function that adds a span and a single
 * value, and another function that adds a span and another span. At run-time, a dynamic dispatch
 * executes the best function given the specific virtual arrays.
 *
 * The problem with this devirtualization is that it can result in exponentially increasing
 * compile times and binary sizes, depending on the number of parameters that are devirtualized
 * separately. So there is always a trade-off between run-time performance and
 * compile-time/binary-size.
 *
 * This file provides a utility to devirtualize function parameters using a high-level API. This
 * makes it easy to experiment with different extremes of the mentioned trade-off and allows
 * finding a good compromise for each function.
 */
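As a toy illustration of the pattern described above (hypothetical `ToyInput` type and `add_kernel`/`add_dispatch` names, independent of Blender's actual virtual-array API), the same generic kernel can be instantiated once per input kind and selected at run time:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

/* Toy "virtual input": either a full buffer or a single repeated value.
 * (Hypothetical type for illustration; not the Blender API.) */
struct ToyInput {
  bool is_single;
  int single_value;
  const int *buffer;
};

/* Generic kernel: instantiated once per combination of devirtualized inputs.
 * Each `Getter` is either "read from buffer" or "return constant", so the
 * compiler can unroll/vectorize every instantiation separately. */
template<typename GetA, typename GetB>
void add_kernel(const size_t n, GetA get_a, GetB get_b, int *dst)
{
  for (size_t i = 0; i < n; i++) {
    dst[i] = get_a(i) + get_b(i);
  }
}

/* Run-time dispatch: picks the best instantiation for the given inputs. */
void add_dispatch(const size_t n, const ToyInput &a, const ToyInput &b, int *dst)
{
  auto span_a = [&](size_t i) { return a.buffer[i]; };
  auto span_b = [&](size_t i) { return b.buffer[i]; };
  auto single_a = [&](size_t /*i*/) { return a.single_value; };
  auto single_b = [&](size_t /*i*/) { return b.single_value; };
  if (a.is_single && b.is_single) {
    add_kernel(n, single_a, single_b, dst);
  }
  else if (a.is_single) {
    add_kernel(n, single_a, span_b, dst);
  }
  else if (b.is_single) {
    add_kernel(n, span_a, single_b, dst);
  }
  else {
    add_kernel(n, span_a, span_b, dst);
  }
}
```

The utility below generalizes exactly this hand-written dispatch, so the per-combination branching does not have to be repeated for every function.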
namespace blender {
/**
 * Calls the given function with devirtualized parameters if possible. Note that using many
 * non-trivial devirtualizers results in exponential code growth.
 *
 * \return True if the function has been called.
 *
 * Every devirtualizer is expected to have a `devirtualize(auto fn) -> bool` method.
 * This method is expected to do one of two things:
 * - Call `fn` with the devirtualized argument and return what `fn` returns.
 * - Not call `fn` (because the devirtualization failed) and return false.
 *
 * Examples for devirtualizers: #BasicDevirtualizer, #VArrayDevirtualizer.
 */
template<typename Fn, typename... Devirtualizers>
inline bool call_with_devirtualized_parameters(const std::tuple<Devirtualizers...> &devis,
                                               const Fn &fn)
{
  /* In theory the code below could be generalized to avoid code duplication. However, the maximum
   * number of parameters is expected to be relatively low. Explicitly implementing the different
   * cases makes it more obvious to see what is going on and also makes inlining everything easier
   * for the compiler. */
  constexpr size_t DeviNum = sizeof...(Devirtualizers);
  if constexpr (DeviNum == 0) {
    fn();
    return true;
  }
  if constexpr (DeviNum == 1) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      fn(param0);
      return true;
    });
  }
  if constexpr (DeviNum == 2) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      return std::get<1>(devis).devirtualize([&](auto &&param1) {
        fn(param0, param1);
        return true;
      });
    });
  }
  if constexpr (DeviNum == 3) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      return std::get<1>(devis).devirtualize([&](auto &&param1) {
        return std::get<2>(devis).devirtualize([&](auto &&param2) {
          fn(param0, param1, param2);
          return true;
        });
      });
    });
  }
  if constexpr (DeviNum == 4) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      return std::get<1>(devis).devirtualize([&](auto &&param1) {
        return std::get<2>(devis).devirtualize([&](auto &&param2) {
          return std::get<3>(devis).devirtualize([&](auto &&param3) {
            fn(param0, param1, param2, param3);
            return true;
          });
        });
      });
    });
  }
  if constexpr (DeviNum == 5) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      return std::get<1>(devis).devirtualize([&](auto &&param1) {
        return std::get<2>(devis).devirtualize([&](auto &&param2) {
          return std::get<3>(devis).devirtualize([&](auto &&param3) {
            return std::get<4>(devis).devirtualize([&](auto &&param4) {
              fn(param0, param1, param2, param3, param4);
              return true;
            });
          });
        });
      });
    });
  }
  if constexpr (DeviNum == 6) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      return std::get<1>(devis).devirtualize([&](auto &&param1) {
        return std::get<2>(devis).devirtualize([&](auto &&param2) {
          return std::get<3>(devis).devirtualize([&](auto &&param3) {
            return std::get<4>(devis).devirtualize([&](auto &&param4) {
              return std::get<5>(devis).devirtualize([&](auto &&param5) {
                fn(param0, param1, param2, param3, param4, param5);
                return true;
              });
            });
          });
        });
      });
    });
  }
  if constexpr (DeviNum == 7) {
    return std::get<0>(devis).devirtualize([&](auto &&param0) {
      return std::get<1>(devis).devirtualize([&](auto &&param1) {
        return std::get<2>(devis).devirtualize([&](auto &&param2) {
          return std::get<3>(devis).devirtualize([&](auto &&param3) {
            return std::get<4>(devis).devirtualize([&](auto &&param4) {
              return std::get<5>(devis).devirtualize([&](auto &&param5) {
                return std::get<6>(devis).devirtualize([&](auto &&param6) {
                  fn(param0, param1, param2, param3, param4, param5, param6);
                  return true;
                });
              });
            });
          });
        });
      });
    });
  }
  return false;
}
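The devirtualizer contract documented above can be exercised with a minimal, self-contained sketch. The `PassThrough` and `AlwaysFail` types below are hypothetical, and a hand-rolled two-parameter dispatcher stands in for the utility so the example compiles on its own:

```cpp
#include <cassert>
#include <tuple>

/* Minimal two-parameter version of the dispatch utility, mirroring the
 * documented contract: each devirtualizer either calls `fn` with a concrete
 * argument and returns what `fn` returns, or returns false. */
template<typename Fn, typename D0, typename D1>
bool call_with_devirtualized_2(const std::tuple<D0, D1> &devis, const Fn &fn)
{
  return std::get<0>(devis).devirtualize([&](auto &&param0) {
    return std::get<1>(devis).devirtualize([&](auto &&param1) {
      fn(param0, param1);
      return true;
    });
  });
}

/* A devirtualizer that always succeeds and passes its value through
 * unchanged (the same idea as #BasicDevirtualizer). */
template<typename T> struct PassThrough {
  T value;

  template<typename Fn> bool devirtualize(const Fn &fn) const
  {
    return fn(value);
  }
};

/* A devirtualizer that always refuses, demonstrating the `false` path. */
struct AlwaysFail {
  template<typename Fn> bool devirtualize(const Fn & /*fn*/) const
  {
    return false;
  }
};
```

If any devirtualizer in the chain returns false, the user-provided function is never invoked and the dispatcher reports failure, so the caller can fall back to a generic (non-devirtualized) implementation.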
/**
 * A devirtualizer to be used with #call_with_devirtualized_parameters.
 *
 * This one is very simple, it does not perform any actual devirtualization. It can be used to
 * pass parameters to the function that shouldn't be devirtualized.
 */
template<typename T> struct BasicDevirtualizer {
  const T value;

  template<typename Fn> bool devirtualize(const Fn &fn) const
  {
    return fn(this->value);
  }
};
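The exponential code-growth warning can be made concrete with a small counting sketch (hypothetical `Span`/`Single` tags and `kernel`/`dispatch` names): devirtualizing two parameters with two cases each already forces 2 * 2 = 4 distinct instantiations of the kernel into the binary, and every additional devirtualized parameter multiplies that count again.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <typeinfo>

/* Hypothetical parameter kinds with two cases each. */
struct Span {};
struct Single {};

/* Records every distinct template instantiation that gets executed. */
std::set<std::string> g_instantiations;

template<typename A, typename B> void kernel(A /*a*/, B /*b*/)
{
  g_instantiations.insert(std::string(typeid(A).name()) + "+" + typeid(B).name());
}

/* Devirtualizing two parameters with two cases each requires 2 * 2 = 4
 * instantiations of `kernel`; a third parameter would make it 8. */
void dispatch(const bool a_single, const bool b_single)
{
  if (a_single && b_single) {
    kernel(Single{}, Single{});
  }
  else if (a_single) {
    kernel(Single{}, Span{});
  }
  else if (b_single) {
    kernel(Span{}, Single{});
  }
  else {
    kernel(Span{}, Span{});
  }
}
```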
} // namespace blender