/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

#include "FN_multi_function.hh"

#include "BLI_task.hh"
#include "BLI_threads.h"

namespace blender::fn::multi_function {

using ExecutionHints = MultiFunction::ExecutionHints;
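
/* Execution hints describe properties of a multi-function, e.g. the minimal grain size for
 * threading, whether intermediate arrays are allocated, and whether the execution time is
 * uniform per element. Callers such as #call_auto use them to pick an efficient execution
 * strategy. A function with special properties can override #get_execution_hints; a minimal
 * sketch (the chosen hint value is illustrative only):
 *
 *   ExecutionHints get_execution_hints() const override
 *   {
 *     ExecutionHints hints;
 *     hints.allocates_array = true;
 *     return hints;
 *   }
 */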
ExecutionHints MultiFunction::execution_hints() const
{
  return this->get_execution_hints();
}

ExecutionHints MultiFunction::get_execution_hints() const
{
  return ExecutionHints{};
}
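
/* Threading is implemented by slicing the parameter buffers per task. That only works when no
 * mutable or output parameter uses a vector data type, because #add_sliced_parameters below has
 * no way to slice vector parameters. */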
static bool supports_threading_by_slicing_params(const MultiFunction &fn)
{
  for (const int i : fn.param_indices()) {
    const ParamType param_type = fn.param_type(i);
    if (ELEM(param_type.interface_type(),
             ParamType::InterfaceType::Mutable,
             ParamType::InterfaceType::Output))
    {
      if (param_type.data_type().is_vector()) {
        return false;
      }
    }
  }
  return true;
}
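
/* Pick a grain size for parallel execution based on the function's execution hints: raise it
 * when the per-element execution time is uniform (smaller chunks would only add scheduling
 * overhead), and cap it when the function allocates intermediate arrays (processing data in
 * smaller chunks keeps peak memory usage lower). */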
static int64_t compute_grain_size(const ExecutionHints &hints, const IndexMask &mask)
{
  int64_t grain_size = hints.min_grain_size;
  if (hints.uniform_execution_time) {
    const int thread_count = BLI_system_thread_count();
    /* Avoid using a small grain size even if it is not necessary. */
    const int64_t thread_based_grain_size = mask.size() / thread_count / 4;
    grain_size = std::max(grain_size, thread_based_grain_size);
  }
  if (hints.allocates_array) {
    const int64_t max_grain_size = 10000;
    /* Avoid allocating many large intermediate arrays. Better process data in smaller chunks to
     * keep peak memory usage lower. */
    grain_size = std::min(grain_size, max_grain_size);
  }
  return grain_size;
}
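
/* Pick the alignment used when splitting the index range into chunks. A larger alignment favors
 * loops that process multiple elements at once, while a smaller one splits the work more
 * evenly. */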
static int64_t compute_alignment(const int64_t grain_size)
{
  if (grain_size <= 512) {
    /* Don't use a number that's too large, or otherwise the work will be split quite unevenly. */
    return 8;
  }
  /* It's not common that more elements are processed in a loop at once. */
  return 32;
}
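
/* Build the parameter list for a single slice of the full evaluation: every span parameter is
 * sliced to #slice_range so that each task only touches the part of the buffers it is
 * responsible for. Vector parameters are not supported here (see
 * #supports_threading_by_slicing_params). */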
static void add_sliced_parameters(const Signature &signature,
                                  Params &full_params,
                                  const IndexRange slice_range,
                                  ParamsBuilder &r_sliced_params)
{
  for (const int param_index : signature.params.index_range()) {
    const ParamType &param_type = signature.params[param_index].type;
    switch (param_type.category()) {
      case ParamCategory::SingleInput: {
        const GVArray &varray = full_params.readonly_single_input(param_index);
        r_sliced_params.add_readonly_single_input(varray.slice(slice_range));
        break;
      }
      case ParamCategory::SingleMutable: {
        const GMutableSpan span = full_params.single_mutable(param_index);
        const GMutableSpan sliced_span = span.slice(slice_range);
        r_sliced_params.add_single_mutable(sliced_span);
        break;
      }
      case ParamCategory::SingleOutput: {
        if (bool(signature.params[param_index].flag & ParamFlag::SupportsUnusedOutput)) {
          const GMutableSpan span = full_params.uninitialized_single_output_if_required(
              param_index);
          if (span.is_empty()) {
            r_sliced_params.add_ignored_single_output();
          }
          else {
            const GMutableSpan sliced_span = span.slice(slice_range);
            r_sliced_params.add_uninitialized_single_output(sliced_span);
          }
        }
        else {
          const GMutableSpan span = full_params.uninitialized_single_output(param_index);
          const GMutableSpan sliced_span = span.slice(slice_range);
          r_sliced_params.add_uninitialized_single_output(sliced_span);
        }
        break;
      }
      case ParamCategory::VectorInput:
      case ParamCategory::VectorMutable:
      case ParamCategory::VectorOutput: {
        BLI_assert_unreachable();
        break;
      }
    }
  }
}
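
/* Same effect as #call, but additionally figures out how to execute the function efficiently:
 * it determines a good grain size, checks whether threading is possible at all, and decides
 * whether the mask indices should be shifted so that intermediate arrays only have to cover the
 * current slice. */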
void MultiFunction::call_auto(const IndexMask &mask, Params params, Context context) const
{
  if (mask.is_empty()) {
    return;
  }
  const ExecutionHints hints = this->execution_hints();
  const int64_t grain_size = compute_grain_size(hints, mask);
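
  /* The workload fits into a single grain, so threading would not pay off. */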
  if (mask.size() <= grain_size) {
    this->call(mask, params, context);
    return;
  }

  const bool supports_threading = supports_threading_by_slicing_params(*this);
  if (!supports_threading) {
    this->call(mask, params, context);
    return;
  }

  const int64_t alignment = compute_alignment(grain_size);
  threading::parallel_for_aligned(
      mask.index_range(), grain_size, alignment, [&](const IndexRange sub_range) {
        const IndexMask sliced_mask = mask.slice(sub_range);
        if (!hints.allocates_array) {
          /* There is no benefit to changing indices in this case. */
          this->call(sliced_mask, params, context);
          return;
        }
        if (sliced_mask[0] < grain_size) {
          /* The indices are low, no need to offset them. */
          this->call(sliced_mask, params, context);
          return;
        }
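        /* The indices are large. Shift them so that they start at zero and slice the parameters
         * accordingly. This way, a function that allocates intermediate arrays only needs
         * buffers covering the current slice instead of the whole original index range. */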
        const int64_t input_slice_start = sliced_mask[0];
        const int64_t input_slice_size = sliced_mask.last() - input_slice_start + 1;
        const IndexRange input_slice_range{input_slice_start, input_slice_size};

        IndexMaskMemory memory;
        const int64_t offset = -input_slice_start;
        const IndexMask shifted_mask = mask.slice_and_shift(sub_range, offset, memory);

        ParamsBuilder sliced_params{*this, &shifted_mask};
        add_sliced_parameters(*signature_ref_, params, input_slice_range, sliced_params);
        this->call(shifted_mask, sliced_params, context);
      });
}

std::string MultiFunction::debug_name() const
{
  return signature_ref_->function_name;
}

}  // namespace blender::fn::multi_function