test2/source/blender/blenkernel/intern/attribute_math.cc

/* SPDX-FileCopyrightText: 2023 Blender Authors
*
* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_array_utils.hh"
#include "BLI_math_euler.hh"
#include "BLI_math_matrix.hh"
#include "BLI_math_quaternion.hh"
#include "BKE_attribute_math.hh"
namespace blender::bke::attribute_math {
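
/* Explicit specializations of the generic mix2/mix3/mix4 functions declared in
 * BKE_attribute_math.hh, for types that cannot be blended channel by channel
 * (rotation quaternions and 4x4 transform matrices). */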
template<>
math::Quaternion mix2(const float factor, const math::Quaternion &a, const math::Quaternion &b)
{
return math::interpolate(a, b, factor);
}
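
/* Quaternions are blended through their exponential map: each rotation becomes an
 * axis-angle style float3, those vectors are mixed linearly with the given weights,
 * and the result is mapped back to a quaternion. This yields an approximate weighted
 * average of the rotations without producing a denormalized quaternion. */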
template<>
math::Quaternion mix3(const float3 &weights,
const math::Quaternion &v0,
const math::Quaternion &v1,
const math::Quaternion &v2)
{
const float3 expmap_mixed = mix3(weights, v0.expmap(), v1.expmap(), v2.expmap());
return math::Quaternion::expmap(expmap_mixed);
}
template<>
math::Quaternion mix4(const float4 &weights,
const math::Quaternion &v0,
const math::Quaternion &v1,
const math::Quaternion &v2,
const math::Quaternion &v3)
{
const float3 expmap_mixed = mix4(weights, v0.expmap(), v1.expmap(), v2.expmap(), v3.expmap());
return math::Quaternion::expmap(expmap_mixed);
}
template<> float4x4 mix2(const float factor, const float4x4 &a, const float4x4 &b)
{
return math::interpolate(a, b, factor);
}
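
/* 4x4 transforms are not mixed component-wise: each matrix is decomposed into
 * location, rotation and scale, the three parts are mixed separately with the
 * same weights, and a new matrix is recomposed from the mixed parts. */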
template<>
float4x4 mix3(const float3 &weights, const float4x4 &v0, const float4x4 &v1, const float4x4 &v2)
{
const float3 location = mix3(weights, v0.location(), v1.location(), v2.location());
const math::Quaternion rotation = mix3(
weights,
math::normalized_to_quaternion_safe(math::normalize(float3x3(v0))),
math::normalized_to_quaternion_safe(math::normalize(float3x3(v1))),
math::normalized_to_quaternion_safe(math::normalize(float3x3(v2))));
const float3 scale = mix3(weights, math::to_scale(v0), math::to_scale(v1), math::to_scale(v2));
return math::from_loc_rot_scale<float4x4>(location, rotation, scale);
}
template<>
float4x4 mix4(const float4 &weights,
const float4x4 &v0,
const float4x4 &v1,
const float4x4 &v2,
const float4x4 &v3)
{
const float3 location = mix4(
weights, v0.location(), v1.location(), v2.location(), v3.location());
const math::Quaternion rotation = mix4(weights,
math::to_quaternion(v0),
math::to_quaternion(v1),
math::to_quaternion(v2),
math::to_quaternion(v3));
const float3 scale = mix4(
weights, math::to_scale(v0), math::to_scale(v1), math::to_scale(v2), math::to_scale(v3));
return math::from_loc_rot_scale<float4x4>(location, rotation, scale);
}
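
/* ColorGeometry4fMixer accumulates weighted colors per index: #set and #mix_in add
 * `color * weight` into the buffer and track the total weight, and #finalize divides
 * by that weight (or writes the default color where no weight was accumulated).
 *
 * A minimal usage sketch; the buffer size, colors and weights below are hypothetical:
 *
 *   Array<ColorGeometry4f> colors(16);
 *   ColorGeometry4fMixer mixer(colors, ColorGeometry4f(0.0f, 0.0f, 0.0f, 1.0f));
 *   mixer.mix_in(0, ColorGeometry4f(1.0f, 0.0f, 0.0f, 1.0f), 0.25f);
 *   mixer.mix_in(0, ColorGeometry4f(0.0f, 1.0f, 0.0f, 1.0f), 0.75f);
 *   mixer.finalize();
 */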
ColorGeometry4fMixer::ColorGeometry4fMixer(MutableSpan<ColorGeometry4f> buffer,
ColorGeometry4f default_color)
: ColorGeometry4fMixer(buffer, buffer.index_range(), default_color)
{
}
ColorGeometry4fMixer::ColorGeometry4fMixer(MutableSpan<ColorGeometry4f> buffer,
const IndexMask &mask,
const ColorGeometry4f default_color)
: buffer_(buffer), default_color_(default_color), total_weights_(buffer.size(), 0.0f)
{
const ColorGeometry4f zero{0.0f, 0.0f, 0.0f, 0.0f};
mask.foreach_index([&](const int64_t i) { buffer_[i] = zero; });
}
void ColorGeometry4fMixer::set(const int64_t index,
const ColorGeometry4f &color,
const float weight)
{
buffer_[index].r = color.r * weight;
buffer_[index].g = color.g * weight;
buffer_[index].b = color.b * weight;
buffer_[index].a = color.a * weight;
total_weights_[index] = weight;
}
void ColorGeometry4fMixer::mix_in(const int64_t index,
const ColorGeometry4f &color,
const float weight)
{
ColorGeometry4f &output_color = buffer_[index];
output_color.r += color.r * weight;
output_color.g += color.g * weight;
output_color.b += color.b * weight;
output_color.a += color.a * weight;
total_weights_[index] += weight;
}
void ColorGeometry4fMixer::finalize()
{
this->finalize(buffer_.index_range());
}
void ColorGeometry4fMixer::finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const float weight = total_weights_[i];
ColorGeometry4f &output_color = buffer_[i];
if (weight > 0.0f) {
const float weight_inv = 1.0f / weight;
output_color.r *= weight_inv;
output_color.g *= weight_inv;
output_color.b *= weight_inv;
output_color.a *= weight_inv;
}
else {
output_color = default_color_;
}
});
}
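
/* The byte-color mixer follows the same accumulate/finalize pattern, but sums into a
 * separate float4 accumulation buffer so that repeated weighted additions neither
 * overflow nor lose precision in the 8-bit output; the bytes are only written back
 * in #finalize. */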
ColorGeometry4bMixer::ColorGeometry4bMixer(MutableSpan<ColorGeometry4b> buffer,
const ColorGeometry4b default_color)
: ColorGeometry4bMixer(buffer, buffer.index_range(), default_color)
{
}
ColorGeometry4bMixer::ColorGeometry4bMixer(MutableSpan<ColorGeometry4b> buffer,
const IndexMask &mask,
const ColorGeometry4b default_color)
: buffer_(buffer),
default_color_(default_color),
total_weights_(buffer.size(), 0.0f),
accumulation_buffer_(buffer.size(), float4(0, 0, 0, 0))
{
const ColorGeometry4b zero{0, 0, 0, 0};
mask.foreach_index([&](const int64_t i) { buffer_[i] = zero; });
}
void ColorGeometry4bMixer::set(int64_t index,
const ColorGeometry4b &color,
const float weight)
{
accumulation_buffer_[index][0] = color.r * weight;
accumulation_buffer_[index][1] = color.g * weight;
accumulation_buffer_[index][2] = color.b * weight;
accumulation_buffer_[index][3] = color.a * weight;
total_weights_[index] = weight;
}
void ColorGeometry4bMixer::mix_in(int64_t index, const ColorGeometry4b &color, float weight)
{
float4 &accum_value = accumulation_buffer_[index];
accum_value[0] += color.r * weight;
accum_value[1] += color.g * weight;
accum_value[2] += color.b * weight;
accum_value[3] += color.a * weight;
total_weights_[index] += weight;
}
void ColorGeometry4bMixer::finalize()
{
this->finalize(buffer_.index_range());
}
void ColorGeometry4bMixer::finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const float weight = total_weights_[i];
const float4 &accum_value = accumulation_buffer_[i];
ColorGeometry4b &output_color = buffer_[i];
if (weight > 0.0f) {
const float weight_inv = 1.0f / weight;
output_color.r = accum_value[0] * weight_inv;
output_color.g = accum_value[1] * weight_inv;
output_color.b = accum_value[2] * weight_inv;
output_color.a = accum_value[3] * weight_inv;
}
else {
output_color = default_color_;
}
});
}
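
/* float4x4Mixer accumulates the weighted location, rotation (as an exponential-map
 * float3) and scale of every input separately, then recomposes a matrix from the
 * averaged parts in #finalize. Indices that received no weight become identity. */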
float4x4Mixer::float4x4Mixer(MutableSpan<float4x4> buffer)
: float4x4Mixer(buffer, buffer.index_range())
{
}
float4x4Mixer::float4x4Mixer(MutableSpan<float4x4> buffer, const IndexMask & /*mask*/)
: buffer_(buffer),
total_weights_(buffer.size(), 0.0f),
location_buffer_(buffer.size(), float3(0)),
expmap_buffer_(buffer.size(), float3(0)),
scale_buffer_(buffer.size(), float3(0))
{
}
void float4x4Mixer::set(int64_t index, const float4x4 &value, const float weight)
{
location_buffer_[index] = value.location() * weight;
expmap_buffer_[index] = math::to_quaternion(value).expmap() * weight;
scale_buffer_[index] = math::to_scale(value) * weight;
total_weights_[index] = weight;
}
void float4x4Mixer::mix_in(int64_t index, const float4x4 &value, float weight)
{
float3 location;
math::Quaternion rotation;
float3 scale;
math::to_loc_rot_scale_safe<true>(value, location, rotation, scale);
location_buffer_[index] += location * weight;
expmap_buffer_[index] += rotation.expmap() * weight;
scale_buffer_[index] += scale * weight;
total_weights_[index] += weight;
}
void float4x4Mixer::finalize()
{
this->finalize(buffer_.index_range());
}
void float4x4Mixer::finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const float weight = total_weights_[i];
if (weight > 0.0f) {
const float weight_inv = math::rcp(weight);
buffer_[i] = math::from_loc_rot_scale<float4x4>(
location_buffer_[i] * weight_inv,
math::Quaternion::expmap(expmap_buffer_[i] * weight_inv),
scale_buffer_[i] * weight_inv);
}
else {
buffer_[i] = float4x4::identity();
}
});
}
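
/* Type-erased gather helpers: the generic source is dispatched to its static type via
 * #convert_to_static_type, after which the typed #blender::array_utils routines (or a
 * small parallel loop for the range-based variant) perform the actual copies. */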
void gather(const GSpan src, const Span<int> map, GMutableSpan dst)
{
attribute_math::convert_to_static_type(src.type(), [&](auto dummy) {
using T = decltype(dummy);
array_utils::gather(src.typed<T>(), map, dst.typed<T>());
});
}
void gather(const GVArray &src, const Span<int> map, GMutableSpan dst)
{
attribute_math::convert_to_static_type(src.type(), [&](auto dummy) {
using T = decltype(dummy);
array_utils::gather(src.typed<T>(), map, dst.typed<T>());
});
}
void gather_group_to_group(const OffsetIndices<int> src_offsets,
const OffsetIndices<int> dst_offsets,
const IndexMask &selection,
const GSpan src,
GMutableSpan dst)
{
attribute_math::convert_to_static_type(src.type(), [&](auto dummy) {
using T = decltype(dummy);
array_utils::gather_group_to_group(
src_offsets, dst_offsets, selection, src.typed<T>(), dst.typed<T>());
});
}
void gather_ranges_to_groups(const Span<IndexRange> src_ranges,
const OffsetIndices<int> dst_offsets,
const GSpan src,
GMutableSpan dst)
{
attribute_math::convert_to_static_type(src.type(), [&](auto dummy) {
using T = decltype(dummy);
Span<T> src_span = src.typed<T>();
MutableSpan<T> dst_span = dst.typed<T>();
threading::parallel_for(src_ranges.index_range(), 512, [&](const IndexRange range) {
for (const int i : range) {
dst_span.slice(dst_offsets[i]).copy_from(src_span.slice(src_ranges[i]));
}
});
});
}
void gather_to_groups(const OffsetIndices<int> dst_offsets,
const IndexMask &src_selection,
const GSpan src,
GMutableSpan dst)
{
attribute_math::convert_to_static_type(src.type(), [&](auto dummy) {
using T = decltype(dummy);
array_utils::gather_to_groups(dst_offsets, src_selection, src.typed<T>(), dst.typed<T>());
});
}
} // namespace blender::bke::attribute_math