BLI: refactor IndexMask for better performance and memory usage

Goals of this refactor:
* Reduce memory consumption of `IndexMask`. The old `IndexMask` uses an
  `int64_t` for each index, which is more than necessary in practically all
  current use cases. Using `int32_t` might still become limiting in the
  future in case we use this to index e.g. byte buffers larger than a few
  gigabytes. We also don't want to template `IndexMask`, because that would
  split the "ecosystem": everything would have to be implemented twice or
  templated.
* Allow for more multi-threading. The old `IndexMask` contains a single
  array. This is generally good but has the problem that it is hard to fill
  from multiple threads when the final size is not known from the beginning.
  This is commonly the case when e.g. converting an array of bool to an
  index mask. Currently, this kind of code only runs on a single thread.
* Allow for efficient set operations like join, intersect and difference.
  It should be possible to multi-thread those operations.
* It should be possible to iterate over an `IndexMask` very efficiently.
  The most important part of that is to avoid all memory access when iterating
  over contiguous ranges. For some core nodes (e.g. math nodes), we generate
  optimized code for the cases of irregular index masks and simple index ranges.
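The range case above can be iterated with pure arithmetic. A minimal standalone sketch (a hypothetical helper, not Blender's actual API) of why a contiguous range needs no memory access during iteration:

```cpp
#include <cstdint>
#include <functional>

/* When a mask segment is known to be a contiguous range, iteration only needs
 * a start and a size; no array of indices is ever loaded from memory. This is
 * the case the optimized code paths for core nodes can target. */
void foreach_in_range(const int64_t start,
                      const int64_t size,
                      const std::function<void(int64_t)> &fn)
{
  for (int64_t i = 0; i < size; i++) {
    fn(start + i); /* Pure arithmetic, no index array is read here. */
  }
}
```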

To achieve these goals, a few compromises had to be made:
* Slicing of the mask (at specific indices) and random element access are
  `O(log #indices)` now, but with a low constant factor. It should be possible
  to split a mask into n approximately equally sized parts in `O(n)` though,
  making the time per split `O(1)`.
* Using range-based for loops does not work well when iterating over a nested
  data structure like the new `IndexMask`. Therefore, `foreach_*` functions
  that take callbacks have to be used. To avoid extra code complexity at the
  call site, the `foreach_*` methods support multi-threading out of the box.
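A minimal standalone sketch of the callback style this implies (illustrative types, not the real `IndexMask` API; a real implementation would hand each segment to the thread pool rather than loop serially):

```cpp
#include <cstdint>
#include <functional>
#include <vector>

/* Hypothetical stand-in for a nested mask: a list of segments, each holding
 * resolved int64_t indices. Iteration hands every index to a callback instead
 * of exposing a flat iterator over the nested storage. */
struct MaskSketch {
  std::vector<std::vector<int64_t>> segments;

  void foreach_index(const std::function<void(int64_t)> &fn) const
  {
    /* Each segment is an independent unit of work, so a multi-threaded
     * version could process segments in parallel without coordination. */
    for (const std::vector<int64_t> &segment : segments) {
      for (const int64_t index : segment) {
        fn(index);
      }
    }
  }
};
```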

The new data structure splits an `IndexMask` into an arbitrary number of ordered
`IndexMaskSegment`s. Each segment can contain at most `2^14 = 16384` indices. The
indices within a segment are stored as `int16_t`. Each segment has an additional
`int64_t` offset which allows storing arbitrary `int64_t` indices. This approach
has two main benefits: segments can be processed and constructed individually on
multiple threads without a serial bottleneck, and the memory requirements are
reduced significantly.

For more details see comments in `BLI_index_mask.hh`.

I did a few tests to verify that the data structure generally improves
performance and does not cause regressions:
* Our field evaluation benchmarks take about as much time as before. This is to
  be expected, because we already made sure that e.g. add-node evaluation is
  vectorized. The important thing here is to check that the changes to the way
  we iterate over the indices still allow for auto-vectorization.
* Memory usage by a mask is about 1/4 of what it was before in the average case.
  That's mainly caused by the switch from `int64_t` to `int16_t` for indices.
  In the worst case, the memory requirements can be larger than before, when the
  indices are very far away from each other. However, indices that spread out
  usually indicate that there aren't many indices in total. In common cases,
  memory usage can be far lower than 1/4 of before, because sub-ranges use
  static memory.
* For some more specific numbers, I benchmarked `IndexMask::from_bools` in
  `index_mask_from_selection` on 10,000,000 elements at various probabilities
  for `true` at each index:
  ```
  Probability      Old        New
  0              4.6 ms     0.8 ms
  0.001          5.1 ms     1.3 ms
  0.2            8.4 ms     1.8 ms
  0.5           15.3 ms     3.0 ms
  0.8           20.1 ms     3.0 ms
  0.999         25.1 ms     1.7 ms
  1             13.5 ms     1.1 ms
  ```
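The ~1/4 memory claim can be sanity-checked with back-of-the-envelope arithmetic, assuming one `int16_t` per index plus one `int64_t` offset per full segment (a deliberate simplification that ignores any other per-segment bookkeeping the real allocator may have):

```cpp
#include <cstdint>

/* Old layout: one int64_t (8 bytes) per index. */
int64_t old_mask_bytes(const int64_t num_indices)
{
  return num_indices * int64_t(sizeof(int64_t));
}

/* New layout (simplified): one int16_t (2 bytes) per index, plus an int64_t
 * offset for each segment of up to 16384 indices. For a dense mask this comes
 * out just under 4x smaller than the old layout. */
int64_t new_mask_bytes_estimate(const int64_t num_indices)
{
  const int64_t max_per_segment = 16384;
  const int64_t num_segments = (num_indices + max_per_segment - 1) / max_per_segment;
  return num_indices * int64_t(sizeof(int16_t)) +
         num_segments * int64_t(sizeof(int64_t));
}
```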

Pull Request: https://projects.blender.org/blender/blender/pulls/104629
Author: Jacques Lucke
Date: 2023-05-24 18:11:41 +02:00
Parent: f3f2f7fd47
Commit: 2cfcb8b0b8
182 changed files with 4104 additions and 2997 deletions


@@ -294,7 +294,7 @@ template<typename T> class SimpleMixer {
/**
* \param mask: Only initialize these indices. Other indices in the buffer will be invalid.
*/
SimpleMixer(MutableSpan<T> buffer, const IndexMask mask, T default_value = {})
SimpleMixer(MutableSpan<T> buffer, const IndexMask &mask, T default_value = {})
: buffer_(buffer), default_value_(default_value), total_weights_(buffer.size(), 0.0f)
{
BLI_STATIC_ASSERT(std::is_trivial_v<T>, "");
@@ -327,7 +327,7 @@ template<typename T> class SimpleMixer {
this->finalize(IndexMask(buffer_.size()));
}
void finalize(const IndexMask mask)
void finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const float weight = total_weights_[i];
@@ -365,7 +365,7 @@ class BooleanPropagationMixer {
/**
* \param mask: Only initialize these indices. Other indices in the buffer will be invalid.
*/
BooleanPropagationMixer(MutableSpan<bool> buffer, const IndexMask mask) : buffer_(buffer)
BooleanPropagationMixer(MutableSpan<bool> buffer, const IndexMask &mask) : buffer_(buffer)
{
mask.foreach_index([&](const int64_t i) { buffer_[i] = false; });
}
@@ -391,7 +391,7 @@ class BooleanPropagationMixer {
*/
void finalize() {}
void finalize(const IndexMask /*mask*/) {}
void finalize(const IndexMask & /*mask*/) {}
};
/**
@@ -421,7 +421,7 @@ class SimpleMixerWithAccumulationType {
* \param mask: Only initialize these indices. Other indices in the buffer will be invalid.
*/
SimpleMixerWithAccumulationType(MutableSpan<T> buffer,
const IndexMask mask,
const IndexMask &mask,
T default_value = {})
: buffer_(buffer), default_value_(default_value), accumulation_buffer_(buffer.size())
{
@@ -449,7 +449,7 @@ class SimpleMixerWithAccumulationType {
this->finalize(buffer_.index_range());
}
void finalize(const IndexMask mask)
void finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const Item &item = accumulation_buffer_[i];
@@ -478,12 +478,12 @@ class ColorGeometry4fMixer {
* \param mask: Only initialize these indices. Other indices in the buffer will be invalid.
*/
ColorGeometry4fMixer(MutableSpan<ColorGeometry4f> buffer,
IndexMask mask,
const IndexMask &mask,
ColorGeometry4f default_color = ColorGeometry4f(0.0f, 0.0f, 0.0f, 1.0f));
void set(int64_t index, const ColorGeometry4f &color, float weight = 1.0f);
void mix_in(int64_t index, const ColorGeometry4f &color, float weight = 1.0f);
void finalize();
void finalize(IndexMask mask);
void finalize(const IndexMask &mask);
};
class ColorGeometry4bMixer {
@@ -500,12 +500,12 @@ class ColorGeometry4bMixer {
* \param mask: Only initialize these indices. Other indices in the buffer will be invalid.
*/
ColorGeometry4bMixer(MutableSpan<ColorGeometry4b> buffer,
IndexMask mask,
const IndexMask &mask,
ColorGeometry4b default_color = ColorGeometry4b(0, 0, 0, 255));
void set(int64_t index, const ColorGeometry4b &color, float weight = 1.0f);
void mix_in(int64_t index, const ColorGeometry4b &color, float weight = 1.0f);
void finalize();
void finalize(IndexMask mask);
void finalize(const IndexMask &mask);
};
template<typename T> struct DefaultMixerStruct {


@@ -160,7 +160,7 @@ class CurvesGeometry : public ::CurvesGeometry {
/** Set all curve types to the value and call #update_curve_types. */
void fill_curve_types(CurveType type);
/** Set the types for the curves in the selection and call #update_curve_types. */
void fill_curve_types(IndexMask selection, CurveType type);
void fill_curve_types(const IndexMask &selection, CurveType type);
/** Update the cached count of curves of each type, necessary after #curve_types_for_write. */
void update_curve_types();
@@ -173,10 +173,10 @@ class CurvesGeometry : public ::CurvesGeometry {
/**
* All of the curve indices for curves with a specific type.
*/
IndexMask indices_for_curve_type(CurveType type, Vector<int64_t> &r_indices) const;
IndexMask indices_for_curve_type(CurveType type, IndexMaskMemory &memory) const;
IndexMask indices_for_curve_type(CurveType type,
IndexMask selection,
Vector<int64_t> &r_indices) const;
const IndexMask &selection,
IndexMaskMemory &memory) const;
Array<int> point_to_curve_map() const;
@@ -361,16 +361,16 @@ class CurvesGeometry : public ::CurvesGeometry {
void calculate_bezier_auto_handles();
void remove_points(IndexMask points_to_delete,
void remove_points(const IndexMask &points_to_delete,
const AnonymousAttributePropagationInfo &propagation_info = {});
void remove_curves(IndexMask curves_to_delete,
void remove_curves(const IndexMask &curves_to_delete,
const AnonymousAttributePropagationInfo &propagation_info = {});
/**
* Change the direction of selected curves (switch the start and end) without changing their
* shape.
*/
void reverse_curves(IndexMask curves_to_reverse);
void reverse_curves(const IndexMask &curves_to_reverse);
/**
* Remove any attributes that are unused based on the types in the curves.


@@ -481,14 +481,14 @@ void copy_point_data(OffsetIndices<int> src_points_by_curve,
void copy_point_data(OffsetIndices<int> src_points_by_curve,
OffsetIndices<int> dst_points_by_curve,
IndexMask src_curve_selection,
const IndexMask &src_curve_selection,
GSpan src,
GMutableSpan dst);
template<typename T>
void copy_point_data(OffsetIndices<int> src_points_by_curve,
OffsetIndices<int> dst_points_by_curve,
IndexMask src_curve_selection,
const IndexMask &src_curve_selection,
Span<T> src,
MutableSpan<T> dst)
{
@@ -500,13 +500,13 @@ void copy_point_data(OffsetIndices<int> src_points_by_curve,
}
void fill_points(OffsetIndices<int> points_by_curve,
IndexMask curve_selection,
const IndexMask &curve_selection,
GPointer value,
GMutableSpan dst);
template<typename T>
void fill_points(const OffsetIndices<int> points_by_curve,
IndexMask curve_selection,
const IndexMask &curve_selection,
const T &value,
MutableSpan<T> dst)
{
@@ -541,7 +541,9 @@ bke::CurvesGeometry copy_only_curve_domain(const bke::CurvesGeometry &src_curves
/**
* Copy the number of points in every curve in the mask to the corresponding index in #sizes.
*/
void copy_curve_sizes(OffsetIndices<int> points_by_curve, IndexMask mask, MutableSpan<int> sizes);
void copy_curve_sizes(OffsetIndices<int> points_by_curve,
const IndexMask &mask,
MutableSpan<int> sizes);
/**
* Copy the number of points in every curve in #curve_ranges to the corresponding index in
@@ -554,12 +556,12 @@ void copy_curve_sizes(OffsetIndices<int> points_by_curve,
IndexMask indices_for_type(const VArray<int8_t> &types,
const std::array<int, CURVE_TYPES_NUM> &type_counts,
const CurveType type,
const IndexMask selection,
Vector<int64_t> &r_indices);
const IndexMask &selection,
IndexMaskMemory &memory);
void foreach_curve_by_type(const VArray<int8_t> &types,
const std::array<int, CURVE_TYPES_NUM> &type_counts,
IndexMask selection,
const IndexMask &selection,
FunctionRef<void(IndexMask)> catmull_rom_fn,
FunctionRef<void(IndexMask)> poly_fn,
FunctionRef<void(IndexMask)> bezier_fn,


@@ -136,10 +136,10 @@ class GeometryFieldInput : public fn::FieldInput {
public:
using fn::FieldInput::FieldInput;
GVArray get_varray_for_context(const fn::FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const override;
virtual GVArray get_varray_for_context(const GeometryFieldContext &context,
IndexMask mask) const = 0;
const IndexMask &mask) const = 0;
virtual std::optional<eAttrDomain> preferred_domain(const GeometryComponent &component) const;
};
@@ -147,11 +147,11 @@ class MeshFieldInput : public fn::FieldInput {
public:
using fn::FieldInput::FieldInput;
GVArray get_varray_for_context(const fn::FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const override;
virtual GVArray get_varray_for_context(const Mesh &mesh,
eAttrDomain domain,
IndexMask mask) const = 0;
const IndexMask &mask) const = 0;
virtual std::optional<eAttrDomain> preferred_domain(const Mesh &mesh) const;
};
@@ -159,11 +159,11 @@ class CurvesFieldInput : public fn::FieldInput {
public:
using fn::FieldInput::FieldInput;
GVArray get_varray_for_context(const fn::FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const override;
virtual GVArray get_varray_for_context(const CurvesGeometry &curves,
eAttrDomain domain,
IndexMask mask) const = 0;
const IndexMask &mask) const = 0;
virtual std::optional<eAttrDomain> preferred_domain(const CurvesGeometry &curves) const;
};
@@ -171,18 +171,20 @@ class PointCloudFieldInput : public fn::FieldInput {
public:
using fn::FieldInput::FieldInput;
GVArray get_varray_for_context(const fn::FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const override;
virtual GVArray get_varray_for_context(const PointCloud &pointcloud, IndexMask mask) const = 0;
virtual GVArray get_varray_for_context(const PointCloud &pointcloud,
const IndexMask &mask) const = 0;
};
class InstancesFieldInput : public fn::FieldInput {
public:
using fn::FieldInput::FieldInput;
GVArray get_varray_for_context(const fn::FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const override;
virtual GVArray get_varray_for_context(const Instances &instances, IndexMask mask) const = 0;
virtual GVArray get_varray_for_context(const Instances &instances,
const IndexMask &mask) const = 0;
};
class AttributeFieldInput : public GeometryFieldInput {
@@ -212,7 +214,7 @@ class AttributeFieldInput : public GeometryFieldInput {
}
GVArray get_varray_for_context(const GeometryFieldContext &context,
IndexMask mask) const override;
const IndexMask &mask) const override;
std::string socket_inspection_name() const override;
@@ -229,7 +231,7 @@ class IDAttributeFieldInput : public GeometryFieldInput {
}
GVArray get_varray_for_context(const GeometryFieldContext &context,
IndexMask mask) const override;
const IndexMask &mask) const override;
std::string socket_inspection_name() const override;
@@ -239,7 +241,7 @@ class IDAttributeFieldInput : public GeometryFieldInput {
VArray<float3> curve_normals_varray(const CurvesGeometry &curves, const eAttrDomain domain);
VArray<float3> mesh_normals_varray(const Mesh &mesh, const IndexMask mask, eAttrDomain domain);
VArray<float3> mesh_normals_varray(const Mesh &mesh, const IndexMask &mask, eAttrDomain domain);
class NormalFieldInput : public GeometryFieldInput {
public:
@@ -249,7 +251,7 @@ class NormalFieldInput : public GeometryFieldInput {
}
GVArray get_varray_for_context(const GeometryFieldContext &context,
IndexMask mask) const override;
const IndexMask &mask) const override;
std::string socket_inspection_name() const override;
@@ -288,7 +290,7 @@ class AnonymousAttributeFieldInput : public GeometryFieldInput {
}
GVArray get_varray_for_context(const GeometryFieldContext &context,
IndexMask mask) const override;
const IndexMask &mask) const override;
std::string socket_inspection_name() const override;
@@ -302,7 +304,7 @@ class CurveLengthFieldInput final : public CurvesFieldInput {
CurveLengthFieldInput();
GVArray get_varray_for_context(const CurvesGeometry &curves,
eAttrDomain domain,
IndexMask mask) const final;
const IndexMask &mask) const final;
uint64_t hash() const override;
bool is_equal_to(const fn::FieldNode &other) const override;
std::optional<eAttrDomain> preferred_domain(const bke::CurvesGeometry &curves) const final;


@@ -155,7 +155,7 @@ class Instances {
* Remove the indices that are not contained in the mask input, and remove unused instance
* references afterwards.
*/
void remove(const blender::IndexMask mask,
void remove(const blender::IndexMask &mask,
const blender::bke::AnonymousAttributePropagationInfo &propagation_info);
/**
* Get an id for every instance. These can be used for e.g. motion blur.


@@ -32,7 +32,7 @@ void sample_point_attribute(Span<int> corner_verts,
Span<int> looptri_indices,
Span<float3> bary_coords,
const GVArray &src,
IndexMask mask,
const IndexMask &mask,
GMutableSpan dst);
void sample_point_normals(Span<int> corner_verts,
@@ -47,20 +47,20 @@ void sample_corner_attribute(Span<MLoopTri> looptris,
Span<int> looptri_indices,
Span<float3> bary_coords,
const GVArray &src,
IndexMask mask,
const IndexMask &mask,
GMutableSpan dst);
void sample_corner_normals(Span<MLoopTri> looptris,
Span<int> looptri_indices,
Span<float3> bary_coords,
Span<float3> src,
IndexMask mask,
const IndexMask &mask,
MutableSpan<float3> dst);
void sample_face_attribute(Span<int> looptri_polys,
Span<int> looptri_indices,
const GVArray &src,
IndexMask mask,
const IndexMask &mask,
GMutableSpan dst);
/**
@@ -148,7 +148,7 @@ class BaryWeightFromPositionFn : public mf::MultiFunction {
public:
BaryWeightFromPositionFn(GeometrySet geometry);
void call(IndexMask mask, mf::Params params, mf::Context context) const;
void call(const IndexMask &mask, mf::Params params, mf::Context context) const;
};
/**
@@ -163,7 +163,7 @@ class CornerBaryWeightFromPositionFn : public mf::MultiFunction {
public:
CornerBaryWeightFromPositionFn(GeometrySet geometry);
void call(IndexMask mask, mf::Params params, mf::Context context) const;
void call(const IndexMask &mask, mf::Params params, mf::Context context) const;
};
/**
@@ -183,7 +183,7 @@ class BaryWeightSampleFn : public mf::MultiFunction {
public:
BaryWeightSampleFn(GeometrySet geometry, fn::GField src_field);
void call(IndexMask mask, mf::Params params, mf::Context context) const;
void call(const IndexMask &mask, mf::Params params, mf::Context context) const;
private:
void evaluate_source(fn::GField src_field);


@@ -13,7 +13,7 @@ ColorGeometry4fMixer::ColorGeometry4fMixer(MutableSpan<ColorGeometry4f> buffer,
}
ColorGeometry4fMixer::ColorGeometry4fMixer(MutableSpan<ColorGeometry4f> buffer,
const IndexMask mask,
const IndexMask &mask,
const ColorGeometry4f default_color)
: buffer_(buffer), default_color_(default_color), total_weights_(buffer.size(), 0.0f)
{
@@ -49,7 +49,7 @@ void ColorGeometry4fMixer::finalize()
this->finalize(buffer_.index_range());
}
void ColorGeometry4fMixer::finalize(const IndexMask mask)
void ColorGeometry4fMixer::finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const float weight = total_weights_[i];
@@ -74,7 +74,7 @@ ColorGeometry4bMixer::ColorGeometry4bMixer(MutableSpan<ColorGeometry4b> buffer,
}
ColorGeometry4bMixer::ColorGeometry4bMixer(MutableSpan<ColorGeometry4b> buffer,
const IndexMask mask,
const IndexMask &mask,
const ColorGeometry4b default_color)
: buffer_(buffer),
default_color_(default_color),
@@ -111,7 +111,7 @@ void ColorGeometry4bMixer::finalize()
this->finalize(buffer_.index_range());
}
void ColorGeometry4bMixer::finalize(const IndexMask mask)
void ColorGeometry4bMixer::finalize(const IndexMask &mask)
{
mask.foreach_index([&](const int64_t i) {
const float weight = total_weights_[i];


@@ -114,19 +114,17 @@ Curves *curve_legacy_to_curves(const Curve &curve_legacy, const ListBase &nurbs_
MutableSpan<float> radii = radius_attribute.span;
MutableSpan<float> tilts = curves.tilt_for_write();
auto create_poly = [&](IndexMask selection) {
threading::parallel_for(selection.index_range(), 256, [&](IndexRange range) {
for (const int curve_i : selection.slice(range)) {
const Nurb &src_curve = *src_curves[curve_i];
const Span<BPoint> src_points(src_curve.bp, src_curve.pntsu);
const IndexRange points = points_by_curve[curve_i];
auto create_poly = [&](const IndexMask &selection) {
selection.foreach_index(GrainSize(256), [&](const int curve_i) {
const Nurb &src_curve = *src_curves[curve_i];
const Span<BPoint> src_points(src_curve.bp, src_curve.pntsu);
const IndexRange points = points_by_curve[curve_i];
for (const int i : src_points.index_range()) {
const BPoint &bp = src_points[i];
positions[points[i]] = bp.vec;
radii[points[i]] = bp.radius;
tilts[points[i]] = bp.tilt;
}
for (const int i : src_points.index_range()) {
const BPoint &bp = src_points[i];
positions[points[i]] = bp.vec;
radii[points[i]] = bp.radius;
tilts[points[i]] = bp.tilt;
}
});
};
@@ -135,58 +133,54 @@ Curves *curve_legacy_to_curves(const Curve &curve_legacy, const ListBase &nurbs_
* positions don't agree with the types because of evaluation, or because one-sided aligned
* handles weren't considered. While recalculating automatic handles to fix those situations
* is an option, currently this opts not to for the sake of flexibility. */
auto create_bezier = [&](IndexMask selection) {
auto create_bezier = [&](const IndexMask &selection) {
MutableSpan<int> resolutions = curves.resolution_for_write();
MutableSpan<float3> handle_positions_l = curves.handle_positions_left_for_write();
MutableSpan<float3> handle_positions_r = curves.handle_positions_right_for_write();
MutableSpan<int8_t> handle_types_l = curves.handle_types_left_for_write();
MutableSpan<int8_t> handle_types_r = curves.handle_types_right_for_write();
threading::parallel_for(selection.index_range(), 256, [&](IndexRange range) {
for (const int curve_i : selection.slice(range)) {
const Nurb &src_curve = *src_curves[curve_i];
const Span<BezTriple> src_points(src_curve.bezt, src_curve.pntsu);
const IndexRange points = points_by_curve[curve_i];
selection.foreach_index(GrainSize(256), [&](const int curve_i) {
const Nurb &src_curve = *src_curves[curve_i];
const Span<BezTriple> src_points(src_curve.bezt, src_curve.pntsu);
const IndexRange points = points_by_curve[curve_i];
resolutions[curve_i] = src_curve.resolu;
resolutions[curve_i] = src_curve.resolu;
for (const int i : src_points.index_range()) {
const BezTriple &point = src_points[i];
positions[points[i]] = point.vec[1];
handle_positions_l[points[i]] = point.vec[0];
handle_types_l[points[i]] = handle_type_from_legacy(point.h1);
handle_positions_r[points[i]] = point.vec[2];
handle_types_r[points[i]] = handle_type_from_legacy(point.h2);
radii[points[i]] = point.radius;
tilts[points[i]] = point.tilt;
}
for (const int i : src_points.index_range()) {
const BezTriple &point = src_points[i];
positions[points[i]] = point.vec[1];
handle_positions_l[points[i]] = point.vec[0];
handle_types_l[points[i]] = handle_type_from_legacy(point.h1);
handle_positions_r[points[i]] = point.vec[2];
handle_types_r[points[i]] = handle_type_from_legacy(point.h2);
radii[points[i]] = point.radius;
tilts[points[i]] = point.tilt;
}
});
};
auto create_nurbs = [&](IndexMask selection) {
auto create_nurbs = [&](const IndexMask &selection) {
MutableSpan<int> resolutions = curves.resolution_for_write();
MutableSpan<float> nurbs_weights = curves.nurbs_weights_for_write();
MutableSpan<int8_t> nurbs_orders = curves.nurbs_orders_for_write();
MutableSpan<int8_t> nurbs_knots_modes = curves.nurbs_knots_modes_for_write();
threading::parallel_for(selection.index_range(), 256, [&](IndexRange range) {
for (const int curve_i : selection.slice(range)) {
const Nurb &src_curve = *src_curves[curve_i];
const Span src_points(src_curve.bp, src_curve.pntsu);
const IndexRange points = points_by_curve[curve_i];
selection.foreach_index(GrainSize(256), [&](const int curve_i) {
const Nurb &src_curve = *src_curves[curve_i];
const Span src_points(src_curve.bp, src_curve.pntsu);
const IndexRange points = points_by_curve[curve_i];
resolutions[curve_i] = src_curve.resolu;
nurbs_orders[curve_i] = src_curve.orderu;
nurbs_knots_modes[curve_i] = knots_mode_from_legacy(src_curve.flagu);
resolutions[curve_i] = src_curve.resolu;
nurbs_orders[curve_i] = src_curve.orderu;
nurbs_knots_modes[curve_i] = knots_mode_from_legacy(src_curve.flagu);
for (const int i : src_points.index_range()) {
const BPoint &bp = src_points[i];
positions[points[i]] = bp.vec;
radii[points[i]] = bp.radius;
tilts[points[i]] = bp.tilt;
nurbs_weights[points[i]] = bp.vec[3];
}
for (const int i : src_points.index_range()) {
const BPoint &bp = src_points[i];
positions[points[i]] = bp.vec;
radii[points[i]] = bp.radius;
tilts[points[i]] = bp.tilt;
nurbs_weights[points[i]] = bp.vec[3];
}
});
};
@@ -195,7 +189,7 @@ Curves *curve_legacy_to_curves(const Curve &curve_legacy, const ListBase &nurbs_
curves.curve_types(),
curves.curve_type_counts(),
curves.curves_range(),
[&](IndexMask /*selection*/) { BLI_assert_unreachable(); },
[&](const IndexMask & /*selection*/) { BLI_assert_unreachable(); },
create_poly,
create_bezier,
create_nurbs);


@@ -11,7 +11,7 @@
#include "BLI_array_utils.hh"
#include "BLI_bounds.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_index_mask.hh"
#include "BLI_length_parameterize.hh"
#include "BLI_math_matrix.hh"
#include "BLI_math_rotation_legacy.hh"
@@ -268,7 +268,7 @@ void CurvesGeometry::fill_curve_types(const CurveType type)
this->tag_topology_changed();
}
void CurvesGeometry::fill_curve_types(const IndexMask selection, const CurveType type)
void CurvesGeometry::fill_curve_types(const IndexMask &selection, const CurveType type)
{
if (selection.size() == this->curves_num()) {
this->fill_curve_types(type);
@@ -281,7 +281,7 @@ void CurvesGeometry::fill_curve_types(const IndexMask selection, const CurveType
}
}
/* A potential performance optimization is only counting the changed indices. */
this->curve_types_for_write().fill_indices(selection.indices(), type);
index_mask::masked_fill<int8_t>(this->curve_types_for_write(), type, selection);
this->update_curve_types();
this->tag_topology_changed();
}
@@ -549,17 +549,17 @@ OffsetIndices<int> CurvesGeometry::evaluated_points_by_curve() const
}
IndexMask CurvesGeometry::indices_for_curve_type(const CurveType type,
Vector<int64_t> &r_indices) const
IndexMaskMemory &memory) const
{
return this->indices_for_curve_type(type, this->curves_range(), r_indices);
return this->indices_for_curve_type(type, this->curves_range(), memory);
}
IndexMask CurvesGeometry::indices_for_curve_type(const CurveType type,
const IndexMask selection,
Vector<int64_t> &r_indices) const
const IndexMask &selection,
IndexMaskMemory &memory) const
{
return curves::indices_for_type(
this->curve_types(), this->curve_type_counts(), type, selection, r_indices);
this->curve_types(), this->curve_type_counts(), type, selection, memory);
}
Array<int> CurvesGeometry::point_to_curve_map() const
@@ -573,8 +573,8 @@ void CurvesGeometry::ensure_nurbs_basis_cache() const
{
const bke::CurvesGeometryRuntime &runtime = *this->runtime;
runtime.nurbs_basis_cache.ensure([&](Vector<curves::nurbs::BasisCache> &r_data) {
Vector<int64_t> nurbs_indices;
const IndexMask nurbs_mask = this->indices_for_curve_type(CURVE_TYPE_NURBS, nurbs_indices);
IndexMaskMemory memory;
const IndexMask nurbs_mask = this->indices_for_curve_type(CURVE_TYPE_NURBS, memory);
if (nurbs_mask.is_empty()) {
r_data.clear_and_shrink();
return;
@@ -588,9 +588,9 @@ void CurvesGeometry::ensure_nurbs_basis_cache() const
const VArray<int8_t> orders = this->nurbs_orders();
const VArray<int8_t> knots_modes = this->nurbs_knots_modes();
threading::parallel_for(nurbs_mask.index_range(), 64, [&](const IndexRange range) {
nurbs_mask.foreach_segment(GrainSize(64), [&](const IndexMaskSegment segment) {
Vector<float, 32> knots;
for (const int curve_index : nurbs_mask.slice(range)) {
for (const int curve_index : segment) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
@@ -629,26 +629,23 @@ Span<float3> CurvesGeometry::evaluated_positions() const
const OffsetIndices<int> evaluated_points_by_curve = this->evaluated_points_by_curve();
const Span<float3> positions = this->positions();
auto evaluate_catmull = [&](const IndexMask selection) {
auto evaluate_catmull = [&](const IndexMask &selection) {
const VArray<bool> cyclic = this->cyclic();
const VArray<int> resolution = this->resolution();
threading::parallel_for(selection.index_range(), 128, [&](const IndexRange range) {
for (const int curve_index : selection.slice(range)) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
curves::catmull_rom::interpolate_to_evaluated(
positions.slice(points),
cyclic[curve_index],
resolution[curve_index],
evaluated_positions.slice(evaluated_points));
}
selection.foreach_index(GrainSize(128), [&](const int curve_index) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
curves::catmull_rom::interpolate_to_evaluated(positions.slice(points),
cyclic[curve_index],
resolution[curve_index],
evaluated_positions.slice(evaluated_points));
});
};
auto evaluate_poly = [&](const IndexMask selection) {
auto evaluate_poly = [&](const IndexMask &selection) {
curves::copy_point_data(
points_by_curve, evaluated_points_by_curve, selection, positions, evaluated_positions);
};
auto evaluate_bezier = [&](const IndexMask selection) {
auto evaluate_bezier = [&](const IndexMask &selection) {
const Span<float3> handle_positions_left = this->handle_positions_left();
const Span<float3> handle_positions_right = this->handle_positions_right();
if (handle_positions_left.is_empty() || handle_positions_right.is_empty()) {
@@ -657,35 +654,30 @@ Span<float3> CurvesGeometry::evaluated_positions() const
}
const Span<int> all_bezier_offsets =
runtime.evaluated_offsets_cache.data().all_bezier_offsets;
threading::parallel_for(selection.index_range(), 128, [&](const IndexRange range) {
for (const int curve_index : selection.slice(range)) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
const IndexRange offsets = curves::per_curve_point_offsets_range(points, curve_index);
curves::bezier::calculate_evaluated_positions(
positions.slice(points),
handle_positions_left.slice(points),
handle_positions_right.slice(points),
all_bezier_offsets.slice(offsets),
evaluated_positions.slice(evaluated_points));
}
selection.foreach_index(GrainSize(128), [&](const int curve_index) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
const IndexRange offsets = curves::per_curve_point_offsets_range(points, curve_index);
curves::bezier::calculate_evaluated_positions(positions.slice(points),
handle_positions_left.slice(points),
handle_positions_right.slice(points),
all_bezier_offsets.slice(offsets),
evaluated_positions.slice(evaluated_points));
});
};
auto evaluate_nurbs = [&](const IndexMask selection) {
auto evaluate_nurbs = [&](const IndexMask &selection) {
this->ensure_nurbs_basis_cache();
const VArray<int8_t> nurbs_orders = this->nurbs_orders();
const Span<float> nurbs_weights = this->nurbs_weights();
const Span<curves::nurbs::BasisCache> nurbs_basis_cache = runtime.nurbs_basis_cache.data();
threading::parallel_for(selection.index_range(), 128, [&](const IndexRange range) {
for (const int curve_index : selection.slice(range)) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
curves::nurbs::interpolate_to_evaluated(nurbs_basis_cache[curve_index],
nurbs_orders[curve_index],
nurbs_weights.slice_safe(points),
positions.slice(points),
evaluated_positions.slice(evaluated_points));
}
selection.foreach_index(GrainSize(128), [&](const int curve_index) {
const IndexRange points = points_by_curve[curve_index];
const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
curves::nurbs::interpolate_to_evaluated(nurbs_basis_cache[curve_index],
nurbs_orders[curve_index],
nurbs_weights.slice_safe(points),
positions.slice(points),
evaluated_positions.slice(evaluated_points));
});
};
curves::foreach_curve_by_type(this->curve_types(),
@@ -722,34 +714,32 @@ Span<float3> CurvesGeometry::evaluated_tangents() const
/* Correct the first and last tangents of non-cyclic Bezier curves so that they align with
* the inner handles. This is a separate loop to avoid the cost when Bezier type curves are
* not used. */
-Vector<int64_t> bezier_indices;
-const IndexMask bezier_mask = this->indices_for_curve_type(CURVE_TYPE_BEZIER, bezier_indices);
+IndexMaskMemory memory;
+const IndexMask bezier_mask = this->indices_for_curve_type(CURVE_TYPE_BEZIER, memory);
if (!bezier_mask.is_empty()) {
const OffsetIndices<int> points_by_curve = this->points_by_curve();
const Span<float3> positions = this->positions();
const Span<float3> handles_left = this->handle_positions_left();
const Span<float3> handles_right = this->handle_positions_right();
-threading::parallel_for(bezier_mask.index_range(), 1024, [&](IndexRange range) {
-for (const int curve_index : bezier_mask.slice(range)) {
-if (cyclic[curve_index]) {
-continue;
-}
-const IndexRange points = points_by_curve[curve_index];
-const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
+bezier_mask.foreach_index(GrainSize(1024), [&](const int curve_index) {
+if (cyclic[curve_index]) {
+return;
+}
+const IndexRange points = points_by_curve[curve_index];
+const IndexRange evaluated_points = evaluated_points_by_curve[curve_index];
-const float epsilon = 1e-6f;
-if (!math::almost_equal_relative(
-handles_right[points.first()], positions[points.first()], epsilon))
-{
-tangents[evaluated_points.first()] = math::normalize(handles_right[points.first()] -
-positions[points.first()]);
-}
-if (!math::almost_equal_relative(
-handles_left[points.last()], positions[points.last()], epsilon)) {
-tangents[evaluated_points.last()] = math::normalize(positions[points.last()] -
-handles_left[points.last()]);
-}
+const float epsilon = 1e-6f;
+if (!math::almost_equal_relative(
+handles_right[points.first()], positions[points.first()], epsilon))
+{
+tangents[evaluated_points.first()] = math::normalize(handles_right[points.first()] -
+positions[points.first()]);
+}
+if (!math::almost_equal_relative(
+handles_left[points.last()], positions[points.last()], epsilon)) {
+tangents[evaluated_points.last()] = math::normalize(positions[points.last()] -
+handles_left[points.last()]);
+}
});
}
@@ -1126,13 +1116,13 @@ static void copy_construct_data(const GSpan src, GMutableSpan dst)
static CurvesGeometry copy_with_removed_points(
const CurvesGeometry &curves,
-const IndexMask points_to_delete,
+const IndexMask &points_to_delete,
const AnonymousAttributePropagationInfo &propagation_info)
{
/* Use a map from points to curves to facilitate using an #IndexMask input. */
const Array<int> point_to_curve_map = curves.point_to_curve_map();
-const Vector<IndexRange> copy_point_ranges = points_to_delete.extract_ranges_invert(
+const Vector<IndexRange> copy_point_ranges = points_to_delete.to_ranges_invert(
curves.points_range());
/* For every range of points to copy, find the offset in the result curves point layers. */
@@ -1227,7 +1217,7 @@ static CurvesGeometry copy_with_removed_points(
return new_curves;
}
-void CurvesGeometry::remove_points(const IndexMask points_to_delete,
+void CurvesGeometry::remove_points(const IndexMask &points_to_delete,
const AnonymousAttributePropagationInfo &propagation_info)
{
if (points_to_delete.is_empty()) {
@@ -1241,13 +1231,13 @@ void CurvesGeometry::remove_points(const IndexMask points_to_delete,
static CurvesGeometry copy_with_removed_curves(
const CurvesGeometry &curves,
-const IndexMask curves_to_delete,
+const IndexMask &curves_to_delete,
const AnonymousAttributePropagationInfo &propagation_info)
{
const OffsetIndices old_points_by_curve = curves.points_by_curve();
const Span<int> old_offsets = curves.offsets();
-const Vector<IndexRange> old_curve_ranges = curves_to_delete.extract_ranges_invert(
-curves.curves_range(), nullptr);
+const Vector<IndexRange> old_curve_ranges = curves_to_delete.to_ranges_invert(
+curves.curves_range());
Vector<IndexRange> new_curve_ranges;
Vector<IndexRange> old_point_ranges;
Vector<IndexRange> new_point_ranges;
@@ -1338,7 +1328,7 @@ static CurvesGeometry copy_with_removed_curves(
return new_curves;
}
-void CurvesGeometry::remove_curves(const IndexMask curves_to_delete,
+void CurvesGeometry::remove_curves(const IndexMask &curves_to_delete,
const AnonymousAttributePropagationInfo &propagation_info)
{
if (curves_to_delete.is_empty()) {
@@ -1353,43 +1343,38 @@ void CurvesGeometry::remove_curves(const IndexMask curves_to_delete,
template<typename T>
static void reverse_curve_point_data(const CurvesGeometry &curves,
-const IndexMask curve_selection,
+const IndexMask &curve_selection,
MutableSpan<T> data)
{
const OffsetIndices points_by_curve = curves.points_by_curve();
-threading::parallel_for(curve_selection.index_range(), 256, [&](IndexRange range) {
-for (const int curve_i : curve_selection.slice(range)) {
-data.slice(points_by_curve[curve_i]).reverse();
-}
-});
+curve_selection.foreach_index(
+GrainSize(256), [&](const int curve_i) { data.slice(points_by_curve[curve_i]).reverse(); });
}
template<typename T>
static void reverse_swap_curve_point_data(const CurvesGeometry &curves,
-const IndexMask curve_selection,
+const IndexMask &curve_selection,
MutableSpan<T> data_a,
MutableSpan<T> data_b)
{
const OffsetIndices points_by_curve = curves.points_by_curve();
-threading::parallel_for(curve_selection.index_range(), 256, [&](IndexRange range) {
-for (const int curve_i : curve_selection.slice(range)) {
-const IndexRange points = points_by_curve[curve_i];
-MutableSpan<T> a = data_a.slice(points);
-MutableSpan<T> b = data_b.slice(points);
-for (const int i : IndexRange(points.size() / 2)) {
-const int end_index = points.size() - 1 - i;
-std::swap(a[end_index], b[i]);
-std::swap(b[end_index], a[i]);
-}
-if (points.size() % 2) {
-const int64_t middle_index = points.size() / 2;
-std::swap(a[middle_index], b[middle_index]);
-}
+curve_selection.foreach_index(GrainSize(256), [&](const int curve_i) {
+const IndexRange points = points_by_curve[curve_i];
+MutableSpan<T> a = data_a.slice(points);
+MutableSpan<T> b = data_b.slice(points);
+for (const int i : IndexRange(points.size() / 2)) {
+const int end_index = points.size() - 1 - i;
+std::swap(a[end_index], b[i]);
+std::swap(b[end_index], a[i]);
+}
+if (points.size() % 2) {
+const int64_t middle_index = points.size() / 2;
+std::swap(a[middle_index], b[middle_index]);
+}
});
}
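The reverse-swap loop above reverses two per-curve spans while exchanging their contents (used for swapping left/right Bezier handle arrays when a curve is reversed). A self-contained sketch of that inner loop over plain vectors, to make the index arithmetic concrete (`reverse_swap` is a hypothetical helper name, not Blender API):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Reverse a and b in place while swapping their contents: afterwards a holds
// the reverse of the old b, and b holds the reverse of the old a. This mirrors
// the swap pattern in reverse_swap_curve_point_data, including the odd-size
// middle element.
template<typename T> void reverse_swap(std::vector<T> &a, std::vector<T> &b)
{
  const int64_t size = int64_t(a.size());
  for (int64_t i = 0; i < size / 2; i++) {
    const int64_t end_index = size - 1 - i;
    std::swap(a[end_index], b[i]);
    std::swap(b[end_index], a[i]);
  }
  if (size % 2) {
    // The middle elements only trade places between the two arrays.
    const int64_t middle_index = size / 2;
    std::swap(a[middle_index], b[middle_index]);
  }
}
```

For example, with `a = {1, 2, 3}` and `b = {4, 5, 6}`, the result is `a = {6, 5, 4}` and `b = {3, 2, 1}`.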
-void CurvesGeometry::reverse_curves(const IndexMask curves_to_reverse)
+void CurvesGeometry::reverse_curves(const IndexMask &curves_to_reverse)
{
Set<StringRef> bezier_handle_names{{ATTR_HANDLE_POSITION_LEFT,
ATTR_HANDLE_POSITION_RIGHT,


@@ -4,21 +4,15 @@
* \ingroup bke
*/
-#include "BLI_index_mask_ops.hh"
#include "BKE_curves_utils.hh"
namespace blender::bke::curves {
void copy_curve_sizes(const OffsetIndices<int> points_by_curve,
-const IndexMask mask,
+const IndexMask &mask,
MutableSpan<int> sizes)
{
-threading::parallel_for(mask.index_range(), 4096, [&](IndexRange ranges_range) {
-for (const int64_t i : mask.slice(ranges_range)) {
-sizes[i] = points_by_curve[i].size();
-}
-});
+mask.foreach_index(GrainSize(4096), [&](const int i) { sizes[i] = points_by_curve[i].size(); });
}
void copy_curve_sizes(const OffsetIndices<int> points_by_curve,
@@ -54,32 +48,28 @@ void copy_point_data(const OffsetIndices<int> src_points_by_curve,
void copy_point_data(const OffsetIndices<int> src_points_by_curve,
const OffsetIndices<int> dst_points_by_curve,
-const IndexMask src_curve_selection,
+const IndexMask &src_curve_selection,
const GSpan src,
GMutableSpan dst)
{
-threading::parallel_for(src_curve_selection.index_range(), 512, [&](IndexRange range) {
-for (const int i : src_curve_selection.slice(range)) {
-const IndexRange src_points = src_points_by_curve[i];
-const IndexRange dst_points = dst_points_by_curve[i];
-/* The arrays might be large, so a threaded copy might make sense here too. */
-dst.slice(dst_points).copy_from(src.slice(src_points));
-}
+src_curve_selection.foreach_index(GrainSize(512), [&](const int i) {
+const IndexRange src_points = src_points_by_curve[i];
+const IndexRange dst_points = dst_points_by_curve[i];
+/* The arrays might be large, so a threaded copy might make sense here too. */
+dst.slice(dst_points).copy_from(src.slice(src_points));
+});
}
void fill_points(const OffsetIndices<int> points_by_curve,
-const IndexMask curve_selection,
+const IndexMask &curve_selection,
const GPointer value,
GMutableSpan dst)
{
BLI_assert(*value.type() == dst.type());
const CPPType &type = dst.type();
-threading::parallel_for(curve_selection.index_range(), 512, [&](IndexRange range) {
-for (const int i : curve_selection.slice(range)) {
-const IndexRange points = points_by_curve[i];
-type.fill_assign_n(value.get(), dst.slice(points).data(), points.size());
-}
+curve_selection.foreach_index(GrainSize(512), [&](const int i) {
+const IndexRange points = points_by_curve[i];
+type.fill_assign_n(value.get(), dst.slice(points).data(), points.size());
+});
}
@@ -110,8 +100,8 @@ bke::CurvesGeometry copy_only_curve_domain(const bke::CurvesGeometry &src_curves
IndexMask indices_for_type(const VArray<int8_t> &types,
const std::array<int, CURVE_TYPES_NUM> &type_counts,
const CurveType type,
-const IndexMask selection,
-Vector<int64_t> &r_indices)
+const IndexMask &selection,
+IndexMaskMemory &memory)
{
if (type_counts[type] == types.size()) {
return selection;
@@ -120,22 +110,22 @@ IndexMask indices_for_type(const VArray<int8_t> &types,
return types.get_internal_single() == type ? IndexMask(types.size()) : IndexMask(0);
}
Span<int8_t> types_span = types.get_internal_span();
-return index_mask_ops::find_indices_based_on_predicate(
-selection, 4096, r_indices, [&](const int index) { return types_span[index] == type; });
+return IndexMask::from_predicate(selection, GrainSize(4096), memory, [&](const int index) {
+return types_span[index] == type;
+});
}
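`indices_for_type` now builds its result with `IndexMask::from_predicate` and a caller-provided `IndexMaskMemory` allocator instead of appending to a `Vector<int64_t>`. A toy, single-threaded model of the `from_predicate` semantics (the function name `mask_from_predicate` and the plain-vector representation are illustrative only; the real version evaluates the predicate in parallel chunks and allocates segments from the `IndexMaskMemory`):

```cpp
#include <vector>

// Keep every index of the input selection for which the predicate is true,
// preserving order. This is the observable behavior of
// IndexMask::from_predicate, modeled on plain vectors.
template<typename Predicate>
std::vector<int> mask_from_predicate(const std::vector<int> &selection, Predicate &&predicate)
{
  std::vector<int> result;
  for (const int i : selection) {
    if (predicate(i)) {
      result.push_back(i);
    }
  }
  return result;
}
```

For example, selecting the indices whose curve-type byte matches a given type filters `{0, 1, 2, 3, 4}` with `types = {0, 1, 1, 0, 1}` down to `{1, 2, 4}`.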
void foreach_curve_by_type(const VArray<int8_t> &types,
const std::array<int, CURVE_TYPES_NUM> &counts,
-const IndexMask selection,
+const IndexMask &selection,
FunctionRef<void(IndexMask)> catmull_rom_fn,
FunctionRef<void(IndexMask)> poly_fn,
FunctionRef<void(IndexMask)> bezier_fn,
FunctionRef<void(IndexMask)> nurbs_fn)
{
-Vector<int64_t> indices;
auto call_if_not_empty = [&](const CurveType type, FunctionRef<void(IndexMask)> fn) {
-indices.clear();
-const IndexMask mask = indices_for_type(types, counts, type, selection, indices);
+IndexMaskMemory memory;
+const IndexMask mask = indices_for_type(types, counts, type, selection, memory);
if (!mask.is_empty()) {
fn(mask);
}


@@ -266,7 +266,7 @@ CurveLengthFieldInput::CurveLengthFieldInput()
GVArray CurveLengthFieldInput::get_varray_for_context(const CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const
+const IndexMask & /*mask*/) const
{
return construct_curve_length_gvarray(curves, domain);
}


@@ -117,7 +117,7 @@ void MeshComponent::ensure_owns_direct_data()
namespace blender::bke {
VArray<float3> mesh_normals_varray(const Mesh &mesh,
-const IndexMask mask,
+const IndexMask &mask,
const eAttrDomain domain)
{
switch (domain) {
@@ -135,11 +135,11 @@ VArray<float3> mesh_normals_varray(const Mesh &mesh,
Span<float3> vert_normals = mesh.vert_normals();
const Span<int2> edges = mesh.edges();
Array<float3> edge_normals(mask.min_array_size());
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
const int2 &edge = edges[i];
edge_normals[i] = math::normalize(
math::interpolate(vert_normals[edge[0]], vert_normals[edge[1]], 0.5f));
-}
+});
return VArray<float3>::ForContainer(std::move(edge_normals));
}
@@ -923,24 +923,22 @@ class VArrayImpl_For_VertexWeights final : public VMutableArrayImpl<float> {
});
}
-void materialize(IndexMask mask, float *dst) const override
+void materialize(const IndexMask &mask, float *dst) const override
{
if (dverts_ == nullptr) {
mask.foreach_index([&](const int i) { dst[i] = 0.0f; });
}
-threading::parallel_for(mask.index_range(), 4096, [&](const IndexRange range) {
-for (const int64_t i : mask.slice(range)) {
-if (const MDeformWeight *weight = this->find_weight_at_index(i)) {
-dst[i] = weight->weight;
-}
-else {
-dst[i] = 0.0f;
-}
+mask.foreach_index(GrainSize(4096), [&](const int64_t i) {
+if (const MDeformWeight *weight = this->find_weight_at_index(i)) {
+dst[i] = weight->weight;
+}
+else {
+dst[i] = 0.0f;
+}
});
}
-void materialize_to_uninitialized(IndexMask mask, float *dst) const override
+void materialize_to_uninitialized(const IndexMask &mask, float *dst) const override
{
this->materialize(mask, dst);
}


@@ -134,7 +134,7 @@ const Instances *GeometryFieldContext::instances() const
}
GVArray GeometryFieldInput::get_varray_for_context(const fn::FieldContext &context,
-const IndexMask mask,
+const IndexMask &mask,
ResourceScope & /*scope*/) const
{
if (const GeometryFieldContext *geometry_context = dynamic_cast<const GeometryFieldContext *>(
@@ -169,7 +169,7 @@ std::optional<eAttrDomain> GeometryFieldInput::preferred_domain(
}
GVArray MeshFieldInput::get_varray_for_context(const fn::FieldContext &context,
-const IndexMask mask,
+const IndexMask &mask,
ResourceScope & /*scope*/) const
{
if (const GeometryFieldContext *geometry_context = dynamic_cast<const GeometryFieldContext *>(
@@ -191,7 +191,7 @@ std::optional<eAttrDomain> MeshFieldInput::preferred_domain(const Mesh & /*mesh*
}
GVArray CurvesFieldInput::get_varray_for_context(const fn::FieldContext &context,
-IndexMask mask,
+const IndexMask &mask,
ResourceScope & /*scope*/) const
{
if (const GeometryFieldContext *geometry_context = dynamic_cast<const GeometryFieldContext *>(
@@ -215,7 +215,7 @@ std::optional<eAttrDomain> CurvesFieldInput::preferred_domain(
}
GVArray PointCloudFieldInput::get_varray_for_context(const fn::FieldContext &context,
-IndexMask mask,
+const IndexMask &mask,
ResourceScope & /*scope*/) const
{
if (const GeometryFieldContext *geometry_context = dynamic_cast<const GeometryFieldContext *>(
@@ -234,7 +234,7 @@ GVArray PointCloudFieldInput::get_varray_for_context(const fn::FieldContext &con
}
GVArray InstancesFieldInput::get_varray_for_context(const fn::FieldContext &context,
-IndexMask mask,
+const IndexMask &mask,
ResourceScope & /*scope*/) const
{
if (const GeometryFieldContext *geometry_context = dynamic_cast<const GeometryFieldContext *>(
@@ -253,7 +253,7 @@ GVArray InstancesFieldInput::get_varray_for_context(const fn::FieldContext &cont
}
GVArray AttributeFieldInput::get_varray_for_context(const GeometryFieldContext &context,
-const IndexMask /*mask*/) const
+const IndexMask & /*mask*/) const
{
const eCustomDataType data_type = cpp_type_to_custom_data_type(*type_);
if (auto attributes = context.attributes()) {
@@ -308,7 +308,7 @@ static StringRef get_random_id_attribute_name(const eAttrDomain domain)
}
GVArray IDAttributeFieldInput::get_varray_for_context(const GeometryFieldContext &context,
-const IndexMask mask) const
+const IndexMask &mask) const
{
const StringRef name = get_random_id_attribute_name(context.domain());
@@ -340,7 +340,7 @@ bool IDAttributeFieldInput::is_equal_to(const fn::FieldNode &other) const
}
GVArray AnonymousAttributeFieldInput::get_varray_for_context(const GeometryFieldContext &context,
-const IndexMask /*mask*/) const
+const IndexMask & /*mask*/) const
{
const eCustomDataType data_type = cpp_type_to_custom_data_type(*type_);
return *context.attributes()->lookup(*anonymous_id_, context.domain(), data_type);
@@ -391,7 +391,7 @@ std::optional<eAttrDomain> AnonymousAttributeFieldInput::preferred_domain(
namespace blender::bke {
GVArray NormalFieldInput::get_varray_for_context(const GeometryFieldContext &context,
-const IndexMask mask) const
+const IndexMask &mask) const
{
if (const Mesh *mesh = context.mesh()) {
return mesh_normals_varray(*mesh, mask, context.domain());


@@ -105,11 +105,12 @@ blender::Span<InstanceReference> Instances::references() const
return references_;
}
-void Instances::remove(const IndexMask mask,
+void Instances::remove(const IndexMask &mask,
const AnonymousAttributePropagationInfo &propagation_info)
{
using namespace blender;
-if (mask.is_range() && mask.as_range().start() == 0) {
+const std::optional<IndexRange> masked_range = mask.to_range();
+if (masked_range.has_value() && masked_range->start() == 0) {
/* Deleting from the end of the array can be much faster since no data has to be shifted. */
this->resize(mask.size());
this->remove_unused_references();
@@ -118,12 +119,12 @@ void Instances::remove(const IndexMask mask,
const Span<int> old_handles = this->reference_handles();
Vector<int> new_handles(mask.size());
-array_utils::gather(old_handles, mask.indices(), new_handles.as_mutable_span());
+array_utils::gather(old_handles, mask, new_handles.as_mutable_span());
reference_handles_ = std::move(new_handles);
const Span<float4x4> old_tansforms = this->transforms();
Vector<float4x4> new_transforms(mask.size());
-array_utils::gather(old_tansforms, mask.indices(), new_transforms.as_mutable_span());
+array_utils::gather(old_tansforms, mask, new_transforms.as_mutable_span());
transforms_ = std::move(new_transforms);
const bke::CustomDataAttributes &src_attributes = attributes_;
@@ -140,7 +141,7 @@ void Instances::remove(const IndexMask mask,
GSpan src = *src_attributes.get_for_read(id);
dst_attributes.create(id, meta_data.data_type);
GMutableSpan dst = *dst_attributes.get_for_write(id);
-array_utils::gather(src, mask.indices(), dst);
+array_utils::gather(src, mask, dst);
return true;
},


@@ -20,16 +20,16 @@ BLI_NOINLINE static void sample_point_attribute(const Span<int> corner_verts,
const Span<int> looptri_indices,
const Span<float3> bary_coords,
const VArray<T> &src,
-const IndexMask mask,
+const IndexMask &mask,
const MutableSpan<T> dst)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
const MLoopTri &tri = looptris[looptri_indices[i]];
dst[i] = attribute_math::mix3(bary_coords[i],
src[corner_verts[tri.tri[0]]],
src[corner_verts[tri.tri[1]]],
src[corner_verts[tri.tri[2]]]);
-}
+});
}
void sample_point_normals(const Span<int> corner_verts,
@@ -40,14 +40,14 @@ void sample_point_normals(const Span<int> corner_verts,
const IndexMask mask,
const MutableSpan<float3> dst)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
const MLoopTri &tri = looptris[looptri_indices[i]];
const float3 value = attribute_math::mix3(bary_coords[i],
src[corner_verts[tri.tri[0]]],
src[corner_verts[tri.tri[1]]],
src[corner_verts[tri.tri[2]]]);
dst[i] = math::normalize(value);
-}
+});
}
void sample_point_attribute(const Span<int> corner_verts,
@@ -55,7 +55,7 @@ void sample_point_attribute(const Span<int> corner_verts,
const Span<int> looptri_indices,
const Span<float3> bary_coords,
const GVArray &src,
-const IndexMask mask,
+const IndexMask &mask,
const GMutableSpan dst)
{
BLI_assert(src.type() == dst.type());
@@ -78,40 +78,40 @@ BLI_NOINLINE static void sample_corner_attribute(const Span<MLoopTri> looptris,
const Span<int> looptri_indices,
const Span<float3> bary_coords,
const VArray<T> &src,
-const IndexMask mask,
+const IndexMask &mask,
const MutableSpan<T> dst)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
if constexpr (check_indices) {
if (looptri_indices[i] == -1) {
dst[i] = {};
-continue;
+return;
}
}
const MLoopTri &tri = looptris[looptri_indices[i]];
dst[i] = sample_corner_attribute_with_bary_coords(bary_coords[i], tri, src);
-}
+});
}
void sample_corner_normals(const Span<MLoopTri> looptris,
const Span<int> looptri_indices,
const Span<float3> bary_coords,
const Span<float3> src,
-const IndexMask mask,
+const IndexMask &mask,
const MutableSpan<float3> dst)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
const MLoopTri &tri = looptris[looptri_indices[i]];
const float3 value = sample_corner_attribute_with_bary_coords(bary_coords[i], tri, src);
dst[i] = math::normalize(value);
-}
+});
}
void sample_corner_attribute(const Span<MLoopTri> looptris,
const Span<int> looptri_indices,
const Span<float3> bary_coords,
const GVArray &src,
-const IndexMask mask,
+const IndexMask &mask,
const GMutableSpan dst)
{
BLI_assert(src.type() == dst.type());
@@ -128,20 +128,20 @@ template<typename T>
void sample_face_attribute(const Span<int> looptri_polys,
const Span<int> looptri_indices,
const VArray<T> &src,
-const IndexMask mask,
+const IndexMask &mask,
const MutableSpan<T> dst)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
const int looptri_index = looptri_indices[i];
const int poly_index = looptri_polys[looptri_index];
dst[i] = src[poly_index];
-}
+});
}
void sample_face_attribute(const Span<int> looptri_polys,
const Span<int> looptri_indices,
const GVArray &src,
-const IndexMask mask,
+const IndexMask &mask,
const GMutableSpan dst)
{
BLI_assert(src.type() == dst.type());
@@ -159,20 +159,20 @@ static void sample_barycentric_weights(const Span<float3> vert_positions,
const Span<MLoopTri> looptris,
const Span<int> looptri_indices,
const Span<float3> sample_positions,
-const IndexMask mask,
+const IndexMask &mask,
MutableSpan<float3> bary_coords)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
if constexpr (check_indices) {
if (looptri_indices[i] == -1) {
bary_coords[i] = {};
-continue;
+return;
}
}
const MLoopTri &tri = looptris[looptri_indices[i]];
bary_coords[i] = compute_bary_coord_in_triangle(
vert_positions, corner_verts, tri, sample_positions[i]);
-}
+});
}
template<bool check_indices = false>
@@ -181,14 +181,14 @@ static void sample_nearest_weights(const Span<float3> vert_positions,
const Span<MLoopTri> looptris,
const Span<int> looptri_indices,
const Span<float3> sample_positions,
-const IndexMask mask,
+const IndexMask &mask,
MutableSpan<float3> bary_coords)
{
-for (const int i : mask) {
+mask.foreach_index([&](const int i) {
if constexpr (check_indices) {
if (looptri_indices[i] == -1) {
bary_coords[i] = {};
-continue;
+return;
}
}
const MLoopTri &tri = looptris[looptri_indices[i]];
@@ -199,7 +199,7 @@ static void sample_nearest_weights(const Span<float3> vert_positions,
float3(1, 0, 0),
float3(0, 1, 0),
float3(0, 0, 1));
-}
+});
}
int sample_surface_points_spherical(RandomNumberGenerator &rng,
@@ -394,7 +394,7 @@ BaryWeightFromPositionFn::BaryWeightFromPositionFn(GeometrySet geometry)
looptris_ = mesh.looptris();
}
-void BaryWeightFromPositionFn::call(IndexMask mask,
+void BaryWeightFromPositionFn::call(const IndexMask &mask,
mf::Params params,
mf::Context /*context*/) const
{
@@ -430,7 +430,7 @@ CornerBaryWeightFromPositionFn::CornerBaryWeightFromPositionFn(GeometrySet geome
looptris_ = mesh.looptris();
}
-void CornerBaryWeightFromPositionFn::call(IndexMask mask,
+void CornerBaryWeightFromPositionFn::call(const IndexMask &mask,
mf::Params params,
mf::Context /*context*/) const
{
@@ -459,7 +459,7 @@ BaryWeightSampleFn::BaryWeightSampleFn(GeometrySet geometry, fn::GField src_fiel
this->set_signature(&signature_);
}
-void BaryWeightSampleFn::call(const IndexMask mask,
+void BaryWeightSampleFn::call(const IndexMask &mask,
mf::Params params,
mf::Context /*context*/) const
{


@@ -451,10 +451,10 @@ void DataTypeConversions::convert_to_uninitialized(const CPPType &from_type,
static void call_convert_to_uninitialized_fn(const GVArray &from,
const mf::MultiFunction &fn,
-const IndexMask mask,
+const IndexMask &mask,
GMutableSpan to)
{
-mf::ParamsBuilder params{fn, mask.min_array_size()};
+mf::ParamsBuilder params{fn, &mask};
params.add_readonly_single_input(from);
params.add_uninitialized_single_output(to);
mf::ContextBuilder context;
@@ -515,13 +515,13 @@ class GVArray_For_ConvertedGVArray : public GVArrayImpl {
from_type_.destruct(buffer);
}
-void materialize(const IndexMask mask, void *dst) const override
+void materialize(const IndexMask &mask, void *dst) const override
{
type_->destruct_n(dst, mask.min_array_size());
this->materialize_to_uninitialized(mask, dst);
}
-void materialize_to_uninitialized(const IndexMask mask, void *dst) const override
+void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override
{
call_convert_to_uninitialized_fn(varray_,
*old_to_new_conversions_.multi_function,
@@ -573,13 +573,13 @@ class GVMutableArray_For_ConvertedGVMutableArray : public GVMutableArrayImpl {
varray_.set_by_relocate(index, buffer);
}
-void materialize(const IndexMask mask, void *dst) const override
+void materialize(const IndexMask &mask, void *dst) const override
{
type_->destruct_n(dst, mask.min_array_size());
this->materialize_to_uninitialized(mask, dst);
}
-void materialize_to_uninitialized(const IndexMask mask, void *dst) const override
+void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override
{
call_convert_to_uninitialized_fn(varray_,
*old_to_new_conversions_.multi_function,


@@ -41,7 +41,10 @@ inline void copy(const Span<T> src, MutableSpan<T> dst, const int64_t grain_size
* Fill the destination span by copying masked values from the `src` array. Threaded based on
* grain-size.
*/
-void copy(const GVArray &src, IndexMask selection, GMutableSpan dst, int64_t grain_size = 4096);
+void copy(const GVArray &src,
+const IndexMask &selection,
+GMutableSpan dst,
+int64_t grain_size = 4096);
/**
* Fill the destination span by copying values from the `src` array. Threaded based on
@@ -49,34 +52,34 @@ void copy(const GVArray &src, IndexMask selection, GMutableSpan dst, int64_t gra
*/
template<typename T>
inline void copy(const Span<T> src,
-const IndexMask selection,
+const IndexMask &selection,
MutableSpan<T> dst,
const int64_t grain_size = 4096)
{
BLI_assert(src.size() == dst.size());
-threading::parallel_for(selection.index_range(), grain_size, [&](const IndexRange range) {
-for (const int64_t index : selection.slice(range)) {
-dst[index] = src[index];
-}
-});
+selection.foreach_index_optimized<int64_t>(GrainSize(grain_size),
+[&](const int64_t i) { dst[i] = src[i]; });
}
/**
* Fill the destination span by gathering indexed values from the `src` array.
*/
-void gather(const GVArray &src, IndexMask indices, GMutableSpan dst, int64_t grain_size = 4096);
+void gather(const GVArray &src,
+const IndexMask &indices,
+GMutableSpan dst,
+int64_t grain_size = 4096);
/**
* Fill the destination span by gathering indexed values from the `src` array.
*/
-void gather(GSpan src, IndexMask indices, GMutableSpan dst, int64_t grain_size = 4096);
+void gather(GSpan src, const IndexMask &indices, GMutableSpan dst, int64_t grain_size = 4096);
/**
* Fill the destination span by gathering indexed values from the `src` array.
*/
template<typename T>
inline void gather(const VArray<T> &src,
-const IndexMask indices,
+const IndexMask &indices,
MutableSpan<T> dst,
const int64_t grain_size = 4096)
{
@@ -91,16 +94,17 @@ inline void gather(const VArray<T> &src,
*/
template<typename T, typename IndexT>
inline void gather(const Span<T> src,
-const IndexMask indices,
+const IndexMask &indices,
MutableSpan<T> dst,
const int64_t grain_size = 4096)
{
BLI_assert(indices.size() == dst.size());
-threading::parallel_for(indices.index_range(), grain_size, [&](const IndexRange range) {
-for (const int64_t i : range) {
-dst[i] = src[indices[i]];
-}
-});
+indices.foreach_segment(GrainSize(grain_size),
+[&](const IndexMaskSegment segment, const int64_t segment_pos) {
+for (const int64_t i : segment.index_range()) {
+dst[segment_pos + i] = src[segment[i]];
+}
+});
}
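The new `gather` walks the mask segment by segment: each segment comes with its position inside the mask, so the destination is written contiguously while the source is read through the mask's indices. A self-contained sketch of that access pattern (`ToySegment` and `gather_by_segments` are illustrative names; Blender's `IndexMaskSegment` actually stores an offset plus compressed 16-bit indices):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy segment: just a list of source indices. The position of the segment
// inside the whole mask is tracked by the caller (segment_pos below), which
// mirrors the (segment, segment_pos) callback of IndexMask::foreach_segment.
struct ToySegment {
  std::vector<int64_t> indices;
};

template<typename T>
void gather_by_segments(const std::vector<T> &src,
                        const std::vector<ToySegment> &segments,
                        std::vector<T> &dst)
{
  int64_t segment_pos = 0;
  for (const ToySegment &segment : segments) {
    for (size_t i = 0; i < segment.indices.size(); i++) {
      // dst is filled positionally; src is accessed through the mask indices.
      dst[segment_pos + i] = src[segment.indices[i]];
    }
    segment_pos += int64_t(segment.indices.size());
  }
}
```

Iterating per segment is what makes the range case cheap: a segment that encodes a plain range needs no per-index memory access in the real implementation.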
/**


@@ -0,0 +1,36 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once
/** \file
* \ingroup bli
*/
#include <algorithm>
#include "BLI_utildefines.h"
namespace blender::binary_search {
/**
* Find the index of the first element where the predicate is true. The predicate must also be
* true for all following elements. If the predicate is false for all elements, the size of the
* range is returned.
*/
template<typename Iterator, typename Predicate>
int64_t find_predicate_begin(Iterator begin, Iterator end, Predicate &&predicate)
{
return std::lower_bound(begin,
end,
nullptr,
[&](const auto &value, void * /*dummy*/) { return !predicate(value); }) -
begin;
}
template<typename Range, typename Predicate>
int64_t find_predicate_begin(const Range &range, Predicate &&predicate)
{
return find_predicate_begin(range.begin(), range.end(), predicate);
}
} // namespace blender::binary_search
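The new `binary_search` helper backs the `O(log #indices)` slicing and random access mentioned in the description: it finds the first element satisfying a monotonic predicate via `std::lower_bound`. A standalone, compilable copy of the function for illustration (identical logic, outside the `blender::` namespace):

```cpp
#include <algorithm>
#include <cstdint>

// Find the index of the first element where the predicate is true. The
// predicate must be true for all following elements (i.e. the range is
// partitioned). If the predicate is false everywhere, the size of the range
// is returned. The nullptr "value" is a dummy so that std::lower_bound's
// comparator can forward to the unary predicate.
template<typename Iterator, typename Predicate>
int64_t find_predicate_begin(Iterator begin, Iterator end, Predicate &&predicate)
{
  return std::lower_bound(begin,
                          end,
                          nullptr,
                          [&](const auto &value, void * /*dummy*/) { return !predicate(value); }) -
         begin;
}
```

For a sorted index array like `{1, 3, 5, 8, 10, 12}`, `find_predicate_begin` with the predicate `v >= 8` returns 3; with a predicate that is false everywhere it returns the range size, 6.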


@@ -43,7 +43,7 @@
* Constructs a single instance of that type at the given pointer.
* - `default_construct_n(void *ptr, int64_t n)`:
* Constructs n instances of that type in an array that starts at the given pointer.
-* - `default_construct_indices(void *ptr, IndexMask mask)`:
+* - `default_construct_indices(void *ptr, const IndexMask &mask)`:
* Constructs multiple instances of that type in an array that starts at the given pointer.
* Only the indices referenced by `mask` will be constructed.
*
@@ -105,37 +105,37 @@ class CPPType : NonCopyable, NonMovable {
bool has_special_member_functions_ = false;
void (*default_construct_)(void *ptr) = nullptr;
-void (*default_construct_indices_)(void *ptr, IndexMask mask) = nullptr;
+void (*default_construct_indices_)(void *ptr, const IndexMask &mask) = nullptr;
void (*value_initialize_)(void *ptr) = nullptr;
-void (*value_initialize_indices_)(void *ptr, IndexMask mask) = nullptr;
+void (*value_initialize_indices_)(void *ptr, const IndexMask &mask) = nullptr;
void (*destruct_)(void *ptr) = nullptr;
-void (*destruct_indices_)(void *ptr, IndexMask mask) = nullptr;
+void (*destruct_indices_)(void *ptr, const IndexMask &mask) = nullptr;
void (*copy_assign_)(const void *src, void *dst) = nullptr;
-void (*copy_assign_indices_)(const void *src, void *dst, IndexMask mask) = nullptr;
-void (*copy_assign_compressed_)(const void *src, void *dst, IndexMask mask) = nullptr;
+void (*copy_assign_indices_)(const void *src, void *dst, const IndexMask &mask) = nullptr;
+void (*copy_assign_compressed_)(const void *src, void *dst, const IndexMask &mask) = nullptr;
void (*copy_construct_)(const void *src, void *dst) = nullptr;
-void (*copy_construct_indices_)(const void *src, void *dst, IndexMask mask) = nullptr;
-void (*copy_construct_compressed_)(const void *src, void *dst, IndexMask mask) = nullptr;
+void (*copy_construct_indices_)(const void *src, void *dst, const IndexMask &mask) = nullptr;
+void (*copy_construct_compressed_)(const void *src, void *dst, const IndexMask &mask) = nullptr;
void (*move_assign_)(void *src, void *dst) = nullptr;
-void (*move_assign_indices_)(void *src, void *dst, IndexMask mask) = nullptr;
+void (*move_assign_indices_)(void *src, void *dst, const IndexMask &mask) = nullptr;
void (*move_construct_)(void *src, void *dst) = nullptr;
-void (*move_construct_indices_)(void *src, void *dst, IndexMask mask) = nullptr;
+void (*move_construct_indices_)(void *src, void *dst, const IndexMask &mask) = nullptr;
void (*relocate_assign_)(void *src, void *dst) = nullptr;
-void (*relocate_assign_indices_)(void *src, void *dst, IndexMask mask) = nullptr;
+void (*relocate_assign_indices_)(void *src, void *dst, const IndexMask &mask) = nullptr;
void (*relocate_construct_)(void *src, void *dst) = nullptr;
-void (*relocate_construct_indices_)(void *src, void *dst, IndexMask mask) = nullptr;
+void (*relocate_construct_indices_)(void *src, void *dst, const IndexMask &mask) = nullptr;
-void (*fill_assign_indices_)(const void *value, void *dst, IndexMask mask) = nullptr;
+void (*fill_assign_indices_)(const void *value, void *dst, const IndexMask &mask) = nullptr;
-void (*fill_construct_indices_)(const void *value, void *dst, IndexMask mask) = nullptr;
+void (*fill_construct_indices_)(const void *value, void *dst, const IndexMask &mask) = nullptr;
void (*print_)(const void *value, std::stringstream &ss) = nullptr;
bool (*is_equal_)(const void *a, const void *b) = nullptr;
@@ -323,7 +323,7 @@ class CPPType : NonCopyable, NonMovable {
this->default_construct_indices(ptr, IndexMask(n));
}
-void default_construct_indices(void *ptr, IndexMask mask) const
+void default_construct_indices(void *ptr, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(ptr));
@@ -348,7 +348,7 @@ class CPPType : NonCopyable, NonMovable {
this->value_initialize_indices(ptr, IndexMask(n));
}
-void value_initialize_indices(void *ptr, IndexMask mask) const
+void value_initialize_indices(void *ptr, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(ptr));
@@ -375,7 +375,7 @@ class CPPType : NonCopyable, NonMovable {
this->destruct_indices(ptr, IndexMask(n));
}
-void destruct_indices(void *ptr, IndexMask mask) const
+void destruct_indices(void *ptr, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(ptr));
@@ -401,7 +401,7 @@ class CPPType : NonCopyable, NonMovable {
this->copy_assign_indices(src, dst, IndexMask(n));
}
void copy_assign_indices(const void *src, void *dst, IndexMask mask) const
void copy_assign_indices(const void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -413,7 +413,7 @@ class CPPType : NonCopyable, NonMovable {
/**
* Similar to #copy_assign_indices, but does not leave gaps in the #dst array.
*/
void copy_assign_compressed(const void *src, void *dst, IndexMask mask) const
void copy_assign_compressed(const void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -444,7 +444,7 @@ class CPPType : NonCopyable, NonMovable {
this->copy_construct_indices(src, dst, IndexMask(n));
}
void copy_construct_indices(const void *src, void *dst, IndexMask mask) const
void copy_construct_indices(const void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -456,7 +456,7 @@ class CPPType : NonCopyable, NonMovable {
/**
* Similar to #copy_construct_indices, but does not leave gaps in the #dst array.
*/
void copy_construct_compressed(const void *src, void *dst, IndexMask mask) const
void copy_construct_compressed(const void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -486,7 +486,7 @@ class CPPType : NonCopyable, NonMovable {
this->move_assign_indices(src, dst, IndexMask(n));
}
void move_assign_indices(void *src, void *dst, IndexMask mask) const
void move_assign_indices(void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -517,7 +517,7 @@ class CPPType : NonCopyable, NonMovable {
this->move_construct_indices(src, dst, IndexMask(n));
}
void move_construct_indices(void *src, void *dst, IndexMask mask) const
void move_construct_indices(void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -548,7 +548,7 @@ class CPPType : NonCopyable, NonMovable {
this->relocate_assign_indices(src, dst, IndexMask(n));
}
void relocate_assign_indices(void *src, void *dst, IndexMask mask) const
void relocate_assign_indices(void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -579,7 +579,7 @@ class CPPType : NonCopyable, NonMovable {
this->relocate_construct_indices(src, dst, IndexMask(n));
}
void relocate_construct_indices(void *src, void *dst, IndexMask mask) const
void relocate_construct_indices(void *src, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || src != dst);
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(src));
@@ -598,7 +598,7 @@ class CPPType : NonCopyable, NonMovable {
this->fill_assign_indices(value, dst, IndexMask(n));
}
void fill_assign_indices(const void *value, void *dst, IndexMask mask) const
void fill_assign_indices(const void *value, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(value));
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(dst));
@@ -616,7 +616,7 @@ class CPPType : NonCopyable, NonMovable {
this->fill_construct_indices(value, dst, IndexMask(n));
}
void fill_construct_indices(const void *value, void *dst, IndexMask mask) const
void fill_construct_indices(const void *value, void *dst, const IndexMask &mask) const
{
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(value));
BLI_assert(mask.size() == 0 || this->pointer_can_point_to_instance(dst));


@@ -15,9 +15,12 @@ template<typename T> void default_construct_cb(void *ptr)
{
new (ptr) T;
}
template<typename T> void default_construct_indices_cb(void *ptr, IndexMask mask)
template<typename T> void default_construct_indices_cb(void *ptr, const IndexMask &mask)
{
mask.foreach_index([&](int64_t i) { new (static_cast<T *>(ptr) + i) T; });
if constexpr (std::is_trivially_constructible_v<T>) {
return;
}
mask.foreach_index_optimized<int64_t>([&](int64_t i) { new (static_cast<T *>(ptr) + i) T; });
}
template<typename T> void value_initialize_cb(void *ptr)
@@ -25,89 +28,89 @@ template<typename T> void value_initialize_cb(void *ptr)
new (ptr) T();
}
template<typename T> void value_initialize_indices_cb(void *ptr, IndexMask mask)
template<typename T> void value_initialize_indices_cb(void *ptr, const IndexMask &mask)
{
mask.foreach_index([&](int64_t i) { new (static_cast<T *>(ptr) + i) T(); });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { new (static_cast<T *>(ptr) + i) T(); });
}
template<typename T> void destruct_cb(void *ptr)
{
(static_cast<T *>(ptr))->~T();
}
template<typename T> void destruct_indices_cb(void *ptr, IndexMask mask)
template<typename T> void destruct_indices_cb(void *ptr, const IndexMask &mask)
{
if (std::is_trivially_destructible_v<T>) {
return;
}
T *ptr_ = static_cast<T *>(ptr);
mask.foreach_index([&](int64_t i) { ptr_[i].~T(); });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { ptr_[i].~T(); });
}
template<typename T> void copy_assign_cb(const void *src, void *dst)
{
*static_cast<T *>(dst) = *static_cast<const T *>(src);
}
template<typename T> void copy_assign_indices_cb(const void *src, void *dst, IndexMask mask)
template<typename T> void copy_assign_indices_cb(const void *src, void *dst, const IndexMask &mask)
{
const T *src_ = static_cast<const T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) { dst_[i] = src_[i]; });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { dst_[i] = src_[i]; });
}
template<typename T> void copy_assign_compressed_cb(const void *src, void *dst, IndexMask mask)
template<typename T>
void copy_assign_compressed_cb(const void *src, void *dst, const IndexMask &mask)
{
const T *src_ = static_cast<const T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
dst_[i] = src_[best_mask[i]];
}
});
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { dst_[pos] = src_[i]; });
}
template<typename T> void copy_construct_cb(const void *src, void *dst)
{
blender::uninitialized_copy_n(static_cast<const T *>(src), 1, static_cast<T *>(dst));
}
template<typename T> void copy_construct_indices_cb(const void *src, void *dst, IndexMask mask)
template<typename T>
void copy_construct_indices_cb(const void *src, void *dst, const IndexMask &mask)
{
const T *src_ = static_cast<const T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) { new (dst_ + i) T(src_[i]); });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { new (dst_ + i) T(src_[i]); });
}
template<typename T> void copy_construct_compressed_cb(const void *src, void *dst, IndexMask mask)
template<typename T>
void copy_construct_compressed_cb(const void *src, void *dst, const IndexMask &mask)
{
const T *src_ = static_cast<const T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
new (dst_ + i) T(src_[best_mask[i]]);
}
});
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { new (dst_ + pos) T(src_[i]); });
}
template<typename T> void move_assign_cb(void *src, void *dst)
{
blender::initialized_move_n(static_cast<T *>(src), 1, static_cast<T *>(dst));
}
template<typename T> void move_assign_indices_cb(void *src, void *dst, IndexMask mask)
template<typename T> void move_assign_indices_cb(void *src, void *dst, const IndexMask &mask)
{
T *src_ = static_cast<T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) { dst_[i] = std::move(src_[i]); });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { dst_[i] = std::move(src_[i]); });
}
template<typename T> void move_construct_cb(void *src, void *dst)
{
blender::uninitialized_move_n(static_cast<T *>(src), 1, static_cast<T *>(dst));
}
template<typename T> void move_construct_indices_cb(void *src, void *dst, IndexMask mask)
template<typename T> void move_construct_indices_cb(void *src, void *dst, const IndexMask &mask)
{
T *src_ = static_cast<T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) { new (dst_ + i) T(std::move(src_[i])); });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { new (dst_ + i) T(std::move(src_[i])); });
}
template<typename T> void relocate_assign_cb(void *src, void *dst)
@@ -118,12 +121,12 @@ template<typename T> void relocate_assign_cb(void *src, void *dst)
*dst_ = std::move(*src_);
src_->~T();
}
template<typename T> void relocate_assign_indices_cb(void *src, void *dst, IndexMask mask)
template<typename T> void relocate_assign_indices_cb(void *src, void *dst, const IndexMask &mask)
{
T *src_ = static_cast<T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) {
mask.foreach_index_optimized<int64_t>([&](int64_t i) {
dst_[i] = std::move(src_[i]);
src_[i].~T();
});
@@ -137,12 +140,13 @@ template<typename T> void relocate_construct_cb(void *src, void *dst)
new (dst_) T(std::move(*src_));
src_->~T();
}
template<typename T> void relocate_construct_indices_cb(void *src, void *dst, IndexMask mask)
template<typename T>
void relocate_construct_indices_cb(void *src, void *dst, const IndexMask &mask)
{
T *src_ = static_cast<T *>(src);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) {
mask.foreach_index_optimized<int64_t>([&](int64_t i) {
new (dst_ + i) T(std::move(src_[i]));
src_[i].~T();
});
@@ -157,12 +161,13 @@ template<typename T> void fill_assign_cb(const void *value, void *dst, int64_t n
dst_[i] = value_;
}
}
template<typename T> void fill_assign_indices_cb(const void *value, void *dst, IndexMask mask)
template<typename T>
void fill_assign_indices_cb(const void *value, void *dst, const IndexMask &mask)
{
const T &value_ = *static_cast<const T *>(value);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) { dst_[i] = value_; });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { dst_[i] = value_; });
}
template<typename T> void fill_construct_cb(const void *value, void *dst, int64_t n)
@@ -174,12 +179,13 @@ template<typename T> void fill_construct_cb(const void *value, void *dst, int64_
new (dst_ + i) T(value_);
}
}
template<typename T> void fill_construct_indices_cb(const void *value, void *dst, IndexMask mask)
template<typename T>
void fill_construct_indices_cb(const void *value, void *dst, const IndexMask &mask)
{
const T &value_ = *static_cast<const T *>(value);
T *dst_ = static_cast<T *>(dst);
mask.foreach_index([&](int64_t i) { new (dst_ + i) T(value_); });
mask.foreach_index_optimized<int64_t>([&](int64_t i) { new (dst_ + i) T(value_); });
}
template<typename T> void print_cb(const void *value, std::stringstream &ss)


@@ -44,7 +44,7 @@ namespace blender {
* - Call `fn` with the devirtualized argument and return what `fn` returns.
* - Don't call `fn` (because the devirtualization failed) and return false.
*
* Examples for devirtualizers: #BasicDevirtualizer, #IndexMaskDevirtualizer, #VArrayDevirtualizer.
* Examples for devirtualizers: #BasicDevirtualizer, #VArrayDevirtualizer.
*/
template<typename Fn, typename... Devirtualizers>
inline bool call_with_devirtualized_parameters(const std::tuple<Devirtualizers...> &devis,


@@ -64,10 +64,10 @@ class GVectorArray : NonCopyable, NonMovable {
void extend(int64_t index, GSpan values);
/* Add multiple elements to multiple vectors. */
void extend(IndexMask mask, const GVVectorArray &values);
void extend(IndexMask mask, const GVectorArray &values);
void extend(const IndexMask &mask, const GVVectorArray &values);
void extend(const IndexMask &mask, const GVectorArray &values);
void clear(IndexMask mask);
void clear(const IndexMask &mask);
GMutableSpan operator[](int64_t index);
GSpan operator[](int64_t index) const;


@@ -44,11 +44,11 @@ class GVArrayImpl {
virtual CommonVArrayInfo common_info() const;
virtual void materialize(const IndexMask mask, void *dst) const;
virtual void materialize_to_uninitialized(const IndexMask mask, void *dst) const;
virtual void materialize(const IndexMask &mask, void *dst) const;
virtual void materialize_to_uninitialized(const IndexMask &mask, void *dst) const;
virtual void materialize_compressed(IndexMask mask, void *dst) const;
virtual void materialize_compressed_to_uninitialized(IndexMask mask, void *dst) const;
virtual void materialize_compressed(const IndexMask &mask, void *dst) const;
virtual void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const;
virtual bool try_assign_VArray(void *varray) const;
};
@@ -126,13 +126,13 @@ class GVArrayCommon {
bool may_have_ownership() const;
void materialize(void *dst) const;
void materialize(const IndexMask mask, void *dst) const;
void materialize(const IndexMask &mask, void *dst) const;
void materialize_to_uninitialized(void *dst) const;
void materialize_to_uninitialized(const IndexMask mask, void *dst) const;
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const;
void materialize_compressed(IndexMask mask, void *dst) const;
void materialize_compressed_to_uninitialized(IndexMask mask, void *dst) const;
void materialize_compressed(const IndexMask &mask, void *dst) const;
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const;
CommonVArrayInfo common_info() const;
@@ -321,23 +321,23 @@ template<typename T> class GVArrayImpl_For_VArray : public GVArrayImpl {
new (r_value) T(varray_[index]);
}
void materialize(const IndexMask mask, void *dst) const override
void materialize(const IndexMask &mask, void *dst) const override
{
varray_.materialize(mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
void materialize_to_uninitialized(const IndexMask mask, void *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
void materialize_compressed(const IndexMask mask, void *dst) const override
void materialize_compressed(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed(mask, MutableSpan(static_cast<T *>(dst), mask.size()));
}
void materialize_compressed_to_uninitialized(const IndexMask mask, void *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.size()));
@@ -386,22 +386,22 @@ template<typename T> class VArrayImpl_For_GVArray : public VArrayImpl<T> {
return true;
}
void materialize(IndexMask mask, T *dst) const override
void materialize(const IndexMask &mask, T *dst) const override
{
varray_.materialize(mask, dst);
}
void materialize_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_to_uninitialized(mask, dst);
}
void materialize_compressed(IndexMask mask, T *dst) const override
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed(mask, dst);
}
void materialize_compressed_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed_to_uninitialized(mask, dst);
}
@@ -458,23 +458,23 @@ template<typename T> class GVMutableArrayImpl_For_VMutableArray : public GVMutab
varray_.set_all(Span(static_cast<const T *>(src), size_));
}
void materialize(const IndexMask mask, void *dst) const override
void materialize(const IndexMask &mask, void *dst) const override
{
varray_.materialize(mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
void materialize_to_uninitialized(const IndexMask mask, void *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
void materialize_compressed(const IndexMask mask, void *dst) const override
void materialize_compressed(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed(mask, MutableSpan(static_cast<T *>(dst), mask.size()));
}
void materialize_compressed_to_uninitialized(const IndexMask mask, void *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.size()));
@@ -536,22 +536,22 @@ template<typename T> class VMutableArrayImpl_For_GVMutableArray : public VMutabl
return true;
}
void materialize(IndexMask mask, T *dst) const override
void materialize(const IndexMask &mask, T *dst) const override
{
varray_.materialize(mask, dst);
}
void materialize_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_to_uninitialized(mask, dst);
}
void materialize_compressed(IndexMask mask, T *dst) const override
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed(mask, dst);
}
void materialize_compressed_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed_to_uninitialized(mask, dst);
}
@@ -592,11 +592,11 @@ class GVArrayImpl_For_GSpan : public GVMutableArrayImpl {
CommonVArrayInfo common_info() const override;
virtual void materialize(const IndexMask mask, void *dst) const override;
virtual void materialize_to_uninitialized(const IndexMask mask, void *dst) const override;
virtual void materialize(const IndexMask &mask, void *dst) const override;
virtual void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override;
virtual void materialize_compressed(const IndexMask mask, void *dst) const override;
virtual void materialize_compressed_to_uninitialized(const IndexMask mask,
virtual void materialize_compressed(const IndexMask &mask, void *dst) const override;
virtual void materialize_compressed_to_uninitialized(const IndexMask &mask,
void *dst) const override;
};
@@ -634,10 +634,10 @@ class GVArrayImpl_For_SingleValueRef : public GVArrayImpl {
void get(const int64_t index, void *r_value) const override;
void get_to_uninitialized(const int64_t index, void *r_value) const override;
CommonVArrayInfo common_info() const override;
void materialize(const IndexMask mask, void *dst) const override;
void materialize_to_uninitialized(const IndexMask mask, void *dst) const override;
void materialize_compressed(const IndexMask mask, void *dst) const override;
void materialize_compressed_to_uninitialized(const IndexMask mask, void *dst) const override;
void materialize(const IndexMask &mask, void *dst) const override;
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override;
void materialize_compressed(const IndexMask &mask, void *dst) const override;
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override;
};
class GVArrayImpl_For_SingleValueRef_final final : public GVArrayImpl_For_SingleValueRef {


@@ -2,305 +2,900 @@
#pragma once
/** \file
* \ingroup bli
*
* An IndexMask references an array of unsigned integers with the following property:
* The integers must be in ascending order and there must not be duplicates.
*
* Remember that the array is only referenced and not owned by an IndexMask instance.
*
* In most cases the integers in the array represent some indices into another array. So they
* "select" or "mask" a some elements in that array. Hence the name IndexMask.
*
* The invariant stated above has the nice property that it makes it easy to check if an integer
* array is an IndexRange, i.e. no indices are skipped. That allows functions to implement two code
* paths: One where it iterates over the index array and one where it iterates over the index
* range. The latter one is more efficient due to less memory reads and potential usage of SIMD
* instructions.
*
* The IndexMask.foreach_index method helps writing code that implements both code paths at the
* same time.
*/
#include <array>
#include <optional>
#include <variant>
#include "BLI_index_range.hh"
#include "BLI_span.hh"
#include "BLI_bit_span.hh"
#include "BLI_function_ref.hh"
#include "BLI_linear_allocator.hh"
#include "BLI_offset_span.hh"
#include "BLI_task.hh"
#include "BLI_unique_sorted_indices.hh"
#include "BLI_vector.hh"
namespace blender {
template<typename T> class VArray;
}
class IndexMask {
namespace blender::index_mask {
/**
* Constants that define the maximum segment size. Segment sizes are limited so that the indices
within each segment can be stored as #int16_t, which allows the mask to be stored much more
compactly than if 32 or 64 bit ints were used.
- Using 8 bit ints does not work well, because then the maximum segment size would be too small
to eliminate per-segment overhead in many cases and would also lead to many more segments.
- The most significant bit is not used so that signed integers can be used, which avoids common
issues when mixing signed and unsigned ints.
* - The second most-significant bit is not used for indices so that #max_segment_size itself can
* be stored in the #int16_t.
* - The maximum number of indices in a segment is 16384, which is generally enough to make the
overhead per segment negligible when processing large index masks.
* - A power of two is used for #max_segment_size, because that allows for faster construction of
* index masks for index ranges.
*/
static constexpr int64_t max_segment_size_shift = 14;
static constexpr int64_t max_segment_size = (1 << max_segment_size_shift); /* 16384 */
static constexpr int64_t max_segment_size_mask_low = max_segment_size - 1;
static constexpr int64_t max_segment_size_mask_high = ~max_segment_size_mask_low;
/**
* Encodes a position in an #IndexMask. The term "raw" just means that this does not have the usual
* iterator methods like `operator++`. Supporting those would require storing more data. Generally,
* the fastest way to iterate over an #IndexMask is using a `foreach_*` method anyway.
*/
struct RawMaskIterator {
/** Index of the segment in the index mask. */
int64_t segment_i;
/** Element within the segment. */
int16_t index_in_segment;
};
/**
* Base type of #IndexMask. This only exists to make it more convenient to construct an index mask
* in a few functions with #IndexMask::data_for_inplace_construction.
*
* The names intentionally have a trailing underscore here even though they are public in
* #IndexMaskData because they are private in #IndexMask.
*/
struct IndexMaskData {
/**
* Size of the index mask, i.e. the number of indices.
*/
int64_t indices_num_;
/**
* Number of segments in the index mask. Each segment contains at least one of the indices.
*/
int64_t segments_num_;
/**
* Pointer to the index array for every segment. The size of each array can be computed from
* #cumulative_segment_sizes_.
*/
const int16_t **indices_by_segment_;
/**
* Offset that is applied to the indices in each segment.
*/
const int64_t *segment_offsets_;
/**
* Encodes the size of each segment. The size of a specific segment can be computed by
* subtracting consecutive elements (also see #OffsetIndices). The size of this array is one
larger than #segments_num_. Note that the first element is _not_ necessarily zero when an
* index mask is a slice of another mask.
*/
const int64_t *cumulative_segment_sizes_;
/**
* Index into the first segment where the #IndexMask starts. This exists to support slicing
without having to modify and therefore allocate a new #indices_by_segment_ array.
*/
int64_t begin_index_in_segment_;
/**
* Index into the last segment where the #IndexMask ends. This exists to support slicing without
* having to modify and therefore allocate a new #cumulative_segment_sizes_ array.
*/
int64_t end_index_in_segment_;
};
/**
* #IndexMask does not own any memory itself. In many cases the memory referenced by a mask has
* static life-time (e.g. when a mask is a range). To create more complex masks, additional memory
* is necessary. #IndexMaskMemory is a simple wrapper around a linear allocator that has to be
* passed to functions that might need to allocate extra memory.
*/
class IndexMaskMemory : public LinearAllocator<> {
private:
/** The underlying reference to sorted integers. */
Span<int64_t> indices_;
/** Inline buffer to avoid heap allocations when working with small index masks. */
AlignedBuffer<1024, 8> inline_buffer_;
public:
/** Creates an IndexMask that contains no indices. */
IndexMask() = default;
/**
* Create an IndexMask using the given integer array.
* This constructor asserts that the given integers are in ascending order and that there are no
* duplicates.
*/
IndexMask(Span<int64_t> indices) : indices_(indices)
IndexMaskMemory()
{
BLI_assert(IndexMask::indices_are_valid_index_mask(indices));
this->provide_buffer(inline_buffer_);
}
};
using IndexMaskSegment = OffsetSpan<int64_t, int16_t>;
/**
* An #IndexMask is a sequence of unique and sorted indices (`BLI_unique_sorted_indices.hh`).
* It's commonly used when a subset of elements in an array has to be processed.
*
#IndexMask is a non-owning container. The data it references is usually either statically
* allocated or is owned by an #IndexMaskMemory.
*
* Internally, an index mask is split into an arbitrary number of ordered segments. Each segment
* contains up to #max_segment_size (2^14 = 16384) indices. The indices in a segment are stored as
`int16_t`, but each segment also has an `int64_t` offset.
*
* The data structure is designed to satisfy the following key requirements:
* - Construct index mask for an #IndexRange in O(1) time (after initial setup).
* - Support efficient slicing (O(log n) with a low constant factor).
* - Support multi-threaded construction without severe serial bottlenecks.
* - Support efficient iteration over indices that uses #IndexRange when possible.
*
* Construction:
* A new index mask is usually created by calling one of its constructors which are O(1), or for
* more complex masks, by calling various `IndexMask::from_*` functions that create masks from
various sources. Those generally need additional memory, which is provided by an
* #IndexMaskMemory.
*
Some of the `IndexMask::from_*` functions have an `IndexMask universe` input. When
* provided, the function will only consider the indices in the "universe". The term comes from
* mathematics: https://en.wikipedia.org/wiki/Universe_(mathematics).
*
* Iteration:
* To iterate over the indices, one usually has to use one of the `foreach_*` functions which
* require a callback function. Due to the internal segmentation of the index mask, this is more
* efficient than using a normal C++ iterator and range-based for loops.
*
* There are multiple variants of the `foreach_*` functions which are useful in different
* scenarios. The callback can generally take one or two arguments. The first is the index
* stored in the mask and the second is the index that would have to be passed into `operator[]`
* to get the first index.
*
* The `foreach_*` methods also accept an optional `GrainSize` argument. When that is provided,
* multi-threading is used when appropriate. Integrating multi-threading at this level works well
* because mask iteration and parallelism are often used at the same time.
*
* Extraction:
* An #IndexMask can be converted into various other forms using the `to_*` methods.
*
*/
class IndexMask : private IndexMaskData {
public:
/** Construct an empty mask. */
IndexMask();
/** Construct a mask that contains the indices from 0 to `size - 1`. This takes O(1) time. */
explicit IndexMask(int64_t size);
/** Construct a mask that contains the indices in the range. This takes O(1) time. */
IndexMask(IndexRange range);
/** Construct a mask from unique sorted indices. */
template<typename T> static IndexMask from_indices(Span<T> indices, IndexMaskMemory &memory);
/** Construct a mask from the indices of set bits. */
static IndexMask from_bits(BitSpan bits, IndexMaskMemory &memory);
/** Construct a mask from the indices of set bits, but limited to the indices in #universe. */
static IndexMask from_bits(const IndexMask &universe, BitSpan bits, IndexMaskMemory &memory);
/** Construct a mask from the true indices. */
static IndexMask from_bools(Span<bool> bools, IndexMaskMemory &memory);
static IndexMask from_bools(const VArray<bool> &bools, IndexMaskMemory &memory);
/** Construct a mask from the true indices, but limited by the indices in #universe. */
static IndexMask from_bools(const IndexMask &universe,
Span<bool> bools,
IndexMaskMemory &memory);
static IndexMask from_bools(const IndexMask &universe,
const VArray<bool> &bools,
IndexMaskMemory &memory);
/** Construct a mask from all the indices for which the predicate is true. */
template<typename Fn>
static IndexMask from_predicate(const IndexMask &universe,
GrainSize grain_size,
IndexMaskMemory &memory,
Fn &&predicate);
/** Sorts all indices from #universe into the different output masks. */
template<typename T, typename Fn>
static void from_groups(const IndexMask &universe,
IndexMaskMemory &memory,
Fn &&get_group_index,
MutableSpan<IndexMask> r_masks);
int64_t size() const;
bool is_empty() const;
IndexRange index_range() const;
int64_t first() const;
int64_t last() const;
/**
* Use this method when you know that no indices are skipped. It is more efficient than preparing
* an integer array all the time.
* \return Minimum number of elements an array has to have so that it can be indexed by every
* index stored in the mask.
*/
IndexMask(IndexRange range) : indices_(range.as_span()) {}
int64_t min_array_size() const;
/**
* Construct an IndexMask from a sorted list of indices. Note, the created IndexMask is only
* valid as long as the initializer_list is valid.
* \return Position where the #query_index is stored, or none if the index is not in the mask.
*/
std::optional<RawMaskIterator> find(int64_t query_index) const;
/**
* \return True when the #query_index is stored in the mask.
*/
bool contains(int64_t query_index) const;
/** \return The iterator for the given index such that `mask[iterator] == mask[index]`. */
RawMaskIterator index_to_iterator(int64_t index) const;
/** \return The index for the given iterator such that `mask[iterator] == mask[index]`. */
int64_t iterator_to_index(const RawMaskIterator &it) const;
/**
* Get the index at the given position. Prefer `foreach_*` methods for better performance. This
* takes O(log n) time.
*/
int64_t operator[](int64_t i) const;
/**
* Same as above but takes O(1) time. It's still preferable to use `foreach_*` methods for
* iteration.
*/
int64_t operator[](const RawMaskIterator &it) const;
/**
Get a new mask that contains a consecutive subset of this mask. Takes O(log n) time but
* can reuse the memory from the source mask.
*/
IndexMask slice(IndexRange range) const;
IndexMask slice(int64_t start, int64_t size) const;
/**
* Same as above but can also add an offset to every index in the mask.
* Takes O(log n + range.size()) time but with a very small constant factor.
*/
IndexMask slice_and_offset(IndexRange range, int64_t offset, IndexMaskMemory &memory) const;
IndexMask slice_and_offset(int64_t start,
int64_t size,
int64_t offset,
IndexMaskMemory &memory) const;
/**
* \return A new index mask that contains all the indices from the universe that are not in the
* current mask.
*/
IndexMask complement(IndexRange universe, IndexMaskMemory &memory) const;
/**
* \return Number of segments in the mask.
*/
int64_t segments_num() const;
/**
* \return Indices stored in the n-th segment.
*/
IndexMaskSegment segment(int64_t segment_i) const;
/**
 * Calls the function once for every index.
 *
 * Supported function signatures:
 * - `(int64_t i)`
 * - `(int64_t i, int64_t pos)`
 *
 * `i` is the index that should be processed and `pos` is the position of that index in the mask:
 * `i == mask[pos]`
 */
template<typename Fn> void foreach_index(Fn &&fn) const;
template<typename Fn> void foreach_index(GrainSize grain_size, Fn &&fn) const;
/**
 * Same as #foreach_index, but generates more code, increasing compile time and binary size. This
 * is because separate loops are generated for segments that are ranges and those that are not.
 * Only use this when very little processing is done for each element.
 */
template<typename IndexT, typename Fn> void foreach_index_optimized(Fn &&fn) const;
template<typename IndexT, typename Fn>
void foreach_index_optimized(GrainSize grain_size, Fn &&fn) const;
/**
 * Calls the function once for every segment. This should be used instead of #foreach_index if
 * the algorithm can be implemented more efficiently by processing multiple elements at once.
 *
 * Supported function signatures:
 * - `(IndexMaskSegment segment)`
 * - `(IndexMaskSegment segment, int64_t segment_pos)`
 *
 * The `segment_pos` is the position in the mask where the segment starts:
 * `segment[0] == mask[segment_pos]`
 */
template<typename Fn> void foreach_segment(Fn &&fn) const;
template<typename Fn> void foreach_segment(GrainSize grain_size, Fn &&fn) const;
/**
* This is similar to #foreach_segment but supports slightly different function signatures:
* - `(auto segment)`
* - `(auto segment, int64_t segment_pos)`
*
* The `segment` input is either of type `IndexMaskSegment` or `IndexRange`, so the function has
* to support both cases. This also means that more code is generated by the compiler because the
* function is instantiated twice. Only use this when very little processing happens per element.
*/
template<typename Fn> void foreach_segment_optimized(Fn &&fn) const;
template<typename Fn> void foreach_segment_optimized(GrainSize grain_size, Fn &&fn) const;
/**
* Calls the function once for every range. Note that this might call the function for each index
* separately in the worst case if there are no consecutive indices.
*
* Supported function signatures:
* - `(IndexRange segment)`
* - `(IndexRange segment, int64_t segment_pos)`
*/
template<typename Fn> void foreach_range(Fn &&fn) const;
/**
* Fill the provided span with the indices in the mask. The span is expected to have the same
* size as the mask.
*/
template<typename T> void to_indices(MutableSpan<T> r_indices) const;
/**
* Set the bits at indices in the mask to 1 and all other bits to 0.
*/
void to_bits(MutableBitSpan r_bits) const;
/**
* Set the bools at indices in the mask to true and all others to false.
*/
void to_bools(MutableSpan<bool> r_bools) const;
/**
* Try to convert the entire index mask into a range. This only works if there are no gaps
* between any indices.
*/
std::optional<IndexRange> to_range() const;
/**
* \return All index ranges in the mask. In the worst case this is a separate range for every
* index.
*/
Vector<IndexRange> to_ranges() const;
/**
* \return All index ranges in the universe that are not in the mask. In the worst case this is a
* separate range for every index.
*/
Vector<IndexRange> to_ranges_invert(IndexRange universe) const;
/**
* \return All segments in a sorted vector. Segments that encode a range are already converted to
* an #IndexRange.
*/
template<int64_t N = 4>
Vector<std::variant<IndexRange, IndexMaskSegment>, N> to_spans_and_ranges() const;
/**
* Is used by some functions to get low level access to the mask in order to construct it.
*/
IndexMaskData &data_for_inplace_construction();
};
/**
* Utility that makes it efficient to build many small index masks from segments one after another.
* The class has to be constructed once. Afterwards, `update` has to be called to fill the mask
* with the provided segment.
*/
class IndexMaskFromSegment : NonCopyable, NonMovable {
private:
int64_t segment_offset_;
const int16_t *segment_indices_;
std::array<int64_t, 2> cumulative_segment_sizes_;
IndexMask mask_;
public:
IndexMaskFromSegment();
const IndexMask &update(IndexMaskSegment segment);
};
inline IndexMaskFromSegment::IndexMaskFromSegment()
{
IndexMaskData &data = mask_.data_for_inplace_construction();
cumulative_segment_sizes_[0] = 0;
data.segments_num_ = 1;
data.indices_by_segment_ = &segment_indices_;
data.segment_offsets_ = &segment_offset_;
data.cumulative_segment_sizes_ = cumulative_segment_sizes_.data();
data.begin_index_in_segment_ = 0;
}
inline const IndexMask &IndexMaskFromSegment::update(const IndexMaskSegment segment)
{
const Span<int16_t> indices = segment.base_span();
BLI_assert(!indices.is_empty());
BLI_assert(std::is_sorted(indices.begin(), indices.end()));
BLI_assert(indices[0] >= 0);
BLI_assert(indices.last() < max_segment_size);
const int64_t indices_num = indices.size();
IndexMaskData &data = mask_.data_for_inplace_construction();
segment_offset_ = segment.offset();
segment_indices_ = indices.data();
cumulative_segment_sizes_[1] = int16_t(indices_num);
data.indices_num_ = indices_num;
data.end_index_in_segment_ = indices_num;
return mask_;
}
std::array<int16_t, max_segment_size> build_static_indices_array();
const IndexMask &get_static_index_mask_for_min_size(const int64_t min_size);
std::ostream &operator<<(std::ostream &stream, const IndexMask &mask);
/* -------------------------------------------------------------------- */
/** \name Inline Utilities
* \{ */
inline const std::array<int16_t, max_segment_size> &get_static_indices_array()
{
alignas(64) static const std::array<int16_t, max_segment_size> data =
build_static_indices_array();
return data;
}
template<typename T>
inline void masked_fill(MutableSpan<T> data, const T &value, const IndexMask &mask)
{
mask.foreach_index_optimized<int64_t>([&](const int64_t i) { data[i] = value; });
}
/* -------------------------------------------------------------------- */
/** \name #RawMaskIterator Inline Methods
* \{ */
inline bool operator!=(const RawMaskIterator &a, const RawMaskIterator &b)
{
return a.segment_i != b.segment_i || a.index_in_segment != b.index_in_segment;
}
inline bool operator==(const RawMaskIterator &a, const RawMaskIterator &b)
{
return !(a != b);
}
/* -------------------------------------------------------------------- */
/** \name #IndexMask Inline Methods
* \{ */
inline void init_empty_mask(IndexMaskData &data)
{
static constexpr int64_t cumulative_sizes_for_empty_mask[1] = {0};
data.indices_num_ = 0;
data.segments_num_ = 0;
data.cumulative_segment_sizes_ = cumulative_sizes_for_empty_mask;
/* Intentionally leave some pointers uninitialized; they must not be accessed on empty masks
 * anyway. */
}
inline IndexMask::IndexMask()
{
init_empty_mask(*this);
}
inline IndexMask::IndexMask(const int64_t size)
{
if (size == 0) {
init_empty_mask(*this);
return;
}
*this = get_static_index_mask_for_min_size(size);
indices_num_ = size;
segments_num_ = ((size + max_segment_size - 1) >> max_segment_size_shift);
begin_index_in_segment_ = 0;
end_index_in_segment_ = size - ((size - 1) & max_segment_size_mask_high);
}
inline IndexMask::IndexMask(const IndexRange range)
{
if (range.is_empty()) {
init_empty_mask(*this);
return;
}
const int64_t one_after_last = range.one_after_last();
*this = get_static_index_mask_for_min_size(one_after_last);
const int64_t first_segment_i = range.first() >> max_segment_size_shift;
const int64_t last_segment_i = range.last() >> max_segment_size_shift;
indices_num_ = range.size();
segments_num_ = last_segment_i - first_segment_i + 1;
indices_by_segment_ += first_segment_i;
segment_offsets_ += first_segment_i;
cumulative_segment_sizes_ += first_segment_i;
begin_index_in_segment_ = range.first() & max_segment_size_mask_low;
end_index_in_segment_ = one_after_last - ((one_after_last - 1) & max_segment_size_mask_high);
}
inline int64_t IndexMask::size() const
{
return indices_num_;
}
inline bool IndexMask::is_empty() const
{
return indices_num_ == 0;
}
inline IndexRange IndexMask::index_range() const
{
return IndexRange(indices_num_);
}
inline int64_t IndexMask::first() const
{
BLI_assert(indices_num_ > 0);
return segment_offsets_[0] + indices_by_segment_[0][begin_index_in_segment_];
}
inline int64_t IndexMask::last() const
{
BLI_assert(indices_num_ > 0);
const int64_t last_segment_i = segments_num_ - 1;
return segment_offsets_[last_segment_i] +
indices_by_segment_[last_segment_i][end_index_in_segment_ - 1];
}
inline int64_t IndexMask::min_array_size() const
{
if (indices_num_ == 0) {
return 0;
}
return this->last() + 1;
}
inline RawMaskIterator IndexMask::index_to_iterator(const int64_t index) const
{
BLI_assert(index >= 0);
BLI_assert(index < indices_num_);
RawMaskIterator it;
const int64_t full_index = index + cumulative_segment_sizes_[0] + begin_index_in_segment_;
it.segment_i = -1 +
binary_search::find_predicate_begin(
cumulative_segment_sizes_,
cumulative_segment_sizes_ + segments_num_ + 1,
[&](const int64_t cumulative_size) { return cumulative_size > full_index; });
it.index_in_segment = full_index - cumulative_segment_sizes_[it.segment_i];
return it;
}
inline int64_t IndexMask::iterator_to_index(const RawMaskIterator &it) const
{
BLI_assert(it.segment_i >= 0);
BLI_assert(it.segment_i < segments_num_);
BLI_assert(it.index_in_segment >= 0);
BLI_assert(it.index_in_segment < cumulative_segment_sizes_[it.segment_i + 1] -
cumulative_segment_sizes_[it.segment_i]);
return it.index_in_segment + cumulative_segment_sizes_[it.segment_i] -
cumulative_segment_sizes_[0] - begin_index_in_segment_;
}
inline int64_t IndexMask::operator[](const int64_t i) const
{
const RawMaskIterator it = this->index_to_iterator(i);
return (*this)[it];
}
inline int64_t IndexMask::operator[](const RawMaskIterator &it) const
{
return segment_offsets_[it.segment_i] + indices_by_segment_[it.segment_i][it.index_in_segment];
}
inline int64_t IndexMask::segments_num() const
{
return segments_num_;
}
inline IndexMaskSegment IndexMask::segment(const int64_t segment_i) const
{
BLI_assert(segment_i >= 0);
BLI_assert(segment_i < segments_num_);
const int64_t full_segment_size = cumulative_segment_sizes_[segment_i + 1] -
cumulative_segment_sizes_[segment_i];
const int64_t begin_index = (segment_i == 0) ? begin_index_in_segment_ : 0;
const int64_t end_index = (segment_i == segments_num_ - 1) ? end_index_in_segment_ :
full_segment_size;
const int64_t segment_size = end_index - begin_index;
return IndexMaskSegment{segment_offsets_[segment_i],
{indices_by_segment_[segment_i] + begin_index, segment_size}};
}
inline IndexMask IndexMask::slice(const IndexRange range) const
{
return this->slice(range.start(), range.size());
}
inline IndexMaskData &IndexMask::data_for_inplace_construction()
{
return *this;
}
template<typename Fn>
constexpr bool has_segment_and_start_parameter =
std::is_invocable_r_v<void, Fn, IndexMaskSegment, int64_t> ||
std::is_invocable_r_v<void, Fn, IndexRange, int64_t>;
template<typename Fn> inline void IndexMask::foreach_index(Fn &&fn) const
{
this->foreach_segment(
[&](const IndexMaskSegment indices, [[maybe_unused]] const int64_t start_segment_pos) {
if constexpr (std::is_invocable_r_v<void, Fn, int64_t, int64_t>) {
for (const int64_t i : indices.index_range()) {
fn(indices[i], start_segment_pos + i);
}
}
else {
for (const int64_t index : indices) {
fn(index);
}
}
});
}
template<typename Fn>
inline void IndexMask::foreach_index(const GrainSize grain_size, Fn &&fn) const
{
threading::parallel_for(this->index_range(), grain_size.value, [&](const IndexRange range) {
const IndexMask sub_mask = this->slice(range);
sub_mask.foreach_index([&](const int64_t i, [[maybe_unused]] const int64_t index_pos) {
if constexpr (std::is_invocable_r_v<void, Fn, int64_t, int64_t>) {
fn(i, index_pos + range.start());
}
else {
fn(i);
}
});
});
}
template<typename T, typename Fn>
#if (defined(__GNUC__) && !defined(__clang__))
[[gnu::optimize("O3")]]
#endif
inline void
optimized_foreach_index(const IndexMaskSegment segment, const Fn fn)
{
BLI_assert(segment.last() < std::numeric_limits<T>::max());
if (unique_sorted_indices::non_empty_is_range(segment.base_span())) {
const T start = T(segment[0]);
const T last = T(segment.last());
for (T i = start; i <= last; i++) {
fn(i);
}
}
else {
for (const int64_t i : segment) {
fn(T(i));
}
}
}
template<typename T, typename Fn>
#if (defined(__GNUC__) && !defined(__clang__))
[[gnu::optimize("O3")]]
#endif
inline void
optimized_foreach_index_with_pos(const IndexMaskSegment segment,
const int64_t segment_pos,
const Fn fn)
{
BLI_assert(segment.last() < std::numeric_limits<T>::max());
BLI_assert(segment.size() + segment_pos < std::numeric_limits<T>::max());
if (unique_sorted_indices::non_empty_is_range(segment.base_span())) {
const T start = T(segment[0]);
const T last = T(segment.last());
for (T i = start, pos = T(segment_pos); i <= last; i++, pos++) {
fn(i, pos);
}
}
else {
T pos = T(segment_pos);
for (const int64_t i : segment.index_range()) {
const T index = T(segment[i]);
fn(index, pos);
pos++;
}
}
}
template<typename IndexT, typename Fn>
inline void IndexMask::foreach_index_optimized(Fn &&fn) const
{
this->foreach_segment(
[&](const IndexMaskSegment segment, [[maybe_unused]] const int64_t segment_pos) {
if constexpr (std::is_invocable_r_v<void, Fn, IndexT, IndexT>) {
optimized_foreach_index_with_pos<IndexT>(segment, segment_pos, fn);
}
else {
optimized_foreach_index<IndexT>(segment, fn);
}
});
}
template<typename IndexT, typename Fn>
inline void IndexMask::foreach_index_optimized(const GrainSize grain_size, Fn &&fn) const
{
threading::parallel_for(this->index_range(), grain_size.value, [&](const IndexRange range) {
const IndexMask sub_mask = this->slice(range);
sub_mask.foreach_segment(
[&](const IndexMaskSegment segment, [[maybe_unused]] const int64_t segment_pos) {
if constexpr (std::is_invocable_r_v<void, Fn, IndexT, IndexT>) {
optimized_foreach_index_with_pos<IndexT>(segment, segment_pos + range.start(), fn);
}
else {
optimized_foreach_index<IndexT>(segment, fn);
}
});
});
}
template<typename Fn> inline void IndexMask::foreach_segment_optimized(Fn &&fn) const
{
this->foreach_segment(
[&](const IndexMaskSegment segment, [[maybe_unused]] const int64_t start_segment_pos) {
if (unique_sorted_indices::non_empty_is_range(segment.base_span())) {
const IndexRange range(segment[0], segment.size());
if constexpr (has_segment_and_start_parameter<Fn>) {
fn(range, start_segment_pos);
}
else {
fn(range);
}
}
else {
if constexpr (has_segment_and_start_parameter<Fn>) {
fn(segment, start_segment_pos);
}
else {
fn(segment);
}
}
});
}
template<typename Fn>
inline void IndexMask::foreach_segment_optimized(const GrainSize grain_size, Fn &&fn) const
{
threading::parallel_for(this->index_range(), grain_size.value, [&](const IndexRange range) {
const IndexMask sub_mask = this->slice(range);
sub_mask.foreach_segment_optimized(
[&fn, range_start = range.start()](const auto segment,
[[maybe_unused]] const int64_t start_segment_pos) {
if constexpr (has_segment_and_start_parameter<Fn>) {
fn(segment, start_segment_pos + range_start);
}
else {
fn(segment);
}
});
});
}
template<typename Fn> inline void IndexMask::foreach_segment(Fn &&fn) const
{
[[maybe_unused]] int64_t segment_pos = 0;
for (const int64_t segment_i : IndexRange(segments_num_)) {
const IndexMaskSegment segment = this->segment(segment_i);
if constexpr (has_segment_and_start_parameter<Fn>) {
fn(segment, segment_pos);
segment_pos += segment.size();
}
else {
fn(segment);
}
}
}
template<typename Fn>
inline void IndexMask::foreach_segment(const GrainSize grain_size, Fn &&fn) const
{
threading::parallel_for(this->index_range(), grain_size.value, [&](const IndexRange range) {
const IndexMask sub_mask = this->slice(range);
sub_mask.foreach_segment(
[&fn, range_start = range.start()](const IndexMaskSegment mask_segment,
[[maybe_unused]] const int64_t segment_pos) {
if constexpr (has_segment_and_start_parameter<Fn>) {
fn(mask_segment, segment_pos + range_start);
}
else {
fn(mask_segment);
}
});
});
}
template<typename Fn> inline void IndexMask::foreach_range(Fn &&fn) const
{
this->foreach_segment([&](const IndexMaskSegment indices, [[maybe_unused]] int64_t segment_pos) {
Span<int16_t> base_indices = indices.base_span();
while (!base_indices.is_empty()) {
const int64_t next_range_size = unique_sorted_indices::find_size_of_next_range(base_indices);
const IndexRange range(int64_t(base_indices[0]) + indices.offset(), next_range_size);
if constexpr (has_segment_and_start_parameter<Fn>) {
fn(range, segment_pos);
}
else {
fn(range);
}
segment_pos += next_range_size;
base_indices = base_indices.drop_front(next_range_size);
}
});
}
namespace detail {
IndexMask from_predicate_impl(
const IndexMask &universe,
GrainSize grain_size,
IndexMaskMemory &memory,
FunctionRef<int64_t(IndexMaskSegment indices, int16_t *r_true_indices)> filter_indices);
}
template<typename Fn>
inline IndexMask IndexMask::from_predicate(const IndexMask &universe,
const GrainSize grain_size,
IndexMaskMemory &memory,
Fn &&predicate)
{
return detail::from_predicate_impl(
universe,
grain_size,
memory,
[&](const IndexMaskSegment indices, int16_t *__restrict r_true_indices) {
int16_t *r_current = r_true_indices;
const int16_t *in_end = indices.base_span().end();
const int64_t offset = indices.offset();
for (const int16_t *in_current = indices.base_span().data(); in_current < in_end;
in_current++) {
const int16_t local_index = *in_current;
const int64_t global_index = int64_t(local_index) + offset;
const bool condition = predicate(global_index);
*r_current = local_index;
/* Branchless conditional increment. */
r_current += condition;
}
const int16_t true_indices_num = int16_t(r_current - r_true_indices);
return true_indices_num;
});
}
template<typename T, typename Fn>
void IndexMask::from_groups(const IndexMask &universe,
IndexMaskMemory &memory,
Fn &&get_group_index,
MutableSpan<IndexMask> r_masks)
{
Vector<Vector<T>> indices_by_group(r_masks.size());
universe.foreach_index([&](const int64_t i) {
const int group_index = get_group_index(i);
indices_by_group[group_index].append(T(i));
});
for (const int64_t i : r_masks.index_range()) {
r_masks[i] = IndexMask::from_indices<T>(indices_by_group[i], memory);
}
}
inline std::optional<IndexRange> IndexMask::to_range() const
{
if (indices_num_ == 0) {
return IndexRange{};
}
const int64_t first_index = this->first();
const int64_t last_index = this->last();
if (last_index - first_index == indices_num_ - 1) {
return IndexRange(first_index, indices_num_);
}
return std::nullopt;
}
template<int64_t N>
inline Vector<std::variant<IndexRange, IndexMaskSegment>, N> IndexMask::to_spans_and_ranges() const
{
Vector<std::variant<IndexRange, IndexMaskSegment>, N> segments;
this->foreach_segment_optimized([&](const auto segment) { segments.append(segment); });
return segments;
}
} // namespace blender::index_mask
namespace blender {
using index_mask::IndexMask;
using index_mask::IndexMaskFromSegment;
using index_mask::IndexMaskMemory;
using index_mask::IndexMaskSegment;
} // namespace blender


@@ -1,79 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once
/** \file
* \ingroup bli
*
* This is separate from `BLI_index_mask.hh` because it includes headers that `BLI_index_mask.hh`
* itself shouldn't depend on.
*/
#include "BLI_enumerable_thread_specific.hh"
#include "BLI_index_mask.hh"
#include "BLI_task.hh"
#include "BLI_vector.hh"
#include "BLI_virtual_array.hh"
namespace blender::index_mask_ops {
namespace detail {
IndexMask find_indices_based_on_predicate__merge(
IndexMask indices_to_check,
threading::EnumerableThreadSpecific<Vector<Vector<int64_t>>> &sub_masks,
Vector<int64_t> &r_indices);
} // namespace detail
/**
* Evaluate the #predicate for all indices in #indices_to_check and return a mask that contains all
* indices where the predicate was true.
*
* #r_indices is only used if necessary.
*/
template<typename Predicate>
inline IndexMask find_indices_based_on_predicate(const IndexMask indices_to_check,
const int64_t parallel_grain_size,
Vector<int64_t> &r_indices,
const Predicate &predicate)
{
/* Evaluate predicate in parallel. Since the size of the final mask is not known yet, many
* smaller vectors have to be filled with all indices where the predicate is true. Those smaller
* vectors are joined afterwards. */
threading::EnumerableThreadSpecific<Vector<Vector<int64_t>>> sub_masks;
threading::parallel_for(
indices_to_check.index_range(), parallel_grain_size, [&](const IndexRange range) {
const IndexMask sub_mask = indices_to_check.slice(range);
Vector<int64_t> masked_indices;
for (const int64_t i : sub_mask) {
if (predicate(i)) {
masked_indices.append(i);
}
}
if (!masked_indices.is_empty()) {
sub_masks.local().append(std::move(masked_indices));
}
});
/* This part doesn't have to be in the header. */
return detail::find_indices_based_on_predicate__merge(indices_to_check, sub_masks, r_indices);
}
/**
* Find the true indices in a virtual array. This is a version of
* #find_indices_based_on_predicate optimized for a virtual array input.
*
* \param parallel_grain_size: The grain size for when the virtual array isn't a span or a single
* value internally. This should be adjusted based on the expected cost of evaluating the virtual
* array; more expensive virtual arrays should have smaller grain sizes.
*/
IndexMask find_indices_from_virtual_array(IndexMask indices_to_check,
const VArray<bool> &virtual_array,
int64_t parallel_grain_size,
Vector<int64_t> &r_indices);
/**
* Find the true indices in a boolean span.
*/
IndexMask find_indices_from_array(Span<bool> array, Vector<int64_t> &r_indices);
} // namespace blender::index_mask_ops


@@ -45,23 +45,23 @@ template<typename T>
 inline void interpolate_to_masked(const Span<T> src,
 const Span<int> indices,
 const Span<float> factors,
-const IndexMask dst_mask,
+const IndexMask &dst_mask,
 MutableSpan<T> dst)
 {
 BLI_assert(indices.size() == factors.size());
 BLI_assert(indices.size() == dst_mask.size());
 const int last_src_index = src.size() - 1;
-dst_mask.to_best_mask_type([&](auto dst_mask) {
-for (const int i : IndexRange(dst_mask.size())) {
-const int prev_index = indices[i];
-const float factor = factors[i];
+dst_mask.foreach_segment_optimized([&](const auto dst_segment, const int64_t dst_segment_pos) {
+for (const int i : dst_segment.index_range()) {
+const int prev_index = indices[dst_segment_pos + i];
+const float factor = factors[dst_segment_pos + i];
 const bool is_cyclic_case = prev_index == last_src_index;
 if (is_cyclic_case) {
-dst[dst_mask[i]] = math::interpolate(src.last(), src.first(), factor);
+dst[dst_segment[i]] = math::interpolate(src.last(), src.first(), factor);
 }
 else {
-dst[dst_mask[i]] = math::interpolate(src[prev_index], src[prev_index + 1], factor);
+dst[dst_segment[i]] = math::interpolate(src[prev_index], src[prev_index + 1], factor);
 }
 }
 });


@@ -0,0 +1,108 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once
#include "BLI_span.hh"
namespace blender {
/**
* An #OffsetSpan is a #Span with a constant offset that is added to every value when accessed.
* This allows e.g. storing multiple `int64_t` indices as an array of `int16_t` with an additional
* `int64_t` offset.
*/
template<typename T, typename BaseT> class OffsetSpan {
private:
/** Value that is added to every element in #data_ when accessed. */
T offset_ = 0;
/** Original span where each element is offset by #offset_. */
Span<BaseT> data_;
public:
OffsetSpan() = default;
OffsetSpan(const T offset, const Span<BaseT> data) : offset_(offset), data_(data) {}
/** \return Underlying span containing the values that are not offset. */
Span<BaseT> base_span() const
{
return data_;
}
T offset() const
{
return offset_;
}
bool is_empty() const
{
return data_.is_empty();
}
int64_t size() const
{
return data_.size();
}
T last(const int64_t n = 0) const
{
return offset_ + data_.last(n);
}
IndexRange index_range() const
{
return data_.index_range();
}
T operator[](const int64_t i) const
{
return T(data_[i]) + offset_;
}
OffsetSpan slice(const IndexRange &range) const
{
return {offset_, data_.slice(range)};
}
OffsetSpan slice(const int64_t start, const int64_t size) const
{
return {offset_, data_.slice(start, size)};
}
class Iterator {
private:
T offset_;
const BaseT *data_;
public:
Iterator(const T offset, const BaseT *data) : offset_(offset), data_(data) {}
Iterator &operator++()
{
data_++;
return *this;
}
T operator*() const
{
return T(*data_) + offset_;
}
friend bool operator!=(const Iterator &a, const Iterator &b)
{
BLI_assert(a.offset_ == b.offset_);
return a.data_ != b.data_;
}
};
Iterator begin() const
{
return {offset_, data_.begin()};
}
Iterator end() const
{
return {offset_, data_.end()};
}
};
} // namespace blender


@@ -35,6 +35,19 @@
#include "BLI_lazy_threading.hh"
#include "BLI_utildefines.h"
namespace blender {
/**
* Wrapper type around an integer to differentiate it from other parameters in a function call.
*/
struct GrainSize {
int64_t value;
explicit constexpr GrainSize(const int64_t grain_size) : value(grain_size) {}
};
} // namespace blender
namespace blender::threading {
template<typename Range, typename Function>


@@ -0,0 +1,151 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once
/** \file
* \ingroup bli
*
* This file provides functions that deal with integer arrays fulfilling two constraints:
* - Values are sorted in ascending order, e.g. [2, 3, 6, 8].
* - The array doesn't have any duplicate elements, so [3, 4, 4, 5] is not allowed.
*
* Arrays satisfying these constraints are useful to "mask" indices that should be processed for
* two main reasons:
* - The sorted order makes hardware prefetching work best, because memory access patterns are
*   more predictable (unless the indices are too far apart).
* - One can check in constant time whether a sorted array of unique indices consists of
*   consecutive integers, which can then be represented more efficiently with an #IndexRange.
*
* Just using a single array as a mask works well as long as the number of indices is not too
* large. For potentially larger masks it's better to use #IndexMask, which allows for better
* multi-threading.
*/
#include <optional>
#include <variant>
#include "BLI_binary_search.hh"
#include "BLI_vector.hh"
namespace blender::unique_sorted_indices {
/**
* \return True when the indices are consecutive and can be encoded as #IndexRange.
*/
template<typename T> inline bool non_empty_is_range(const Span<T> indices)
{
BLI_assert(!indices.is_empty());
return indices.last() - indices.first() == indices.size() - 1;
}
/**
* \return The range encoded by the indices. It is assumed that all indices are consecutive.
*/
template<typename T> inline IndexRange non_empty_as_range(const Span<T> indices)
{
BLI_assert(!indices.is_empty());
BLI_assert(non_empty_is_range(indices));
return IndexRange(indices.first(), indices.size());
}
/**
* \return The range encoded by the indices if all indices are consecutive. Otherwise none.
*/
template<typename T> inline std::optional<IndexRange> non_empty_as_range_try(const Span<T> indices)
{
if (non_empty_is_range(indices)) {
return non_empty_as_range(indices);
}
return std::nullopt;
}
/**
* \return Number of consecutive indices at the start of the span. This takes O(log #indices) time.
*
* Example:
* [3, 4, 5, 6, 8, 9, 10]
*              ^ Range ends here because 6 and 8 are not consecutive, so the function returns 4.
*/
template<typename T> inline int64_t find_size_of_next_range(const Span<T> indices)
{
BLI_assert(!indices.is_empty());
return binary_search::find_predicate_begin(indices,
[indices, offset = indices[0]](const T &value) {
const int64_t index = &value - indices.begin();
return value - offset > index;
});
}
/**
* \return Number of indices before the next encoded range of at least #min_range_size
* consecutive elements starts. This takes O(size_until_next_range) time.
*
* Example:
* [1, 2, 4, 6, 7, 8, 9, 10, 13]
*           ^ Range of at least size 4 starts here, so the function returns 3.
*/
template<typename T>
inline int64_t find_size_until_next_range(const Span<T> indices, const int64_t min_range_size)
{
BLI_assert(!indices.is_empty());
int64_t current_range_size = 1;
int64_t last_value = indices[0];
for (const int64_t i : indices.index_range().drop_front(1)) {
const T current_value = indices[i];
if (current_value == last_value + 1) {
current_range_size++;
if (current_range_size >= min_range_size) {
return i - min_range_size + 1;
}
}
else {
current_range_size = 1;
}
last_value = current_value;
}
return indices.size();
}
/**
* Split the indices up into segments, where each segment is either a range (because the indices
* are consecutive) or a span of scattered indices. There are two opposing goals: the number of
* segments should be minimized, while the number of indices covered by ranges should be
* maximized. The #range_threshold allows the caller to balance these goals.
*/
template<typename T, int64_t InlineBufferSize>
inline int64_t split_to_ranges_and_spans(
const Span<T> indices,
const int64_t range_threshold,
Vector<std::variant<IndexRange, Span<T>>, InlineBufferSize> &r_segments)
{
BLI_assert(range_threshold >= 1);
const int64_t old_segments_num = r_segments.size();
Span<T> remaining_indices = indices;
while (!remaining_indices.is_empty()) {
if (const std::optional<IndexRange> range = non_empty_as_range_try(remaining_indices)) {
/* All remaining indices form a single range. */
r_segments.append(*range);
break;
}
if (non_empty_is_range(remaining_indices.take_front(range_threshold))) {
/* Next segment is a range. Now find the place where the range ends. */
const int64_t segment_size = find_size_of_next_range(remaining_indices);
r_segments.append(IndexRange(remaining_indices[0], segment_size));
remaining_indices = remaining_indices.drop_front(segment_size);
continue;
}
/* Next segment is just indices. Now find the place where the next range starts. */
const int64_t segment_size = find_size_until_next_range(remaining_indices, range_threshold);
const Span<T> segment_indices = remaining_indices.take_front(segment_size);
if (const std::optional<IndexRange> range = non_empty_as_range_try(segment_indices)) {
r_segments.append(*range);
}
else {
r_segments.append(segment_indices);
}
remaining_indices = remaining_indices.drop_front(segment_size);
}
return r_segments.size() - old_segments_num;
}
} // namespace blender::unique_sorted_indices

View File

@@ -107,7 +107,7 @@ template<typename T> class VArrayImpl {
* Copy values from the virtual array into the provided span. The index of the value in the
* virtual array is the same as the index in the span.
*/
virtual void materialize(IndexMask mask, T *dst) const
virtual void materialize(const IndexMask &mask, T *dst) const
{
mask.foreach_index([&](const int64_t i) { dst[i] = this->get(i); });
}
@@ -115,7 +115,7 @@ template<typename T> class VArrayImpl {
/**
* Same as #materialize but #r_span is expected to be uninitialized.
*/
virtual void materialize_to_uninitialized(IndexMask mask, T *dst) const
virtual void materialize_to_uninitialized(const IndexMask &mask, T *dst) const
{
mask.foreach_index([&](const int64_t i) { new (dst + i) T(this->get(i)); });
}
@@ -125,25 +125,18 @@ template<typename T> class VArrayImpl {
* in virtual array is not the same as the index in the output span. Instead, the span is filled
* without gaps.
*/
virtual void materialize_compressed(IndexMask mask, T *dst) const
virtual void materialize_compressed(const IndexMask &mask, T *dst) const
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
dst[i] = this->get(best_mask[i]);
}
});
mask.foreach_index([&](const int64_t i, const int64_t pos) { dst[pos] = this->get(i); });
}
/**
* Same as #materialize_compressed but #r_span is expected to be uninitialized.
*/
virtual void materialize_compressed_to_uninitialized(IndexMask mask, T *dst) const
virtual void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
new (dst + i) T(this->get(best_mask[i]));
}
});
mask.foreach_index(
[&](const int64_t i, const int64_t pos) { new (dst + pos) T(this->get(i)); });
}
/**
@@ -227,32 +220,26 @@ template<typename T> class VArrayImpl_For_Span : public VMutableArrayImpl<T> {
return CommonVArrayInfo(CommonVArrayInfo::Type::Span, true, data_);
}
void materialize(IndexMask mask, T *dst) const override
void materialize(const IndexMask &mask, T *dst) const override
{
mask.foreach_index([&](const int64_t i) { dst[i] = data_[i]; });
mask.foreach_index_optimized<int64_t>([&](const int64_t i) { dst[i] = data_[i]; });
}
void materialize_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
mask.foreach_index([&](const int64_t i) { new (dst + i) T(data_[i]); });
mask.foreach_index_optimized<int64_t>([&](const int64_t i) { new (dst + i) T(data_[i]); });
}
void materialize_compressed(IndexMask mask, T *dst) const override
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
dst[i] = data_[best_mask[i]];
}
});
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { dst[pos] = data_[i]; });
}
void materialize_compressed_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
new (dst + i) T(data_[best_mask[i]]);
}
});
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { new (dst + pos) T(data_[i]); });
}
};
@@ -325,22 +312,22 @@ template<typename T> class VArrayImpl_For_Single final : public VArrayImpl<T> {
return CommonVArrayInfo(CommonVArrayInfo::Type::Single, true, &value_);
}
void materialize(IndexMask mask, T *dst) const override
void materialize(const IndexMask &mask, T *dst) const override
{
mask.foreach_index([&](const int64_t i) { dst[i] = value_; });
}
void materialize_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
mask.foreach_index([&](const int64_t i) { new (dst + i) T(value_); });
}
void materialize_compressed(IndexMask mask, T *dst) const override
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
initialized_fill_n(dst, mask.size(), value_);
}
void materialize_compressed_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
uninitialized_fill_n(dst, mask.size(), value_);
}
@@ -369,32 +356,25 @@ template<typename T, typename GetFunc> class VArrayImpl_For_Func final : public
return get_func_(index);
}
void materialize(IndexMask mask, T *dst) const override
void materialize(const IndexMask &mask, T *dst) const override
{
mask.foreach_index([&](const int64_t i) { dst[i] = get_func_(i); });
}
void materialize_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
mask.foreach_index([&](const int64_t i) { new (dst + i) T(get_func_(i)); });
}
void materialize_compressed(IndexMask mask, T *dst) const override
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
dst[i] = get_func_(best_mask[i]);
}
});
mask.foreach_index([&](const int64_t i, const int64_t pos) { dst[pos] = get_func_(i); });
}
void materialize_compressed_to_uninitialized(IndexMask mask, T *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
new (dst + i) T(get_func_(best_mask[i]));
}
});
mask.foreach_index(
[&](const int64_t i, const int64_t pos) { new (dst + pos) T(get_func_(i)); });
}
};
@@ -432,32 +412,27 @@ class VArrayImpl_For_DerivedSpan final : public VMutableArrayImpl<ElemT> {
SetFunc(data_[index], std::move(value));
}
void materialize(IndexMask mask, ElemT *dst) const override
void materialize(const IndexMask &mask, ElemT *dst) const override
{
mask.foreach_index([&](const int64_t i) { dst[i] = GetFunc(data_[i]); });
mask.foreach_index_optimized<int64_t>([&](const int64_t i) { dst[i] = GetFunc(data_[i]); });
}
void materialize_to_uninitialized(IndexMask mask, ElemT *dst) const override
void materialize_to_uninitialized(const IndexMask &mask, ElemT *dst) const override
{
mask.foreach_index([&](const int64_t i) { new (dst + i) ElemT(GetFunc(data_[i])); });
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i) { new (dst + i) ElemT(GetFunc(data_[i])); });
}
void materialize_compressed(IndexMask mask, ElemT *dst) const override
void materialize_compressed(const IndexMask &mask, ElemT *dst) const override
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
dst[i] = GetFunc(data_[best_mask[i]]);
}
});
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { dst[pos] = GetFunc(data_[i]); });
}
void materialize_compressed_to_uninitialized(IndexMask mask, ElemT *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, ElemT *dst) const override
{
mask.to_best_mask_type([&](auto best_mask) {
for (const int64_t i : IndexRange(best_mask.size())) {
new (dst + i) ElemT(GetFunc(data_[best_mask[i]]));
}
});
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { new (dst + pos) ElemT(GetFunc(data_[i])); });
}
};
@@ -745,7 +720,7 @@ template<typename T> class VArrayCommon {
}
/** Copy some indices of the virtual array into a span. */
void materialize(IndexMask mask, MutableSpan<T> r_span) const
void materialize(const IndexMask &mask, MutableSpan<T> r_span) const
{
BLI_assert(mask.min_array_size() <= this->size());
impl_->materialize(mask, r_span.data());
@@ -756,19 +731,19 @@ template<typename T> class VArrayCommon {
this->materialize_to_uninitialized(IndexMask(this->size()), r_span);
}
void materialize_to_uninitialized(IndexMask mask, MutableSpan<T> r_span) const
void materialize_to_uninitialized(const IndexMask &mask, MutableSpan<T> r_span) const
{
BLI_assert(mask.min_array_size() <= this->size());
impl_->materialize_to_uninitialized(mask, r_span.data());
}
/** Copy some elements of the virtual array into a span. */
void materialize_compressed(IndexMask mask, MutableSpan<T> r_span) const
void materialize_compressed(const IndexMask &mask, MutableSpan<T> r_span) const
{
impl_->materialize_compressed(mask, r_span.data());
}
void materialize_compressed_to_uninitialized(IndexMask mask, MutableSpan<T> r_span) const
void materialize_compressed_to_uninitialized(const IndexMask &mask, MutableSpan<T> r_span) const
{
impl_->materialize_compressed_to_uninitialized(mask, r_span.data());
}

View File

@@ -182,6 +182,7 @@ set(SRC
BLI_assert.h
BLI_astar.h
BLI_atomic_disjoint_set.hh
BLI_binary_search.hh
BLI_bit_group_vector.hh
BLI_bit_ref.hh
BLI_bit_span.hh
@@ -250,7 +251,6 @@ set(SRC
BLI_implicit_sharing.hh
BLI_implicit_sharing_ptr.hh
BLI_index_mask.hh
BLI_index_mask_ops.hh
BLI_index_range.hh
BLI_inplace_priority_queue.hh
BLI_iterator.h
@@ -318,6 +318,7 @@ set(SRC
BLI_noise.h
BLI_noise.hh
BLI_offset_indices.hh
BLI_offset_span.hh
BLI_parameter_pack_utils.hh
BLI_path_util.h
BLI_polyfill_2d.h
@@ -362,6 +363,7 @@ set(SRC
BLI_timecode.h
BLI_timeit.hh
BLI_timer.h
BLI_unique_sorted_indices.hh
BLI_utildefines.h
BLI_utildefines_iter.h
BLI_utildefines_stack.h
@@ -479,6 +481,7 @@ if(WITH_GTESTS)
tests/BLI_array_store_test.cc
tests/BLI_array_test.cc
tests/BLI_array_utils_test.cc
tests/BLI_binary_search_test.cc
tests/BLI_bit_group_vector_test.cc
tests/BLI_bit_ref_test.cc
tests/BLI_bit_span_test.cc
@@ -546,6 +549,7 @@ if(WITH_GTESTS)
tests/BLI_task_graph_test.cc
tests/BLI_task_test.cc
tests/BLI_tempfile_test.cc
tests/BLI_unique_sorted_indices_test.cc
tests/BLI_utildefines_test.cc
tests/BLI_uuid_test.cc
tests/BLI_vector_set_test.cc

View File

@@ -14,7 +14,7 @@ void copy(const GVArray &src, GMutableSpan dst, const int64_t grain_size)
}
void copy(const GVArray &src,
const IndexMask selection,
const IndexMask &selection,
GMutableSpan dst,
const int64_t grain_size)
{
@@ -27,7 +27,7 @@ void copy(const GVArray &src,
}
void gather(const GVArray &src,
const IndexMask indices,
const IndexMask &indices,
GMutableSpan dst,
const int64_t grain_size)
{
@@ -38,7 +38,7 @@ void gather(const GVArray &src,
});
}
void gather(const GSpan src, const IndexMask indices, GMutableSpan dst, const int64_t grain_size)
void gather(const GSpan src, const IndexMask &indices, GMutableSpan dst, const int64_t grain_size)
{
gather(GVArray::ForSpan(src), indices, dst, grain_size);
}

View File

@@ -47,27 +47,27 @@ void GVectorArray::extend(const int64_t index, const GSpan values)
this->extend(index, GVArray::ForSpan(values));
}
void GVectorArray::extend(IndexMask mask, const GVVectorArray &values)
void GVectorArray::extend(const IndexMask &mask, const GVVectorArray &values)
{
for (const int i : mask) {
mask.foreach_index([&](const int64_t i) {
GVArray_For_GVVectorArrayIndex array{values, i};
this->extend(i, GVArray(&array));
}
});
}
void GVectorArray::extend(IndexMask mask, const GVectorArray &values)
void GVectorArray::extend(const IndexMask &mask, const GVectorArray &values)
{
GVVectorArray_For_GVectorArray virtual_values{values};
this->extend(mask, virtual_values);
}
void GVectorArray::clear(IndexMask mask)
void GVectorArray::clear(const IndexMask &mask)
{
for (const int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
Item &item = items_[i];
type_.destruct_n(item.start, item.length);
item.length = 0;
}
});
}
GMutableSpan GVectorArray::operator[](const int64_t index)

View File

@@ -8,36 +8,36 @@ namespace blender {
/** \name #GVArrayImpl
* \{ */
void GVArrayImpl::materialize(const IndexMask mask, void *dst) const
void GVArrayImpl::materialize(const IndexMask &mask, void *dst) const
{
for (const int64_t i : mask) {
mask.foreach_index_optimized<int64_t>([&](const int64_t i) {
void *elem_dst = POINTER_OFFSET(dst, type_->size() * i);
this->get(i, elem_dst);
}
});
}
void GVArrayImpl::materialize_to_uninitialized(const IndexMask mask, void *dst) const
void GVArrayImpl::materialize_to_uninitialized(const IndexMask &mask, void *dst) const
{
for (const int64_t i : mask) {
mask.foreach_index_optimized<int64_t>([&](const int64_t i) {
void *elem_dst = POINTER_OFFSET(dst, type_->size() * i);
this->get_to_uninitialized(i, elem_dst);
}
});
}
void GVArrayImpl::materialize_compressed(IndexMask mask, void *dst) const
void GVArrayImpl::materialize_compressed(const IndexMask &mask, void *dst) const
{
for (const int64_t i : mask.index_range()) {
void *elem_dst = POINTER_OFFSET(dst, type_->size() * i);
this->get(mask[i], elem_dst);
}
mask.foreach_index_optimized<int64_t>([&](const int64_t i, const int64_t pos) {
void *elem_dst = POINTER_OFFSET(dst, type_->size() * pos);
this->get(i, elem_dst);
});
}
void GVArrayImpl::materialize_compressed_to_uninitialized(IndexMask mask, void *dst) const
void GVArrayImpl::materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const
{
for (const int64_t i : mask.index_range()) {
void *elem_dst = POINTER_OFFSET(dst, type_->size() * i);
this->get_to_uninitialized(mask[i], elem_dst);
}
mask.foreach_index_optimized<int64_t>([&](const int64_t i, const int64_t pos) {
void *elem_dst = POINTER_OFFSET(dst, type_->size() * pos);
this->get_to_uninitialized(i, elem_dst);
});
}
void GVArrayImpl::get(const int64_t index, void *r_value) const
@@ -143,22 +143,22 @@ CommonVArrayInfo GVArrayImpl_For_GSpan::common_info() const
return CommonVArrayInfo{CommonVArrayInfo::Type::Span, true, data_};
}
void GVArrayImpl_For_GSpan::materialize(const IndexMask mask, void *dst) const
void GVArrayImpl_For_GSpan::materialize(const IndexMask &mask, void *dst) const
{
type_->copy_assign_indices(data_, dst, mask);
}
void GVArrayImpl_For_GSpan::materialize_to_uninitialized(const IndexMask mask, void *dst) const
void GVArrayImpl_For_GSpan::materialize_to_uninitialized(const IndexMask &mask, void *dst) const
{
type_->copy_construct_indices(data_, dst, mask);
}
void GVArrayImpl_For_GSpan::materialize_compressed(const IndexMask mask, void *dst) const
void GVArrayImpl_For_GSpan::materialize_compressed(const IndexMask &mask, void *dst) const
{
type_->copy_assign_compressed(data_, dst, mask);
}
void GVArrayImpl_For_GSpan::materialize_compressed_to_uninitialized(const IndexMask mask,
void GVArrayImpl_For_GSpan::materialize_compressed_to_uninitialized(const IndexMask &mask,
void *dst) const
{
type_->copy_construct_compressed(data_, dst, mask);
@@ -187,23 +187,23 @@ CommonVArrayInfo GVArrayImpl_For_SingleValueRef::common_info() const
return CommonVArrayInfo{CommonVArrayInfo::Type::Single, true, value_};
}
void GVArrayImpl_For_SingleValueRef::materialize(const IndexMask mask, void *dst) const
void GVArrayImpl_For_SingleValueRef::materialize(const IndexMask &mask, void *dst) const
{
type_->fill_assign_indices(value_, dst, mask);
}
void GVArrayImpl_For_SingleValueRef::materialize_to_uninitialized(const IndexMask mask,
void GVArrayImpl_For_SingleValueRef::materialize_to_uninitialized(const IndexMask &mask,
void *dst) const
{
type_->fill_construct_indices(value_, dst, mask);
}
void GVArrayImpl_For_SingleValueRef::materialize_compressed(const IndexMask mask, void *dst) const
void GVArrayImpl_For_SingleValueRef::materialize_compressed(const IndexMask &mask, void *dst) const
{
type_->fill_assign_n(value_, dst, mask.size());
}
void GVArrayImpl_For_SingleValueRef::materialize_compressed_to_uninitialized(const IndexMask mask,
void GVArrayImpl_For_SingleValueRef::materialize_compressed_to_uninitialized(const IndexMask &mask,
void *dst) const
{
type_->fill_construct_n(value_, dst, mask.size());
@@ -495,20 +495,15 @@ class GVArrayImpl_For_SlicedGVArray : public GVArrayImpl {
return {};
}
void materialize_compressed_to_uninitialized(const IndexMask mask, void *dst) const override
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override
{
if (mask.is_range()) {
const IndexRange mask_range = mask.as_range();
const IndexRange offset_mask_range{mask_range.start() + offset_, mask_range.size()};
varray_.materialize_compressed_to_uninitialized(offset_mask_range, dst);
}
else {
Vector<int64_t, 32> offset_mask_indices(mask.size());
for (const int64_t i : mask.index_range()) {
offset_mask_indices[i] = mask[i] + offset_;
}
varray_.materialize_compressed_to_uninitialized(offset_mask_indices.as_span(), dst);
}
IndexMaskFromSegment mask_from_segment;
mask.foreach_segment([&](const IndexMaskSegment segment, const int64_t start) {
const IndexMask &segment_mask = mask_from_segment.update(
{segment.offset() + offset_, segment.base_span()});
varray_.materialize_compressed_to_uninitialized(segment_mask,
POINTER_OFFSET(dst, type_->size() * start));
});
}
};
@@ -549,7 +544,7 @@ void GVArrayCommon::materialize(void *dst) const
this->materialize(IndexMask(impl_->size()), dst);
}
void GVArrayCommon::materialize(const IndexMask mask, void *dst) const
void GVArrayCommon::materialize(const IndexMask &mask, void *dst) const
{
impl_->materialize(mask, dst);
}
@@ -559,18 +554,18 @@ void GVArrayCommon::materialize_to_uninitialized(void *dst) const
this->materialize_to_uninitialized(IndexMask(impl_->size()), dst);
}
void GVArrayCommon::materialize_to_uninitialized(const IndexMask mask, void *dst) const
void GVArrayCommon::materialize_to_uninitialized(const IndexMask &mask, void *dst) const
{
BLI_assert(mask.min_array_size() <= impl_->size());
impl_->materialize_to_uninitialized(mask, dst);
}
void GVArrayCommon::materialize_compressed(IndexMask mask, void *dst) const
void GVArrayCommon::materialize_compressed(const IndexMask &mask, void *dst) const
{
impl_->materialize_compressed(mask, dst);
}
void GVArrayCommon::materialize_compressed_to_uninitialized(IndexMask mask, void *dst) const
void GVArrayCommon::materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const
{
impl_->materialize_compressed_to_uninitialized(mask, dst);
}

View File

@@ -1,249 +1,575 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include <mutex>
#include "BLI_array.hh"
#include "BLI_bit_vector.hh"
#include "BLI_enumerable_thread_specific.hh"
#include "BLI_index_mask.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_set.hh"
#include "BLI_sort.hh"
#include "BLI_strict_flags.h"
#include "BLI_task.hh"
#include "BLI_threads.h"
#include "BLI_timeit.hh"
#include "BLI_virtual_array.hh"
namespace blender {
namespace blender::index_mask {
IndexMask IndexMask::slice_safe(int64_t start, int64_t size) const
std::array<int16_t, max_segment_size> build_static_indices_array()
{
return this->slice_safe(IndexRange(start, size));
std::array<int16_t, max_segment_size> data;
for (int16_t i = 0; i < max_segment_size; i++) {
data[size_t(i)] = i;
}
return data;
}
IndexMask IndexMask::slice_safe(IndexRange slice) const
const IndexMask &get_static_index_mask_for_min_size(const int64_t min_size)
{
return IndexMask(indices_.slice_safe(slice));
static constexpr int64_t size_shift = 31;
static constexpr int64_t max_size = (int64_t(1) << size_shift); /* 2'147'483'648 */
static constexpr int64_t segments_num = max_size / max_segment_size; /* 131'072 */
/* Make sure we are never requesting a size that's larger than what was statically allocated.
* If that's ever needed, we can either increase #size_shift or dynamically allocate an even
* larger mask. */
BLI_assert(min_size <= max_size);
UNUSED_VARS_NDEBUG(min_size);
static IndexMask static_mask = []() {
static Array<const int16_t *> indices_by_segment(segments_num);
/* The offsets and cumulative segment sizes arrays contain the same values here, so just use a
* single array for both. */
static Array<int64_t> segment_offsets(segments_num + 1);
static const int16_t *static_offsets = get_static_indices_array().data();
/* Isolate because the mutex protecting the initialization of #static_mask is locked. */
threading::isolate_task([&]() {
threading::parallel_for(IndexRange(segments_num), 1024, [&](const IndexRange range) {
for (const int64_t segment_i : range) {
indices_by_segment[segment_i] = static_offsets;
segment_offsets[segment_i] = segment_i * max_segment_size;
}
});
});
segment_offsets.last() = max_size;
IndexMask mask;
IndexMaskData &data = mask.data_for_inplace_construction();
data.indices_num_ = max_size;
data.segments_num_ = segments_num;
data.indices_by_segment_ = indices_by_segment.data();
data.segment_offsets_ = segment_offsets.data();
data.cumulative_segment_sizes_ = segment_offsets.data();
data.begin_index_in_segment_ = 0;
data.end_index_in_segment_ = max_segment_size;
return mask;
}();
return static_mask;
}
IndexMask IndexMask::slice_and_offset(const IndexRange slice, Vector<int64_t> &r_new_indices) const
std::ostream &operator<<(std::ostream &stream, const IndexMask &mask)
{
const int slice_size = slice.size();
if (slice_size == 0) {
Array<int64_t> indices(mask.size());
mask.to_indices<int64_t>(indices);
Vector<std::variant<IndexRange, Span<int64_t>>> segments;
unique_sorted_indices::split_to_ranges_and_spans<int64_t>(indices, 8, segments);
stream << "(Size: " << mask.size() << " | ";
for (const std::variant<IndexRange, Span<int64_t>> &segment : segments) {
if (std::holds_alternative<IndexRange>(segment)) {
const IndexRange range = std::get<IndexRange>(segment);
stream << range;
}
else {
const Span<int64_t> segment_indices = std::get<Span<int64_t>>(segment);
stream << "[";
for (const int64_t index : segment_indices) {
stream << index << ",";
}
stream << "]";
}
stream << ", ";
}
stream << ")";
return stream;
}
IndexMask IndexMask::slice(const int64_t start, const int64_t size) const
{
if (size == 0) {
return {};
}
IndexMask sliced_mask{indices_.slice(slice)};
if (sliced_mask.is_range()) {
return IndexMask(slice_size);
const RawMaskIterator first_it = this->index_to_iterator(start);
const RawMaskIterator last_it = this->index_to_iterator(start + size - 1);
IndexMask sliced = *this;
sliced.indices_num_ = size;
sliced.segments_num_ = last_it.segment_i - first_it.segment_i + 1;
sliced.indices_by_segment_ += first_it.segment_i;
sliced.segment_offsets_ += first_it.segment_i;
sliced.cumulative_segment_sizes_ += first_it.segment_i;
sliced.begin_index_in_segment_ = first_it.index_in_segment;
sliced.end_index_in_segment_ = last_it.index_in_segment + 1;
return sliced;
}
IndexMask IndexMask::slice_and_offset(const IndexRange range,
const int64_t offset,
IndexMaskMemory &memory) const
{
return this->slice_and_offset(range.start(), range.size(), offset, memory);
}
IndexMask IndexMask::slice_and_offset(const int64_t start,
const int64_t size,
const int64_t offset,
IndexMaskMemory &memory) const
{
if (size == 0) {
return {};
}
const int64_t offset = sliced_mask.indices().first();
if (std::optional<IndexRange> range = this->to_range()) {
return range->slice(start, size).shift(offset);
}
const IndexMask sliced_mask = this->slice(start, size);
if (offset == 0) {
return sliced_mask;
}
r_new_indices.resize(slice_size);
for (const int i : IndexRange(slice_size)) {
r_new_indices[i] = sliced_mask[i] - offset;
if (std::optional<IndexRange> range = sliced_mask.to_range()) {
return range->shift(offset);
}
return IndexMask(r_new_indices.as_span());
MutableSpan<int64_t> new_segment_offsets = memory.allocate_array<int64_t>(segments_num_);
for (const int64_t i : IndexRange(segments_num_)) {
new_segment_offsets[i] = segment_offsets_[i] + offset;
}
IndexMask offset_mask = *this;
offset_mask.segment_offsets_ = new_segment_offsets.data();
return offset_mask;
}
IndexMask IndexMask::invert(const IndexRange full_range, Vector<int64_t> &r_new_indices) const
IndexMask IndexMask::complement(const IndexRange universe, IndexMaskMemory &memory) const
{
BLI_assert(this->contained_in(full_range));
if (full_range.size() == indices_.size()) {
/* TODO: Implement more efficient solution. */
return IndexMask::from_predicate(universe, GrainSize(512), memory, [&](const int64_t index) {
return !this->contains(index);
});
}
/**
* Merges consecutive segments in some cases. Having fewer but larger segments generally allows for
* better performance when using the mask later on.
*/
static void consolidate_segments(Vector<IndexMaskSegment, 16> &segments,
IndexMaskMemory & /*memory*/)
{
if (segments.is_empty()) {
return;
}
const Span<int16_t> static_indices = get_static_indices_array();
/* TODO: Support merging non-range segments in some cases as well. */
int64_t group_start_segment_i = 0;
int64_t group_first = segments[0][0];
int64_t group_last = segments[0].last();
bool group_as_range = unique_sorted_indices::non_empty_is_range(segments[0].base_span());
auto finish_group = [&](const int64_t last_segment_i) {
if (group_start_segment_i == last_segment_i) {
return;
}
/* Join multiple ranges together into a bigger range. */
const IndexRange range{group_first, group_last + 1 - group_first};
segments[group_start_segment_i] = IndexMaskSegment(range[0],
static_indices.take_front(range.size()));
for (int64_t i = group_start_segment_i + 1; i <= last_segment_i; i++) {
segments[i] = {};
}
};
for (const int64_t segment_i : segments.index_range().drop_front(1)) {
const IndexMaskSegment segment = segments[segment_i];
const std::optional<IndexRange> segment_base_range =
unique_sorted_indices::non_empty_as_range_try(segment.base_span());
const bool segment_is_range = segment_base_range.has_value();
if (group_as_range && segment_is_range) {
if (group_last + 1 == segment[0]) {
if (segment.last() - group_first + 1 < max_segment_size) {
/* Can combine previous and current range. */
group_last = segment.last();
continue;
}
}
}
finish_group(segment_i - 1);
group_start_segment_i = segment_i;
group_first = segment[0];
group_last = segment.last();
group_as_range = segment_is_range;
}
finish_group(segments.size() - 1);
/* Remove all segments that have been merged into previous segments. */
segments.remove_if([](const IndexMaskSegment segment) { return segment.is_empty(); });
}
/**
* Create a new #IndexMask from the given segments. The provided segments are expected to be
* owned by #memory already.
*/
static IndexMask mask_from_segments(const Span<IndexMaskSegment> segments, IndexMaskMemory &memory)
{
if (segments.is_empty()) {
return {};
}
if (indices_.is_empty()) {
return full_range;
}
r_new_indices.clear();
const int64_t segments_num = segments.size();
const Vector<IndexRange> ranges = this->extract_ranges_invert(full_range, nullptr);
for (const IndexRange &range : ranges) {
for (const int64_t index : range) {
r_new_indices.append(index);
}
/* Allocate buffers for the mask. */
MutableSpan<const int16_t *> indices_by_segment = memory.allocate_array<const int16_t *>(
segments_num);
MutableSpan<int64_t> segment_offsets = memory.allocate_array<int64_t>(segments_num);
MutableSpan<int64_t> cumulative_segment_sizes = memory.allocate_array<int64_t>(segments_num + 1);
/* Fill buffers. */
cumulative_segment_sizes[0] = 0;
for (const int64_t segment_i : segments.index_range()) {
const IndexMaskSegment segment = segments[segment_i];
indices_by_segment[segment_i] = segment.base_span().data();
segment_offsets[segment_i] = segment.offset();
cumulative_segment_sizes[segment_i + 1] = cumulative_segment_sizes[segment_i] + segment.size();
}
return r_new_indices.as_span();
/* Initialize mask. */
IndexMask mask;
IndexMaskData &data = mask.data_for_inplace_construction();
data.indices_num_ = cumulative_segment_sizes.last();
data.segments_num_ = segments_num;
data.indices_by_segment_ = indices_by_segment.data();
data.segment_offsets_ = segment_offsets.data();
data.cumulative_segment_sizes_ = cumulative_segment_sizes.data();
data.begin_index_in_segment_ = 0;
data.end_index_in_segment_ = segments.last().size();
return mask;
}
/**
* Split the indices into segments. Afterwards, the indices referenced by #r_segments are either
* owned by #allocator or statically allocated.
*/
template<typename T, int64_t InlineBufferSize>
static void segments_from_indices(const Span<T> indices,
LinearAllocator<> &allocator,
Vector<IndexMaskSegment, InlineBufferSize> &r_segments)
{
Vector<std::variant<IndexRange, Span<T>>, 16> segments;
for (int64_t start = 0; start < indices.size(); start += max_segment_size) {
/* Slice to make sure that each segment is no longer than #max_segment_size. */
const Span<T> indices_slice = indices.slice_safe(start, max_segment_size);
unique_sorted_indices::split_to_ranges_and_spans<T>(indices_slice, 64, segments);
}
const Span<int16_t> static_indices = get_static_indices_array();
for (const auto &segment : segments) {
if (std::holds_alternative<IndexRange>(segment)) {
const IndexRange segment_range = std::get<IndexRange>(segment);
r_segments.append_as(segment_range.start(), static_indices.take_front(segment_range.size()));
}
else {
Span<T> segment_indices = std::get<Span<T>>(segment);
MutableSpan<int16_t> offset_indices = allocator.allocate_array<int16_t>(
segment_indices.size());
while (!segment_indices.is_empty()) {
const int64_t offset = segment_indices[0];
const int64_t next_segment_size = binary_search::find_predicate_begin(
segment_indices.take_front(max_segment_size),
[&](const T value) { return value - offset >= max_segment_size; });
for (const int64_t i : IndexRange(next_segment_size)) {
const int64_t offset_index = segment_indices[i] - offset;
BLI_assert(offset_index < max_segment_size);
offset_indices[i] = int16_t(offset_index);
}
r_segments.append_as(offset, offset_indices.take_front(next_segment_size));
segment_indices = segment_indices.drop_front(next_segment_size);
offset_indices = offset_indices.drop_front(next_segment_size);
}
}
}
}
/**
* Utility to generate segments on multiple threads and to reduce the result in the end.
*/
struct ParallelSegmentsCollector {
struct LocalData {
LinearAllocator<> allocator;
Vector<IndexMaskSegment, 16> segments;
};
threading::EnumerableThreadSpecific<LocalData> data_by_thread;
/**
* Move ownership of memory allocated from all threads to #main_allocator. Also, extend
* #main_segments with the segments created on each thread. The segments are also sorted to make
* sure that they are in the correct order.
*/
void reduce(LinearAllocator<> &main_allocator, Vector<IndexMaskSegment, 16> &main_segments)
{
for (LocalData &data : this->data_by_thread) {
main_allocator.transfer_ownership_from(data.allocator);
main_segments.extend(data.segments);
}
parallel_sort(main_segments.begin(),
main_segments.end(),
[](const IndexMaskSegment a, const IndexMaskSegment b) { return a[0] < b[0]; });
}
};
template<typename T>
IndexMask IndexMask::from_indices(const Span<T> indices, IndexMaskMemory &memory)
{
if (indices.is_empty()) {
return {};
}
if (const std::optional<IndexRange> range = unique_sorted_indices::non_empty_as_range_try(
indices)) {
/* Fast case when the indices encode a single range. */
return *range;
}
Vector<IndexMaskSegment, 16> segments;
constexpr int64_t min_grain_size = 4096;
constexpr int64_t max_grain_size = max_segment_size;
if (indices.size() <= min_grain_size) {
segments_from_indices(indices, memory, segments);
}
else {
const int64_t threads_num = BLI_system_thread_count();
/* Can be faster with a larger grain size, but only when there are enough indices. */
const int64_t grain_size = std::clamp(
indices.size() / (threads_num * 4), min_grain_size, max_grain_size);
ParallelSegmentsCollector segments_collector;
threading::parallel_for(indices.index_range(), grain_size, [&](const IndexRange range) {
ParallelSegmentsCollector::LocalData &local_data = segments_collector.data_by_thread.local();
segments_from_indices(indices.slice(range), local_data.allocator, local_data.segments);
});
segments_collector.reduce(memory, segments);
}
consolidate_segments(segments, memory);
return mask_from_segments(segments, memory);
}
IndexMask IndexMask::from_bits(const BitSpan bits, IndexMaskMemory &memory)
{
return IndexMask::from_bits(bits.index_range(), bits, memory);
}
IndexMask IndexMask::from_bits(const IndexMask &universe,
const BitSpan bits,
IndexMaskMemory &memory)
{
return IndexMask::from_predicate(universe, GrainSize(1024), memory, [bits](const int64_t index) {
return bits[index].test();
});
}
IndexMask IndexMask::from_bools(Span<bool> bools, IndexMaskMemory &memory)
{
return IndexMask::from_bools(bools.index_range(), bools, memory);
}
IndexMask IndexMask::from_bools(const VArray<bool> &bools, IndexMaskMemory &memory)
{
return IndexMask::from_bools(bools.index_range(), bools, memory);
}
IndexMask IndexMask::from_bools(const IndexMask &universe,
Span<bool> bools,
IndexMaskMemory &memory)
{
return IndexMask::from_predicate(
universe, GrainSize(1024), memory, [bools](const int64_t index) { return bools[index]; });
}
IndexMask IndexMask::from_bools(const IndexMask &universe,
const VArray<bool> &bools,
IndexMaskMemory &memory)
{
const CommonVArrayInfo info = bools.common_info();
if (info.type == CommonVArrayInfo::Type::Single) {
return *static_cast<const bool *>(info.data) ? universe : IndexMask();
}
if (info.type == CommonVArrayInfo::Type::Span) {
const Span<bool> span(static_cast<const bool *>(info.data), bools.size());
return IndexMask::from_bools(universe, span, memory);
}
return IndexMask::from_predicate(
universe, GrainSize(512), memory, [&](const int64_t index) { return bools[index]; });
}
template<typename T> void IndexMask::to_indices(MutableSpan<T> r_indices) const
{
BLI_assert(this->size() == r_indices.size());
this->foreach_index_optimized<int64_t>(
GrainSize(1024), [r_indices = r_indices.data()](const int64_t i, const int64_t pos) {
r_indices[pos] = T(i);
});
}
void IndexMask::to_bits(MutableBitSpan r_bits) const
{
BLI_assert(r_bits.size() >= this->min_array_size());
r_bits.reset_all();
this->foreach_segment_optimized([&](const auto segment) {
if constexpr (std::is_same_v<std::decay_t<decltype(segment)>, IndexRange>) {
const IndexRange range = segment;
r_bits.slice(range).set_all();
}
else {
for (const int64_t i : segment) {
r_bits[i].set();
}
}
});
}
void IndexMask::to_bools(MutableSpan<bool> r_bools) const
{
BLI_assert(r_bools.size() >= this->min_array_size());
r_bools.fill(false);
this->foreach_index_optimized<int64_t>(GrainSize(2048),
[&](const int64_t i) { r_bools[i] = true; });
}
Vector<IndexRange> IndexMask::to_ranges() const
{
Vector<IndexRange> ranges;
this->foreach_range([&](const IndexRange range) { ranges.append(range); });
return ranges;
}
Vector<IndexRange> IndexMask::to_ranges_invert(const IndexRange universe) const
{
IndexMaskMemory memory;
return this->complement(universe, memory).to_ranges();
}
/**
* Filter the indices from #universe_segment using #filter_indices. Store the resulting indices as
* segments.
*/
static void segments_from_predicate_filter(
const IndexMaskSegment universe_segment,
LinearAllocator<> &allocator,
const FunctionRef<int64_t(IndexMaskSegment indices, int16_t *r_true_indices)> filter_indices,
Vector<IndexMaskSegment, 16> &r_segments)
{
std::array<int16_t, max_segment_size> indices_array;
const int64_t true_indices_num = filter_indices(universe_segment, indices_array.data());
if (true_indices_num == 0) {
return;
}
const Span<int16_t> true_indices{indices_array.data(), true_indices_num};
Vector<std::variant<IndexRange, Span<int16_t>>> true_segments;
unique_sorted_indices::split_to_ranges_and_spans<int16_t>(true_indices, 64, true_segments);
const Span<int16_t> static_indices = get_static_indices_array();
for (const auto &true_segment : true_segments) {
if (std::holds_alternative<IndexRange>(true_segment)) {
const IndexRange segment_range = std::get<IndexRange>(true_segment);
r_segments.append_as(universe_segment.offset(), static_indices.slice(segment_range));
}
else {
const Span<int16_t> segment_indices = std::get<Span<int16_t>>(true_segment);
r_segments.append_as(universe_segment.offset(),
allocator.construct_array_copy(segment_indices));
}
}
}
IndexMask from_predicate_impl(
const IndexMask &universe,
const GrainSize grain_size,
IndexMaskMemory &memory,
const FunctionRef<int64_t(IndexMaskSegment indices, int16_t *r_true_indices)> filter_indices)
{
if (universe.is_empty()) {
return {};
}
Vector<IndexMaskSegment, 16> segments;
if (universe.size() <= grain_size.value) {
for (const int64_t segment_i : IndexRange(universe.segments_num())) {
const IndexMaskSegment universe_segment = universe.segment(segment_i);
segments_from_predicate_filter(universe_segment, memory, filter_indices, segments);
}
}
else {
ParallelSegmentsCollector segments_collector;
universe.foreach_segment(grain_size, [&](const IndexMaskSegment universe_segment) {
ParallelSegmentsCollector::LocalData &data = segments_collector.data_by_thread.local();
segments_from_predicate_filter(
universe_segment, data.allocator, filter_indices, data.segments);
});
segments_collector.reduce(memory, segments);
}
consolidate_segments(segments, memory);
return mask_from_segments(segments, memory);
}
} // namespace detail
std::optional<RawMaskIterator> IndexMask::find(const int64_t query_index) const
{
if (this->is_empty()) {
return std::nullopt;
}
if (query_index < this->first()) {
return std::nullopt;
}
if (query_index > this->last()) {
return std::nullopt;
}
const int64_t segment_i = -1 + binary_search::find_predicate_begin(
IndexRange(segments_num_), [&](const int64_t value) {
return query_index < this->segment(value)[0];
});
const IndexMaskSegment segment = this->segment(segment_i);
const Span<int16_t> local_segment = segment.base_span();
const int64_t local_query_index = query_index - segment.offset();
if (local_query_index > local_segment.last()) {
return std::nullopt;
}
const int64_t index_in_segment = -1 + binary_search::find_predicate_begin(
local_segment, [&](const int16_t value) {
return local_query_index < value;
});
if (local_segment[index_in_segment] != local_query_index) {
return std::nullopt;
}
return RawMaskIterator{segment_i, int16_t(index_in_segment)};
}
bool IndexMask::contains(const int64_t query_index) const
{
return this->find(query_index).has_value();
}
template IndexMask IndexMask::from_indices(Span<int32_t>, IndexMaskMemory &);
template IndexMask IndexMask::from_indices(Span<int64_t>, IndexMaskMemory &);
template void IndexMask::to_indices(MutableSpan<int32_t>) const;
template void IndexMask::to_indices(MutableSpan<int64_t>) const;
} // namespace blender::index_mask


@@ -0,0 +1,59 @@
/* SPDX-License-Identifier: Apache-2.0 */
#include "BLI_binary_search.hh"
#include "BLI_vector.hh"
#include "testing/testing.h"
namespace blender::binary_search::tests {
TEST(binary_search, Empty)
{
const Vector<int> vec;
const int64_t index = find_predicate_begin(vec, [](const int /*value*/) { return true; });
EXPECT_EQ(index, 0);
}
TEST(binary_search, One)
{
const Vector<int> vec = {5};
{
const int64_t index = find_predicate_begin(vec, [](const int /*value*/) { return false; });
EXPECT_EQ(index, 1);
}
{
const int64_t index = find_predicate_begin(vec, [](const int /*value*/) { return true; });
EXPECT_EQ(index, 0);
}
}
TEST(binary_search, Multiple)
{
const Vector<int> vec{4, 5, 7, 9, 10, 20, 30};
{
const int64_t index = find_predicate_begin(vec, [](const int value) { return value > 0; });
EXPECT_EQ(index, 0);
}
{
const int64_t index = find_predicate_begin(vec, [](const int value) { return value > 4; });
EXPECT_EQ(index, 1);
}
{
const int64_t index = find_predicate_begin(vec, [](const int value) { return value > 10; });
EXPECT_EQ(index, 5);
}
{
const int64_t index = find_predicate_begin(vec, [](const int value) { return value >= 25; });
EXPECT_EQ(index, 6);
}
{
const int64_t index = find_predicate_begin(vec, [](const int value) { return value >= 30; });
EXPECT_EQ(index, 6);
}
{
const int64_t index = find_predicate_begin(vec, [](const int value) { return value > 30; });
EXPECT_EQ(index, 7);
}
}
} // namespace blender::binary_search::tests


@@ -109,7 +109,9 @@ TEST(cpp_type, DefaultConstruction)
EXPECT_EQ(buffer[1], default_constructed_value);
EXPECT_EQ(buffer[2], default_constructed_value);
EXPECT_EQ(buffer[3], 0);
IndexMaskMemory memory;
CPPType_TestType.default_construct_indices((void *)buffer,
IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer[2], default_constructed_value);
EXPECT_EQ(buffer[4], 0);
EXPECT_EQ(buffer[5], default_constructed_value);
@@ -136,7 +138,9 @@ TEST(cpp_type, ValueInitialize)
EXPECT_EQ(buffer[1], default_constructed_value);
EXPECT_EQ(buffer[2], default_constructed_value);
EXPECT_EQ(buffer[3], 0);
IndexMaskMemory memory;
CPPType_TestType.value_initialize_indices((void *)buffer,
IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer[2], default_constructed_value);
EXPECT_EQ(buffer[4], 0);
EXPECT_EQ(buffer[5], default_constructed_value);
@@ -163,7 +167,9 @@ TEST(cpp_type, Destruct)
EXPECT_EQ(buffer[1], destructed_value);
EXPECT_EQ(buffer[2], destructed_value);
EXPECT_EQ(buffer[3], 0);
IndexMaskMemory memory;
CPPType_TestType.destruct_indices((void *)buffer,
IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer[2], destructed_value);
EXPECT_EQ(buffer[4], 0);
EXPECT_EQ(buffer[5], destructed_value);
@@ -188,7 +194,9 @@ TEST(cpp_type, CopyToUninitialized)
EXPECT_EQ(buffer2[2], copy_constructed_value);
EXPECT_EQ(buffer1[3], 0);
EXPECT_EQ(buffer2[3], 0);
IndexMaskMemory memory;
CPPType_TestType.copy_construct_indices(
(void *)buffer1, (void *)buffer2, IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer1[2], copy_constructed_from_value);
EXPECT_EQ(buffer2[2], copy_constructed_value);
EXPECT_EQ(buffer1[4], 0);
@@ -219,7 +227,9 @@ TEST(cpp_type, CopyToInitialized)
EXPECT_EQ(buffer2[2], copy_assigned_value);
EXPECT_EQ(buffer1[3], 0);
EXPECT_EQ(buffer2[3], 0);
IndexMaskMemory memory;
CPPType_TestType.copy_assign_indices(
(void *)buffer1, (void *)buffer2, IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer1[2], copy_assigned_from_value);
EXPECT_EQ(buffer2[2], copy_assigned_value);
EXPECT_EQ(buffer1[4], 0);
@@ -250,7 +260,9 @@ TEST(cpp_type, RelocateToUninitialized)
EXPECT_EQ(buffer2[2], move_constructed_value);
EXPECT_EQ(buffer1[3], 0);
EXPECT_EQ(buffer2[3], 0);
IndexMaskMemory memory;
CPPType_TestType.relocate_construct_indices(
(void *)buffer1, (void *)buffer2, IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer1[2], destructed_value);
EXPECT_EQ(buffer2[2], move_constructed_value);
EXPECT_EQ(buffer1[4], 0);
@@ -281,7 +293,9 @@ TEST(cpp_type, RelocateToInitialized)
EXPECT_EQ(buffer2[2], move_assigned_value);
EXPECT_EQ(buffer1[3], 0);
EXPECT_EQ(buffer2[3], 0);
IndexMaskMemory memory;
CPPType_TestType.relocate_assign_indices(
(void *)buffer1, (void *)buffer2, IndexMask::from_indices<int>({2, 5, 7}, memory));
EXPECT_EQ(buffer1[2], destructed_value);
EXPECT_EQ(buffer2[2], move_assigned_value);
EXPECT_EQ(buffer1[4], 0);
@@ -308,7 +322,9 @@ TEST(cpp_type, FillInitialized)
EXPECT_EQ(buffer2[3], 0);
buffer1 = 0;
IndexMaskMemory memory;
CPPType_TestType.fill_assign_indices(
(void *)&buffer1, (void *)buffer2, IndexMask::from_indices<int>({1, 6, 8}, memory));
EXPECT_EQ(buffer1, copy_assigned_from_value);
EXPECT_EQ(buffer2[0], copy_assigned_value);
EXPECT_EQ(buffer2[1], copy_assigned_value);
@@ -334,7 +350,9 @@ TEST(cpp_type, FillUninitialized)
EXPECT_EQ(buffer2[3], 0);
buffer1 = 0;
IndexMaskMemory memory;
CPPType_TestType.fill_construct_indices(
(void *)&buffer1, (void *)buffer2, IndexMask::from_indices<int>({1, 6, 8}, memory));
EXPECT_EQ(buffer1, copy_constructed_from_value);
EXPECT_EQ(buffer2[0], copy_constructed_value);
EXPECT_EQ(buffer2[1], copy_constructed_value);
@@ -385,7 +403,9 @@ TEST(cpp_type, CopyAssignCompressed)
{
std::array<std::string, 5> array = {"a", "b", "c", "d", "e"};
std::array<std::string, 3> array_compressed;
IndexMaskMemory memory;
CPPType::get<std::string>().copy_assign_compressed(
&array, &array_compressed, IndexMask::from_indices<int>({0, 2, 3}, memory));
EXPECT_EQ(array_compressed[0], "a");
EXPECT_EQ(array_compressed[1], "c");
EXPECT_EQ(array_compressed[2], "d");


@@ -1,216 +1,226 @@
/* SPDX-License-Identifier: Apache-2.0 */
#include "BLI_array.hh"
#include "BLI_index_mask.hh"
#include "BLI_rand.hh"
#include "BLI_set.hh"
#include "BLI_strict_flags.h"
#include "BLI_timeit.hh"
#include "testing/testing.h"
namespace blender::index_mask::tests {
TEST(index_mask, IndicesToMask)
{
IndexMaskMemory memory;
Array<int> data = {
5, 100, 16383, 16384, 16385, 20000, 20001, 50000, 50001, 50002, 100000, 101000};
IndexMask mask = IndexMask::from_indices<int>(data, memory);
EXPECT_EQ(mask.first(), 5);
EXPECT_EQ(mask.last(), 101000);
EXPECT_EQ(mask.min_array_size(), 101001);
}
TEST(index_mask, FromBits)
{
IndexMaskMemory memory;
const uint64_t bits =
0b0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'1111'0010'0000;
const IndexMask mask = IndexMask::from_bits(BitSpan(&bits, IndexRange(2, 40)), memory);
Array<int> indices(5);
mask.to_indices<int>(indices);
EXPECT_EQ(indices[0], 3);
EXPECT_EQ(indices[1], 6);
EXPECT_EQ(indices[2], 7);
EXPECT_EQ(indices[3], 8);
EXPECT_EQ(indices[4], 9);
}
TEST(index_mask, FromSize)
{
{
const IndexMask mask(5);
Vector<IndexMaskSegment> segments;
mask.foreach_segment([&](const IndexMaskSegment segment) { segments.append(segment); });
EXPECT_EQ(segments.size(), 1);
EXPECT_EQ(segments[0].size(), 5);
EXPECT_EQ(mask.first(), 0);
EXPECT_EQ(mask.last(), 4);
EXPECT_EQ(mask.min_array_size(), 5);
}
{
const IndexMask mask(max_segment_size);
Vector<IndexMaskSegment> segments;
mask.foreach_segment([&](const IndexMaskSegment segment) { segments.append(segment); });
EXPECT_EQ(segments.size(), 1);
EXPECT_EQ(segments[0].size(), max_segment_size);
EXPECT_EQ(mask.first(), 0);
EXPECT_EQ(mask.last(), max_segment_size - 1);
EXPECT_EQ(mask.min_array_size(), max_segment_size);
}
}
TEST(index_mask, DefaultConstructor)
{
IndexMask mask;
EXPECT_EQ(mask.min_array_size(), 0);
EXPECT_EQ(mask.size(), 0);
}
TEST(index_mask, ForeachRange)
{
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({2, 3, 4, 10, 40, 41}, memory);
Vector<IndexRange> ranges;
mask.foreach_range([&](const IndexRange range) { ranges.append(range); });
EXPECT_EQ(ranges.size(), 3);
EXPECT_EQ(ranges[0], IndexRange(2, 3));
EXPECT_EQ(ranges[1], IndexRange(10, 1));
EXPECT_EQ(ranges[2], IndexRange(40, 2));
}
TEST(index_mask, ToRange)
{
IndexMaskMemory memory;
{
const IndexMask mask = IndexMask::from_indices<int>({4, 5, 6, 7}, memory);
EXPECT_TRUE(mask.to_range().has_value());
EXPECT_EQ(*mask.to_range(), IndexRange(4, 4));
}
{
const IndexMask mask = IndexMask::from_indices<int>({}, memory);
EXPECT_TRUE(mask.to_range().has_value());
EXPECT_EQ(*mask.to_range(), IndexRange());
}
{
const IndexMask mask = IndexMask::from_indices<int>({0, 1, 3, 4}, memory);
EXPECT_FALSE(mask.to_range().has_value());
}
{
const IndexRange range{16000, 40000};
const IndexMask mask{range};
EXPECT_TRUE(mask.to_range().has_value());
EXPECT_EQ(*mask.to_range(), range);
}
}
TEST(index_mask, FromRange)
{
const auto test_range = [](const IndexRange range) {
const IndexMask mask = range;
EXPECT_EQ(mask.to_range(), range);
};
test_range({0, 0});
test_range({0, 10});
test_range({0, 16384});
test_range({16320, 64});
test_range({16384, 64});
test_range({0, 100000});
test_range({100000, 100000});
test_range({688064, 64});
}
TEST(index_mask, FromPredicate)
{
IndexMaskMemory memory;
{
const IndexRange range{20'000, 50'000};
const IndexMask mask = IndexMask::from_predicate(
IndexRange(100'000), GrainSize(1024), memory, [&](const int64_t i) {
return range.contains(i);
});
EXPECT_EQ(mask.to_range(), range);
}
{
const Vector<int64_t> indices = {0, 500, 20'000, 50'000};
const IndexMask mask = IndexMask::from_predicate(
IndexRange(100'000), GrainSize(1024), memory, [&](const int64_t i) {
return indices.contains(i);
});
EXPECT_EQ(mask.size(), indices.size());
Vector<int64_t> new_indices(mask.size());
mask.to_indices<int64_t>(new_indices);
EXPECT_EQ(indices, new_indices);
}
}
TEST(index_mask, IndexIteratorConversionFuzzy)
{
RandomNumberGenerator rng;
Vector<int64_t> indices;
indices.append(5);
for ([[maybe_unused]] const int64_t i : IndexRange(1000)) {
for ([[maybe_unused]] const int64_t j :
IndexRange(indices.last() + 1 + rng.get_int32(1000), rng.get_int32(64)))
{
indices.append(j);
}
}
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int64_t>(indices, memory);
EXPECT_EQ(mask.size(), indices.size());
for ([[maybe_unused]] const int64_t _ : IndexRange(100)) {
const int64_t index = rng.get_int32(int(indices.size()));
const RawMaskIterator it = mask.index_to_iterator(index);
EXPECT_EQ(mask[it], indices[index]);
const int64_t new_index = mask.iterator_to_index(it);
EXPECT_EQ(index, new_index);
}
for ([[maybe_unused]] const int64_t _ : IndexRange(100)) {
const int64_t start = rng.get_int32(int(indices.size() - 1));
const int64_t size = 1 + rng.get_int32(int(indices.size() - start - 1));
const IndexMask sub_mask = mask.slice(start, size);
const int64_t index = rng.get_int32(int(sub_mask.size()));
const RawMaskIterator it = sub_mask.index_to_iterator(index);
EXPECT_EQ(sub_mask[it], indices[start + index]);
const int64_t new_index = sub_mask.iterator_to_index(it);
EXPECT_EQ(index, new_index);
}
for ([[maybe_unused]] const int64_t _ : IndexRange(100)) {
const int64_t index = rng.get_int32(int(indices.size() - 1000));
for (const int64_t offset : {0, 1, 2, 100, 500}) {
const int64_t index_to_search = indices[index] + offset;
const bool contained = std::binary_search(indices.begin(), indices.end(), index_to_search);
const std::optional<RawMaskIterator> it = mask.find(index_to_search);
EXPECT_EQ(contained, it.has_value());
if (contained) {
EXPECT_EQ(index_to_search, mask[*it]);
}
}
}
}
TEST(index_mask, FromPredicateFuzzy)
{
RandomNumberGenerator rng;
Set<int> values;
for ([[maybe_unused]] const int64_t _ : IndexRange(10000)) {
values.add(rng.get_int32(100'000));
}
{
Vector<int64_t> indices = {3, 4, 5, 6};
Vector<int64_t> new_indices;
IndexMask inverted_mask = IndexMask(indices).invert(IndexRange(3, 4), new_indices);
EXPECT_TRUE(inverted_mask.is_empty());
}
{
Vector<int64_t> indices = {5};
Vector<int64_t> new_indices;
IndexMask inverted_mask = IndexMask(indices).invert(IndexRange(10), new_indices);
EXPECT_EQ(inverted_mask.size(), 9);
EXPECT_EQ(inverted_mask.indices(), Span<int64_t>({0, 1, 2, 3, 4, 6, 7, 8, 9}));
}
{
Vector<int64_t> indices = {0, 1, 2, 6, 7, 9};
Vector<int64_t> new_indices;
IndexMask inverted_mask = IndexMask(indices).invert(IndexRange(10), new_indices);
EXPECT_EQ(inverted_mask.size(), 4);
EXPECT_EQ(inverted_mask.indices(), Span<int64_t>({3, 4, 5, 8}));
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_predicate(
IndexRange(110'000), GrainSize(1024), memory, [&](const int64_t i) {
return values.contains(int(i));
});
EXPECT_EQ(mask.size(), values.size());
for (const int index : values) {
EXPECT_TRUE(mask.contains(index));
}
mask.foreach_index([&](const int64_t index, const int64_t pos) {
EXPECT_TRUE(values.contains(int(index)));
EXPECT_EQ(index, mask[pos]);
});
}
TEST(index_mask, ExtractRangesInvert)
{
{
Vector<int64_t> indices;
Vector<IndexRange> ranges = IndexMask(indices).extract_ranges_invert(IndexRange(10), nullptr);
EXPECT_EQ(ranges.size(), 1);
EXPECT_EQ(ranges[0], IndexRange(10));
}
{
Vector<int64_t> indices = {1, 2, 3, 6, 7};
Vector<int64_t> skip_amounts;
Vector<IndexRange> ranges = IndexMask(indices).extract_ranges_invert(IndexRange(10),
&skip_amounts);
EXPECT_EQ(ranges.size(), 3);
EXPECT_EQ(ranges[0], IndexRange(0, 1));
EXPECT_EQ(ranges[1], IndexRange(4, 2));
EXPECT_EQ(ranges[2], IndexRange(8, 2));
EXPECT_EQ(skip_amounts[0], 0);
EXPECT_EQ(skip_amounts[1], 3);
EXPECT_EQ(skip_amounts[2], 5);
}
{
Vector<int64_t> indices = {0, 1, 2, 3, 4};
Vector<int64_t> skip_amounts;
Vector<IndexRange> ranges = IndexMask(indices).extract_ranges_invert(IndexRange(5),
&skip_amounts);
EXPECT_TRUE(ranges.is_empty());
EXPECT_TRUE(skip_amounts.is_empty());
}
{
Vector<int64_t> indices = {5, 6, 7, 10, 11};
Vector<int64_t> skip_amounts;
Vector<IndexRange> ranges = IndexMask(indices).extract_ranges_invert(IndexRange(5, 20),
&skip_amounts);
EXPECT_EQ(ranges.size(), 2);
EXPECT_EQ(ranges[0], IndexRange(8, 2));
EXPECT_EQ(ranges[1], IndexRange(12, 13));
EXPECT_EQ(skip_amounts[0], 3);
EXPECT_EQ(skip_amounts[1], 5);
}
}
TEST(index_mask, ContainedIn)
{
EXPECT_TRUE(IndexMask({3, 4, 5}).contained_in(IndexRange(10)));
EXPECT_TRUE(IndexMask().contained_in(IndexRange(5, 0)));
EXPECT_FALSE(IndexMask({3}).contained_in(IndexRange(3)));
EXPECT_FALSE(IndexMask({4, 5, 6}).contained_in(IndexRange(5, 10)));
EXPECT_FALSE(IndexMask({5, 6}).contained_in(IndexRange()));
}
}  // namespace blender::index_mask::tests


@@ -0,0 +1,64 @@
/* SPDX-License-Identifier: Apache-2.0 */
#include "BLI_array.hh"
#include "BLI_unique_sorted_indices.hh"
#include "testing/testing.h"
namespace blender::unique_sorted_indices::tests {
TEST(unique_sorted_indices, FindRangeEnd)
{
EXPECT_EQ(find_size_of_next_range<int>({4}), 1);
EXPECT_EQ(find_size_of_next_range<int>({4, 5, 6, 7}), 4);
EXPECT_EQ(find_size_of_next_range<int>({4, 5, 6, 8, 9}), 3);
}
TEST(unique_sorted_indices, NonEmptyIsRange)
{
EXPECT_TRUE(non_empty_is_range<int>({0, 1, 2}));
EXPECT_TRUE(non_empty_is_range<int>({5}));
EXPECT_TRUE(non_empty_is_range<int>({7, 8, 9, 10}));
EXPECT_FALSE(non_empty_is_range<int>({3, 5}));
EXPECT_FALSE(non_empty_is_range<int>({3, 4, 5, 6, 8, 9}));
}
TEST(unique_sorted_indices, NonEmptyAsRange)
{
EXPECT_EQ(non_empty_as_range<int>({0, 1, 2}), IndexRange(0, 3));
EXPECT_EQ(non_empty_as_range<int>({5}), IndexRange(5, 1));
EXPECT_EQ(non_empty_as_range<int>({10, 11}), IndexRange(10, 2));
}
TEST(unique_sorted_indices, FindSizeOfNextRange)
{
EXPECT_EQ(find_size_of_next_range<int>({0, 3, 4}), 1);
EXPECT_EQ(find_size_of_next_range<int>({4, 5, 6, 7}), 4);
EXPECT_EQ(find_size_of_next_range<int>({4}), 1);
EXPECT_EQ(find_size_of_next_range<int>({5, 6, 7, 10, 11, 100}), 3);
}
TEST(unique_sorted_indices, FindStartOfNextRange)
{
EXPECT_EQ(find_size_until_next_range<int>({4}, 3), 1);
EXPECT_EQ(find_size_until_next_range<int>({4, 5}, 3), 2);
EXPECT_EQ(find_size_until_next_range<int>({4, 5, 6}, 3), 0);
EXPECT_EQ(find_size_until_next_range<int>({4, 5, 6, 7}, 3), 0);
EXPECT_EQ(find_size_until_next_range<int>({0, 1, 3, 5, 10, 11, 12, 20}, 3), 4);
}
TEST(unique_sorted_indices, SplitToRangesAndSpans)
{
Array<int> data = {1, 2, 3, 4, 7, 9, 10, 13, 14, 15, 20, 21, 22, 23, 24};
Vector<std::variant<IndexRange, Span<int>>> parts;
const int64_t parts_num = split_to_ranges_and_spans<int>(data, 3, parts);
EXPECT_EQ(parts_num, 4);
EXPECT_EQ(parts.size(), 4);
EXPECT_EQ(std::get<IndexRange>(parts[0]), IndexRange(1, 4));
EXPECT_EQ(std::get<Span<int>>(parts[1]), Span<int>({7, 9, 10}));
EXPECT_EQ(std::get<IndexRange>(parts[2]), IndexRange(13, 3));
EXPECT_EQ(std::get<IndexRange>(parts[3]), IndexRange(20, 5));
}
} // namespace blender::unique_sorted_indices::tests


@@ -183,15 +183,18 @@ TEST(virtual_array, MutableToImmutable)
TEST(virtual_array, MaterializeCompressed)
{
IndexMaskMemory memory;
{
std::array<int, 10> array = {0, 10, 20, 30, 40, 50, 60, 70, 80, 90};
VArray<int> varray = VArray<int>::ForSpan(array);
std::array<int, 3> compressed_array;
varray.materialize_compressed({3, 6, 7}, compressed_array);
varray.materialize_compressed(IndexMask::from_indices<int>({3, 6, 7}, memory),
compressed_array);
EXPECT_EQ(compressed_array[0], 30);
EXPECT_EQ(compressed_array[1], 60);
EXPECT_EQ(compressed_array[2], 70);
varray.materialize_compressed_to_uninitialized({2, 8, 9}, compressed_array);
varray.materialize_compressed_to_uninitialized(IndexMask::from_indices<int>({2, 8, 9}, memory),
compressed_array);
EXPECT_EQ(compressed_array[0], 20);
EXPECT_EQ(compressed_array[1], 80);
EXPECT_EQ(compressed_array[2], 90);
@@ -199,12 +202,14 @@ TEST(virtual_array, MaterializeCompressed)
{
VArray<int> varray = VArray<int>::ForSingle(4, 10);
std::array<int, 3> compressed_array;
varray.materialize_compressed({2, 6, 7}, compressed_array);
varray.materialize_compressed(IndexMask::from_indices<int>({2, 6, 7}, memory),
compressed_array);
EXPECT_EQ(compressed_array[0], 4);
EXPECT_EQ(compressed_array[1], 4);
EXPECT_EQ(compressed_array[2], 4);
compressed_array.fill(0);
varray.materialize_compressed_to_uninitialized({0, 1, 2}, compressed_array);
varray.materialize_compressed_to_uninitialized(IndexMask::from_indices<int>({0, 1, 2}, memory),
compressed_array);
EXPECT_EQ(compressed_array[0], 4);
EXPECT_EQ(compressed_array[1], 4);
EXPECT_EQ(compressed_array[2], 4);
@@ -212,11 +217,13 @@ TEST(virtual_array, MaterializeCompressed)
{
VArray<int> varray = VArray<int>::ForFunc(10, [](const int64_t i) { return int(i * i); });
std::array<int, 3> compressed_array;
varray.materialize_compressed({5, 7, 8}, compressed_array);
varray.materialize_compressed(IndexMask::from_indices<int>({5, 7, 8}, memory),
compressed_array);
EXPECT_EQ(compressed_array[0], 25);
EXPECT_EQ(compressed_array[1], 49);
EXPECT_EQ(compressed_array[2], 64);
varray.materialize_compressed_to_uninitialized({1, 2, 3}, compressed_array);
varray.materialize_compressed_to_uninitialized(IndexMask::from_indices<int>({1, 2, 3}, memory),
compressed_array);
EXPECT_EQ(compressed_array[0], 1);
EXPECT_EQ(compressed_array[1], 4);
EXPECT_EQ(compressed_array[2], 9);


@@ -14,21 +14,19 @@ namespace blender::ed::curves {
void transverts_from_curves_positions_create(bke::CurvesGeometry &curves, TransVertStore *tvs)
{
Vector<int64_t> selected_indices;
IndexMask selection = retrieve_selected_points(curves, selected_indices);
IndexMaskMemory memory;
IndexMask selection = retrieve_selected_points(curves, memory);
MutableSpan<float3> positions = curves.positions_for_write();
tvs->transverts = static_cast<TransVert *>(
MEM_calloc_arrayN(selection.size(), sizeof(TransVert), __func__));
tvs->transverts_tot = selection.size();
threading::parallel_for(selection.index_range(), 1024, [&](const IndexRange selection_range) {
for (const int point_i : selection_range) {
TransVert &tv = tvs->transverts[point_i];
tv.loc = positions[selection[point_i]];
tv.flag = SELECT;
copy_v3_v3(tv.oldloc, tv.loc);
}
selection.foreach_index(GrainSize(1024), [&](const int64_t i, const int64_t pos) {
TransVert &tv = tvs->transverts[pos];
tv.loc = positions[i];
tv.flag = SELECT;
copy_v3_v3(tv.oldloc, tv.loc);
});
}


@@ -4,8 +4,6 @@
* \ingroup edcurves
*/
#include "BLI_index_mask_ops.hh"
#include "BKE_curves.hh"
#include "ED_curves.h"
@@ -18,9 +16,8 @@ bool remove_selection(bke::CurvesGeometry &curves, const eAttrDomain selection_d
const VArray<bool> selection = *attributes.lookup_or_default<bool>(
".selection", selection_domain, true);
const int domain_size_orig = attributes.domain_size(selection_domain);
Vector<int64_t> indices;
const IndexMask mask = index_mask_ops::find_indices_from_virtual_array(
selection.index_range(), selection, 4096, indices);
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_bools(selection, memory);
switch (selection_domain) {
case ATTR_DOMAIN_POINT:
curves.remove_points(mask);


@@ -8,7 +8,6 @@
#include "BLI_array_utils.hh"
#include "BLI_devirtualize_parameters.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_kdtree.h"
#include "BLI_math_matrix.hh"
#include "BLI_rand.hh"


@@ -5,7 +5,6 @@
*/
#include "BLI_array_utils.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_lasso_2d.h"
#include "BLI_rand.hh"
#include "BLI_rect.h"
@@ -22,7 +21,7 @@
namespace blender::ed::curves {
static IndexMask retrieve_selected_curves(const bke::CurvesGeometry &curves,
Vector<int64_t> &r_indices)
IndexMaskMemory &memory)
{
const IndexRange curves_range = curves.curves_range();
const bke::AttributeAccessor attributes = curves.attributes();
@@ -40,8 +39,8 @@ static IndexMask retrieve_selected_curves(const bke::CurvesGeometry &curves,
return selection.get_internal_single() ? IndexMask(curves_range) : IndexMask();
}
const OffsetIndices points_by_curve = curves.points_by_curve();
return index_mask_ops::find_indices_based_on_predicate(
curves_range, 512, r_indices, [&](const int64_t curve_i) {
return IndexMask::from_predicate(
curves_range, GrainSize(512), memory, [&](const int64_t curve_i) {
const IndexRange points = points_by_curve[curve_i];
/* The curve is selected if any of its points are selected. */
Array<bool, 32> point_selection(points.size());
@@ -51,28 +50,25 @@ static IndexMask retrieve_selected_curves(const bke::CurvesGeometry &curves,
}
const VArray<bool> selection = *attributes.lookup_or_default<bool>(
".selection", ATTR_DOMAIN_CURVE, true);
return index_mask_ops::find_indices_from_virtual_array(curves_range, selection, 2048, r_indices);
return IndexMask::from_bools(curves_range, selection, memory);
}
IndexMask retrieve_selected_curves(const Curves &curves_id, Vector<int64_t> &r_indices)
IndexMask retrieve_selected_curves(const Curves &curves_id, IndexMaskMemory &memory)
{
const bke::CurvesGeometry &curves = curves_id.geometry.wrap();
return retrieve_selected_curves(curves, r_indices);
return retrieve_selected_curves(curves, memory);
}
IndexMask retrieve_selected_points(const bke::CurvesGeometry &curves, Vector<int64_t> &r_indices)
IndexMask retrieve_selected_points(const bke::CurvesGeometry &curves, IndexMaskMemory &memory)
{
return index_mask_ops::find_indices_from_virtual_array(
curves.points_range(),
*curves.attributes().lookup_or_default<bool>(".selection", ATTR_DOMAIN_POINT, true),
2048,
r_indices);
return IndexMask::from_bools(
*curves.attributes().lookup_or_default<bool>(".selection", ATTR_DOMAIN_POINT, true), memory);
}
IndexMask retrieve_selected_points(const Curves &curves_id, Vector<int64_t> &r_indices)
IndexMask retrieve_selected_points(const Curves &curves_id, IndexMaskMemory &memory)
{
const bke::CurvesGeometry &curves = curves_id.geometry.wrap();
return retrieve_selected_points(curves, r_indices);
return retrieve_selected_points(curves, memory);
}
bke::GSpanAttributeWriter ensure_selection_attribute(bke::CurvesGeometry &curves,


@@ -117,14 +117,14 @@ bool has_anything_selected(const VArray<bool> &varray, IndexRange range_to_check
* Find curves that have any point selected (a selection factor greater than zero),
* or curves that have their own selection factor greater than zero.
*/
IndexMask retrieve_selected_curves(const Curves &curves_id, Vector<int64_t> &r_indices);
IndexMask retrieve_selected_curves(const Curves &curves_id, IndexMaskMemory &memory);
/**
* Find points that are selected (a selection factor greater than zero),
* or points in curves with a selection factor greater than zero.
*/
IndexMask retrieve_selected_points(const bke::CurvesGeometry &curves, Vector<int64_t> &r_indices);
IndexMask retrieve_selected_points(const Curves &curves_id, Vector<int64_t> &r_indices);
IndexMask retrieve_selected_points(const bke::CurvesGeometry &curves, IndexMaskMemory &memory);
IndexMask retrieve_selected_points(const Curves &curves_id, IndexMaskMemory &memory);
/**
* If the ".selection" attribute doesn't exist, create it with the requested type (bool or float).


@@ -14,7 +14,6 @@
#include "DNA_view3d_types.h"
#include "BLI_array.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_math.h"
#include "BLI_utildefines.h"
@@ -1346,9 +1345,8 @@ void ED_mesh_split_faces(Mesh *mesh)
}
});
Vector<int64_t> split_indices;
const IndexMask split_mask = index_mask_ops::find_indices_from_virtual_array(
sharp_edges.index_range(), VArray<bool>::ForSpan(sharp_edges), 4096, split_indices);
IndexMaskMemory memory;
const IndexMask split_mask = IndexMask::from_bools(sharp_edges, memory);
if (split_mask.is_empty()) {
return;
}


@@ -435,7 +435,7 @@ void report_invalid_uv_map(ReportList *reports)
}
void CurvesConstraintSolver::initialize(const bke::CurvesGeometry &curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const bool use_surface_collision)
{
use_surface_collision_ = use_surface_collision;
@@ -448,7 +448,7 @@ void CurvesConstraintSolver::initialize(const bke::CurvesGeometry &curves,
}
void CurvesConstraintSolver::solve_step(bke::CurvesGeometry &curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const Mesh *surface,
const CurvesSurfaceTransforms &transforms)
{


@@ -4,7 +4,6 @@
#include "curves_sculpt_intern.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_kdtree.h"
#include "BLI_math_matrix_types.hh"
#include "BLI_rand.hh"
@@ -54,7 +53,6 @@
namespace blender::ed::sculpt_paint {
using blender::bke::CurvesGeometry;
using threading::EnumerableThreadSpecific;
/**
* Moves individual points under the brush and does a length preservation step afterwards.
@@ -99,7 +97,7 @@ struct CombOperationExecutor {
CurvesGeometry *curves_orig_ = nullptr;
VArray<float> point_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
float2 brush_pos_prev_re_;
@@ -135,7 +133,7 @@ struct CombOperationExecutor {
point_factors_ = *curves_orig_->attributes().lookup_or_default<float>(
".selection", ATTR_DOMAIN_POINT, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_orig_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_orig_, selected_curve_memory_);
brush_pos_prev_re_ = self_->brush_pos_last_re_;
brush_pos_re_ = stroke_extension.mouse_position;
@@ -151,8 +149,8 @@ struct CombOperationExecutor {
self_->curve_lengths_.reinitialize(curves_orig_->curves_num());
const Span<float> segment_lengths = self_->constraint_solver_.segment_lengths();
const OffsetIndices points_by_curve = curves_orig_->points_by_curve();
threading::parallel_for(curve_selection_.index_range(), 512, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
const Span<float> lengths = segment_lengths.slice(points.drop_back(1));
self_->curve_lengths_[curve_i] = std::accumulate(lengths.begin(), lengths.end(), 0.0f);
@@ -178,9 +176,8 @@ struct CombOperationExecutor {
static_cast<Mesh *>(curves_id_orig_->surface->data) :
nullptr;
Vector<int64_t> indices;
const IndexMask changed_curves_mask = index_mask_ops::find_indices_from_array(changed_curves,
indices);
IndexMaskMemory memory;
const IndexMask changed_curves_mask = IndexMask::from_bools(changed_curves, memory);
self_->constraint_solver_.solve_step(*curves_orig_, changed_curves_mask, surface, transforms_);
curves_orig_->tag_positions_changed();
@@ -222,8 +219,8 @@ struct CombOperationExecutor {
const Span<float> segment_lengths = self_->constraint_solver_.segment_lengths();
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
bool curve_changed = false;
const IndexRange points = points_by_curve[curve_i];
@@ -341,8 +338,8 @@ struct CombOperationExecutor {
const OffsetIndices points_by_curve = curves_orig_->points_by_curve();
const Span<float> segment_lengths = self_->constraint_solver_.segment_lengths();
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
bool curve_changed = false;
const IndexRange points = points_by_curve[curve_i];


@@ -4,7 +4,6 @@
#include "curves_sculpt_intern.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_kdtree.h"
#include "BLI_math_matrix_types.hh"
#include "BLI_rand.hh"
@@ -72,7 +71,7 @@ struct DeleteOperationExecutor {
Curves *curves_id_ = nullptr;
CurvesGeometry *curves_ = nullptr;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
const CurvesSculpt *curves_sculpt_ = nullptr;
@@ -94,8 +93,7 @@ struct DeleteOperationExecutor {
curves_id_ = static_cast<Curves *>(object_->data);
curves_ = &curves_id_->geometry.wrap();
selected_curve_indices_.clear();
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
curves_sculpt_ = ctx_.scene->toolsettings->curves_sculpt;
brush_ = BKE_paint_brush_for_read(&curves_sculpt_->paint);
@@ -129,14 +127,11 @@ struct DeleteOperationExecutor {
BLI_assert_unreachable();
}
Vector<int64_t> indices;
const IndexMask mask_to_delete = index_mask_ops::find_indices_based_on_predicate(
curves_->curves_range(), 4096, indices, [&](const int curve_i) {
return curves_to_delete[curve_i];
});
IndexMaskMemory mask_memory;
const IndexMask mask_to_delete = IndexMask::from_bools(curves_to_delete, mask_memory);
/* Remove deleted curves from the stored deformed positions. */
const Vector<IndexRange> ranges_to_keep = mask_to_delete.extract_ranges_invert(
const Vector<IndexRange> ranges_to_keep = mask_to_delete.to_ranges_invert(
curves_->curves_range());
const OffsetIndices points_by_curve = curves_->points_by_curve();
Vector<float3> new_deformed_positions;
@@ -173,8 +168,8 @@ struct DeleteOperationExecutor {
const float brush_radius_sq_re = pow2f(brush_radius_re);
const OffsetIndices points_by_curve = curves_->points_by_curve();
threading::parallel_for(curve_selection_.index_range(), 512, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
if (points.size() == 1) {
const float3 pos_cu = math::transform_point(brush_transform_inv,
@@ -237,8 +232,8 @@ struct DeleteOperationExecutor {
const float brush_radius_sq_cu = pow2f(brush_radius_cu);
const OffsetIndices points_by_curve = curves_->points_by_curve();
threading::parallel_for(curve_selection_.index_range(), 512, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
if (points.size() == 1) {


@@ -20,7 +20,7 @@
#include "DEG_depsgraph.h"
#include "DEG_depsgraph_query.h"
#include "BLI_index_mask_ops.hh"
#include "BLI_enumerable_thread_specific.hh"
#include "BLI_kdtree.h"
#include "BLI_rand.hh"
#include "BLI_task.hh"
@@ -505,7 +505,7 @@ struct DensitySubtractOperationExecutor {
Curves *curves_id_ = nullptr;
CurvesGeometry *curves_ = nullptr;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
Object *surface_ob_orig_ = nullptr;
@@ -568,7 +568,7 @@ struct DensitySubtractOperationExecutor {
minimum_distance_ = brush_->curves_sculpt_settings->minimum_distance;
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
transforms_ = CurvesSurfaceTransforms(*object_, curves_id_->surface);
const eBrushFalloffShape falloff_shape = static_cast<eBrushFalloffShape>(
@@ -585,10 +585,10 @@ struct DensitySubtractOperationExecutor {
root_points_kdtree_ = BLI_kdtree_3d_new(curve_selection_.size());
BLI_SCOPED_DEFER([&]() { BLI_kdtree_3d_free(root_points_kdtree_); });
for (const int curve_i : curve_selection_) {
curve_selection_.foreach_index([&](const int curve_i) {
const float3 &pos_cu = self_->deformed_root_positions_[curve_i];
BLI_kdtree_3d_insert(root_points_kdtree_, curve_i, pos_cu);
}
});
BLI_kdtree_3d_balance(root_points_kdtree_);
/* Find all curves that should be deleted. */
@@ -603,14 +603,11 @@ struct DensitySubtractOperationExecutor {
BLI_assert_unreachable();
}
Vector<int64_t> indices;
const IndexMask mask_to_delete = index_mask_ops::find_indices_based_on_predicate(
curves_->curves_range(), 4096, indices, [&](const int curve_i) {
return curves_to_delete[curve_i];
});
IndexMaskMemory mask_memory;
const IndexMask mask_to_delete = IndexMask::from_bools(curves_to_delete, mask_memory);
/* Remove deleted curves from the stored deformed root positions. */
const Vector<IndexRange> ranges_to_keep = mask_to_delete.extract_ranges_invert(
const Vector<IndexRange> ranges_to_keep = mask_to_delete.to_ranges_invert(
curves_->curves_range());
BLI_assert(curves_->curves_num() == self_->deformed_root_positions_.size());
Vector<float3> new_deformed_positions;
@@ -676,35 +673,37 @@ struct DensitySubtractOperationExecutor {
});
/* Detect curves that are too close to other existing curves. */
for (const int curve_i : curve_selection_) {
if (curves_to_delete[curve_i]) {
continue;
}
if (!allow_remove_curve[curve_i]) {
continue;
}
const float3 orig_pos_cu = self_->deformed_root_positions_[curve_i];
const float3 pos_cu = math::transform_point(brush_transform, orig_pos_cu);
float2 pos_re;
ED_view3d_project_float_v2_m4(ctx_.region, pos_cu, pos_re, projection.ptr());
const float dist_to_brush_sq_re = math::distance_squared(brush_pos_re_, pos_re);
if (dist_to_brush_sq_re > brush_radius_sq_re) {
continue;
}
BLI_kdtree_3d_range_search_cb_cpp(
root_points_kdtree_,
orig_pos_cu,
minimum_distance_,
[&](const int other_curve_i, const float * /*co*/, float /*dist_sq*/) {
if (other_curve_i == curve_i) {
curve_selection_.foreach_segment([&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
if (curves_to_delete[curve_i]) {
continue;
}
if (!allow_remove_curve[curve_i]) {
continue;
}
const float3 orig_pos_cu = self_->deformed_root_positions_[curve_i];
const float3 pos_cu = math::transform_point(brush_transform, orig_pos_cu);
float2 pos_re;
ED_view3d_project_float_v2_m4(ctx_.region, pos_cu, pos_re, projection.ptr());
const float dist_to_brush_sq_re = math::distance_squared(brush_pos_re_, pos_re);
if (dist_to_brush_sq_re > brush_radius_sq_re) {
continue;
}
BLI_kdtree_3d_range_search_cb_cpp(
root_points_kdtree_,
orig_pos_cu,
minimum_distance_,
[&](const int other_curve_i, const float * /*co*/, float /*dist_sq*/) {
if (other_curve_i == curve_i) {
return true;
}
if (allow_remove_curve[other_curve_i]) {
curves_to_delete[other_curve_i] = true;
}
return true;
}
if (allow_remove_curve[other_curve_i]) {
curves_to_delete[other_curve_i] = true;
}
return true;
});
}
});
}
});
}
void reduce_density_spherical_with_symmetry(MutableSpan<bool> curves_to_delete)
@@ -763,33 +762,35 @@ struct DensitySubtractOperationExecutor {
});
/* Detect curves that are too close to other existing curves. */
for (const int curve_i : curve_selection_) {
if (curves_to_delete[curve_i]) {
continue;
}
if (!allow_remove_curve[curve_i]) {
continue;
}
const float3 &pos_cu = self_->deformed_root_positions_[curve_i];
const float dist_to_brush_sq_cu = math::distance_squared(pos_cu, brush_pos_cu);
if (dist_to_brush_sq_cu > brush_radius_sq_cu) {
continue;
}
curve_selection_.foreach_segment([&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
if (curves_to_delete[curve_i]) {
continue;
}
if (!allow_remove_curve[curve_i]) {
continue;
}
const float3 &pos_cu = self_->deformed_root_positions_[curve_i];
const float dist_to_brush_sq_cu = math::distance_squared(pos_cu, brush_pos_cu);
if (dist_to_brush_sq_cu > brush_radius_sq_cu) {
continue;
}
BLI_kdtree_3d_range_search_cb_cpp(
root_points_kdtree_,
pos_cu,
minimum_distance_,
[&](const int other_curve_i, const float * /*co*/, float /*dist_sq*/) {
if (other_curve_i == curve_i) {
BLI_kdtree_3d_range_search_cb_cpp(
root_points_kdtree_,
pos_cu,
minimum_distance_,
[&](const int other_curve_i, const float * /*co*/, float /*dist_sq*/) {
if (other_curve_i == curve_i) {
return true;
}
if (allow_remove_curve[other_curve_i]) {
curves_to_delete[other_curve_i] = true;
}
return true;
}
if (allow_remove_curve[other_curve_i]) {
curves_to_delete[other_curve_i] = true;
}
return true;
});
}
});
}
});
}
};


@@ -240,7 +240,7 @@ struct CurvesEffectOperationExecutor {
CurvesGeometry *curves_ = nullptr;
VArray<float> curve_selection_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
const Brush *brush_ = nullptr;
@@ -279,7 +279,7 @@ struct CurvesEffectOperationExecutor {
curve_selection_factors_ = *curves_->attributes().lookup_or_default(
".selection", ATTR_DOMAIN_CURVE, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
const CurvesSculpt &curves_sculpt = *ctx_.scene->toolsettings->curves_sculpt;
brush_ = BKE_paint_brush_for_read(&curves_sculpt.paint);


@@ -155,11 +155,11 @@ struct CurvesConstraintSolver {
public:
void initialize(const bke::CurvesGeometry &curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const bool use_surface_collision);
void solve_step(bke::CurvesGeometry &curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const Mesh *surface,
const CurvesSurfaceTransforms &transforms);


@@ -527,8 +527,8 @@ namespace select_grow {
struct GrowOperatorDataPerCurve : NonCopyable, NonMovable {
Curves *curves_id;
Vector<int64_t> selected_point_indices;
Vector<int64_t> unselected_point_indices;
IndexMaskMemory selected_points_memory;
IndexMaskMemory unselected_points_memory;
IndexMask selected_points;
IndexMask unselected_points;
Array<float> distances_to_selected;
@@ -548,36 +548,24 @@ static void update_points_selection(const GrowOperatorDataPerCurve &data,
MutableSpan<float> points_selection)
{
if (distance > 0.0f) {
threading::parallel_for(
data.unselected_points.index_range(), 256, [&](const IndexRange range) {
for (const int i : range) {
const int point_i = data.unselected_points[i];
const float distance_to_selected = data.distances_to_selected[i];
const float selection = distance_to_selected <= distance ? 1.0f : 0.0f;
points_selection[point_i] = selection;
}
data.unselected_points.foreach_index(
GrainSize(256), [&](const int point_i, const int index_pos) {
const float distance_to_selected = data.distances_to_selected[index_pos];
const float selection = distance_to_selected <= distance ? 1.0f : 0.0f;
points_selection[point_i] = selection;
});
threading::parallel_for(data.selected_points.index_range(), 512, [&](const IndexRange range) {
for (const int point_i : data.selected_points.slice(range)) {
points_selection[point_i] = 1.0f;
}
});
data.selected_points.foreach_index(
GrainSize(512), [&](const int point_i) { points_selection[point_i] = 1.0f; });
}
else {
threading::parallel_for(data.selected_points.index_range(), 256, [&](const IndexRange range) {
for (const int i : range) {
const int point_i = data.selected_points[i];
const float distance_to_unselected = data.distances_to_unselected[i];
const float selection = distance_to_unselected <= -distance ? 0.0f : 1.0f;
points_selection[point_i] = selection;
}
});
threading::parallel_for(
data.unselected_points.index_range(), 512, [&](const IndexRange range) {
for (const int point_i : data.unselected_points.slice(range)) {
points_selection[point_i] = 0.0f;
}
data.selected_points.foreach_index(
GrainSize(256), [&](const int point_i, const int index_pos) {
const float distance_to_unselected = data.distances_to_unselected[index_pos];
const float selection = distance_to_unselected <= -distance ? 0.0f : 1.0f;
points_selection[point_i] = selection;
});
data.unselected_points.foreach_index(
GrainSize(512), [&](const int point_i) { points_selection[point_i] = 0.0f; });
}
}
@@ -646,9 +634,9 @@ static void select_grow_invoke_per_curve(const Curves &curves_id,
/* Find indices of selected and unselected points. */
curve_op_data.selected_points = curves::retrieve_selected_points(
curves_id, curve_op_data.selected_point_indices);
curve_op_data.unselected_points = curve_op_data.selected_points.invert(
curves.points_range(), curve_op_data.unselected_point_indices);
curves_id, curve_op_data.selected_points_memory);
curve_op_data.unselected_points = curve_op_data.selected_points.complement(
curves.points_range(), curve_op_data.unselected_points_memory);
threading::parallel_invoke(
1024 < curve_op_data.selected_points.size() + curve_op_data.unselected_points.size(),
@@ -656,10 +644,10 @@ static void select_grow_invoke_per_curve(const Curves &curves_id,
/* Build KD-tree for the selected points. */
KDTree_3d *kdtree = BLI_kdtree_3d_new(curve_op_data.selected_points.size());
BLI_SCOPED_DEFER([&]() { BLI_kdtree_3d_free(kdtree); });
for (const int point_i : curve_op_data.selected_points) {
curve_op_data.selected_points.foreach_index([&](const int point_i) {
const float3 &position = positions[point_i];
BLI_kdtree_3d_insert(kdtree, point_i, position);
}
});
BLI_kdtree_3d_balance(kdtree);
/* For each unselected point, compute the distance to the closest selected point. */
@@ -679,10 +667,10 @@ static void select_grow_invoke_per_curve(const Curves &curves_id,
/* Build KD-tree for the unselected points. */
KDTree_3d *kdtree = BLI_kdtree_3d_new(curve_op_data.unselected_points.size());
BLI_SCOPED_DEFER([&]() { BLI_kdtree_3d_free(kdtree); });
for (const int point_i : curve_op_data.unselected_points) {
curve_op_data.unselected_points.foreach_index([&](const int point_i) {
const float3 &position = positions[point_i];
BLI_kdtree_3d_insert(kdtree, point_i, position);
}
});
BLI_kdtree_3d_balance(kdtree);
/* For each selected point, compute the distance to the closest unselected point. */
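
Both KD-tree filling loops above switch from range-based `for` over the mask to the callback-based `foreach_index`. This follows from the new nested storage: a mask is a sequence of segments, and dense range segments can be iterated without reading any index memory at all, which range-based iteration over a flat array cannot exploit. A rough, self-contained sketch of that shape (the names and layout here are illustrative only, not the real BLI API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-in: each segment is either a dense range or an
// explicit list of indices. The real IndexMask uses a more compact layout.
struct IndexMaskSegmentSketch {
  int64_t range_start = 0;
  int64_t range_size = 0;        /* Used when `indices` is empty. */
  std::vector<int64_t> indices;  /* Used for irregular segments. */
};

struct IndexMaskSketch {
  std::vector<IndexMaskSegmentSketch> segments;

  /* Callback style replaces range-based for loops: dense range segments
   * are iterated arithmetically, with no index array access. */
  template<typename Fn> void foreach_index(Fn &&fn) const
  {
    for (const IndexMaskSegmentSketch &segment : segments) {
      if (segment.indices.empty()) {
        for (int64_t i = 0; i < segment.range_size; i++) {
          fn(segment.range_start + i);
        }
      }
      else {
        for (const int64_t i : segment.indices) {
          fn(i);
        }
      }
    }
  }
};

static int64_t sum_mask(const IndexMaskSketch &mask)
{
  int64_t sum = 0;
  mask.foreach_index([&](const int64_t i) { sum += i; });
  return sum;
}
```

The same segment structure is what `foreach_segment` hands to its callback directly, which is why the `GrainSize` overloads can parallelize over segments without the caller writing a `threading::parallel_for` by hand.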

View File

@@ -4,7 +4,6 @@
#include "curves_sculpt_intern.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_math_matrix_types.hh"
#include "BLI_task.hh"
#include "BLI_vector.hh"
@@ -67,7 +66,7 @@ struct PinchOperationExecutor {
CurvesGeometry *curves_ = nullptr;
VArray<float> point_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
CurvesSurfaceTransforms transforms_;
@@ -107,7 +106,7 @@ struct PinchOperationExecutor {
point_factors_ = *curves_->attributes().lookup_or_default<float>(
".selection", ATTR_DOMAIN_POINT, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
brush_pos_re_ = stroke_extension.mouse_position;
const eBrushFalloffShape falloff_shape = static_cast<eBrushFalloffShape>(
@@ -139,9 +138,8 @@ struct PinchOperationExecutor {
BLI_assert_unreachable();
}
Vector<int64_t> indices;
const IndexMask changed_curves_mask = index_mask_ops::find_indices_from_array(changed_curves,
indices);
IndexMaskMemory memory;
const IndexMask changed_curves_mask = IndexMask::from_bools(changed_curves, memory);
const Mesh *surface = curves_id_->surface && curves_id_->surface->type == OB_MESH ?
static_cast<const Mesh *>(curves_id_->surface->data) :
nullptr;
@@ -176,8 +174,8 @@ struct PinchOperationExecutor {
const float brush_radius_re = brush_radius_base_re_ * brush_radius_factor_;
const float brush_radius_sq_re = pow2f(brush_radius_re);
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
for (const int point_i : points.drop_front(1)) {
const float3 old_pos_cu = deformation.positions[point_i];
@@ -249,8 +247,8 @@ struct PinchOperationExecutor {
bke::crazyspace::get_evaluated_curves_deformation(*ctx_.depsgraph, *object_);
const OffsetIndices points_by_curve = curves_->points_by_curve();
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
curve_selection_.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
for (const int point_i : points.drop_front(1)) {
const float3 old_pos_cu = deformation.positions[point_i];

View File

@@ -19,7 +19,6 @@
#include "WM_api.h"
#include "BLI_index_mask_ops.hh"
#include "BLI_length_parameterize.hh"
#include "BLI_math_matrix.hh"
#include "BLI_task.hh"
@@ -57,7 +56,7 @@ struct PuffOperationExecutor {
CurvesGeometry *curves_ = nullptr;
VArray<float> point_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
const CurvesSculpt *curves_sculpt_ = nullptr;
@@ -106,7 +105,7 @@ struct PuffOperationExecutor {
point_factors_ = *curves_->attributes().lookup_or_default<float>(
".selection", ATTR_DOMAIN_POINT, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
falloff_shape_ = static_cast<eBrushFalloffShape>(brush_->falloff_shape);
@@ -164,9 +163,11 @@ struct PuffOperationExecutor {
changed_curves_indices.append(curve_selection_[select_i]);
}
}
IndexMaskMemory memory;
const IndexMask changed_curves_mask = IndexMask::from_indices<int64_t>(changed_curves_indices,
memory);
self_->constraint_solver_.solve_step(
*curves_, IndexMask(changed_curves_indices), surface_, transforms_);
self_->constraint_solver_.solve_step(*curves_, changed_curves_mask, surface_, transforms_);
curves_->tag_positions_changed();
DEG_id_tag_update(&curves_id_->id, ID_RECALC_GEOMETRY);

View File

@@ -1,7 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_index_mask_ops.hh"
#include "BKE_curves.hh"
#include "curves_sculpt_intern.hh"

View File

@@ -111,7 +111,7 @@ struct SlideOperationExecutor {
BVHTreeFromMesh surface_bvh_eval_;
VArray<float> curve_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
float2 brush_pos_re_;
@@ -157,7 +157,7 @@ struct SlideOperationExecutor {
curve_factors_ = *curves_orig_->attributes().lookup_or_default(
".selection", ATTR_DOMAIN_CURVE, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_orig_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_orig_, selected_curve_memory_);
brush_pos_re_ = stroke_extension.mouse_position;
@@ -269,35 +269,37 @@ struct SlideOperationExecutor {
const float brush_radius_sq_cu = pow2f(brush_radius_cu);
const Span<int> offsets = curves_orig_->offsets();
for (const int curve_i : curve_selection_) {
const int first_point_i = offsets[curve_i];
const float3 old_pos_cu = self_->initial_deformed_positions_cu_[first_point_i];
const float dist_to_brush_sq_cu = math::distance_squared(old_pos_cu, brush_pos_cu);
if (dist_to_brush_sq_cu > brush_radius_sq_cu) {
/* Root point is too far away from curve center. */
continue;
}
const float dist_to_brush_cu = std::sqrt(dist_to_brush_sq_cu);
const float radius_falloff = BKE_brush_curve_strength(
brush_, dist_to_brush_cu, brush_radius_cu);
curve_selection_.foreach_segment([&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const int first_point_i = offsets[curve_i];
const float3 old_pos_cu = self_->initial_deformed_positions_cu_[first_point_i];
const float dist_to_brush_sq_cu = math::distance_squared(old_pos_cu, brush_pos_cu);
if (dist_to_brush_sq_cu > brush_radius_sq_cu) {
/* Root point is too far away from curve center. */
continue;
}
const float dist_to_brush_cu = std::sqrt(dist_to_brush_sq_cu);
const float radius_falloff = BKE_brush_curve_strength(
brush_, dist_to_brush_cu, brush_radius_cu);
const float2 uv = surface_uv_coords[curve_i];
ReverseUVSampler::Result result = reverse_uv_sampler_orig.sample(uv);
if (result.type != ReverseUVSampler::ResultType::Ok) {
/* The curve does not have a valid surface attachment. */
found_invalid_uv_mapping_.store(true);
continue;
}
/* Compute the normal at the initial surface position. */
const float3 point_no = geometry::compute_surface_point_normal(
surface_looptris_orig_[result.looptri_index],
result.bary_weights,
corner_normals_orig_su_);
const float3 normal_cu = math::normalize(
math::transform_point(transforms_.surface_to_curves_normal, point_no));
const float2 uv = surface_uv_coords[curve_i];
ReverseUVSampler::Result result = reverse_uv_sampler_orig.sample(uv);
if (result.type != ReverseUVSampler::ResultType::Ok) {
/* The curve does not have a valid surface attachment. */
found_invalid_uv_mapping_.store(true);
continue;
}
/* Compute the normal at the initial surface position. */
const float3 point_no = geometry::compute_surface_point_normal(
surface_looptris_orig_[result.looptri_index],
result.bary_weights,
corner_normals_orig_su_);
const float3 normal_cu = math::normalize(
math::transform_point(transforms_.surface_to_curves_normal, point_no));
r_curves_to_slide.append({curve_i, radius_falloff, normal_cu});
}
r_curves_to_slide.append({curve_i, radius_falloff, normal_cu});
}
});
}
void slide_with_symmetry()

View File

@@ -44,7 +44,7 @@ struct SmoothOperationExecutor {
CurvesGeometry *curves_ = nullptr;
VArray<float> point_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
const CurvesSculpt *curves_sculpt_ = nullptr;
@@ -79,7 +79,7 @@ struct SmoothOperationExecutor {
point_factors_ = *curves_->attributes().lookup_or_default<float>(
".selection", ATTR_DOMAIN_POINT, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
transforms_ = CurvesSurfaceTransforms(*object_, curves_id_->surface);
const eBrushFalloffShape falloff_shape = static_cast<eBrushFalloffShape>(
@@ -139,29 +139,27 @@ struct SmoothOperationExecutor {
bke::crazyspace::get_evaluated_curves_deformation(*ctx_.depsgraph, *object_);
const OffsetIndices points_by_curve = curves_->points_by_curve();
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
const IndexRange points = points_by_curve[curve_i];
for (const int point_i : points) {
const float3 &pos_cu = math::transform_point(brush_transform_inv,
deformation.positions[point_i]);
float2 pos_re;
ED_view3d_project_float_v2_m4(ctx_.region, pos_cu, pos_re, projection.ptr());
const float dist_to_brush_sq_re = math::distance_squared(pos_re, brush_pos_re_);
if (dist_to_brush_sq_re > brush_radius_sq_re) {
continue;
}
const float dist_to_brush_re = std::sqrt(dist_to_brush_sq_re);
const float radius_falloff = BKE_brush_curve_strength(
brush_, dist_to_brush_re, brush_radius_re);
/* Used to make the brush easier to use. Otherwise a strength of 1 would be way too
* large. */
const float weight_factor = 0.1f;
const float weight = weight_factor * brush_strength_ * radius_falloff *
point_factors_[point_i];
math::max_inplace(r_point_smooth_factors[point_i], weight);
curve_selection_.foreach_index(GrainSize(256), [&](const int curve_i) {
const IndexRange points = points_by_curve[curve_i];
for (const int point_i : points) {
const float3 &pos_cu = math::transform_point(brush_transform_inv,
deformation.positions[point_i]);
float2 pos_re;
ED_view3d_project_float_v2_m4(ctx_.region, pos_cu, pos_re, projection.ptr());
const float dist_to_brush_sq_re = math::distance_squared(pos_re, brush_pos_re_);
if (dist_to_brush_sq_re > brush_radius_sq_re) {
continue;
}
const float dist_to_brush_re = std::sqrt(dist_to_brush_sq_re);
const float radius_falloff = BKE_brush_curve_strength(
brush_, dist_to_brush_re, brush_radius_re);
/* Used to make the brush easier to use. Otherwise a strength of 1 would be way too
* large. */
const float weight_factor = 0.1f;
const float weight = weight_factor * brush_strength_ * radius_falloff *
point_factors_[point_i];
math::max_inplace(r_point_smooth_factors[point_i], weight);
}
});
}
@@ -199,26 +197,24 @@ struct SmoothOperationExecutor {
bke::crazyspace::get_evaluated_curves_deformation(*ctx_.depsgraph, *object_);
const OffsetIndices points_by_curve = curves_->points_by_curve();
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection_.slice(range)) {
const IndexRange points = points_by_curve[curve_i];
for (const int point_i : points) {
const float3 &pos_cu = deformation.positions[point_i];
const float dist_to_brush_sq_cu = math::distance_squared(pos_cu, brush_pos_cu);
if (dist_to_brush_sq_cu > brush_radius_sq_cu) {
continue;
}
const float dist_to_brush_cu = std::sqrt(dist_to_brush_sq_cu);
const float radius_falloff = BKE_brush_curve_strength(
brush_, dist_to_brush_cu, brush_radius_cu);
/* Used to make the brush easier to use. Otherwise a strength of 1 would be way too
* large. */
const float weight_factor = 0.1f;
const float weight = weight_factor * brush_strength_ * radius_falloff *
point_factors_[point_i];
math::max_inplace(r_point_smooth_factors[point_i], weight);
curve_selection_.foreach_index(GrainSize(256), [&](const int curve_i) {
const IndexRange points = points_by_curve[curve_i];
for (const int point_i : points) {
const float3 &pos_cu = deformation.positions[point_i];
const float dist_to_brush_sq_cu = math::distance_squared(pos_cu, brush_pos_cu);
if (dist_to_brush_sq_cu > brush_radius_sq_cu) {
continue;
}
const float dist_to_brush_cu = std::sqrt(dist_to_brush_sq_cu);
const float radius_falloff = BKE_brush_curve_strength(
brush_, dist_to_brush_cu, brush_radius_cu);
/* Used to make the brush easier to use. Otherwise a strength of 1 would be way too
* large. */
const float weight_factor = 0.1f;
const float weight = weight_factor * brush_strength_ * radius_falloff *
point_factors_[point_i];
math::max_inplace(r_point_smooth_factors[point_i], weight);
}
});
}
@@ -227,9 +223,10 @@ struct SmoothOperationExecutor {
{
const OffsetIndices points_by_curve = curves_->points_by_curve();
MutableSpan<float3> positions = curves_->positions_for_write();
threading::parallel_for(curve_selection_.index_range(), 256, [&](const IndexRange range) {
curve_selection_.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
Vector<float3> old_positions;
for (const int curve_i : curve_selection_.slice(range)) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
old_positions.clear();
old_positions.extend(positions.slice(points));

View File

@@ -4,7 +4,6 @@
#include "curves_sculpt_intern.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_kdtree.h"
#include "BLI_length_parameterize.hh"
#include "BLI_math_matrix_types.hh"
@@ -86,7 +85,7 @@ struct SnakeHookOperatorExecutor {
CurvesGeometry *curves_ = nullptr;
VArray<float> curve_factors_;
Vector<int64_t> selected_curve_indices_;
IndexMaskMemory selected_curve_memory_;
IndexMask curve_selection_;
CurvesSurfaceTransforms transforms_;
@@ -125,7 +124,7 @@ struct SnakeHookOperatorExecutor {
curve_factors_ = *curves_->attributes().lookup_or_default(
".selection", ATTR_DOMAIN_CURVE, 1.0f);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_indices_);
curve_selection_ = curves::retrieve_selected_curves(*curves_id_, selected_curve_memory_);
brush_pos_prev_re_ = self.last_mouse_position_re_;
brush_pos_re_ = stroke_extension.mouse_position;

View File

@@ -13,7 +13,6 @@
#include "BLI_array.hh"
#include "BLI_function_ref.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_math_base.h"
#include "BLI_math_color.h"
#include "BLI_vector.hh"
@@ -42,6 +41,7 @@ using blender::ColorGeometry4f;
using blender::FunctionRef;
using blender::GMutableSpan;
using blender::IndexMask;
using blender::IndexMaskMemory;
using blender::IndexRange;
using blender::Vector;
@@ -158,7 +158,7 @@ void PAINT_OT_vertex_color_from_weight(wmOperatorType *ot)
static IndexMask get_selected_indices(const Mesh &mesh,
const eAttrDomain domain,
Vector<int64_t> &indices)
IndexMaskMemory &memory)
{
using namespace blender;
const bke::AttributeAccessor attributes = mesh.attributes();
@@ -166,14 +166,12 @@ static IndexMask get_selected_indices(const Mesh &mesh,
if (mesh.editflag & ME_EDIT_PAINT_FACE_SEL) {
const VArray<bool> selection = *attributes.lookup_or_default<bool>(
".select_poly", domain, false);
return index_mask_ops::find_indices_from_virtual_array(
selection.index_range(), selection, 4096, indices);
return IndexMask::from_bools(selection, memory);
}
if (mesh.editflag & ME_EDIT_PAINT_VERT_SEL) {
const VArray<bool> selection = *attributes.lookup_or_default<bool>(
".select_vert", domain, false);
return index_mask_ops::find_indices_from_virtual_array(
selection.index_range(), selection, 4096, indices);
return IndexMask::from_bools(selection, memory);
}
return IndexMask(attributes.domain_size(domain));
}
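
`get_selected_indices` above now builds the mask with `IndexMask::from_bools` instead of `index_mask_ops::find_indices_from_virtual_array`. Semantically this is just "the indices of all true values"; a sequential sketch of that contract is below (the real version chunks the input so multiple threads can fill segments independently, per the goals in the commit message):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sequential stand-in for IndexMask::from_bools(bools, memory), returning a
// plain index vector instead of a segmented mask.
static std::vector<int64_t> indices_from_bools(const std::vector<bool> &bools)
{
  std::vector<int64_t> indices;
  for (int64_t i = 0; i < int64_t(bools.size()); i++) {
    if (bools[i]) {
      indices.push_back(i);
    }
  }
  return indices;
}
```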
@@ -207,8 +205,8 @@ static bool vertex_color_smooth(Object *ob)
return false;
}
Vector<int64_t> indices;
const IndexMask selection = get_selected_indices(*me, ATTR_DOMAIN_CORNER, indices);
IndexMaskMemory memory;
const IndexMask selection = get_selected_indices(*me, ATTR_DOMAIN_CORNER, memory);
face_corner_color_equalize_verts(*me, selection);
@@ -265,15 +263,15 @@ static void transform_active_color_data(
return;
}
Vector<int64_t> indices;
const IndexMask selection = get_selected_indices(mesh, color_attribute.domain, indices);
IndexMaskMemory memory;
const IndexMask selection = get_selected_indices(mesh, color_attribute.domain, memory);
threading::parallel_for(selection.index_range(), 1024, [&](IndexRange range) {
selection.foreach_segment(GrainSize(1024), [&](const IndexMaskSegment segment) {
color_attribute.varray.type().to_static_type_tag<ColorGeometry4f, ColorGeometry4b>(
[&](auto type_tag) {
using namespace blender;
using T = typename decltype(type_tag)::type;
for ([[maybe_unused]] const int i : selection.slice(range)) {
for ([[maybe_unused]] const int i : segment) {
if constexpr (std::is_void_v<T>) {
BLI_assert_unreachable();
}

View File

@@ -1,6 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_index_mask_ops.hh"
#include "BLI_math_matrix.hh"
#include "BLI_virtual_array.hh"
@@ -326,7 +325,7 @@ bool GeometryDataSource::has_selection_filter() const
}
}
IndexMask GeometryDataSource::apply_selection_filter(Vector<int64_t> &indices) const
IndexMask GeometryDataSource::apply_selection_filter(IndexMaskMemory &memory) const
{
std::lock_guard lock{mutex_};
const IndexMask full_range(this->tot_rows());
@@ -363,8 +362,7 @@ IndexMask GeometryDataSource::apply_selection_filter(Vector<int64_t> &indices) c
}),
ATTR_DOMAIN_POINT,
domain_);
return index_mask_ops::find_indices_from_virtual_array(
full_range, selection, 1024, indices);
return IndexMask::from_bools(selection, memory);
}
if (mesh_eval->totvert == bm->totvert) {
@@ -377,8 +375,7 @@ IndexMask GeometryDataSource::apply_selection_filter(Vector<int64_t> &indices) c
}),
ATTR_DOMAIN_POINT,
domain_);
return index_mask_ops::find_indices_from_virtual_array(
full_range, selection, 2048, indices);
return IndexMask::from_bools(selection, memory);
}
return full_range;
@@ -390,9 +387,9 @@ IndexMask GeometryDataSource::apply_selection_filter(Vector<int64_t> &indices) c
const Curves &curves_id = *component.get_for_read();
switch (domain_) {
case ATTR_DOMAIN_POINT:
return curves::retrieve_selected_points(curves_id, indices);
return curves::retrieve_selected_points(curves_id, memory);
case ATTR_DOMAIN_CURVE:
return curves::retrieve_selected_curves(curves_id, indices);
return curves::retrieve_selected_curves(curves_id, memory);
default:
BLI_assert_unreachable();
}

View File

@@ -69,7 +69,7 @@ class GeometryDataSource : public DataSource {
}
bool has_selection_filter() const override;
IndexMask apply_selection_filter(Vector<int64_t> &indices) const;
IndexMask apply_selection_filter(IndexMaskMemory &memory) const;
void foreach_default_column_ids(
FunctionRef<void(const SpreadsheetColumnID &, bool is_extra)> fn) const override;

View File

@@ -25,22 +25,19 @@
namespace blender::ed::spreadsheet {
template<typename T, typename OperationFn>
static void apply_filter_operation(const VArray<T> &data,
OperationFn check_fn,
const IndexMask mask,
Vector<int64_t> &new_indices)
static IndexMask apply_filter_operation(const VArray<T> &data,
OperationFn check_fn,
const IndexMask &mask,
IndexMaskMemory &memory)
{
for (const int64_t i : mask) {
if (check_fn(data[i])) {
new_indices.append(i);
}
}
return IndexMask::from_predicate(
mask, GrainSize(1024), memory, [&](const int64_t i) { return check_fn(data[i]); });
}
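
With this change `apply_filter_operation` returns the filtered mask directly via `IndexMask::from_predicate`, which keeps the indices of `mask` for which the predicate holds and evaluates the predicate in parallel chunks of the given `GrainSize`. A sequential stand-in for those semantics, with a plain index vector in place of the segmented mask:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for IndexMask::from_predicate(mask, grain_size, memory, fn):
// keep only the indices of the input mask that satisfy the predicate.
template<typename Predicate>
static std::vector<int64_t> mask_from_predicate(const std::vector<int64_t> &mask,
                                                Predicate &&predicate)
{
  std::vector<int64_t> result;
  for (const int64_t i : mask) {
    if (predicate(i)) {
      result.push_back(i);
    }
  }
  return result;
}
```

This is why the long `switch` below can simply `return apply_filter_operation(...)` per case instead of appending into a shared `new_indices` vector: each filter pass produces a fresh mask from the previous one.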
static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
const Map<StringRef, const ColumnValues *> &columns,
const IndexMask prev_mask,
Vector<int64_t> &new_indices)
static IndexMask apply_row_filter(const SpreadsheetRowFilter &row_filter,
const Map<StringRef, const ColumnValues *> &columns,
const IndexMask prev_mask,
IndexMaskMemory &memory)
{
const ColumnValues &column = *columns.lookup(row_filter.column_name);
const GVArray &column_data = column.data();
@@ -49,64 +46,63 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
switch (row_filter.operation) {
case SPREADSHEET_ROW_FILTER_EQUAL: {
const float threshold = row_filter.threshold;
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float>(),
[&](const float cell) { return std::abs(cell - value) < threshold; },
prev_mask,
new_indices);
break;
memory);
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float>(),
[&](const float cell) { return cell > value; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float>(),
[&](const float cell) { return cell < value; },
prev_mask,
new_indices);
memory);
break;
}
}
}
else if (column_data.type().is<bool>()) {
const bool value = (row_filter.flag & SPREADSHEET_ROW_FILTER_BOOL_VALUE) != 0;
apply_filter_operation(
return apply_filter_operation(
column_data.typed<bool>(),
[&](const bool cell) { return cell == value; },
prev_mask,
new_indices);
memory);
}
else if (column_data.type().is<int8_t>()) {
const int value = row_filter.value_int;
switch (row_filter.operation) {
case SPREADSHEET_ROW_FILTER_EQUAL: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<int8_t>(),
[&](const int cell) { return cell == value; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<int8_t>(),
[value](const int cell) { return cell > value; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<int8_t>(),
[&](const int cell) { return cell < value; },
prev_mask,
new_indices);
memory);
break;
}
}
@@ -115,27 +111,27 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
const int value = row_filter.value_int;
switch (row_filter.operation) {
case SPREADSHEET_ROW_FILTER_EQUAL: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<int>(),
[&](const int cell) { return cell == value; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<int>(),
[value](const int cell) { return cell > value; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<int>(),
[&](const int cell) { return cell < value; },
prev_mask,
new_indices);
memory);
break;
}
}
@@ -149,7 +145,7 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
column_data.typed<int2>(),
[&](const int2 cell) { return math::distance_squared(cell, value) <= threshold_sq; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
@@ -157,7 +153,7 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
column_data.typed<int2>(),
[&](const int2 cell) { return cell.x > value.x && cell.y > value.y; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
@@ -165,7 +161,7 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
column_data.typed<int2>(),
[&](const int2 cell) { return cell.x < value.x && cell.y < value.y; },
prev_mask,
new_indices);
memory);
break;
}
}
@@ -175,27 +171,27 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
switch (row_filter.operation) {
case SPREADSHEET_ROW_FILTER_EQUAL: {
const float threshold_sq = pow2f(row_filter.threshold);
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float2>(),
[&](const float2 cell) { return math::distance_squared(cell, value) <= threshold_sq; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float2>(),
[&](const float2 cell) { return cell.x > value.x && cell.y > value.y; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float2>(),
[&](const float2 cell) { return cell.x < value.x && cell.y < value.y; },
prev_mask,
new_indices);
memory);
break;
}
}
@@ -205,31 +201,31 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
switch (row_filter.operation) {
case SPREADSHEET_ROW_FILTER_EQUAL: {
const float threshold_sq = pow2f(row_filter.threshold);
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float3>(),
[&](const float3 cell) { return math::distance_squared(cell, value) <= threshold_sq; },
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float3>(),
[&](const float3 cell) {
return cell.x > value.x && cell.y > value.y && cell.z > value.z;
},
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<float3>(),
[&](const float3 cell) {
return cell.x < value.x && cell.y < value.y && cell.z < value.z;
},
prev_mask,
new_indices);
memory);
break;
}
}
@@ -239,33 +235,33 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
switch (row_filter.operation) {
case SPREADSHEET_ROW_FILTER_EQUAL: {
const float threshold_sq = pow2f(row_filter.threshold);
apply_filter_operation(
return apply_filter_operation(
column_data.typed<ColorGeometry4f>(),
[&](const ColorGeometry4f cell) {
return math::distance_squared(float4(cell), float4(value)) <= threshold_sq;
},
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<ColorGeometry4f>(),
[&](const ColorGeometry4f cell) {
return cell.r > value.r && cell.g > value.g && cell.b > value.b && cell.a > value.a;
},
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<ColorGeometry4f>(),
[&](const ColorGeometry4f cell) {
return cell.r < value.r && cell.g < value.g && cell.b < value.b && cell.a < value.a;
},
prev_mask,
new_indices);
memory);
break;
}
}
@@ -277,7 +273,7 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
const float4 value_floats = {
float(value.r), float(value.g), float(value.b), float(value.a)};
const float threshold_sq = pow2f(row_filter.threshold);
apply_filter_operation(
return apply_filter_operation(
column_data.typed<ColorGeometry4b>(),
[&](const ColorGeometry4b cell_bytes) {
const ColorGeometry4f cell = cell_bytes.decode();
@@ -286,36 +282,36 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
return math::distance_squared(value_floats, cell_floats) <= threshold_sq;
},
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_GREATER: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<ColorGeometry4b>(),
[&](const ColorGeometry4b cell_bytes) {
const ColorGeometry4f cell = cell_bytes.decode();
return cell.r > value.r && cell.g > value.g && cell.b > value.b && cell.a > value.a;
},
prev_mask,
new_indices);
memory);
break;
}
case SPREADSHEET_ROW_FILTER_LESS: {
apply_filter_operation(
return apply_filter_operation(
column_data.typed<ColorGeometry4b>(),
[&](const ColorGeometry4b cell_bytes) {
const ColorGeometry4f cell = cell_bytes.decode();
return cell.r < value.r && cell.g < value.g && cell.b < value.b && cell.a < value.a;
},
prev_mask,
new_indices);
memory);
break;
}
}
}
else if (column_data.type().is<bke::InstanceReference>()) {
const StringRef value = row_filter.value_string;
apply_filter_operation(
return apply_filter_operation(
column_data.typed<bke::InstanceReference>(),
[&](const bke::InstanceReference cell) {
switch (cell.type()) {
@@ -336,8 +332,9 @@ static void apply_row_filter(const SpreadsheetRowFilter &row_filter,
return false;
},
prev_mask,
new_indices);
memory);
}
return prev_mask;
}
static bool use_row_filters(const SpaceSpreadsheet &sspreadsheet)
@@ -378,15 +375,13 @@ IndexMask spreadsheet_filter_rows(const SpaceSpreadsheet &sspreadsheet,
return IndexMask(tot_rows);
}
IndexMaskMemory &mask_memory = scope.construct<IndexMaskMemory>();
IndexMask mask(tot_rows);
Vector<int64_t> mask_indices;
mask_indices.reserve(tot_rows);
if (use_selection) {
const GeometryDataSource *geometry_data_source = dynamic_cast<const GeometryDataSource *>(
&data_source);
mask = geometry_data_source->apply_selection_filter(mask_indices);
mask = geometry_data_source->apply_selection_filter(mask_memory);
}
if (use_filters) {
@@ -400,21 +395,12 @@ IndexMask spreadsheet_filter_rows(const SpaceSpreadsheet &sspreadsheet,
if (!columns.contains(row_filter->column_name)) {
continue;
}
Vector<int64_t> new_indices;
new_indices.reserve(mask_indices.size());
apply_row_filter(*row_filter, columns, mask, new_indices);
std::swap(new_indices, mask_indices);
mask = IndexMask(mask_indices);
mask = apply_row_filter(*row_filter, columns, mask, mask_memory);
}
}
}
if (mask_indices.is_empty()) {
BLI_assert(mask.is_empty() || mask.is_range());
return mask;
}
return IndexMask(scope.add_value(std::move(mask_indices)));
return mask;
}
SpreadsheetRowFilter *spreadsheet_row_filter_new()

View File

@@ -5,7 +5,6 @@
*/
#include "BLI_array.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_inplace_priority_queue.hh"
#include "BLI_span.hh"
@@ -60,7 +59,7 @@ static void calculate_curve_point_distances_for_proportional_editing(
static void createTransCurvesVerts(bContext * /*C*/, TransInfo *t)
{
MutableSpan<TransDataContainer> trans_data_contrainers(t->data_container, t->data_container_len);
Array<Vector<int64_t>> selected_indices_per_object(t->data_container_len);
IndexMaskMemory memory;
Array<IndexMask> selection_per_object(t->data_container_len);
const bool use_proportional_edit = (t->flag & T_PROP_EDIT_ALL) != 0;
const bool use_connected_only = (t->flag & T_PROP_CONNECTED) != 0;
@@ -75,8 +74,7 @@ static void createTransCurvesVerts(bContext * /*C*/, TransInfo *t)
tc.data_len = curves.point_num;
}
else {
selection_per_object[i] = ed::curves::retrieve_selected_points(
curves, selected_indices_per_object[i]);
selection_per_object[i] = ed::curves::retrieve_selected_points(curves, memory);
tc.data_len = selection_per_object[i].size();
}

View File

@@ -792,13 +792,13 @@ static int gizmo_3d_foreach_selected(const bContext *C,
mat_local = float4x4(obedit->world_to_object) * float4x4(ob_iter->object_to_world);
}
Vector<int64_t> indices;
const IndexMask selected_points = ed::curves::retrieve_selected_points(curves, indices);
IndexMaskMemory memory;
const IndexMask selected_points = ed::curves::retrieve_selected_points(curves, memory);
const Span<float3> positions = deformation.positions;
totsel += selected_points.size();
for (const int point_i : selected_points) {
selected_points.foreach_index([&](const int point_i) {
run_coord_with_matrix(positions[point_i], use_mat_local, mat_local.ptr());
}
});
}
FOREACH_EDIT_OBJECT_END();
}

View File

@@ -279,7 +279,7 @@ class FieldInput : public FieldNode {
* should live at least as long as the passed in #scope. May return null.
*/
virtual GVArray get_varray_for_context(const FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const = 0;
virtual std::string socket_inspection_name() const;
@@ -325,7 +325,7 @@ class FieldContext {
virtual ~FieldContext() = default;
virtual GVArray get_varray_for_input(const FieldInput &field_input,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const;
};
@@ -343,7 +343,7 @@ class FieldEvaluator : NonMovable, NonCopyable {
ResourceScope scope_;
const FieldContext &context_;
const IndexMask mask_;
const IndexMask &mask_;
Vector<GField> fields_to_evaluate_;
Vector<GVMutableArray> dst_varrays_;
Vector<GVArray> evaluated_varrays_;
@@ -361,7 +361,8 @@ class FieldEvaluator : NonMovable, NonCopyable {
}
/** Construct a field evaluator for all indices less than #size. */
FieldEvaluator(const FieldContext &context, const int64_t size) : context_(context), mask_(size)
FieldEvaluator(const FieldContext &context, const int64_t size)
: context_(context), mask_(scope_.construct<IndexMask>(size))
{
}
@@ -485,7 +486,7 @@ class FieldEvaluator : NonMovable, NonCopyable {
*/
Vector<GVArray> evaluate_fields(ResourceScope &scope,
Span<GFieldRef> fields_to_evaluate,
IndexMask mask,
const IndexMask &mask,
const FieldContext &context,
Span<GVMutableArray> dst_varrays = {});
@@ -527,10 +528,10 @@ class IndexFieldInput final : public FieldInput {
public:
IndexFieldInput();
static GVArray get_index_varray(IndexMask mask);
static GVArray get_index_varray(const IndexMask &mask);
GVArray get_varray_for_context(const FieldContext &context,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const final;
uint64_t hash() const override;

View File

@@ -50,8 +50,8 @@ class MultiFunction {
* - Automatic index mask offsetting to avoid large temporary intermediate arrays that are mostly
* unused.
*/
void call_auto(IndexMask mask, Params params, Context context) const;
virtual void call(IndexMask mask, Params params, Context context) const = 0;
void call_auto(const IndexMask &mask, Params params, Context context) const;
virtual void call(const IndexMask &mask, Params params, Context context) const = 0;
virtual uint64_t hash() const
{
@@ -136,11 +136,6 @@ class MultiFunction {
virtual ExecutionHints get_execution_hints() const;
};
inline ParamsBuilder::ParamsBuilder(const MultiFunction &fn, int64_t mask_size)
: ParamsBuilder(fn.signature(), IndexMask(mask_size))
{
}
inline ParamsBuilder::ParamsBuilder(const MultiFunction &fn, const IndexMask *mask)
: ParamsBuilder(fn.signature(), *mask)
{

View File

@@ -60,10 +60,9 @@ struct AllSpanOrSingle {
template<typename... ParamTags, typename... LoadedParams, size_t... I>
auto create_devirtualizers(TypeSequence<ParamTags...> /*param_tags*/,
std::index_sequence<I...> /*indices*/,
const IndexMask &mask,
const std::tuple<LoadedParams...> &loaded_params) const
{
return std::make_tuple(IndexMaskDevirtualizer<true, true>{mask}, [&]() {
return std::make_tuple([&]() {
typedef ParamTags ParamTag;
typedef typename ParamTag::base_type T;
if constexpr (ParamTag::category == ParamCategory::SingleInput) {
@@ -93,10 +92,9 @@ template<size_t... Indices> struct SomeSpanOrSingle {
template<typename... ParamTags, typename... LoadedParams, size_t... I>
auto create_devirtualizers(TypeSequence<ParamTags...> /*param_tags*/,
std::index_sequence<I...> /*indices*/,
const IndexMask &mask,
const std::tuple<LoadedParams...> &loaded_params) const
{
return std::make_tuple(IndexMaskDevirtualizer<true, true>{mask}, [&]() {
return std::make_tuple([&]() {
typedef ParamTags ParamTag;
typedef typename ParamTag::base_type T;
@@ -149,7 +147,7 @@ execute_array(TypeSequence<ParamTags...> /*param_tags*/,
}
}
else {
for (const int32_t i : mask) {
for (const int64_t i : mask) {
element_fn(args[i]...);
}
}
@@ -194,7 +192,7 @@ template<typename... ParamTags, size_t... I, typename ElementFn, typename... Loa
inline void execute_materialized(TypeSequence<ParamTags...> /* param_tags */,
std::index_sequence<I...> /* indices */,
const ElementFn element_fn,
const IndexMask mask,
const IndexMaskSegment mask,
const std::tuple<LoadedParams...> &loaded_params)
{
@@ -241,13 +239,17 @@ inline void execute_materialized(TypeSequence<ParamTags...> /* param_tags */,
}(),
...);
IndexMaskFromSegment index_mask_from_segment;
const int64_t segment_offset = mask.offset();
/* Outer loop over all chunks. */
for (int64_t chunk_start = 0; chunk_start < mask_size; chunk_start += MaxChunkSize) {
const int64_t chunk_end = std::min<int64_t>(chunk_start + MaxChunkSize, mask_size);
const int64_t chunk_size = chunk_end - chunk_start;
const IndexMask sliced_mask = mask.slice(chunk_start, chunk_size);
const IndexMaskSegment sliced_mask = mask.slice(chunk_start, chunk_size);
const int64_t mask_start = sliced_mask[0];
const bool sliced_mask_is_range = sliced_mask.is_range();
const bool sliced_mask_is_range = unique_sorted_indices::non_empty_is_range(
sliced_mask.base_span());
/* Move mutable data into temporary array. */
if (!sliced_mask_is_range) {
@@ -267,6 +269,8 @@ inline void execute_materialized(TypeSequence<ParamTags...> /* param_tags */,
...);
}
const IndexMask *current_segment_mask = nullptr;
execute_materialized_impl(
TypeSequence<ParamTags...>(),
element_fn,
@@ -289,9 +293,13 @@ inline void execute_materialized(TypeSequence<ParamTags...> /* param_tags */,
return arg_info.internal_span_data + mask_start;
}
const GVArrayImpl &varray_impl = *std::get<I>(loaded_params);
if (current_segment_mask == nullptr) {
current_segment_mask = &index_mask_from_segment.update(
{segment_offset, sliced_mask.base_span()});
}
/* As a fallback, do a virtual function call to retrieve all elements in the current
* chunk. The elements are stored in a temporary buffer reused for every chunk. */
varray_impl.materialize_compressed_to_uninitialized(sliced_mask, tmp_buffer);
varray_impl.materialize_compressed_to_uninitialized(*current_segment_mask, tmp_buffer);
/* Remember that this parameter has been materialized, so that the values are
* destructed properly when the chunk is done. */
arg_info.mode = MaterializeArgMode::Materialized;
@@ -373,7 +381,7 @@ inline void execute_materialized(TypeSequence<ParamTags...> /* param_tags */,
template<typename ElementFn, typename ExecPreset, typename... ParamTags, size_t... I>
inline void execute_element_fn_as_multi_function(const ElementFn element_fn,
const ExecPreset exec_preset,
const IndexMask mask,
const IndexMask &mask,
Params params,
TypeSequence<ParamTags...> /*param_tags*/,
std::index_sequence<I...> /*indices*/)
@@ -400,14 +408,32 @@ inline void execute_element_fn_as_multi_function(const ElementFn element_fn,
/* Try execute devirtualized if enabled and the input types allow it. */
bool executed_devirtualized = false;
if constexpr (ExecPreset::use_devirtualization) {
/* Get segments before devirtualization to avoid generating this code multiple times. */
const Vector<std::variant<IndexRange, IndexMaskSegment>, 16> mask_segments =
mask.to_spans_and_ranges<16>();
const auto devirtualizers = exec_preset.create_devirtualizers(
TypeSequence<ParamTags...>(), std::index_sequence<I...>(), mask, loaded_params);
TypeSequence<ParamTags...>(), std::index_sequence<I...>(), loaded_params);
executed_devirtualized = call_with_devirtualized_parameters(
devirtualizers, [&](auto &&...args) {
execute_array(TypeSequence<ParamTags...>(),
std::index_sequence<I...>(),
element_fn,
std::forward<decltype(args)>(args)...);
for (const std::variant<IndexRange, IndexMaskSegment> &segment : mask_segments) {
if (std::holds_alternative<IndexRange>(segment)) {
const auto segment_range = std::get<IndexRange>(segment);
execute_array(TypeSequence<ParamTags...>(),
std::index_sequence<I...>(),
element_fn,
segment_range,
std::forward<decltype(args)>(args)...);
}
else {
const auto segment_indices = std::get<IndexMaskSegment>(segment);
execute_array(TypeSequence<ParamTags...>(),
std::index_sequence<I...>(),
element_fn,
segment_indices,
std::forward<decltype(args)>(args)...);
}
}
});
}
else {
@@ -420,11 +446,13 @@ inline void execute_element_fn_as_multi_function(const ElementFn element_fn,
/* The materialized method is most common because it avoids most virtual function overhead but
* still instantiates the function only once. */
if constexpr (ExecPreset::fallback_mode == exec_presets::FallbackMode::Materialized) {
execute_materialized(TypeSequence<ParamTags...>(),
std::index_sequence<I...>(),
element_fn,
mask,
loaded_params);
mask.foreach_segment([&](const IndexMaskSegment segment) {
execute_materialized(TypeSequence<ParamTags...>(),
std::index_sequence<I...>(),
element_fn,
segment,
loaded_params);
});
}
else {
/* This fallback is slower because it uses virtual method calls for every element. */
@@ -460,7 +488,7 @@ inline auto build_multi_function_call_from_element_fn(const ElementFn element_fn
const ExecPreset exec_preset,
TypeSequence<ParamTags...> /*param_tags*/)
{
return [element_fn, exec_preset](const IndexMask mask, Params params) {
return [element_fn, exec_preset](const IndexMask &mask, Params params) {
execute_element_fn_as_multi_function(element_fn,
exec_preset,
mask,
@@ -488,7 +516,7 @@ template<typename CallFn, typename... ParamTags> class CustomMF : public MultiFu
this->set_signature(&signature_);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
call_fn_(mask, params);
}
@@ -637,7 +665,7 @@ class CustomMF_GenericConstant : public MultiFunction {
public:
CustomMF_GenericConstant(const CPPType &type, const void *value, bool make_value_copy);
~CustomMF_GenericConstant();
void call(IndexMask mask, Params params, Context context) const override;
void call(const IndexMask &mask, Params params, Context context) const override;
uint64_t hash() const override;
bool equals(const MultiFunction &other) const override;
};
@@ -653,7 +681,7 @@ class CustomMF_GenericConstantArray : public MultiFunction {
public:
CustomMF_GenericConstantArray(GSpan array);
void call(IndexMask mask, Params params, Context context) const override;
void call(const IndexMask &mask, Params params, Context context) const override;
};
/**
@@ -672,14 +700,10 @@ template<typename T> class CustomMF_Constant : public MultiFunction {
this->set_signature(&signature_);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
MutableSpan<T> output = params.uninitialized_single_output<T>(0);
mask.to_best_mask_type([&](const auto &mask) {
for (const int64_t i : mask) {
new (&output[i]) T(value_);
}
});
mask.foreach_index_optimized<int64_t>([&](const int64_t i) { new (&output[i]) T(value_); });
}
uint64_t hash() const override
@@ -712,7 +736,7 @@ class CustomMF_DefaultOutput : public MultiFunction {
public:
CustomMF_DefaultOutput(Span<DataType> input_types, Span<DataType> output_types);
void call(IndexMask mask, Params params, Context context) const override;
void call(const IndexMask &mask, Params params, Context context) const override;
};
class CustomMF_GenericCopy : public MultiFunction {
@@ -721,7 +745,7 @@ class CustomMF_GenericCopy : public MultiFunction {
public:
CustomMF_GenericCopy(DataType data_type);
void call(IndexMask mask, Params params, Context context) const override;
void call(const IndexMask &mask, Params params, Context context) const override;
};
} // namespace blender::fn::multi_function

View File

@@ -27,21 +27,20 @@ class ParamsBuilder {
private:
std::unique_ptr<ResourceScope> scope_;
const Signature *signature_;
IndexMask mask_;
const IndexMask &mask_;
int64_t min_array_size_;
Vector<std::variant<GVArray, GMutableSpan, const GVVectorArray *, GVectorArray *>>
actual_params_;
friend class Params;
ParamsBuilder(const Signature &signature, const IndexMask mask)
ParamsBuilder(const Signature &signature, const IndexMask &mask)
: signature_(&signature), mask_(mask), min_array_size_(mask.min_array_size())
{
actual_params_.reserve(signature.params.size());
}
public:
ParamsBuilder(const class MultiFunction &fn, int64_t size);
/**
 * The indices referenced by the #mask have to live longer than the params builder. This is
 * because it might have to destruct elements for all masked indices in the end.

View File

@@ -19,7 +19,7 @@ class ProcedureExecutor : public MultiFunction {
public:
ProcedureExecutor(const Procedure &procedure);
void call(IndexMask mask, Params params, Context context) const override;
void call(const IndexMask &mask, Params params, Context context) const override;
private:
ExecutionHints get_execution_hints() const override;

View File

@@ -1,7 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_array_utils.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_map.hh"
#include "BLI_multi_value_map.hh"
#include "BLI_set.hh"
@@ -84,7 +83,7 @@ static FieldTreeInfo preprocess_field_tree(Span<GFieldRef> entry_fields)
*/
static Vector<GVArray> get_field_context_inputs(
ResourceScope &scope,
const IndexMask mask,
const IndexMask &mask,
const FieldContext &context,
const Span<std::reference_wrapper<const FieldInput>> field_inputs)
{
@@ -279,7 +278,7 @@ static void build_multi_function_procedure_for_fields(mf::Procedure &procedure,
Vector<GVArray> evaluate_fields(ResourceScope &scope,
Span<GFieldRef> fields_to_evaluate,
IndexMask mask,
const IndexMask &mask,
const FieldContext &context,
Span<GVMutableArray> dst_varrays)
{
@@ -423,7 +422,8 @@ Vector<GVArray> evaluate_fields(ResourceScope &scope,
build_multi_function_procedure_for_fields(
procedure, scope, field_tree_info, constant_fields_to_evaluate);
mf::ProcedureExecutor procedure_executor{procedure};
mf::ParamsBuilder mf_params{procedure_executor, 1};
const IndexMask mask(1);
mf::ParamsBuilder mf_params{procedure_executor, &mask};
mf::ContextBuilder mf_context;
/* Provide inputs to the procedure executor. */
@@ -450,7 +450,7 @@ Vector<GVArray> evaluate_fields(ResourceScope &scope,
r_varrays[out_index] = GVArray::ForSingleRef(type, array_size, buffer);
}
procedure_executor.call(IndexRange(1), mf_params, mf_context);
procedure_executor.call(mask, mf_params, mf_context);
}
/* Copy data to supplied destination arrays if necessary. In some cases the evaluation above
@@ -479,10 +479,12 @@ Vector<GVArray> evaluate_fields(ResourceScope &scope,
const CPPType &type = computed_varray.type();
threading::parallel_for(mask.index_range(), 2048, [&](const IndexRange range) {
BUFFER_FOR_CPP_TYPE_VALUE(type, buffer);
for (const int i : mask.slice(range)) {
computed_varray.get_to_uninitialized(i, buffer);
dst_varray.set_by_relocate(i, buffer);
}
mask.slice(range).foreach_segment([&](auto segment) {
for (const int i : segment) {
computed_varray.get_to_uninitialized(i, buffer);
dst_varray.set_by_relocate(i, buffer);
}
});
});
}
r_varrays[out_index] = dst_varray;
@@ -533,7 +535,7 @@ GField make_constant_field(const CPPType &type, const void *value)
}
GVArray FieldContext::get_varray_for_input(const FieldInput &field_input,
IndexMask mask,
const IndexMask &mask,
ResourceScope &scope) const
{
/* By default ask the field input to create the varray. Another field context might overwrite
@@ -546,14 +548,14 @@ IndexFieldInput::IndexFieldInput() : FieldInput(CPPType::get<int>(), "Index")
category_ = Category::Generated;
}
GVArray IndexFieldInput::get_index_varray(IndexMask mask)
GVArray IndexFieldInput::get_index_varray(const IndexMask &mask)
{
auto index_func = [](int i) { return i; };
return VArray<int>::ForFunc(mask.min_array_size(), index_func);
}
GVArray IndexFieldInput::get_varray_for_context(const fn::FieldContext & /*context*/,
IndexMask mask,
const IndexMask &mask,
ResourceScope & /*scope*/) const
{
/* TODO: Investigate a similar method to IndexRange::as_span() */
@@ -733,8 +735,7 @@ static IndexMask index_mask_from_selection(const IndexMask full_mask,
const VArray<bool> &selection,
ResourceScope &scope)
{
return index_mask_ops::find_indices_from_virtual_array(
full_mask, selection, 1024, scope.construct<Vector<int64_t>>());
return IndexMask::from_bools(full_mask, selection, scope.construct<IndexMaskMemory>());
}
int FieldEvaluator::add_with_destination(GField field, GVMutableArray dst)

View File

@@ -35,7 +35,7 @@ static bool supports_threading_by_slicing_params(const MultiFunction &fn)
return true;
}
static int64_t compute_grain_size(const ExecutionHints &hints, const IndexMask mask)
static int64_t compute_grain_size(const ExecutionHints &hints, const IndexMask &mask)
{
int64_t grain_size = hints.min_grain_size;
if (hints.uniform_execution_time) {
@@ -111,7 +111,7 @@ static void add_sliced_parameters(const Signature &signature,
}
}
void MultiFunction::call_auto(IndexMask mask, Params params, Context context) const
void MultiFunction::call_auto(const IndexMask &mask, Params params, Context context) const
{
if (mask.is_empty()) {
return;
@@ -148,10 +148,11 @@ void MultiFunction::call_auto(IndexMask mask, Params params, Context context) co
const int64_t input_slice_size = sliced_mask.last() - input_slice_start + 1;
const IndexRange input_slice_range{input_slice_start, input_slice_size};
Vector<int64_t> offset_mask_indices;
const IndexMask offset_mask = mask.slice_and_offset(sub_range, offset_mask_indices);
IndexMaskMemory memory;
const int64_t offset = -input_slice_start;
const IndexMask offset_mask = mask.slice_and_offset(sub_range, offset, memory);
ParamsBuilder sliced_params{*this, offset_mask.min_array_size()};
ParamsBuilder sliced_params{*this, &offset_mask};
add_sliced_parameters(*signature_ref_, params, input_slice_range, sliced_params);
this->call(offset_mask, sliced_params, context);
});

View File

@@ -31,7 +31,9 @@ CustomMF_GenericConstant::~CustomMF_GenericConstant()
}
}
void CustomMF_GenericConstant::call(IndexMask mask, Params params, Context /*context*/) const
void CustomMF_GenericConstant::call(const IndexMask &mask,
Params params,
Context /*context*/) const
{
GMutableSpan output = params.uninitialized_single_output(0);
type_.fill_construct_indices(value_, output.data(), mask);
@@ -62,12 +64,12 @@ CustomMF_GenericConstantArray::CustomMF_GenericConstantArray(GSpan array) : arra
this->set_signature(&signature_);
}
void CustomMF_GenericConstantArray::call(IndexMask mask, Params params, Context /*context*/) const
void CustomMF_GenericConstantArray::call(const IndexMask &mask,
Params params,
Context /*context*/) const
{
GVectorArray &vectors = params.vector_output(0);
for (int64_t i : mask) {
vectors.extend(i, array_);
}
mask.foreach_index([&](const int64_t i) { vectors.extend(i, array_); });
}
CustomMF_DefaultOutput::CustomMF_DefaultOutput(Span<DataType> input_types,
@@ -83,7 +85,7 @@ CustomMF_DefaultOutput::CustomMF_DefaultOutput(Span<DataType> input_types,
}
this->set_signature(&signature_);
}
void CustomMF_DefaultOutput::call(IndexMask mask, Params params, Context /*context*/) const
void CustomMF_DefaultOutput::call(const IndexMask &mask, Params params, Context /*context*/) const
{
for (int param_index : this->param_indices()) {
ParamType param_type = this->param_type(param_index);
@@ -107,7 +109,7 @@ CustomMF_GenericCopy::CustomMF_GenericCopy(DataType data_type)
this->set_signature(&signature_);
}
void CustomMF_GenericCopy::call(IndexMask mask, Params params, Context /*context*/) const
void CustomMF_GenericCopy::call(const IndexMask &mask, Params params, Context /*context*/) const
{
const DataType data_type = this->param_type(0).data_type();
switch (data_type.category()) {

View File

@@ -340,18 +340,18 @@ class VariableState : NonCopyable, NonMovable {
return false;
}
bool is_fully_initialized(const IndexMask full_mask)
bool is_fully_initialized(const IndexMask &full_mask)
{
return tot_initialized_ == full_mask.size();
}
bool is_fully_uninitialized(const IndexMask full_mask)
bool is_fully_uninitialized(const IndexMask &full_mask)
{
UNUSED_VARS(full_mask);
return tot_initialized_ == 0;
}
void add_as_input(ParamsBuilder &params, IndexMask mask, const DataType &data_type) const
void add_as_input(ParamsBuilder &params, const IndexMask &mask, const DataType &data_type) const
{
/* Sanity check to make sure that enough values are initialized. */
BLI_assert(mask.size() <= tot_initialized_);
@@ -390,7 +390,7 @@ class VariableState : NonCopyable, NonMovable {
}
}
void ensure_is_mutable(IndexMask full_mask,
void ensure_is_mutable(const IndexMask &full_mask,
const DataType &data_type,
ValueAllocator &value_allocator)
{
@@ -464,8 +464,8 @@ class VariableState : NonCopyable, NonMovable {
}
void add_as_mutable(ParamsBuilder &params,
IndexMask mask,
IndexMask full_mask,
const IndexMask &mask,
const IndexMask &full_mask,
const DataType &data_type,
ValueAllocator &value_allocator)
{
@@ -497,8 +497,8 @@ class VariableState : NonCopyable, NonMovable {
}
void add_as_output(ParamsBuilder &params,
IndexMask mask,
IndexMask full_mask,
const IndexMask &mask,
const IndexMask &full_mask,
const DataType &data_type,
ValueAllocator &value_allocator)
{
@@ -646,7 +646,7 @@ class VariableState : NonCopyable, NonMovable {
}
void add_as_output__one(ParamsBuilder &params,
IndexMask mask,
const IndexMask &mask,
const DataType &data_type,
ValueAllocator &value_allocator)
{
@@ -687,8 +687,8 @@ class VariableState : NonCopyable, NonMovable {
* \return True when all elements of this variable are initialized and the variable state can be
* released.
*/
bool destruct(IndexMask mask,
IndexMask full_mask,
bool destruct(const IndexMask &mask,
const IndexMask &full_mask,
const DataType &data_type,
ValueAllocator &value_allocator)
{
@@ -757,7 +757,7 @@ class VariableState : NonCopyable, NonMovable {
return should_self_destruct;
}
void indices_split(IndexMask mask, IndicesSplitVectors &r_indices)
void indices_split(const IndexMask &mask, IndicesSplitVectors &r_indices)
{
BLI_assert(mask.size() <= tot_initialized_);
BLI_assert(value_ != nullptr);
@@ -765,25 +765,22 @@ class VariableState : NonCopyable, NonMovable {
switch (value_->type) {
case ValueType::GVArray: {
const VArray<bool> varray = this->value_as<VariableValue_GVArray>()->data.typed<bool>();
for (const int i : mask) {
r_indices[varray[i]].append(i);
}
mask.foreach_index([&](const int64_t i) { r_indices[varray[i]].append(i); });
break;
}
case ValueType::Span: {
const Span<bool> span(
static_cast<const bool *>(this->value_as<VariableValue_Span>()->data),
mask.min_array_size());
for (const int i : mask) {
r_indices[span[i]].append(i);
}
mask.foreach_index([&](const int64_t i) { r_indices[span[i]].append(i); });
break;
}
case ValueType::OneSingle: {
auto *value_typed = this->value_as<VariableValue_OneSingle>();
BLI_assert(value_typed->is_initialized);
const bool condition = *static_cast<const bool *>(value_typed->data);
r_indices[condition].extend(mask);
Vector<int64_t> &indices = r_indices[condition];
mask.foreach_index([&](const int64_t i) { indices.append(i); });
break;
}
case ValueType::GVVectorArray:
@@ -817,12 +814,12 @@ class VariableStates {
const Procedure &procedure_;
/** The state of every variable, indexed by #Variable::index_in_procedure(). */
Array<VariableState> variable_states_;
IndexMask full_mask_;
const IndexMask &full_mask_;
public:
VariableStates(LinearAllocator<> &linear_allocator,
const Procedure &procedure,
IndexMask full_mask)
const IndexMask &full_mask)
: value_allocator_(linear_allocator),
procedure_(procedure),
variable_states_(procedure.variables().size()),
@@ -999,7 +996,7 @@ static void gather_parameter_variable_states(const MultiFunction &fn,
}
static void fill_params__one(const MultiFunction &fn,
const IndexMask mask,
const IndexMask &mask,
ParamsBuilder &params,
VariableStates &variable_states,
const Span<VariableState *> param_variable_states)
@@ -1017,7 +1014,7 @@ static void fill_params__one(const MultiFunction &fn,
}
static void fill_params(const MultiFunction &fn,
const IndexMask mask,
const IndexMask &mask,
ParamsBuilder &params,
VariableStates &variable_states,
const Span<VariableState *> param_variable_states)
@@ -1035,7 +1032,7 @@ static void fill_params(const MultiFunction &fn,
}
static void execute_call_instruction(const CallInstruction &instruction,
const IndexMask mask,
const IndexMask &mask,
VariableStates &variable_states,
const Context &context)
{
@@ -1048,11 +1045,12 @@ static void execute_call_instruction(const CallInstruction &instruction,
/* If all inputs to the function are constant, it's enough to call the function only once instead
* of for every index. */
if (evaluate_as_one(param_variable_states, mask, variable_states.full_mask())) {
ParamsBuilder params(fn, 1);
static const IndexMask one_mask(1);
ParamsBuilder params(fn, &one_mask);
fill_params__one(fn, mask, params, variable_states, param_variable_states);
try {
fn.call(IndexRange(1), params, context);
fn.call(one_mask, params, context);
}
catch (...) {
/* Multi-functions must not throw exceptions. */
@@ -1075,15 +1073,11 @@ static void execute_call_instruction(const CallInstruction &instruction,
/** An index mask, that might own the indices if necessary. */
struct InstructionIndices {
bool is_owned;
Vector<int64_t> owned_indices;
std::unique_ptr<IndexMaskMemory> memory;
IndexMask referenced_indices;
IndexMask mask() const
const IndexMask &mask() const
{
if (this->is_owned) {
return this->owned_indices.as_span();
}
return this->referenced_indices;
}
};
@@ -1093,7 +1087,7 @@ struct NextInstructionInfo {
const Instruction *instruction = nullptr;
InstructionIndices indices;
IndexMask mask() const
const IndexMask &mask() const
{
return this->indices.mask();
}
@@ -1115,13 +1109,12 @@ class InstructionScheduler {
public:
InstructionScheduler() = default;
void add_referenced_indices(const Instruction &instruction, IndexMask mask)
void add_referenced_indices(const Instruction &instruction, const IndexMask &mask)
{
if (mask.is_empty()) {
return;
}
InstructionIndices new_indices;
new_indices.is_owned = false;
new_indices.referenced_indices = mask;
next_instructions_.push({&instruction, std::move(new_indices)});
}
@@ -1131,11 +1124,11 @@ class InstructionScheduler {
if (indices.is_empty()) {
return;
}
BLI_assert(IndexMask::indices_are_valid_index_mask(indices));
InstructionIndices new_indices;
new_indices.is_owned = true;
new_indices.owned_indices = std::move(indices);
new_indices.memory = std::make_unique<IndexMaskMemory>();
new_indices.referenced_indices = IndexMask::from_indices<int64_t>(indices,
*new_indices.memory);
next_instructions_.push({&instruction, std::move(new_indices)});
}
@@ -1161,7 +1154,7 @@ class InstructionScheduler {
}
};
void ProcedureExecutor::call(IndexMask full_mask, Params params, Context context) const
void ProcedureExecutor::call(const IndexMask &full_mask, Params params, Context context) const
{
BLI_assert(procedure_.validate());

View File

@@ -31,7 +31,7 @@ class IndexFieldInput final : public FieldInput {
IndexFieldInput() : FieldInput(CPPType::get<int>(), "Index") {}
GVArray get_varray_for_context(const FieldContext & /*context*/,
IndexMask mask,
const IndexMask &mask,
ResourceScope & /*scope*/) const final
{
auto index_func = [](int i) { return i; };
@@ -58,7 +58,8 @@ TEST(field, VArrayInput)
Array<int> result_2(10);
const Array<int64_t> indices = {2, 4, 6, 8};
const IndexMask mask{indices};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int64_t>(indices, memory);
FieldEvaluator evaluator_2{context, &mask};
evaluator_2.add_with_destination(index_field, result_2.as_mutable_span());
@@ -79,7 +80,8 @@ TEST(field, VArrayInputMultipleOutputs)
Array<int> result_2(10);
const Array<int64_t> indices = {2, 4, 6, 8};
const IndexMask mask{indices};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int64_t>(indices, memory);
FieldContext context;
FieldEvaluator evaluator{context, &mask};
@@ -106,7 +108,8 @@ TEST(field, InputAndFunction)
Array<int> result(10);
const Array<int64_t> indices = {2, 4, 6, 8};
const IndexMask mask{indices};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int64_t>(indices, memory);
FieldContext context;
FieldEvaluator evaluator{context, &mask};
@@ -131,7 +134,8 @@ TEST(field, TwoFunctions)
Array<int> result(10);
const Array<int64_t> indices = {2, 4, 6, 8};
const IndexMask mask{indices};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int64_t>(indices, memory);
FieldContext context;
FieldEvaluator evaluator{context, &mask};
@@ -158,7 +162,7 @@ class TwoOutputFunction : public mf::MultiFunction {
this->set_signature(&signature_);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<int> &in1 = params.readonly_single_input<int>(0, "In1");
const VArray<int> &in2 = params.readonly_single_input<int>(1, "In2");
@@ -187,7 +191,8 @@ TEST(field, FunctionTwoOutputs)
Array<int> result_2(10);
const Array<int64_t> indices = {2, 4, 6, 8};
const IndexMask mask{indices};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int64_t>(indices, memory);
FieldContext context;
FieldEvaluator evaluator{context, &mask};
@@ -212,7 +217,8 @@ TEST(field, TwoFunctionsTwoOutputs)
std::make_unique<TwoOutputFunction>(), {index_field, index_field});
Array<int64_t> mask_indices = {2, 4, 6, 8};
IndexMask mask = mask_indices.as_span();
IndexMaskMemory memory;
IndexMask mask = IndexMask::from_indices<int64_t>(mask_indices, memory);
Field<int> result_field_1{fn, 0};
Field<int> intermediate_field{fn, 1};

View File

@@ -34,13 +34,14 @@ TEST(multi_function_procedure, ConstantOutput)
ProcedureExecutor executor{procedure};
ParamsBuilder params{executor, 2};
const IndexMask mask(2);
ParamsBuilder params{executor, &mask};
ContextBuilder context;
Array<int> output_array(2);
params.add_uninitialized_single_output(output_array.as_mutable_span());
executor.call(IndexRange(2), params, context);
executor.call(mask, params, context);
EXPECT_EQ(output_array[0], 10);
EXPECT_EQ(output_array[1], 10);
@@ -75,7 +76,8 @@ TEST(multi_function_procedure, SimpleTest)
ProcedureExecutor executor{procedure};
ParamsBuilder params{executor, 3};
const IndexMask mask(3);
ParamsBuilder params{executor, &mask};
ContextBuilder context;
Array<int> input_array = {1, 2, 3};
@@ -85,7 +87,7 @@ TEST(multi_function_procedure, SimpleTest)
Array<int> output_array(3);
params.add_uninitialized_single_output(output_array.as_mutable_span());
executor.call(IndexRange(3), params, context);
executor.call(mask, params, context);
EXPECT_EQ(output_array[0], 17);
EXPECT_EQ(output_array[1], 18);
@@ -126,7 +128,8 @@ TEST(multi_function_procedure, BranchTest)
EXPECT_TRUE(procedure.validate());
ProcedureExecutor procedure_fn{procedure};
ParamsBuilder params(procedure_fn, 5);
const IndexMask mask(IndexRange(1, 4));
ParamsBuilder params(procedure_fn, &mask);
Array<int> values_a = {1, 5, 3, 6, 2};
Array<bool> values_cond = {true, false, true, true, false};
@@ -135,7 +138,7 @@ TEST(multi_function_procedure, BranchTest)
params.add_readonly_single_input(values_cond.as_span());
ContextBuilder context;
procedure_fn.call({1, 2, 3, 4}, params, context);
procedure_fn.call(mask, params, context);
EXPECT_EQ(values_a[0], 1);
EXPECT_EQ(values_a[1], 25);
@@ -168,14 +171,16 @@ TEST(multi_function_procedure, EvaluateOne)
builder.add_output_parameter(*var2);
ProcedureExecutor procedure_fn{procedure};
ParamsBuilder params{procedure_fn, 5};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 1, 3, 4}, memory);
ParamsBuilder params{procedure_fn, &mask};
Array<int> values_out = {1, 2, 3, 4, 5};
params.add_readonly_single_input_value(1);
params.add_uninitialized_single_output(values_out.as_mutable_span());
ContextBuilder context;
procedure_fn.call({0, 1, 3, 4}, params, context);
procedure_fn.call(mask, params, context);
EXPECT_EQ(values_out[0], 11);
EXPECT_EQ(values_out[1], 11);
@@ -240,7 +245,9 @@ TEST(multi_function_procedure, SimpleLoop)
EXPECT_TRUE(procedure.validate());
ProcedureExecutor procedure_fn{procedure};
ParamsBuilder params{procedure_fn, 5};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 1, 3, 4}, memory);
ParamsBuilder params{procedure_fn, &mask};
Array<int> counts = {4, 3, 7, 6, 4};
Array<int> results(5, -1);
@@ -249,7 +256,7 @@ TEST(multi_function_procedure, SimpleLoop)
params.add_uninitialized_single_output(results.as_mutable_span());
ContextBuilder context;
procedure_fn.call({0, 1, 3, 4}, params, context);
procedure_fn.call(mask, params, context);
EXPECT_EQ(results[0], 1016);
EXPECT_EQ(results[1], 1008);
@@ -296,7 +303,9 @@ TEST(multi_function_procedure, Vectors)
EXPECT_TRUE(procedure.validate());
ProcedureExecutor procedure_fn{procedure};
ParamsBuilder params{procedure_fn, 5};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 1, 3, 4}, memory);
ParamsBuilder params{procedure_fn, &mask};
Array<int> v1 = {5, 2, 3};
GVectorArray v2{CPPType::get<int>(), 5};
@@ -311,7 +320,7 @@ TEST(multi_function_procedure, Vectors)
params.add_vector_output(v3);
ContextBuilder context;
procedure_fn.call({0, 1, 3, 4}, params, context);
procedure_fn.call(mask, params, context);
EXPECT_EQ(v2[0].size(), 6);
EXPECT_EQ(v2[1].size(), 4);
@@ -364,12 +373,15 @@ TEST(multi_function_procedure, BufferReuse)
Array<int> inputs = {4, 1, 6, 2, 3};
Array<int> results(5, -1);
ParamsBuilder params{procedure_fn, 5};
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 2, 3, 4}, memory);
ParamsBuilder params{procedure_fn, &mask};
params.add_readonly_single_input(inputs.as_span());
params.add_uninitialized_single_output(results.as_mutable_span());
ContextBuilder context;
procedure_fn.call({0, 2, 3, 4}, params, context);
procedure_fn.call(mask, params, context);
EXPECT_EQ(results[0], 54);
EXPECT_EQ(results[1], -1);
@@ -397,11 +409,12 @@ TEST(multi_function_procedure, OutputBufferReplaced)
ProcedureExecutor procedure_fn{procedure};
Array<int> output(3, 0);
mf::ParamsBuilder params(procedure_fn, output.size());
IndexMask mask(output.size());
mf::ParamsBuilder params(procedure_fn, &mask);
params.add_uninitialized_single_output(output.as_mutable_span());
mf::ContextBuilder context;
procedure_fn.call(IndexMask(output.size()), params, context);
procedure_fn.call(mask, params, context);
EXPECT_EQ(output[0], output_value);
EXPECT_EQ(output[1], output_value);
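The hunks above all follow one migration pattern: the mask is built once (in the real API via `IndexMask::from_indices`, with allocations owned by an `IndexMaskMemory` arena) and the same object is then shared by reference between `ParamsBuilder` and `call()`, instead of passing a size and a temporary index list separately. A minimal sketch of that shape, where `SimpleIndexMask` and `add_ten_masked` are hypothetical stand-ins for illustration only, not Blender's BLI types:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

/* Hypothetical stand-in for blender::index_mask::IndexMask. */
class SimpleIndexMask {
 public:
  static SimpleIndexMask from_indices(std::vector<int64_t> indices)
  {
    return SimpleIndexMask(std::move(indices));
  }

  int64_t size() const
  {
    return int64_t(indices_.size());
  }

  /* Callback-based iteration; the real IndexMask favors this over
   * begin()/end() because its storage is segmented, not one flat array. */
  template<typename Fn> void foreach_index(Fn &&fn) const
  {
    for (const int64_t i : indices_) {
      fn(i);
    }
  }

 private:
  explicit SimpleIndexMask(std::vector<int64_t> indices) : indices_(std::move(indices)) {}

  std::vector<int64_t> indices_;
};

/* Mirrors the EvaluateOne test above: add a constant to masked elements only. */
inline void add_ten_masked(const SimpleIndexMask &mask, std::vector<int> &values)
{
  mask.foreach_index([&](const int64_t i) { values[i] += 10; });
}
```

Building the mask once and reusing it is what makes the `ParamsBuilder params{procedure_fn, &mask}` change pay off: the element count and the iteration order both come from the same shared object.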


@@ -24,15 +24,13 @@ class AddFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
const VArray<int> &a = params.readonly_single_input<int>(0, "A");
const VArray<int> &b = params.readonly_single_input<int>(1, "B");
MutableSpan<int> result = params.uninitialized_single_output<int>(2, "Result");
for (int64_t i : mask) {
result[i] = a[i] + b[i];
}
mask.foreach_index([&](const int64_t i) { result[i] = a[i] + b[i]; });
}
};
@@ -44,14 +42,16 @@ TEST(multi_function, AddFunction)
Array<int> input2 = {10, 20, 30};
Array<int> output(3, -1);
ParamsBuilder params(fn, 3);
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 2}, memory);
ParamsBuilder params(fn, &mask);
params.add_readonly_single_input(input1.as_span());
params.add_readonly_single_input(input2.as_span());
params.add_uninitialized_single_output(output.as_mutable_span());
ContextBuilder context;
fn.call({0, 2}, params, context);
fn.call(mask, params, context);
EXPECT_EQ(output[0], 14);
EXPECT_EQ(output[1], -1);
@@ -71,13 +71,15 @@ TEST(multi_function, AddPrefixFunction)
std::string prefix = "AB";
ParamsBuilder params(fn, strings.size());
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 2, 3}, memory);
ParamsBuilder params(fn, &mask);
params.add_readonly_single_input(&prefix);
params.add_single_mutable(strings.as_mutable_span());
ContextBuilder context;
fn.call({0, 2, 3}, params, context);
fn.call(mask, params, context);
EXPECT_EQ(strings[0], "ABHello");
EXPECT_EQ(strings[1], "World");
@@ -93,13 +95,15 @@ TEST(multi_function, CreateRangeFunction)
GVectorArray_TypedMutableRef<int> ranges_ref{ranges};
Array<int> sizes = {3, 0, 6, 1, 4};
ParamsBuilder params(fn, ranges.size());
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 1, 2, 3}, memory);
ParamsBuilder params(fn, &mask);
params.add_readonly_single_input(sizes.as_span());
params.add_vector_output(ranges);
ContextBuilder context;
fn.call({0, 1, 2, 3}, params, context);
fn.call(mask, params, context);
EXPECT_EQ(ranges[0].size(), 3);
EXPECT_EQ(ranges[1].size(), 0);
@@ -125,13 +129,14 @@ TEST(multi_function, GenericAppendFunction)
vectors_ref.append(2, 6);
Array<int> values = {5, 7, 3, 1};
ParamsBuilder params(fn, vectors.size());
const IndexMask mask(IndexRange(vectors.size()));
ParamsBuilder params(fn, &mask);
params.add_vector_mutable(vectors);
params.add_readonly_single_input(values.as_span());
ContextBuilder context;
fn.call(IndexRange(vectors.size()), params, context);
fn.call(mask, params, context);
EXPECT_EQ(vectors[0].size(), 3);
EXPECT_EQ(vectors[1].size(), 1);
@@ -153,12 +158,14 @@ TEST(multi_function, CustomMF_Constant)
Array<int> outputs(4, 0);
ParamsBuilder params(fn, outputs.size());
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 2, 3}, memory);
ParamsBuilder params(fn, &mask);
params.add_uninitialized_single_output(outputs.as_mutable_span());
ContextBuilder context;
fn.call({0, 2, 3}, params, context);
fn.call(mask, params, context);
EXPECT_EQ(outputs[0], 42);
EXPECT_EQ(outputs[1], 0);
@@ -173,12 +180,14 @@ TEST(multi_function, CustomMF_GenericConstant)
Array<int> outputs(4, 0);
ParamsBuilder params(fn, outputs.size());
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({0, 1, 2}, memory);
ParamsBuilder params(fn, &mask);
params.add_uninitialized_single_output(outputs.as_mutable_span());
ContextBuilder context;
fn.call({0, 1, 2}, params, context);
fn.call(mask, params, context);
EXPECT_EQ(outputs[0], 42);
EXPECT_EQ(outputs[1], 42);
@@ -194,12 +203,14 @@ TEST(multi_function, CustomMF_GenericConstantArray)
GVectorArray vector_array{CPPType::get<int32_t>(), 4};
GVectorArray_TypedMutableRef<int> vector_array_ref{vector_array};
ParamsBuilder params(fn, vector_array.size());
IndexMaskMemory memory;
const IndexMask mask = IndexMask::from_indices<int>({1, 2, 3}, memory);
ParamsBuilder params(fn, &mask);
params.add_vector_output(vector_array);
ContextBuilder context;
fn.call({1, 2, 3}, params, context);
fn.call(mask, params, context);
EXPECT_EQ(vector_array[0].size(), 0);
EXPECT_EQ(vector_array[1].size(), 4);
@@ -217,21 +228,23 @@ TEST(multi_function, IgnoredOutputs)
{
OptionalOutputsFunction fn;
{
ParamsBuilder params(fn, 10);
const IndexMask mask(10);
ParamsBuilder params(fn, &mask);
params.add_ignored_single_output("Out 1");
params.add_ignored_single_output("Out 2");
ContextBuilder context;
fn.call(IndexRange(10), params, context);
fn.call(mask, params, context);
}
{
Array<int> results_1(10);
Array<std::string> results_2(10, NoInitialization());
const IndexMask mask(10);
ParamsBuilder params(fn, 10);
ParamsBuilder params(fn, &mask);
params.add_uninitialized_single_output(results_1.as_mutable_span(), "Out 1");
params.add_uninitialized_single_output(results_2.as_mutable_span(), "Out 2");
ContextBuilder context;
fn.call(IndexRange(10), params, context);
fn.call(mask, params, context);
EXPECT_EQ(results_1[0], 5);
EXPECT_EQ(results_1[3], 5);


@@ -18,14 +18,12 @@ class AddPrefixFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
const VArray<std::string> &prefixes = params.readonly_single_input<std::string>(0, "Prefix");
MutableSpan<std::string> strings = params.single_mutable<std::string>(1, "Strings");
for (int64_t i : mask) {
strings[i] = prefixes[i] + strings[i];
}
mask.foreach_index([&](const int64_t i) { strings[i] = prefixes[i] + strings[i]; });
}
};
@@ -43,17 +41,17 @@ class CreateRangeFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
const VArray<int> &sizes = params.readonly_single_input<int>(0, "Size");
GVectorArray &ranges = params.vector_output(1, "Range");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
int size = sizes[i];
for (int j : IndexRange(size)) {
ranges.append(i, &j);
}
}
});
}
};
@@ -70,17 +68,17 @@ class GenericAppendFunction : public MultiFunction {
this->set_signature(&signature_);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
GVectorArray &vectors = params.vector_mutable(0, "Vector");
const GVArray &values = params.readonly_single_input(1, "Value");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
BUFFER_FOR_CPP_TYPE_VALUE(values.type(), buffer);
values.get(i, buffer);
vectors.append(i, buffer);
values.type().destruct(buffer);
}
});
}
};
@@ -98,7 +96,7 @@ class ConcatVectorsFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
GVectorArray &a = params.vector_mutable(0);
const GVVectorArray &b = params.readonly_vector_input(1);
@@ -120,14 +118,12 @@ class AppendFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
GVectorArray_TypedMutableRef<int> vectors = params.vector_mutable<int>(0);
const VArray<int> &values = params.readonly_single_input<int>(1);
for (int64_t i : mask) {
vectors.append(i, values[i]);
}
mask.foreach_index([&](const int64_t i) { vectors.append(i, values[i]); });
}
};
@@ -145,18 +141,18 @@ class SumVectorFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
const VVectorArray<int> &vectors = params.readonly_vector_input<int>(0);
MutableSpan<int> sums = params.uninitialized_single_output<int>(1);
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
int sum = 0;
for (int j : IndexRange(vectors.get_vector_size(i))) {
sum += vectors.get_vector_element(i, j);
}
sums[i] = sum;
}
});
}
};
@@ -174,16 +170,15 @@ class OptionalOutputsFunction : public MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, Params params, Context /*context*/) const override
void call(const IndexMask &mask, Params params, Context /*context*/) const override
{
if (params.single_output_is_required(0, "Out 1")) {
MutableSpan<int> values = params.uninitialized_single_output<int>(0, "Out 1");
values.fill_indices(mask.indices(), 5);
index_mask::masked_fill(values, 5, mask);
}
MutableSpan<std::string> values = params.uninitialized_single_output<std::string>(1, "Out 2");
for (const int i : mask) {
new (&values[i]) std::string("hello, this is a long string");
}
mask.foreach_index(
[&](const int i) { new (&values[i]) std::string("hello, this is a long string"); });
}
};
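The `OptionalOutputsFunction` hunk swaps `values.fill_indices(mask.indices(), 5)` for `index_mask::masked_fill(values, 5, mask)`, which avoids materializing the mask as one flat index span. A minimal sketch of the idea, with `VectorMask` and `masked_fill_sketch` as hypothetical stand-ins rather than the BLI implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

/* Hypothetical flat-storage stand-in for IndexMask. */
struct VectorMask {
  std::vector<int64_t> indices;

  template<typename Fn> void foreach_index(Fn &&fn) const
  {
    for (const int64_t i : indices) {
      fn(i);
    }
  }
};

/* Assign `value` at every masked position, leaving other elements untouched. */
template<typename T>
void masked_fill_sketch(std::vector<T> &data, const T &value, const VectorMask &mask)
{
  mask.foreach_index([&](const int64_t i) { data[i] = value; });
}
```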


@@ -8,16 +8,16 @@ namespace blender::geometry::curve_constraints {
void compute_segment_lengths(OffsetIndices<int> points_by_curve,
Span<float3> positions,
IndexMask curve_selection,
const IndexMask &curve_selection,
MutableSpan<float> r_segment_lengths);
void solve_length_constraints(OffsetIndices<int> points_by_curve,
IndexMask curve_selection,
const IndexMask &curve_selection,
Span<float> segment_lenghts,
MutableSpan<float3> positions);
void solve_length_and_collision_constraints(OffsetIndices<int> points_by_curve,
IndexMask curve_selection,
const IndexMask &curve_selection,
Span<float> segment_lengths,
Span<float3> start_positions,
const Mesh &surface,


@@ -11,7 +11,7 @@ namespace blender::geometry {
bke::CurvesGeometry fillet_curves_poly(
const bke::CurvesGeometry &src_curves,
IndexMask curve_selection,
const IndexMask &curve_selection,
const VArray<float> &radius,
const VArray<int> &counts,
bool limit_radius,
@@ -19,7 +19,7 @@ bke::CurvesGeometry fillet_curves_poly(
bke::CurvesGeometry fillet_curves_bezier(
const bke::CurvesGeometry &src_curves,
IndexMask curve_selection,
const IndexMask &curve_selection,
const VArray<float> &radius,
bool limit_radius,
const bke::AnonymousAttributePropagationInfo &propagation_info);


@@ -23,7 +23,7 @@ namespace blender::geometry {
* avoid copying the input. Otherwise returns the new mesh with merged geometry.
*/
std::optional<Mesh *> mesh_merge_by_distance_all(const Mesh &mesh,
IndexMask selection,
const IndexMask &selection,
float merge_distance);
/**


@@ -11,7 +11,7 @@ struct Mesh;
namespace blender::geometry {
void split_edges(Mesh &mesh,
IndexMask mask,
const IndexMask &mask,
const bke::AnonymousAttributePropagationInfo &propagation_info);
} // namespace blender::geometry


@@ -21,7 +21,7 @@ namespace blender::geometry {
*/
bke::CurvesGeometry mesh_to_curve_convert(
const Mesh &mesh,
const IndexMask selection,
const IndexMask &selection,
const bke::AnonymousAttributePropagationInfo &propagation_info);
bke::CurvesGeometry create_curve_from_vert_indices(


@@ -22,7 +22,7 @@ namespace blender::geometry {
PointCloud *point_merge_by_distance(
const PointCloud &src_points,
const float merge_distance,
const IndexMask selection,
const IndexMask &selection,
const bke::AnonymousAttributePropagationInfo &propagation_info);
} // namespace blender::geometry


@@ -18,7 +18,7 @@ namespace blender::geometry {
* \param get_writable_curves_fn: Should return the write-able curves to change directly if
* possible. This is a function in order to avoid the cost of retrieval when unnecessary.
*/
bool try_curves_conversion_in_place(IndexMask selection,
bool try_curves_conversion_in_place(const IndexMask &selection,
CurveType dst_type,
FunctionRef<bke::CurvesGeometry &()> get_writable_curves_fn);
@@ -26,7 +26,7 @@ bool try_curves_conversion_in_place(IndexMask selection,
* Change the types of the selected curves, potentially changing the total point count.
*/
bke::CurvesGeometry convert_curves(const bke::CurvesGeometry &src_curves,
IndexMask selection,
const IndexMask &selection,
CurveType dst_type,
const bke::AnonymousAttributePropagationInfo &propagation_info);


@@ -19,7 +19,7 @@ namespace blender::geometry {
*/
bke::CurvesGeometry subdivide_curves(
const bke::CurvesGeometry &src_curves,
IndexMask selection,
const IndexMask &selection,
const VArray<int> &cuts,
const bke::AnonymousAttributePropagationInfo &propagation_info);


@@ -16,7 +16,7 @@ namespace blender::geometry {
* between the start and end points.
*/
bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
IndexMask selection,
const IndexMask &selection,
const VArray<float> &starts,
const VArray<float> &ends,
GeometryNodeCurveSampleMode mode,


@@ -18,13 +18,13 @@ namespace blender::geometry::curve_constraints {
void compute_segment_lengths(const OffsetIndices<int> points_by_curve,
const Span<float3> positions,
const IndexMask curve_selection,
const IndexMask &curve_selection,
MutableSpan<float> r_segment_lengths)
{
BLI_assert(r_segment_lengths.size() == points_by_curve.total_size());
threading::parallel_for(curve_selection.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection.slice(range)) {
curve_selection.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i].drop_back(1);
for (const int point_i : points) {
const float3 &p1 = positions[point_i];
@@ -37,14 +37,14 @@ void compute_segment_lengths(const OffsetIndices<int> points_by_curve,
}
void solve_length_constraints(const OffsetIndices<int> points_by_curve,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const Span<float> segment_lenghts,
MutableSpan<float3> positions)
{
BLI_assert(segment_lenghts.size() == points_by_curve.total_size());
threading::parallel_for(curve_selection.index_range(), 256, [&](const IndexRange range) {
for (const int curve_i : curve_selection.slice(range)) {
curve_selection.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i].drop_back(1);
for (const int point_i : points) {
const float3 &p1 = positions[point_i];
@@ -58,7 +58,7 @@ void solve_length_constraints(const OffsetIndices<int> points_by_curve,
}
void solve_length_and_collision_constraints(const OffsetIndices<int> points_by_curve,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const Span<float> segment_lengths_cu,
const Span<float3> start_positions_cu,
const Mesh &surface,
@@ -74,8 +74,8 @@ void solve_length_and_collision_constraints(const OffsetIndices<int> points_by_c
const float radius = 0.005f;
const int max_collisions = 5;
threading::parallel_for(curve_selection.index_range(), 64, [&](const IndexRange range) {
for (const int curve_i : curve_selection.slice(range)) {
curve_selection.foreach_segment(GrainSize(64), [&](const IndexMaskSegment segment) {
for (const int curve_i : segment) {
const IndexRange points = points_by_curve[curve_i];
/* Sometimes not all collisions can be handled. This happens relatively rarely, but if it

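In the `curve_constraints` hunks, `threading::parallel_for` plus `curve_selection.slice(range)` becomes `curve_selection.foreach_segment(GrainSize(256), ...)`. Since slicing the new `IndexMask` at arbitrary positions costs `O(log n)`, handing out the mask's own segments is cheaper than re-slicing per task. Below is a serial sketch of that shape; `SegmentedMask` and `sum_masked` are illustrative stand-ins for `IndexMask` and `IndexMaskSegment`:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Segment = std::vector<int64_t>;

/* Hypothetical stand-in: a mask stored as a list of small index segments. */
struct SegmentedMask {
  std::vector<Segment> segments;

  /* The real foreach_segment can spread segments over threads, using the
   * grain size to batch small segments; this sketch visits them in order. */
  template<typename Fn> void foreach_segment(Fn &&fn) const
  {
    for (const Segment &segment : segments) {
      fn(segment);
    }
  }
};

/* Segment-wise traversal, mirroring the loop structure of
 * compute_segment_lengths above (here reduced to a simple sum). */
inline int64_t sum_masked(const SegmentedMask &mask, const std::vector<int64_t> &values)
{
  int64_t sum = 0;
  mask.foreach_segment([&](const Segment &segment) {
    for (const int64_t i : segment) {
      sum += values[i];
    }
  });
  return sum;
}
```

In the real multi-threaded variant, per-segment work must not share mutable state the way this serial reduction does; the diff's callbacks only write to disjoint per-curve ranges, which is why they parallelize safely.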

@@ -28,26 +28,24 @@ static void threaded_slice_fill(const Span<T> src,
template<typename T>
static void duplicate_fillet_point_data(const OffsetIndices<int> src_points_by_curve,
const OffsetIndices<int> dst_points_by_curve,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const Span<int> all_point_offsets,
const Span<T> src,
MutableSpan<T> dst)
{
threading::parallel_for(curve_selection.index_range(), 512, [&](IndexRange range) {
for (const int curve_i : curve_selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
const IndexRange offsets_range = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const OffsetIndices<int> offsets(all_point_offsets.slice(offsets_range));
threaded_slice_fill(src.slice(src_points), offsets, dst.slice(dst_points));
}
curve_selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
const IndexRange offsets_range = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const OffsetIndices<int> offsets(all_point_offsets.slice(offsets_range));
threaded_slice_fill(src.slice(src_points), offsets, dst.slice(dst_points));
});
}
static void duplicate_fillet_point_data(const OffsetIndices<int> src_points_by_curve,
const OffsetIndices<int> dst_points_by_curve,
const IndexMask selection,
const IndexMask &selection,
const Span<int> all_point_offsets,
const GSpan src,
GMutableSpan dst)
@@ -64,7 +62,7 @@ static void duplicate_fillet_point_data(const OffsetIndices<int> src_points_by_c
}
static void calculate_result_offsets(const OffsetIndices<int> src_points_by_curve,
const IndexMask selection,
const IndexMask &selection,
const Span<IndexRange> unselected_ranges,
const VArray<float> &radii,
const VArray<int> &counts,
@@ -74,38 +72,36 @@ static void calculate_result_offsets(const OffsetIndices<int> src_points_by_curv
{
/* Fill the offsets array with the curve point counts, then accumulate them to form offsets. */
bke::curves::copy_curve_sizes(src_points_by_curve, unselected_ranges, dst_curve_offsets);
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int curve_i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange offsets_range = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange offsets_range = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
MutableSpan<int> point_offsets = dst_point_offsets.slice(offsets_range);
MutableSpan<int> point_counts = point_offsets.drop_back(1);
MutableSpan<int> point_offsets = dst_point_offsets.slice(offsets_range);
MutableSpan<int> point_counts = point_offsets.drop_back(1);
counts.materialize_compressed(src_points, point_counts);
for (int &count : point_counts) {
/* Make sure the number of cuts is greater than zero and add one for the existing point. */
count = std::max(count, 0) + 1;
}
if (!cyclic[curve_i]) {
/* Endpoints on non-cyclic curves cannot be filleted. */
point_counts.first() = 1;
point_counts.last() = 1;
}
/* Implicitly "deselect" points with zero radius. */
devirtualize_varray(radii, [&](const auto radii) {
for (const int i : IndexRange(src_points.size())) {
if (radii[src_points[i]] == 0.0f) {
point_counts[i] = 1;
}
}
});
offset_indices::accumulate_counts_to_offsets(point_offsets);
dst_curve_offsets[curve_i] = point_offsets.last();
counts.materialize_compressed(src_points, point_counts);
for (int &count : point_counts) {
/* Make sure the number of cuts is greater than zero and add one for the existing point. */
count = std::max(count, 0) + 1;
}
if (!cyclic[curve_i]) {
/* Endpoints on non-cyclic curves cannot be filleted. */
point_counts.first() = 1;
point_counts.last() = 1;
}
/* Implicitly "deselect" points with zero radius. */
devirtualize_varray(radii, [&](const auto radii) {
for (const int i : IndexRange(src_points.size())) {
if (radii[src_points[i]] == 0.0f) {
point_counts[i] = 1;
}
}
});
offset_indices::accumulate_counts_to_offsets(point_offsets);
dst_curve_offsets[curve_i] = point_offsets.last();
});
offset_indices::accumulate_counts_to_offsets(dst_curve_offsets);
}
@@ -397,7 +393,7 @@ static void calculate_bezier_handles_poly_mode(const Span<float3> src_handles_l,
static bke::CurvesGeometry fillet_curves(
const bke::CurvesGeometry &src_curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const VArray<float> &radius_input,
const VArray<int> &counts,
const bool limit_radius,
@@ -408,7 +404,7 @@ static bke::CurvesGeometry fillet_curves(
const Span<float3> positions = src_curves.positions();
const VArraySpan<bool> cyclic{src_curves.cyclic()};
const bke::AttributeAccessor src_attributes = src_curves.attributes();
const Vector<IndexRange> unselected_ranges = curve_selection.extract_ranges_invert(
const Vector<IndexRange> unselected_ranges = curve_selection.to_ranges_invert(
src_curves.curves_range());
bke::CurvesGeometry dst_curves = bke::curves::copy_only_curve_domain(src_curves);
@@ -450,13 +446,13 @@ static bke::CurvesGeometry fillet_curves(
dst_handles_r = dst_curves.handle_positions_right_for_write();
}
threading::parallel_for(curve_selection.index_range(), 512, [&](IndexRange range) {
curve_selection.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
Array<float3> directions;
Array<float> angles;
Array<float> radii;
Array<float> input_radii_buffer;
for (const int curve_i : curve_selection.slice(range)) {
for (const int curve_i : segment) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange offsets_range = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
@@ -553,7 +549,7 @@ static bke::CurvesGeometry fillet_curves(
bke::CurvesGeometry fillet_curves_poly(
const bke::CurvesGeometry &src_curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const VArray<float> &radius,
const VArray<int> &count,
const bool limit_radius,
@@ -565,7 +561,7 @@ bke::CurvesGeometry fillet_curves_poly(
bke::CurvesGeometry fillet_curves_bezier(
const bke::CurvesGeometry &src_curves,
const IndexMask curve_selection,
const IndexMask &curve_selection,
const VArray<float> &radius,
const bool limit_radius,
const bke::AnonymousAttributePropagationInfo &propagation_info)


@@ -22,15 +22,13 @@ void flip_faces(Mesh &mesh, const IndexMask &selection)
MutableSpan<int> corner_verts = mesh.corner_verts_for_write();
MutableSpan<int> corner_edges = mesh.corner_edges_for_write();
threading::parallel_for(selection.index_range(), 1024, [&](const IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange poly = polys[i];
for (const int j : IndexRange(poly.size() / 2)) {
const int a = poly[j + 1];
const int b = poly.last(j);
std::swap(corner_verts[a], corner_verts[b]);
std::swap(corner_edges[a - 1], corner_edges[b]);
}
selection.foreach_index(GrainSize(1024), [&](const int i) {
const IndexRange poly = polys[i];
for (const int j : IndexRange(poly.size() / 2)) {
const int a = poly[j + 1];
const int b = poly.last(j);
std::swap(corner_verts[a], corner_verts[b]);
std::swap(corner_edges[a - 1], corner_edges[b]);
}
});
@@ -50,10 +48,8 @@ void flip_faces(Mesh &mesh, const IndexMask &selection)
bke::attribute_math::convert_to_static_type(meta_data.data_type, [&](auto dummy) {
using T = decltype(dummy);
MutableSpan<T> dst_span = attribute.span.typed<T>();
threading::parallel_for(selection.index_range(), 1024, [&](const IndexRange range) {
for (const int i : selection.slice(range)) {
dst_span.slice(polys[i].drop_front(1)).reverse();
}
selection.foreach_index(GrainSize(1024), [&](const int i) {
dst_span.slice(polys[i].drop_front(1)).reverse();
});
});
attribute.finish();
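The `flip_faces` hunks collapse an explicit `threading::parallel_for` over `selection.index_range()` plus an inner `selection.slice(range)` loop into a single `selection.foreach_index(GrainSize(1024), fn)` call: the mask itself decides how to split the work, which is the "multi-threading out of the box" mentioned in the commit message. A serial sketch with hypothetical stand-in types (`GrainSize` mirrors the BLI helper that caps per-task work):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

/* Sketch of the grain-size hint passed to foreach_* methods. */
struct GrainSize {
  int64_t value;
  explicit GrainSize(const int64_t value) : value(value) {}
};

/* Hypothetical stand-in for IndexMask. */
struct IndexListMask {
  std::vector<int64_t> indices;

  template<typename Fn> void foreach_index(Fn &&fn) const
  {
    for (const int64_t i : indices) {
      fn(i);
    }
  }

  /* The grain size affects only scheduling, never which indices are visited,
   * so a serial sketch can simply forward to the plain overload. */
  template<typename Fn> void foreach_index(const GrainSize /*grain_size*/, Fn &&fn) const
  {
    this->foreach_index(std::forward<Fn>(fn));
  }
};
```

Keeping the grain size as a parameter of the mask's own iteration method (rather than of a wrapping `parallel_for`) is what removes the boilerplate at every call site in the diff.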


@@ -1727,7 +1727,7 @@ static Mesh *create_merged_mesh(const Mesh &mesh,
* \{ */
std::optional<Mesh *> mesh_merge_by_distance_all(const Mesh &mesh,
const IndexMask selection,
const IndexMask &selection,
const float merge_distance)
{
Array<int> vert_dest_map(mesh.totvert, OUT_OF_CONTEXT);
@@ -1735,9 +1735,7 @@ std::optional<Mesh *> mesh_merge_by_distance_all(const Mesh &mesh,
KDTree_3d *tree = BLI_kdtree_3d_new(selection.size());
const Span<float3> positions = mesh.vert_positions();
for (const int i : selection) {
BLI_kdtree_3d_insert(tree, i, positions[i]);
}
selection.foreach_index([&](const int64_t i) { BLI_kdtree_3d_insert(tree, i, positions[i]); });
BLI_kdtree_3d_balance(tree);
const int vert_kill_len = BLI_kdtree_3d_calc_duplicates_fast(


@@ -359,17 +359,17 @@ static void split_edge_per_poly(const int edge_i,
}
void split_edges(Mesh &mesh,
const IndexMask mask,
const IndexMask &mask,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
/* Flag vertices that need to be split. */
Array<bool> should_split_vert(mesh.totvert, false);
const Span<int2> edges = mesh.edges();
for (const int edge_i : mask) {
mask.foreach_index([&](const int edge_i) {
const int2 &edge = edges[edge_i];
should_split_vert[edge[0]] = true;
should_split_vert[edge[1]] = true;
}
});
/* Precalculate topology info. */
Array<Vector<int>> vert_to_edge_map(mesh.totvert);
@@ -389,14 +389,14 @@ void split_edges(Mesh &mesh,
Array<int> edge_offsets(edges.size());
Array<int> num_edge_duplicates(edges.size());
int new_edges_size = edges.size();
for (const int edge : mask) {
mask.foreach_index([&](const int edge) {
edge_offsets[edge] = new_edges_size;
/* We add duplicates of the edge for each poly (except the first). */
const int num_connected_loops = orig_edge_to_loop_map[edge].size();
const int num_duplicates = std::max(0, num_connected_loops - 1);
new_edges_size += num_duplicates;
num_edge_duplicates[edge] = num_duplicates;
}
});
const OffsetIndices polys = mesh.polys();
@@ -416,26 +416,24 @@ void split_edges(Mesh &mesh,
Vector<int> new_to_old_edges_map(IndexRange(new_edges.size()).as_span());
/* Step 1: Split the edges. */
threading::parallel_for(mask.index_range(), 512, [&](IndexRange range) {
for (const int mask_i : range) {
const int edge_i = mask[mask_i];
split_edge_per_poly(edge_i,
edge_offsets[edge_i],
edge_to_loop_map,
corner_edges,
new_edges,
new_to_old_edges_map);
}
mask.foreach_index(GrainSize(512), [&](const int edge_i) {
split_edge_per_poly(edge_i,
edge_offsets[edge_i],
edge_to_loop_map,
corner_edges,
new_edges,
new_to_old_edges_map);
});
/* Step 1.5: Update topology information (can't parallelize). */
for (const int edge_i : mask) {
mask.foreach_index([&](const int edge_i) {
const int2 &edge = edges[edge_i];
for (const int duplicate_i : IndexRange(edge_offsets[edge_i], num_edge_duplicates[edge_i])) {
vert_to_edge_map[edge[0]].append(duplicate_i);
vert_to_edge_map[edge[1]].append(duplicate_i);
}
}
});
/* Step 2: Calculate vertex fans. */
Array<Vector<int>> vertex_fan_sizes(mesh.totvert);


@@ -204,7 +204,7 @@ BLI_NOINLINE static bke::CurvesGeometry edges_to_curves_convert(
bke::CurvesGeometry mesh_to_curve_convert(
const Mesh &mesh,
const IndexMask selection,
const IndexMask &selection,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
const Span<int2> edges = mesh.edges();


@@ -16,7 +16,7 @@ namespace blender::geometry {
PointCloud *point_merge_by_distance(const PointCloud &src_points,
const float merge_distance,
const IndexMask selection,
const IndexMask &selection,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
const bke::AttributeAccessor src_attributes = src_points.attributes();
@@ -26,9 +26,8 @@ PointCloud *point_merge_by_distance(const PointCloud &src_points,
/* Create the KD tree based on only the selected points, to speed up merge detection and
* balancing. */
KDTree_3d *tree = BLI_kdtree_3d_new(selection.size());
for (const int i : selection.index_range()) {
BLI_kdtree_3d_insert(tree, i, positions[selection[i]]);
}
selection.foreach_index_optimized<int64_t>(
[&](const int64_t i, const int64_t pos) { BLI_kdtree_3d_insert(tree, pos, positions[i]); });
BLI_kdtree_3d_balance(tree);
/* Find the duplicates in the KD tree. Because the tree only contains the selected points, the
@@ -51,6 +50,7 @@ PointCloud *point_merge_by_distance(const PointCloud &src_points,
for (const int i : merge_indices.index_range()) {
merge_indices[i] = i;
}
for (const int i : selection_merge_indices.index_range()) {
const int merge_index = selection_merge_indices[i];
if (merge_index != -1) {

View File
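In `point_merge_by_distance`, the KD tree insertion now uses `foreach_index_optimized<int64_t>` with a two-argument callback: `pos` is the element's position within the mask (what the old loop called `i`), while `i` is the index in the original domain (the old `selection[i]`). The sketch below illustrates that callback shape; `PosMask::foreach_index_with_pos` is a hypothetical serial stand-in, not the templated BLI method:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

/* Hypothetical stand-in for IndexMask. */
struct PosMask {
  std::vector<int64_t> indices;

  /* Invokes fn(index in original domain, position within the mask). */
  template<typename Fn> void foreach_index_with_pos(Fn &&fn) const
  {
    for (int64_t pos = 0; pos < int64_t(indices.size()); pos++) {
      fn(indices[pos], pos);
    }
  }
};
```

Having the position supplied by the iterator removes the `positions[selection[i]]` double indirection from the old loop, because random access like `selection[i]` is `O(log n)` on the new segmented mask.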

@@ -230,7 +230,7 @@ static void normalize_span(MutableSpan<float3> data)
}
}
static void normalize_curve_point_data(const IndexMask curve_selection,
static void normalize_curve_point_data(const IndexMaskSegment curve_selection,
const OffsetIndices<int> points_by_curve,
MutableSpan<float3> data)
{
@@ -259,8 +259,8 @@ static CurvesGeometry resample_to_uniform(const CurvesGeometry &src_curves,
evaluator.add_with_destination(count_field, dst_offsets.drop_back(1));
evaluator.evaluate();
const IndexMask selection = evaluator.get_evaluated_selection_as_mask();
const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(
src_curves.curves_range(), nullptr);
const Vector<IndexRange> unselected_ranges = selection.to_ranges_invert(
src_curves.curves_range());
/* Fill the counts for the curves that aren't selected and accumulate the counts into offsets. */
bke::curves::copy_curve_sizes(src_points_by_curve, unselected_ranges, dst_offsets);
@@ -290,13 +290,11 @@ static CurvesGeometry resample_to_uniform(const CurvesGeometry &src_curves,
/* Use a "for each group of curves: for each attribute: for each curve" pattern to work on
* smaller sections of data that ideally fit into CPU cache better than simply one attribute at a
* time or one curve at a time. */
threading::parallel_for(selection.index_range(), 512, [&](IndexRange selection_range) {
const IndexMask sliced_selection = selection.slice(selection_range);
selection.foreach_segment(GrainSize(512), [&](const IndexMaskSegment selection_segment) {
Vector<std::byte> evaluated_buffer;
/* Gather uniform samples based on the accumulated lengths of the original curve. */
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const bool cyclic = curves_cyclic[i_curve];
const IndexRange dst_points = dst_points_by_curve[i_curve];
const Span<float> lengths = src_curves.evaluated_lengths_for_curve(i_curve, cyclic);
@@ -322,7 +320,7 @@ static CurvesGeometry resample_to_uniform(const CurvesGeometry &src_curves,
Span<T> src = attributes.src[i_attribute].typed<T>();
MutableSpan<T> dst = attributes.dst[i_attribute].typed<T>();
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const IndexRange src_points = src_points_by_curve[i_curve];
const IndexRange dst_points = dst_points_by_curve[i_curve];
@@ -347,7 +345,7 @@ static CurvesGeometry resample_to_uniform(const CurvesGeometry &src_curves,
}
auto interpolate_evaluated_data = [&](const Span<float3> src, MutableSpan<float3> dst) {
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const IndexRange src_points = evaluated_points_by_curve[i_curve];
const IndexRange dst_points = dst_points_by_curve[i_curve];
length_parameterize::interpolate(src.slice(src_points),
@@ -362,16 +360,16 @@ static CurvesGeometry resample_to_uniform(const CurvesGeometry &src_curves,
if (!attributes.dst_tangents.is_empty()) {
interpolate_evaluated_data(attributes.src_evaluated_tangents, attributes.dst_tangents);
normalize_curve_point_data(sliced_selection, dst_points_by_curve, attributes.dst_tangents);
normalize_curve_point_data(selection_segment, dst_points_by_curve, attributes.dst_tangents);
}
if (!attributes.dst_normals.is_empty()) {
interpolate_evaluated_data(attributes.src_evaluated_normals, attributes.dst_normals);
normalize_curve_point_data(sliced_selection, dst_points_by_curve, attributes.dst_normals);
normalize_curve_point_data(selection_segment, dst_points_by_curve, attributes.dst_normals);
}
/* Fill the default value for non-interpolating attributes that still must be copied. */
for (GMutableSpan dst : attributes.dst_no_interpolation) {
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const IndexRange dst_points = dst_points_by_curve[i_curve];
dst.type().value_initialize_n(dst.slice(dst_points).data(), dst_points.size());
}
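The pattern above — replacing `threading::parallel_for` plus `selection.slice(range)` with `selection.foreach_segment(GrainSize(...), ...)` — relies on the new segmented storage: slicing at arbitrary indices is no longer free, but visiting whole segments is. A rough model of that structure, with a serial stand-in for the multi-threaded `foreach_segment` (hypothetical simplified types, not the real BLI classes):

```cpp
#include <cstdint>
#include <functional>
#include <vector>

/* Hypothetical, simplified model of the segmented IndexMask: indices are
 * stored in multiple segments instead of one flat array, so construction
 * and iteration can be parallelized per segment. */
struct IndexMaskSegment {
  std::vector<int64_t> indices;
  auto begin() const { return indices.begin(); }
  auto end() const { return indices.end(); }
};

struct IndexMask {
  std::vector<IndexMaskSegment> segments;

  /* Serial stand-in for the real foreach_segment, which uses the grain
   * size to distribute segments across a thread pool. */
  void foreach_segment(const std::function<void(const IndexMaskSegment &)> &fn) const
  {
    for (const IndexMaskSegment &segment : segments) {
      fn(segment);
    }
  }
};
```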
@@ -418,8 +416,8 @@ CurvesGeometry resample_to_evaluated(const CurvesGeometry &src_curves,
evaluator.set_selection(selection_field);
evaluator.evaluate();
const IndexMask selection = evaluator.get_evaluated_selection_as_mask();
const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(
src_curves.curves_range(), nullptr);
const Vector<IndexRange> unselected_ranges = selection.to_ranges_invert(
src_curves.curves_range());
CurvesGeometry dst_curves = bke::curves::copy_only_curve_domain(src_curves);
dst_curves.fill_curve_types(selection, CURVE_TYPE_POLY);
@@ -437,9 +435,7 @@ CurvesGeometry resample_to_evaluated(const CurvesGeometry &src_curves,
gather_point_attributes_to_interpolate(src_curves, dst_curves, attributes, output_ids);
src_curves.ensure_can_interpolate_to_evaluated();
threading::parallel_for(selection.index_range(), 512, [&](IndexRange selection_range) {
const IndexMask sliced_selection = selection.slice(selection_range);
selection.foreach_segment(GrainSize(512), [&](const IndexMaskSegment selection_segment) {
/* Evaluate generic point attributes directly to the result attributes. */
for (const int i_attribute : attributes.dst.index_range()) {
const CPPType &type = attributes.src[i_attribute].type();
@@ -448,7 +444,7 @@ CurvesGeometry resample_to_evaluated(const CurvesGeometry &src_curves,
Span<T> src = attributes.src[i_attribute].typed<T>();
MutableSpan<T> dst = attributes.dst[i_attribute].typed<T>();
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const IndexRange src_points = src_points_by_curve[i_curve];
const IndexRange dst_points = dst_points_by_curve[i_curve];
src_curves.interpolate_to_evaluated(
@@ -458,7 +454,7 @@ CurvesGeometry resample_to_evaluated(const CurvesGeometry &src_curves,
}
auto copy_evaluated_data = [&](const Span<float3> src, MutableSpan<float3> dst) {
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const IndexRange src_points = src_evaluated_points_by_curve[i_curve];
const IndexRange dst_points = dst_points_by_curve[i_curve];
dst.slice(dst_points).copy_from(src.slice(src_points));
@@ -470,16 +466,16 @@ CurvesGeometry resample_to_evaluated(const CurvesGeometry &src_curves,
if (!attributes.dst_tangents.is_empty()) {
copy_evaluated_data(attributes.src_evaluated_tangents, attributes.dst_tangents);
normalize_curve_point_data(sliced_selection, dst_points_by_curve, attributes.dst_tangents);
normalize_curve_point_data(selection_segment, dst_points_by_curve, attributes.dst_tangents);
}
if (!attributes.dst_normals.is_empty()) {
copy_evaluated_data(attributes.src_evaluated_normals, attributes.dst_normals);
normalize_curve_point_data(sliced_selection, dst_points_by_curve, attributes.dst_normals);
normalize_curve_point_data(selection_segment, dst_points_by_curve, attributes.dst_normals);
}
/* Fill the default value for non-interpolating attributes that still must be copied. */
for (GMutableSpan dst : attributes.dst_no_interpolation) {
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
const IndexRange dst_points = dst_points_by_curve[i_curve];
dst.type().value_initialize_n(dst.slice(dst_points).data(), dst_points.size());
}


@@ -279,7 +279,7 @@ static int to_nurbs_size(const CurveType src_type, const int src_size)
static bke::CurvesGeometry convert_curves_to_bezier(
const bke::CurvesGeometry &src_curves,
const IndexMask selection,
const IndexMask &selection,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
const OffsetIndices src_points_by_curve = src_curves.points_by_curve();
@@ -288,7 +288,7 @@ static bke::CurvesGeometry convert_curves_to_bezier(
const VArray<bool> src_cyclic = src_curves.cyclic();
const Span<float3> src_positions = src_curves.positions();
const bke::AttributeAccessor src_attributes = src_curves.attributes();
const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(
const Vector<IndexRange> unselected_ranges = selection.to_ranges_invert(
src_curves.curves_range());
bke::CurvesGeometry dst_curves = bke::curves::copy_only_curve_domain(src_curves);
@@ -296,13 +296,11 @@ static bke::CurvesGeometry convert_curves_to_bezier(
MutableSpan<int> dst_offsets = dst_curves.offsets_for_write();
bke::curves::copy_curve_sizes(src_points_by_curve, unselected_ranges, dst_offsets);
threading::parallel_for(selection.index_range(), 1024, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
dst_offsets[i] = to_bezier_size(CurveType(src_types[i]),
src_cyclic[i],
KnotsMode(src_knot_modes[i]),
src_points_by_curve[i].size());
}
selection.foreach_index(GrainSize(1024), [&](const int i) {
dst_offsets[i] = to_bezier_size(CurveType(src_types[i]),
src_cyclic[i],
KnotsMode(src_knot_modes[i]),
src_points_by_curve[i].size());
});
offset_indices::accumulate_counts_to_offsets(dst_offsets);
dst_curves.resize(dst_offsets.last(), dst_curves.curves_num());
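For per-element loops the diff uses the other new entry point, `foreach_index(GrainSize(n), fn)`, which hides both the slicing and the threading at the call site. A serial sketch of the calling convention (the `GrainSize` wrapper and flat index array here are simplified stand-ins):

```cpp
#include <cstdint>
#include <vector>

/* Hypothetical GrainSize wrapper: carries the chunk-size hint that the
 * real implementation uses to split work across threads. */
struct GrainSize {
  int64_t value;
  explicit GrainSize(int64_t v) : value(v) {}
};

/* Serial sketch of foreach_index: visits each masked index in order. The
 * real implementation runs ~grain-sized pieces of the mask in parallel. */
template<typename Fn>
static void foreach_index(const std::vector<int64_t> &mask_indices,
                          const GrainSize /*grain_size*/,
                          const Fn &fn)
{
  for (const int64_t i : mask_indices) {
    fn(i);
  }
}
```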
@@ -326,7 +324,7 @@ static bke::CurvesGeometry convert_curves_to_bezier(
propagation_info,
attributes_to_skip);
auto catmull_rom_to_bezier = [&](IndexMask selection) {
auto catmull_rom_to_bezier = [&](const IndexMask &selection) {
bke::curves::fill_points<int8_t>(
dst_points_by_curve, selection, BEZIER_HANDLE_ALIGN, dst_types_l);
bke::curves::fill_points<int8_t>(
@@ -334,15 +332,13 @@ static bke::CurvesGeometry convert_curves_to_bezier(
bke::curves::copy_point_data(
src_points_by_curve, dst_points_by_curve, selection, src_positions, dst_positions);
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
catmull_rom_to_bezier_handles(src_positions.slice(src_points),
src_cyclic[i],
dst_handles_l.slice(dst_points),
dst_handles_r.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int i) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
catmull_rom_to_bezier_handles(src_positions.slice(src_points),
src_cyclic[i],
dst_handles_l.slice(dst_points),
dst_handles_r.slice(dst_points));
});
for (bke::AttributeTransferData &attribute : generic_attributes) {
@@ -351,7 +347,7 @@ static bke::CurvesGeometry convert_curves_to_bezier(
}
};
auto poly_to_bezier = [&](IndexMask selection) {
auto poly_to_bezier = [&](const IndexMask &selection) {
bke::curves::copy_point_data(
src_points_by_curve, dst_points_by_curve, selection, src_positions, dst_positions);
bke::curves::fill_points<int8_t>(
@@ -365,7 +361,7 @@ static bke::CurvesGeometry convert_curves_to_bezier(
}
};
auto bezier_to_bezier = [&](IndexMask selection) {
auto bezier_to_bezier = [&](const IndexMask &selection) {
const VArraySpan<int8_t> src_types_l = src_curves.handle_types_left();
const VArraySpan<int8_t> src_types_r = src_curves.handle_types_right();
const Span<float3> src_handles_l = src_curves.handle_positions_left();
@@ -390,58 +386,54 @@ static bke::CurvesGeometry convert_curves_to_bezier(
}
};
auto nurbs_to_bezier = [&](IndexMask selection) {
auto nurbs_to_bezier = [&](const IndexMask &selection) {
bke::curves::fill_points<int8_t>(
dst_points_by_curve, selection, BEZIER_HANDLE_ALIGN, dst_types_l);
bke::curves::fill_points<int8_t>(
dst_points_by_curve, selection, BEZIER_HANDLE_ALIGN, dst_types_r);
threading::parallel_for(selection.index_range(), 64, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
const Span<float3> src_curve_positions = src_positions.slice(src_points);
if (dst_points.size() == 1) {
const float3 &position = src_positions[src_points.first()];
dst_positions[dst_points.first()] = position;
dst_handles_l[dst_points.first()] = position;
dst_handles_r[dst_points.first()] = position;
continue;
}
KnotsMode knots_mode = KnotsMode(src_knot_modes[i]);
Span<float3> nurbs_positions = src_curve_positions;
Vector<float3> nurbs_positions_vector;
if (src_cyclic[i] && is_nurbs_to_bezier_one_to_one(knots_mode)) {
/* For the conversion, treat this as a periodic closed curve. Extend the NURBS hull to the first
* and second point, which will act as a skeleton for placing Bezier handles. */
nurbs_positions_vector.extend(src_curve_positions);
nurbs_positions_vector.append(src_curve_positions[0]);
nurbs_positions_vector.append(src_curve_positions[1]);
nurbs_positions = nurbs_positions_vector;
knots_mode = NURBS_KNOT_MODE_NORMAL;
}
const Vector<float3> handle_positions = create_nurbs_to_bezier_handles(nurbs_positions,
knots_mode);
scale_input_assign(handle_positions.as_span(), 2, 0, dst_handles_l.slice(dst_points));
scale_input_assign(handle_positions.as_span(), 2, 1, dst_handles_r.slice(dst_points));
create_nurbs_to_bezier_positions(
nurbs_positions, handle_positions, knots_mode, dst_positions.slice(dst_points));
selection.foreach_index(GrainSize(64), [&](const int i) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
const Span<float3> src_curve_positions = src_positions.slice(src_points);
if (dst_points.size() == 1) {
const float3 &position = src_positions[src_points.first()];
dst_positions[dst_points.first()] = position;
dst_handles_l[dst_points.first()] = position;
dst_handles_r[dst_points.first()] = position;
return;
}
KnotsMode knots_mode = KnotsMode(src_knot_modes[i]);
Span<float3> nurbs_positions = src_curve_positions;
Vector<float3> nurbs_positions_vector;
if (src_cyclic[i] && is_nurbs_to_bezier_one_to_one(knots_mode)) {
/* For the conversion, treat this as a periodic closed curve. Extend the NURBS hull to the first
* and second point, which will act as a skeleton for placing Bezier handles. */
nurbs_positions_vector.extend(src_curve_positions);
nurbs_positions_vector.append(src_curve_positions[0]);
nurbs_positions_vector.append(src_curve_positions[1]);
nurbs_positions = nurbs_positions_vector;
knots_mode = NURBS_KNOT_MODE_NORMAL;
}
const Vector<float3> handle_positions = create_nurbs_to_bezier_handles(nurbs_positions,
knots_mode);
scale_input_assign(handle_positions.as_span(), 2, 0, dst_handles_l.slice(dst_points));
scale_input_assign(handle_positions.as_span(), 2, 1, dst_handles_r.slice(dst_points));
create_nurbs_to_bezier_positions(
nurbs_positions, handle_positions, knots_mode, dst_positions.slice(dst_points));
});
for (bke::AttributeTransferData &attribute : generic_attributes) {
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
nurbs_to_bezier_assign(attribute.src.slice(src_points),
KnotsMode(src_knot_modes[i]),
attribute.dst.span.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int i) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
nurbs_to_bezier_assign(attribute.src.slice(src_points),
KnotsMode(src_knot_modes[i]),
attribute.dst.span.slice(dst_points));
});
}
};
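One subtlety of moving loop bodies into callbacks, visible in `nurbs_to_bezier` above: `continue` in the old range-based for loop becomes `return` in the lambda, which only ends the current invocation. A small illustration (simplified serial `foreach_index`, hypothetical names):

```cpp
#include <cstdint>
#include <vector>

/* Serial sketch: the loop body is now a callback, so early exits change
 * form. A `return` inside `fn` only ends that one call, i.e. it behaves
 * like `continue` did in the loop-based code. */
template<typename Fn>
static void foreach_index(const std::vector<int64_t> &mask_indices, const Fn &fn)
{
  for (const int64_t i : mask_indices) {
    fn(i);
  }
}
```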
@@ -471,7 +463,7 @@ static bke::CurvesGeometry convert_curves_to_bezier(
static bke::CurvesGeometry convert_curves_to_nurbs(
const bke::CurvesGeometry &src_curves,
const IndexMask selection,
const IndexMask &selection,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
const OffsetIndices src_points_by_curve = src_curves.points_by_curve();
@@ -479,7 +471,7 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
const VArray<bool> src_cyclic = src_curves.cyclic();
const Span<float3> src_positions = src_curves.positions();
const bke::AttributeAccessor src_attributes = src_curves.attributes();
const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(
const Vector<IndexRange> unselected_ranges = selection.to_ranges_invert(
src_curves.curves_range());
bke::CurvesGeometry dst_curves = bke::curves::copy_only_curve_domain(src_curves);
@@ -487,10 +479,8 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
MutableSpan<int> dst_offsets = dst_curves.offsets_for_write();
bke::curves::copy_curve_sizes(src_points_by_curve, unselected_ranges, dst_offsets);
threading::parallel_for(selection.index_range(), 1024, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
dst_offsets[i] = to_nurbs_size(CurveType(src_types[i]), src_points_by_curve[i].size());
}
selection.foreach_index(GrainSize(1024), [&](const int i) {
dst_offsets[i] = to_nurbs_size(CurveType(src_types[i]), src_points_by_curve[i].size());
});
offset_indices::accumulate_counts_to_offsets(dst_offsets);
dst_curves.resize(dst_offsets.last(), dst_curves.curves_num());
@@ -510,21 +500,21 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
"handle_left",
"nurbs_weight"});
auto fill_weights_if_necessary = [&](const IndexMask selection) {
auto fill_weights_if_necessary = [&](const IndexMask &selection) {
if (src_attributes.contains("nurbs_weight")) {
bke::curves::fill_points(
dst_points_by_curve, selection, 1.0f, dst_curves.nurbs_weights_for_write());
}
};
auto catmull_rom_to_nurbs = [&](IndexMask selection) {
dst_curves.nurbs_orders_for_write().fill_indices(selection.indices(), 4);
dst_curves.nurbs_knots_modes_for_write().fill_indices(selection.indices(),
NURBS_KNOT_MODE_BEZIER);
auto catmull_rom_to_nurbs = [&](const IndexMask &selection) {
index_mask::masked_fill<int8_t>(dst_curves.nurbs_orders_for_write(), 4, selection);
index_mask::masked_fill<int8_t>(
dst_curves.nurbs_knots_modes_for_write(), NURBS_KNOT_MODE_BEZIER, selection);
fill_weights_if_necessary(selection);
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
selection.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
for (const int i : segment) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
catmull_rom_to_nurbs_positions(
@@ -533,19 +523,17 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
});
for (bke::AttributeTransferData &attribute : generic_attributes) {
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
bezier_generic_to_nurbs(attribute.src.slice(src_points),
attribute.dst.span.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int i) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
bezier_generic_to_nurbs(attribute.src.slice(src_points),
attribute.dst.span.slice(dst_points));
});
}
};
auto poly_to_nurbs = [&](IndexMask selection) {
dst_curves.nurbs_orders_for_write().fill_indices(selection.indices(), 4);
auto poly_to_nurbs = [&](const IndexMask &selection) {
index_mask::masked_fill<int8_t>(dst_curves.nurbs_orders_for_write(), 4, selection);
bke::curves::copy_point_data(
src_points_by_curve, dst_points_by_curve, selection, src_positions, dst_positions);
fill_weights_if_necessary(selection);
@@ -553,17 +541,16 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
/* Avoid using "Endpoint" knots modes for cyclic curves, since it adds a sharp point at the
* start/end. */
if (src_cyclic.is_single()) {
dst_curves.nurbs_knots_modes_for_write().fill_indices(
selection.indices(),
src_cyclic.get_internal_single() ? NURBS_KNOT_MODE_NORMAL : NURBS_KNOT_MODE_ENDPOINT);
index_mask::masked_fill<int8_t>(dst_curves.nurbs_knots_modes_for_write(),
src_cyclic.get_internal_single() ? NURBS_KNOT_MODE_NORMAL :
NURBS_KNOT_MODE_ENDPOINT,
selection);
}
else {
VArraySpan<bool> cyclic{src_cyclic};
MutableSpan<int8_t> knots_modes = dst_curves.nurbs_knots_modes_for_write();
threading::parallel_for(selection.index_range(), 1024, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
knots_modes[i] = cyclic[i] ? NURBS_KNOT_MODE_NORMAL : NURBS_KNOT_MODE_ENDPOINT;
}
selection.foreach_index(GrainSize(1024), [&](const int i) {
knots_modes[i] = cyclic[i] ? NURBS_KNOT_MODE_NORMAL : NURBS_KNOT_MODE_ENDPOINT;
});
}
@@ -573,39 +560,35 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
}
};
auto bezier_to_nurbs = [&](IndexMask selection) {
auto bezier_to_nurbs = [&](const IndexMask &selection) {
const Span<float3> src_handles_l = src_curves.handle_positions_left();
const Span<float3> src_handles_r = src_curves.handle_positions_right();
dst_curves.nurbs_orders_for_write().fill_indices(selection.indices(), 4);
dst_curves.nurbs_knots_modes_for_write().fill_indices(selection.indices(),
NURBS_KNOT_MODE_BEZIER);
index_mask::masked_fill<int8_t>(dst_curves.nurbs_orders_for_write(), 4, selection);
index_mask::masked_fill<int8_t>(
dst_curves.nurbs_knots_modes_for_write(), NURBS_KNOT_MODE_BEZIER, selection);
fill_weights_if_necessary(selection);
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
bezier_positions_to_nurbs(src_positions.slice(src_points),
src_handles_l.slice(src_points),
src_handles_r.slice(src_points),
dst_positions.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int i) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
bezier_positions_to_nurbs(src_positions.slice(src_points),
src_handles_l.slice(src_points),
src_handles_r.slice(src_points),
dst_positions.slice(dst_points));
});
for (bke::AttributeTransferData &attribute : generic_attributes) {
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
bezier_generic_to_nurbs(attribute.src.slice(src_points),
attribute.dst.span.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int i) {
const IndexRange src_points = src_points_by_curve[i];
const IndexRange dst_points = dst_points_by_curve[i];
bezier_generic_to_nurbs(attribute.src.slice(src_points),
attribute.dst.span.slice(dst_points));
});
}
};
auto nurbs_to_nurbs = [&](IndexMask selection) {
auto nurbs_to_nurbs = [&](const IndexMask &selection) {
bke::curves::copy_point_data(
src_points_by_curve, dst_points_by_curve, selection, src_positions, dst_positions);
@@ -647,7 +630,7 @@ static bke::CurvesGeometry convert_curves_to_nurbs(
}
static bke::CurvesGeometry convert_curves_trivial(const bke::CurvesGeometry &src_curves,
const IndexMask selection,
const IndexMask &selection,
const CurveType dst_type)
{
bke::CurvesGeometry dst_curves(src_curves);
@@ -657,7 +640,7 @@ static bke::CurvesGeometry convert_curves_trivial(const bke::CurvesGeometry &src
}
bke::CurvesGeometry convert_curves(const bke::CurvesGeometry &src_curves,
const IndexMask selection,
const IndexMask &selection,
const CurveType dst_type,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
@@ -674,7 +657,7 @@ bke::CurvesGeometry convert_curves(const bke::CurvesGeometry &src_curves,
return {};
}
bool try_curves_conversion_in_place(const IndexMask selection,
bool try_curves_conversion_in_place(const IndexMask &selection,
const CurveType dst_type,
FunctionRef<bke::CurvesGeometry &()> get_writable_curves_fn)
{


@@ -12,7 +12,7 @@
namespace blender::geometry {
static void calculate_result_offsets(const bke::CurvesGeometry &src_curves,
const IndexMask selection,
const IndexMask &selection,
const Span<IndexRange> unselected_ranges,
const VArray<int> &cuts,
const Span<bool> cyclic,
@@ -22,33 +22,31 @@ static void calculate_result_offsets(const bke::CurvesGeometry &src_curves,
/* Fill the array with each curve's point count, then accumulate them to the offsets. */
const OffsetIndices src_points_by_curve = src_curves.points_by_curve();
bke::curves::copy_curve_sizes(src_points_by_curve, unselected_ranges, dst_curve_offsets);
threading::parallel_for(selection.index_range(), 1024, [&](IndexRange range) {
for (const int curve_i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
selection.foreach_index(GrainSize(1024), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
MutableSpan<int> point_offsets = dst_point_offsets.slice(src_segments);
MutableSpan<int> point_counts = point_offsets.drop_back(1);
MutableSpan<int> point_offsets = dst_point_offsets.slice(src_segments);
MutableSpan<int> point_counts = point_offsets.drop_back(1);
if (src_points.size() == 1) {
point_counts.first() = 1;
}
else {
cuts.materialize_compressed(src_points, point_counts);
for (int &count : point_counts) {
/* Make sure there is at least one cut, and add one for the existing point. */
count = std::max(count, 0) + 1;
}
if (!cyclic[curve_i]) {
/* The last point only has a segment to be subdivided if the curve isn't cyclic. */
point_counts.last() = 1;
}
}
offset_indices::accumulate_counts_to_offsets(point_offsets);
dst_curve_offsets[curve_i] = point_offsets.last();
if (src_points.size() == 1) {
point_counts.first() = 1;
}
else {
cuts.materialize_compressed(src_points, point_counts);
for (int &count : point_counts) {
/* Make sure there is at least one cut, and add one for the existing point. */
count = std::max(count, 0) + 1;
}
if (!cyclic[curve_i]) {
/* The last point only has a segment to be subdivided if the curve isn't cyclic. */
point_counts.last() = 1;
}
}
offset_indices::accumulate_counts_to_offsets(point_offsets);
dst_curve_offsets[curve_i] = point_offsets.last();
});
offset_indices::accumulate_counts_to_offsets(dst_curve_offsets);
}
@@ -66,37 +64,35 @@ static inline void linear_interpolation(const T &a, const T &b, MutableSpan<T> d
template<typename T>
static void subdivide_attribute_linear(const OffsetIndices<int> src_points_by_curve,
const OffsetIndices<int> dst_points_by_curve,
const IndexMask selection,
const IndexMask &selection,
const Span<int> all_point_offsets,
const Span<T> src,
MutableSpan<T> dst)
{
threading::parallel_for(selection.index_range(), 512, [&](IndexRange selection_range) {
for (const int curve_i : selection.slice(selection_range)) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const OffsetIndices<int> curve_offsets = all_point_offsets.slice(src_segments);
const IndexRange dst_points = dst_points_by_curve[curve_i];
const Span<T> curve_src = src.slice(src_points);
MutableSpan<T> curve_dst = dst.slice(dst_points);
selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const OffsetIndices<int> curve_offsets = all_point_offsets.slice(src_segments);
const IndexRange dst_points = dst_points_by_curve[curve_i];
const Span<T> curve_src = src.slice(src_points);
MutableSpan<T> curve_dst = dst.slice(dst_points);
threading::parallel_for(curve_src.index_range().drop_back(1), 1024, [&](IndexRange range) {
for (const int i : range) {
const IndexRange segment_points = curve_offsets[i];
linear_interpolation(curve_src[i], curve_src[i + 1], curve_dst.slice(segment_points));
}
});
threading::parallel_for(curve_src.index_range().drop_back(1), 1024, [&](IndexRange range) {
for (const int i : range) {
const IndexRange segment_points = curve_offsets[i];
linear_interpolation(curve_src[i], curve_src[i + 1], curve_dst.slice(segment_points));
}
});
const IndexRange dst_last_segment = dst_points.slice(curve_offsets[src_points.size() - 1]);
linear_interpolation(curve_src.last(), curve_src.first(), dst.slice(dst_last_segment));
}
const IndexRange dst_last_segment = dst_points.slice(curve_offsets[src_points.size() - 1]);
linear_interpolation(curve_src.last(), curve_src.first(), dst.slice(dst_last_segment));
});
}
static void subdivide_attribute_linear(const OffsetIndices<int> src_points_by_curve,
const OffsetIndices<int> dst_points_by_curve,
const IndexMask selection,
const IndexMask &selection,
const Span<int> all_point_offsets,
const GSpan src,
GMutableSpan dst)
@@ -114,23 +110,21 @@ static void subdivide_attribute_linear(const OffsetIndices<int> src_points_by_cu
static void subdivide_attribute_catmull_rom(const OffsetIndices<int> src_points_by_curve,
const OffsetIndices<int> dst_points_by_curve,
const IndexMask selection,
const IndexMask &selection,
const Span<int> all_point_offsets,
const Span<bool> cyclic,
const GSpan src,
GMutableSpan dst)
{
threading::parallel_for(selection.index_range(), 512, [&](IndexRange selection_range) {
for (const int curve_i : selection.slice(selection_range)) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const IndexRange dst_points = dst_points_by_curve[curve_i];
bke::curves::catmull_rom::interpolate_to_evaluated(src.slice(src_points),
cyclic[curve_i],
all_point_offsets.slice(src_segments),
dst.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const IndexRange dst_points = dst_points_by_curve[curve_i];
bke::curves::catmull_rom::interpolate_to_evaluated(src.slice(src_points),
cyclic[curve_i],
all_point_offsets.slice(src_segments),
dst.slice(dst_points));
});
}
@@ -275,14 +269,14 @@ static void subdivide_bezier_positions(const Span<float3> src_positions,
bke::CurvesGeometry subdivide_curves(
const bke::CurvesGeometry &src_curves,
const IndexMask selection,
const IndexMask &selection,
const VArray<int> &cuts,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
const OffsetIndices src_points_by_curve = src_curves.points_by_curve();
/* Cyclic is accessed a lot, it's probably worth it to make sure it's a span. */
const VArraySpan<bool> cyclic{src_curves.cyclic()};
const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(
const Vector<IndexRange> unselected_ranges = selection.to_ranges_invert(
src_curves.curves_range());
bke::CurvesGeometry dst_curves = bke::curves::copy_only_curve_domain(src_curves);
@@ -319,7 +313,7 @@ bke::CurvesGeometry subdivide_curves(
const bke::AttributeAccessor src_attributes = src_curves.attributes();
bke::MutableAttributeAccessor dst_attributes = dst_curves.attributes_for_write();
auto subdivide_catmull_rom = [&](IndexMask selection) {
auto subdivide_catmull_rom = [&](const IndexMask &selection) {
for (auto &attribute : bke::retrieve_attributes_for_transfer(
src_attributes, dst_attributes, ATTR_DOMAIN_MASK_POINT, propagation_info))
{
@@ -334,7 +328,7 @@ bke::CurvesGeometry subdivide_curves(
}
};
auto subdivide_poly = [&](IndexMask selection) {
auto subdivide_poly = [&](const IndexMask &selection) {
for (auto &attribute : bke::retrieve_attributes_for_transfer(
src_attributes, dst_attributes, ATTR_DOMAIN_MASK_POINT, propagation_info))
{
@@ -348,7 +342,7 @@ bke::CurvesGeometry subdivide_curves(
}
};
auto subdivide_bezier = [&](IndexMask selection) {
auto subdivide_bezier = [&](const IndexMask &selection) {
const Span<float3> src_positions = src_curves.positions();
const VArraySpan<int8_t> src_types_l{src_curves.handle_types_left()};
const VArraySpan<int8_t> src_types_r{src_curves.handle_types_right()};
@@ -362,25 +356,23 @@ bke::CurvesGeometry subdivide_curves(
MutableSpan<float3> dst_handles_r = dst_curves.handle_positions_right_for_write();
const OffsetIndices<int> dst_points_by_curve = dst_curves.points_by_curve();
threading::parallel_for(selection.index_range(), 512, [&](IndexRange range) {
for (const int curve_i : selection.slice(range)) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const IndexRange dst_points = dst_points_by_curve[curve_i];
subdivide_bezier_positions(src_positions.slice(src_points),
src_types_l.slice(src_points),
src_types_r.slice(src_points),
src_handles_l.slice(src_points),
src_handles_r.slice(src_points),
all_point_offsets.slice(src_segments),
cyclic[curve_i],
dst_positions.slice(dst_points),
dst_types_l.slice(dst_points),
dst_types_r.slice(dst_points),
dst_handles_l.slice(dst_points),
dst_handles_r.slice(dst_points));
}
selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange src_segments = bke::curves::per_curve_point_offsets_range(src_points,
curve_i);
const IndexRange dst_points = dst_points_by_curve[curve_i];
subdivide_bezier_positions(src_positions.slice(src_points),
src_types_l.slice(src_points),
src_types_r.slice(src_points),
src_handles_l.slice(src_points),
src_handles_r.slice(src_points),
all_point_offsets.slice(src_segments),
cyclic[curve_i],
dst_positions.slice(dst_points),
dst_types_l.slice(dst_points),
dst_types_r.slice(dst_points),
dst_handles_l.slice(dst_points),
dst_handles_r.slice(dst_points));
});
for (auto &attribute : bke::retrieve_attributes_for_transfer(


@@ -187,7 +187,7 @@ static bke::curves::CurvePoint lookup_curve_point(
/** \name Utility Functions
* \{ */
static void fill_bezier_data(bke::CurvesGeometry &dst_curves, const IndexMask selection)
static void fill_bezier_data(bke::CurvesGeometry &dst_curves, const IndexMask &selection)
{
if (!dst_curves.has_curve_with_type(CURVE_TYPE_BEZIER)) {
return;
@@ -198,17 +198,15 @@ static void fill_bezier_data(bke::CurvesGeometry &dst_curves, const IndexMask se
MutableSpan<int8_t> handle_types_left = dst_curves.handle_types_left_for_write();
MutableSpan<int8_t> handle_types_right = dst_curves.handle_types_right_for_write();
-threading::parallel_for(selection.index_range(), 4096, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(4096), [&](const int curve_i) {
const IndexRange points = dst_points_by_curve[curve_i];
handle_types_right.slice(points).fill(int8_t(BEZIER_HANDLE_FREE));
handle_types_left.slice(points).fill(int8_t(BEZIER_HANDLE_FREE));
handle_positions_left.slice(points).fill({0.0f, 0.0f, 0.0f});
handle_positions_right.slice(points).fill({0.0f, 0.0f, 0.0f});
-}
});
}
-static void fill_nurbs_data(bke::CurvesGeometry &dst_curves, const IndexMask selection)
+static void fill_nurbs_data(bke::CurvesGeometry &dst_curves, const IndexMask &selection)
{
if (!dst_curves.has_curve_with_type(CURVE_TYPE_NURBS)) {
return;
@@ -585,7 +583,7 @@ static void sample_interval_bezier(const Span<float3> src_positions,
static void trim_attribute_linear(const bke::CurvesGeometry &src_curves,
bke::CurvesGeometry &dst_curves,
-const IndexMask selection,
+const IndexMask &selection,
const Span<bke::curves::CurvePoint> start_points,
const Span<bke::curves::CurvePoint> end_points,
const Span<bke::curves::IndexRangeCyclic> src_ranges,
@@ -597,17 +595,14 @@ static void trim_attribute_linear(const bke::CurvesGeometry &src_curves,
bke::attribute_math::convert_to_static_type(attribute.meta_data.data_type, [&](auto dummy) {
using T = decltype(dummy);
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
sample_interval_linear<T>(attribute.src.template typed<T>().slice(src_points),
attribute.dst.span.typed<T>(),
src_ranges[curve_i],
dst_points_by_curve[curve_i],
start_points[curve_i],
end_points[curve_i]);
-}
});
});
}
@@ -615,7 +610,7 @@ static void trim_attribute_linear(const bke::CurvesGeometry &src_curves,
static void trim_polygonal_curves(const bke::CurvesGeometry &src_curves,
bke::CurvesGeometry &dst_curves,
-const IndexMask selection,
+const IndexMask &selection,
const Span<bke::curves::CurvePoint> start_points,
const Span<bke::curves::CurvePoint> end_points,
const Span<bke::curves::IndexRangeCyclic> src_ranges,
@@ -626,18 +621,16 @@ static void trim_polygonal_curves(const bke::CurvesGeometry &src_curves,
const Span<float3> src_positions = src_curves.positions();
MutableSpan<float3> dst_positions = dst_curves.positions_for_write();
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
sample_interval_linear<float3>(src_positions.slice(src_points),
dst_positions,
src_ranges[curve_i],
dst_points,
start_points[curve_i],
end_points[curve_i]);
-}
});
fill_bezier_data(dst_curves, selection);
fill_nurbs_data(dst_curves, selection);
@@ -652,7 +645,7 @@ static void trim_polygonal_curves(const bke::CurvesGeometry &src_curves,
static void trim_catmull_rom_curves(const bke::CurvesGeometry &src_curves,
bke::CurvesGeometry &dst_curves,
-const IndexMask selection,
+const IndexMask &selection,
const Span<bke::curves::CurvePoint> start_points,
const Span<bke::curves::CurvePoint> end_points,
const Span<bke::curves::IndexRangeCyclic> src_ranges,
@@ -664,19 +657,17 @@ static void trim_catmull_rom_curves(const bke::CurvesGeometry &src_curves,
const VArray<bool> src_cyclic = src_curves.cyclic();
MutableSpan<float3> dst_positions = dst_curves.positions_for_write();
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
sample_interval_catmull_rom<float3>(src_positions.slice(src_points),
dst_positions,
src_ranges[curve_i],
dst_points,
start_points[curve_i],
end_points[curve_i],
src_cyclic[curve_i]);
-}
});
fill_bezier_data(dst_curves, selection);
fill_nurbs_data(dst_curves, selection);
@@ -685,19 +676,17 @@ static void trim_catmull_rom_curves(const bke::CurvesGeometry &src_curves,
bke::attribute_math::convert_to_static_type(attribute.meta_data.data_type, [&](auto dummy) {
using T = decltype(dummy);
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
sample_interval_catmull_rom<T>(attribute.src.template typed<T>().slice(src_points),
attribute.dst.span.typed<T>(),
src_ranges[curve_i],
dst_points,
start_points[curve_i],
end_points[curve_i],
src_cyclic[curve_i]);
-}
});
});
}
@@ -705,7 +694,7 @@ static void trim_catmull_rom_curves(const bke::CurvesGeometry &src_curves,
static void trim_bezier_curves(const bke::CurvesGeometry &src_curves,
bke::CurvesGeometry &dst_curves,
-const IndexMask selection,
+const IndexMask &selection,
const Span<bke::curves::CurvePoint> start_points,
const Span<bke::curves::CurvePoint> end_points,
const Span<bke::curves::IndexRangeCyclic> src_ranges,
@@ -725,26 +714,24 @@ static void trim_bezier_curves(const bke::CurvesGeometry &src_curves,
MutableSpan<float3> dst_handles_l = dst_curves.handle_positions_left_for_write();
MutableSpan<float3> dst_handles_r = dst_curves.handle_positions_right_for_write();
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_points = src_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
sample_interval_bezier(src_positions.slice(src_points),
src_handles_l.slice(src_points),
src_handles_r.slice(src_points),
src_types_l.slice(src_points),
src_types_r.slice(src_points),
dst_positions,
dst_handles_l,
dst_handles_r,
dst_types_l,
dst_types_r,
src_ranges[curve_i],
dst_points,
start_points[curve_i],
end_points[curve_i]);
-}
});
fill_nurbs_data(dst_curves, selection);
trim_attribute_linear(src_curves,
@@ -758,7 +745,7 @@ static void trim_bezier_curves(const bke::CurvesGeometry &src_curves,
static void trim_evaluated_curves(const bke::CurvesGeometry &src_curves,
bke::CurvesGeometry &dst_curves,
-const IndexMask selection,
+const IndexMask &selection,
const Span<bke::curves::CurvePoint> start_points,
const Span<bke::curves::CurvePoint> end_points,
const Span<bke::curves::IndexRangeCyclic> src_ranges,
@@ -770,17 +757,15 @@ static void trim_evaluated_curves(const bke::CurvesGeometry &src_curves,
const Span<float3> src_eval_positions = src_curves.evaluated_positions();
MutableSpan<float3> dst_positions = dst_curves.positions_for_write();
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
-for (const int64_t curve_i : selection.slice(range)) {
+selection.foreach_index(GrainSize(512), [&](const int curve_i) {
const IndexRange src_evaluated_points = src_evaluated_points_by_curve[curve_i];
const IndexRange dst_points = dst_points_by_curve[curve_i];
sample_interval_linear<float3>(src_eval_positions.slice(src_evaluated_points),
dst_positions,
src_ranges[curve_i],
dst_points,
start_points[curve_i],
end_points[curve_i]);
-}
});
fill_bezier_data(dst_curves, selection);
fill_nurbs_data(dst_curves, selection);
@@ -789,9 +774,9 @@ static void trim_evaluated_curves(const bke::CurvesGeometry &src_curves,
bke::attribute_math::convert_to_static_type(attribute.meta_data.data_type, [&](auto dummy) {
using T = decltype(dummy);
-threading::parallel_for(selection.index_range(), 512, [&](const IndexRange range) {
+selection.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
Vector<std::byte> evaluated_buffer;
-for (const int64_t curve_i : selection.slice(range)) {
+for (const int64_t curve_i : segment) {
const IndexRange src_points = src_points_by_curve[curve_i];
/* Interpolate onto the evaluated point domain and sample the evaluated domain. */
@@ -828,7 +813,7 @@ static float trim_sample_length(const Span<float> accumulated_lengths,
* Compute the selected range of points for every selected curve.
*/
static void compute_curve_trim_parameters(const bke::CurvesGeometry &curves,
-const IndexMask selection,
+const IndexMask &selection,
const VArray<float> &starts,
const VArray<float> &ends,
const GeometryNodeCurveSampleMode mode,
@@ -844,117 +829,115 @@ static void compute_curve_trim_parameters(const bke::CurvesGeometry &curves,
const VArray<int8_t> curve_types = curves.curve_types();
curves.ensure_can_interpolate_to_evaluated();
-threading::parallel_for(selection.index_range(), 128, [&](const IndexRange selection_range) {
-for (const int64_t curve_i : selection.slice(selection_range)) {
+selection.foreach_index(GrainSize(128), [&](const int curve_i) {
CurveType curve_type = CurveType(curve_types[curve_i]);
int point_count;
if (curve_type == CURVE_TYPE_NURBS) {
/* The result curve is a poly curve. */
point_count = evaluated_points_by_curve[curve_i].size();
}
else {
point_count = points_by_curve[curve_i].size();
}
if (point_count == 1) {
/* Single point. */
dst_curve_size[curve_i] = 1;
src_ranges[curve_i] = bke::curves::IndexRangeCyclic(0, 0, 1, 1);
start_points[curve_i] = {{0, 0}, 0.0f};
end_points[curve_i] = {{0, 0}, 0.0f};
-continue;
+return;
}
const bool cyclic = src_cyclic[curve_i];
const Span<float> lengths = curves.evaluated_lengths_for_curve(curve_i, cyclic);
BLI_assert(lengths.size() > 0);
const float start_length = trim_sample_length(lengths, starts[curve_i], mode);
float end_length;
bool equal_sample_point;
if (cyclic) {
end_length = trim_sample_length(lengths, ends[curve_i], mode);
const float cyclic_start = start_length == lengths.last() ? 0.0f : start_length;
const float cyclic_end = end_length == lengths.last() ? 0.0f : end_length;
equal_sample_point = cyclic_start == cyclic_end;
}
else {
end_length = ends[curve_i] <= starts[curve_i] ?
start_length :
trim_sample_length(lengths, ends[curve_i], mode);
equal_sample_point = start_length == end_length;
}
start_points[curve_i] = lookup_curve_point(curves,
evaluated_points_by_curve,
curve_type,
curve_i,
lengths,
start_length,
cyclic,
resolution[curve_i],
point_count);
if (equal_sample_point) {
end_points[curve_i] = start_points[curve_i];
if (end_length <= start_length) {
/* Single point. */
dst_curve_size[curve_i] = 1;
if (start_points[curve_i].is_controlpoint()) {
/* Only iterate if control point. */
const int single_point_index = start_points[curve_i].parameter == 1.0f ?
start_points[curve_i].next_index :
start_points[curve_i].index;
src_ranges[curve_i] = bke::curves::IndexRangeCyclic::get_range_from_size(
single_point_index, 1, point_count);
}
/* else: leave empty range */
}
else {
/* Split. */
src_ranges[curve_i] = bke::curves::IndexRangeCyclic::get_range_between_endpoints(
start_points[curve_i], end_points[curve_i], point_count)
.push_loop();
const int count = 1 + !start_points[curve_i].is_controlpoint() + point_count;
BLI_assert(count > 1);
dst_curve_size[curve_i] = count;
}
}
else {
/* General case. */
end_points[curve_i] = lookup_curve_point(curves,
evaluated_points_by_curve,
curve_type,
curve_i,
lengths,
end_length,
cyclic,
resolution[curve_i],
point_count);
src_ranges[curve_i] = bke::curves::IndexRangeCyclic::get_range_between_endpoints(
start_points[curve_i], end_points[curve_i], point_count);
const int count = src_ranges[curve_i].size() + !start_points[curve_i].is_controlpoint() +
!end_points[curve_i].is_controlpoint();
BLI_assert(count > 1);
dst_curve_size[curve_i] = count;
}
BLI_assert(dst_curve_size[curve_i] > 0);
-}
});
}
/** \} */
bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
-const IndexMask selection,
+const IndexMask &selection,
const VArray<float> &starts,
const VArray<float> &ends,
const GeometryNodeCurveSampleMode mode,
const bke::AnonymousAttributePropagationInfo &propagation_info)
{
const OffsetIndices src_points_by_curve = src_curves.points_by_curve();
-const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(
+const Vector<IndexRange> unselected_ranges = selection.to_ranges_invert(
src_curves.curves_range());
BLI_assert(selection.size() > 0);
@@ -1005,7 +988,7 @@ bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
"handle_type_right",
"nurbs_weight"});
-auto trim_catmull = [&](const IndexMask selection) {
+auto trim_catmull = [&](const IndexMask &selection) {
trim_catmull_rom_curves(src_curves,
dst_curves,
selection,
@@ -1014,7 +997,7 @@ bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
src_ranges,
transfer_attributes);
};
-auto trim_poly = [&](const IndexMask selection) {
+auto trim_poly = [&](const IndexMask &selection) {
trim_polygonal_curves(src_curves,
dst_curves,
selection,
@@ -1023,7 +1006,7 @@ bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
src_ranges,
transfer_attributes);
};
-auto trim_bezier = [&](const IndexMask selection) {
+auto trim_bezier = [&](const IndexMask &selection) {
trim_bezier_curves(src_curves,
dst_curves,
selection,
@@ -1032,7 +1015,7 @@ bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
src_ranges,
transfer_attributes);
};
-auto trim_evaluated = [&](const IndexMask selection) {
+auto trim_evaluated = [&](const IndexMask &selection) {
dst_curves.fill_curve_types(selection, CURVE_TYPE_POLY);
/* Ensure evaluated positions are available. */
src_curves.evaluated_positions();
@@ -1067,7 +1050,7 @@ bke::CurvesGeometry trim_curves(const bke::CurvesGeometry &src_curves,
else {
/* Only trimmed curves are no longer cyclic. */
if (bke::SpanAttributeWriter cyclic = dst_attributes.lookup_for_write_span<bool>("cyclic")) {
-cyclic.span.fill_indices(selection.indices(), false);
+index_mask::masked_fill(cyclic.span, false, selection);
cyclic.finish();
}
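The `fill_indices` → `masked_fill` change above writes a value only at masked positions. A minimal standalone sketch of that helper (hypothetical `masked_fill` over `std::vector`, standing in for Blender's `index_mask::masked_fill` over spans):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: write `value` only at the positions named by the
// mask, leaving all other elements untouched. The mask is modeled here as a
// sorted list of indices; Blender's version iterates mask segments and can
// use range-optimized fills for contiguous runs.
template<typename T>
void masked_fill(std::vector<T> &span, const T &value, const std::vector<int64_t> &mask)
{
  for (const int64_t i : mask) {
    span[i] = value;
  }
}
```

Contiguous runs in the mask are what make the real implementation cheap: a run becomes one `fill` over a slice instead of per-element writes.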


@@ -48,6 +48,7 @@
using blender::Array;
using blender::IndexMask;
+using blender::IndexMaskMemory;
using blender::Span;
using blender::Vector;
@@ -64,18 +65,15 @@ static Span<MDeformVert> get_vertex_group(const Mesh &mesh, const int defgrp_ind
return {vertex_group, mesh.totvert};
}
-static Vector<int64_t> selected_indices_from_vertex_group(Span<MDeformVert> vertex_group,
-const int index,
-const bool invert)
+static IndexMask selected_indices_from_vertex_group(Span<MDeformVert> vertex_group,
+const int index,
+const bool invert,
+IndexMaskMemory &memory)
{
-Vector<int64_t> selected_indices;
-for (const int i : vertex_group.index_range()) {
-const bool found = BKE_defvert_find_weight(&vertex_group[i], index) > 0.0f;
-if (found != invert) {
-selected_indices.append(i);
-}
-}
-return selected_indices;
+return IndexMask::from_predicate(
+vertex_group.index_range(), blender::GrainSize(512), memory, [&](const int i) {
+return (BKE_defvert_find_weight(&vertex_group[i], index) > 0.0f) != invert;
+});
}
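The hunk above replaces a serial append loop with `IndexMask::from_predicate`, which evaluates the predicate over the whole range (in parallel chunks, with the result segments allocated from an `IndexMaskMemory` arena). A reduced single-threaded sketch of that construction pattern (hypothetical `mask_from_predicate`, returning a plain index vector rather than a real mask):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of predicate-based mask construction: collect every
// index in [0, size) for which the predicate returns true. Blender's
// from_predicate() does this per chunk on multiple threads, which is exactly
// the "fill from multiple threads when the final size is unknown" case the
// refactor targets.
template<typename Predicate>
std::vector<int64_t> mask_from_predicate(const int64_t size, Predicate &&predicate)
{
  std::vector<int64_t> indices;
  for (int64_t i = 0; i < size; i++) {
    if (predicate(i)) {
      indices.push_back(i);
    }
  }
  return indices;
}
```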
static Array<bool> selection_array_from_vertex_group(Span<MDeformVert> vertex_group,
@@ -98,8 +96,9 @@ static std::optional<Mesh *> calculate_weld(const Mesh &mesh, const WeldModifier
if (wmd.mode == MOD_WELD_MODE_ALL) {
if (!vertex_group.is_empty()) {
-Vector<int64_t> selected_indices = selected_indices_from_vertex_group(
-vertex_group, defgrp_index, invert);
+IndexMaskMemory memory;
+const IndexMask selected_indices = selected_indices_from_vertex_group(
+vertex_group, defgrp_index, invert, memory);
return blender::geometry::mesh_merge_by_distance_all(
mesh, IndexMask(selected_indices), wmd.merge_dist);
}


@@ -29,7 +29,7 @@ static void node_layout(uiLayout *layout, bContext * /*C*/, PointerRNA *ptr)
uiItemR(layout, ptr, "pivot_axis", 0, IFACE_("Pivot"), ICON_NONE);
}
-static void align_rotations_auto_pivot(IndexMask mask,
+static void align_rotations_auto_pivot(const IndexMask &mask,
const VArray<float3> &input_rotations,
const VArray<float3> &vectors,
const VArray<float> &factors,
@@ -78,7 +78,7 @@ static void align_rotations_auto_pivot(IndexMask mask,
});
}
-static void align_rotations_fixed_pivot(IndexMask mask,
+static void align_rotations_fixed_pivot(const IndexMask &mask,
const VArray<float3> &input_rotations,
const VArray<float3> &vectors,
const VArray<float> &factors,
@@ -150,7 +150,7 @@ class MF_AlignEulerToVector : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &input_rotations = params.readonly_single_input<float3>(0, "Rotation");
const VArray<float> &factors = params.readonly_single_input<float>(1, "Factor");


@@ -24,15 +24,15 @@ class MF_SpecialCharacters : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
MutableSpan<std::string> lb = params.uninitialized_single_output<std::string>(0, "Line Break");
MutableSpan<std::string> tab = params.uninitialized_single_output<std::string>(1, "Tab");
-for (const int i : mask) {
+mask.foreach_index([&](const int64_t i) {
new (&lb[i]) std::string("\n");
new (&tab[i]) std::string("\t");
-}
+});
}
};


@@ -54,7 +54,7 @@ class SeparateRGBAFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<ColorGeometry4f> &colors = params.readonly_single_input<ColorGeometry4f>(0,
"Color");
@@ -80,11 +80,11 @@ class SeparateRGBAFunction : public mf::MultiFunction {
}
devirtualize_varray(colors, [&](auto colors) {
-mask.to_best_mask_type([&](auto mask) {
+mask.foreach_segment_optimized([&](const auto segment) {
const int used_outputs_num = used_outputs.size();
const int *used_outputs_data = used_outputs.data();
-for (const int64_t i : mask) {
+for (const int64_t i : segment) {
const ColorGeometry4f &color = colors[i];
for (const int out_i : IndexRange(used_outputs_num)) {
const int channel = used_outputs_data[out_i];
@@ -113,7 +113,7 @@ class SeparateHSVAFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<ColorGeometry4f> &colors = params.readonly_single_input<ColorGeometry4f>(0,
"Color");
@@ -122,14 +122,12 @@ class SeparateHSVAFunction : public mf::MultiFunction {
MutableSpan<float> value = params.uninitialized_single_output<float>(3, "Value");
MutableSpan<float> alpha = params.uninitialized_single_output_if_required<float>(4, "Alpha");
-for (int64_t i : mask) {
+mask.foreach_index_optimized<int64_t>([&](const int64_t i) {
rgb_to_hsv(colors[i].r, colors[i].g, colors[i].b, &hue[i], &saturation[i], &value[i]);
-}
+});
if (!alpha.is_empty()) {
-for (int64_t i : mask) {
-alpha[i] = colors[i].a;
-}
+mask.foreach_index_optimized<int64_t>([&](const int64_t i) { alpha[i] = colors[i].a; });
}
}
};
@@ -151,7 +149,7 @@ class SeparateHSLAFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<ColorGeometry4f> &colors = params.readonly_single_input<ColorGeometry4f>(0,
"Color");
@@ -160,14 +158,12 @@ class SeparateHSLAFunction : public mf::MultiFunction {
MutableSpan<float> lightness = params.uninitialized_single_output<float>(3, "Lightness");
MutableSpan<float> alpha = params.uninitialized_single_output_if_required<float>(4, "Alpha");
-for (int64_t i : mask) {
+mask.foreach_index_optimized<int64_t>([&](const int64_t i) {
rgb_to_hsl(colors[i].r, colors[i].g, colors[i].b, &hue[i], &saturation[i], &lightness[i]);
-}
+});
if (!alpha.is_empty()) {
-for (int64_t i : mask) {
-alpha[i] = colors[i].a;
-}
+mask.foreach_index_optimized<int64_t>([&](const int64_t i) { alpha[i] = colors[i].a; });
}
}
};


@@ -92,7 +92,7 @@ void separate_geometry(GeometrySet &geometry_set,
void get_closest_in_bvhtree(BVHTreeFromMesh &tree_data,
const VArray<float3> &positions,
-const IndexMask mask,
+const IndexMask &mask,
const MutableSpan<int> r_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions);
@@ -123,7 +123,7 @@ class EvaluateAtIndexInput final : public bke::GeometryFieldInput {
EvaluateAtIndexInput(Field<int> index_field, GField value_field, eAttrDomain value_field_domain);
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-const IndexMask mask) const final;
+const IndexMask &mask) const final;
std::optional<eAttrDomain> preferred_domain(const GeometryComponent & /*component*/) const final
{
@@ -149,7 +149,7 @@ void simulation_state_to_values(const Span<NodeSimulationItem> node_simulation_i
void copy_with_checked_indices(const GVArray &src,
const VArray<int> &indices,
-IndexMask mask,
+const IndexMask &mask,
GMutableSpan dst);
} // namespace blender::nodes


@@ -217,7 +217,7 @@ class AccumulateFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const AttributeAccessor attributes = *context.attributes();
const int64_t domain_size = attributes.domain_size(source_domain_);
@@ -323,7 +323,7 @@ class TotalFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const AttributeAccessor attributes = *context.attributes();
const int64_t domain_size = attributes.domain_size(source_domain_);


@@ -395,7 +395,7 @@ class BlurAttributeFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const int64_t domain_size = context.attributes()->domain_size(context.domain());


@@ -43,7 +43,7 @@ class EndpointFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};


@@ -86,7 +86,7 @@ class HandleTypeFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-IndexMask mask) const final
+const IndexMask &mask) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};

View File

@@ -129,39 +129,37 @@ static void node_gather_link_searches(GatherLinkSearchOpParams &params)
static void sample_indices_and_lengths(const Span<float> accumulated_lengths,
const Span<float> sample_lengths,
const GeometryNodeCurveSampleMode length_mode,
-const IndexMask mask,
+const IndexMask &mask,
MutableSpan<int> r_segment_indices,
MutableSpan<float> r_length_in_segment)
{
const float total_length = accumulated_lengths.last();
length_parameterize::SampleSegmentHint hint;
-mask.to_best_mask_type([&](const auto mask) {
-for (const int64_t i : mask) {
+mask.foreach_index_optimized<int>([&](const int i) {
const float sample_length = length_mode == GEO_NODE_CURVE_SAMPLE_FACTOR ?
sample_lengths[i] * total_length :
sample_lengths[i];
int segment_i;
float factor_in_segment;
length_parameterize::sample_at_length(accumulated_lengths,
std::clamp(sample_length, 0.0f, total_length),
segment_i,
factor_in_segment,
&hint);
const float segment_start = segment_i == 0 ? 0.0f : accumulated_lengths[segment_i - 1];
const float segment_end = accumulated_lengths[segment_i];
const float segment_length = segment_end - segment_start;
r_segment_indices[i] = segment_i;
r_length_in_segment[i] = factor_in_segment * segment_length;
-}
});
}
static void sample_indices_and_factors_to_compressed(const Span<float> accumulated_lengths,
const Span<float> sample_lengths,
const GeometryNodeCurveSampleMode length_mode,
-const IndexMask mask,
+const IndexMask &mask,
MutableSpan<int> r_segment_indices,
MutableSpan<float> r_factor_in_segment)
{
@@ -170,27 +168,23 @@ static void sample_indices_and_factors_to_compressed(const Span<float> accumulat
switch (length_mode) {
case GEO_NODE_CURVE_SAMPLE_FACTOR:
-mask.to_best_mask_type([&](const auto mask) {
-for (const int64_t i : IndexRange(mask.size())) {
-const float length = sample_lengths[mask[i]] * total_length;
-length_parameterize::sample_at_length(accumulated_lengths,
-std::clamp(length, 0.0f, total_length),
-r_segment_indices[i],
-r_factor_in_segment[i],
-&hint);
-}
+mask.foreach_index_optimized<int>([&](const int i, const int pos) {
+const float length = sample_lengths[i] * total_length;
+length_parameterize::sample_at_length(accumulated_lengths,
+std::clamp(length, 0.0f, total_length),
+r_segment_indices[pos],
+r_factor_in_segment[pos],
+&hint);
});
break;
case GEO_NODE_CURVE_SAMPLE_LENGTH:
-mask.to_best_mask_type([&](const auto mask) {
-for (const int64_t i : IndexRange(mask.size())) {
-const float length = sample_lengths[mask[i]];
-length_parameterize::sample_at_length(accumulated_lengths,
-std::clamp(length, 0.0f, total_length),
-r_segment_indices[i],
-r_factor_in_segment[i],
-&hint);
-}
+mask.foreach_index_optimized<int>([&](const int i, const int pos) {
+const float length = sample_lengths[i];
+length_parameterize::sample_at_length(accumulated_lengths,
+std::clamp(length, 0.0f, total_length),
+r_segment_indices[pos],
+r_factor_in_segment[pos],
+&hint);
});
break;
}
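The two hunks above switch to a two-argument callback: `i` is the index stored in the mask, while `pos` is its position within the mask, used to write into compressed output arrays. A minimal standalone sketch of that convention (hypothetical `foreach_index_with_pos` over a plain index vector, not Blender's API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the (index, position) callback convention:
// `mask[pos]` is the masked index and `pos` counts positions within the
// mask, so results can be gathered into a dense array of mask.size()
// elements rather than scattered over the full domain.
template<typename Fn>
void foreach_index_with_pos(const std::vector<int64_t> &mask, Fn &&fn)
{
  for (int64_t pos = 0; pos < int64_t(mask.size()); pos++) {
    fn(mask[pos], pos);
  }
}
```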
@@ -222,7 +216,7 @@ class SampleFloatSegmentsFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArraySpan<float> lengths = params.readonly_single_input<float>(0, "Length");
MutableSpan<int> indices = params.uninitialized_single_output<int>(1, "Curve Index");
@@ -269,7 +263,7 @@ class SampleCurveFunction : public mf::MultiFunction {
this->evaluate_source();
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
MutableSpan<float3> sampled_positions = params.uninitialized_single_output_if_required<float3>(
2, "Position");
@@ -281,13 +275,13 @@ class SampleCurveFunction : public mf::MultiFunction {
auto return_default = [&]() {
if (!sampled_positions.is_empty()) {
sampled_positions.fill_indices(mask.indices(), {0, 0, 0});
index_mask::masked_fill(sampled_positions, {0, 0, 0}, mask);
}
if (!sampled_tangents.is_empty()) {
-sampled_tangents.fill_indices(mask.indices(), {0, 0, 0});
+index_mask::masked_fill(sampled_tangents, {0, 0, 0}, mask);
}
if (!sampled_normals.is_empty()) {
-sampled_normals.fill_indices(mask.indices(), {0, 0, 0});
+index_mask::masked_fill(sampled_normals, {0, 0, 0}, mask);
}
};
@@ -322,23 +316,25 @@ class SampleCurveFunction : public mf::MultiFunction {
GArray<> src_original_values(source_data_->type());
GArray<> src_evaluated_values(source_data_->type());
-auto fill_invalid = [&](const IndexMask mask) {
+auto fill_invalid = [&](const IndexMask &mask) {
if (!sampled_positions.is_empty()) {
-sampled_positions.fill_indices(mask.indices(), float3(0));
+index_mask::masked_fill(sampled_positions, float3(0), mask);
}
if (!sampled_tangents.is_empty()) {
-sampled_tangents.fill_indices(mask.indices(), float3(0));
+index_mask::masked_fill(sampled_tangents, float3(0), mask);
}
if (!sampled_normals.is_empty()) {
-sampled_normals.fill_indices(mask.indices(), float3(0));
+index_mask::masked_fill(sampled_normals, float3(0), mask);
}
if (!sampled_values.is_empty()) {
const CPPType &type = sampled_values.type();
-type.fill_construct_indices(type.default_value(), sampled_values.data(), mask);
+bke::attribute_math::convert_to_static_type(source_data_->type(), [&](auto dummy) {
+using T = decltype(dummy);
+index_mask::masked_fill<T>(sampled_values.typed<T>(), {}, mask);
+});
}
};
-auto sample_curve = [&](const int curve_i, const IndexMask mask) {
+auto sample_curve = [&](const int curve_i, const IndexMask &mask) {
const Span<float> accumulated_lengths = curves.evaluated_lengths_for_curve(curve_i,
cyclic[curve_i]);
if (accumulated_lengths.is_empty()) {
@@ -364,16 +360,14 @@ class SampleCurveFunction : public mf::MultiFunction {
if (!sampled_tangents.is_empty()) {
length_parameterize::interpolate_to_masked<float3>(
evaluated_tangents.slice(evaluated_points), indices, factors, mask, sampled_tangents);
-for (const int64_t i : mask) {
-sampled_tangents[i] = math::normalize(sampled_tangents[i]);
-}
+mask.foreach_index(
+[&](const int i) { sampled_tangents[i] = math::normalize(sampled_tangents[i]); });
}
if (!sampled_normals.is_empty()) {
length_parameterize::interpolate_to_masked<float3>(
evaluated_normals.slice(evaluated_points), indices, factors, mask, sampled_normals);
-for (const int64_t i : mask) {
-sampled_normals[i] = math::normalize(sampled_normals[i]);
-}
+mask.foreach_index(
+[&](const int i) { sampled_normals[i] = math::normalize(sampled_normals[i]); });
}
if (!sampled_values.is_empty()) {
const IndexRange points = points_by_curve[curve_i];
@@ -400,10 +394,10 @@ class SampleCurveFunction : public mf::MultiFunction {
}
}
else {
-Vector<int64_t> invalid_indices;
-MultiValueMap<int, int64_t> indices_per_curve;
+Vector<int> invalid_indices;
+MultiValueMap<int, int> indices_per_curve;
devirtualize_varray(curve_indices, [&](const auto curve_indices) {
-for (const int64_t i : mask) {
+mask.foreach_index([&](const int i) {
const int curve_i = curve_indices[i];
if (curves.curves_range().contains(curve_i)) {
indices_per_curve.add(curve_i, i);
@@ -411,13 +405,15 @@ class SampleCurveFunction : public mf::MultiFunction {
else {
invalid_indices.append(i);
}
-}
+});
});
+IndexMaskMemory memory;
for (const int curve_i : indices_per_curve.keys()) {
-sample_curve(curve_i, IndexMask(indices_per_curve.lookup(curve_i)));
+sample_curve(curve_i,
+IndexMask::from_indices<int>(indices_per_curve.lookup(curve_i), memory));
}
-fill_invalid(IndexMask(invalid_indices));
+fill_invalid(IndexMask::from_indices<int>(invalid_indices, memory));
}
}


@@ -63,10 +63,12 @@ static void set_handle_type(bke::CurvesGeometry &curves,
const IndexMask selection = evaluator.get_evaluated_selection_as_mask();
if (mode & GEO_NODE_CURVE_HANDLE_LEFT) {
-curves.handle_types_left_for_write().fill_indices(selection.indices(), new_handle_type);
+index_mask::masked_fill<int8_t>(
+curves.handle_types_left_for_write(), new_handle_type, selection);
}
if (mode & GEO_NODE_CURVE_HANDLE_RIGHT) {
-curves.handle_types_right_for_write().fill_indices(selection.indices(), new_handle_type);
+index_mask::masked_fill<int8_t>(
+curves.handle_types_right_for_write(), new_handle_type, selection);
}
/* Eagerly calculate automatically derived handle positions if necessary. */


@@ -176,7 +176,7 @@ class CurveParameterFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
switch (domain) {
case ATTR_DOMAIN_POINT:
@@ -210,7 +210,7 @@ class CurveLengthParameterFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
switch (domain) {
case ATTR_DOMAIN_POINT:
@@ -244,7 +244,7 @@ class IndexOnSplineFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};


@@ -28,7 +28,7 @@ class CurveOfPointInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};
@@ -64,7 +64,7 @@ class PointIndexInCurveInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};


@@ -43,7 +43,7 @@ class PointsOfCurveInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask mask) const final
+const IndexMask &mask) const final
{
const OffsetIndices points_by_curve = curves.points_by_curve();
@@ -63,12 +63,12 @@ class PointsOfCurveInput final : public bke::CurvesFieldInput {
const bool use_sorting = !all_sort_weights.is_single();
Array<int> point_of_curve(mask.min_array_size());
-threading::parallel_for(mask.index_range(), 256, [&](const IndexRange range) {
+mask.foreach_segment(GrainSize(256), [&](const IndexMaskSegment segment) {
/* Reuse arrays to avoid allocation. */
Array<float> sort_weights;
Array<int> sort_indices;
-for (const int selection_i : mask.slice(range)) {
+for (const int selection_i : segment) {
const int curve_i = curve_indices[selection_i];
const int index_in_sort = indices_in_sort[selection_i];
if (!curves.curves_range().contains(curve_i)) {
@@ -140,7 +140,7 @@ class CurvePointCountInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_CURVE) {
return {};
@@ -197,7 +197,7 @@ class CurveStartPointInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain /*domain*/,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
return VArray<int>::ForSpan(curves.offsets());
}


@@ -73,7 +73,7 @@ static void copy_attributes_based_on_mask(const Map<AttributeIDRef, AttributeKin
const bke::AttributeAccessor src_attributes,
bke::MutableAttributeAccessor dst_attributes,
const eAttrDomain domain,
-const IndexMask mask)
+const IndexMask &mask)
{
for (MapItem<AttributeIDRef, AttributeKind> entry : attributes.items()) {
const AttributeIDRef attribute_id = entry.key;
@@ -146,8 +146,12 @@ static void copy_face_corner_attributes(const Map<AttributeIDRef, AttributeKind>
indices.append_unchecked(corner);
}
}
-copy_attributes_based_on_mask(
-attributes, src_attributes, dst_attributes, ATTR_DOMAIN_CORNER, IndexMask(indices));
+IndexMaskMemory memory;
+copy_attributes_based_on_mask(attributes,
+src_attributes,
+dst_attributes,
+ATTR_DOMAIN_CORNER,
+IndexMask::from_indices<int64_t>(indices, memory));
}
static void copy_masked_edges_to_new_mesh(const Mesh &src_mesh, Mesh &dst_mesh, Span<int> edge_map)
@@ -851,6 +855,8 @@ static void do_mesh_separation(GeometrySet &geometry_set,
int selected_polys_num = 0;
int selected_loops_num = 0;
+IndexMaskMemory memory;
Mesh *mesh_out;
Map<AttributeIDRef, AttributeKind> attributes;
@@ -932,11 +938,12 @@ static void do_mesh_separation(GeometrySet &geometry_set,
mesh_out->attributes_for_write(),
ATTR_DOMAIN_EDGE,
edge_map);
-copy_attributes_based_on_mask(attributes,
-mesh_in.attributes(),
-mesh_out->attributes_for_write(),
-ATTR_DOMAIN_FACE,
-IndexMask(Vector<int64_t>(selected_poly_indices.as_span())));
+copy_attributes_based_on_mask(
+attributes,
+mesh_in.attributes(),
+mesh_out->attributes_for_write(),
+ATTR_DOMAIN_FACE,
+IndexMask::from_indices(selected_poly_indices.as_span(), memory));
copy_face_corner_attributes(attributes,
mesh_in.attributes(),
mesh_out->attributes_for_write(),
@@ -1001,11 +1008,12 @@ static void do_mesh_separation(GeometrySet &geometry_set,
mesh_out->attributes_for_write(),
ATTR_DOMAIN_EDGE,
edge_map);
-copy_attributes_based_on_mask(attributes,
-mesh_in.attributes(),
-mesh_out->attributes_for_write(),
-ATTR_DOMAIN_FACE,
-IndexMask(Vector<int64_t>(selected_poly_indices.as_span())));
+copy_attributes_based_on_mask(
+attributes,
+mesh_in.attributes(),
+mesh_out->attributes_for_write(),
+ATTR_DOMAIN_FACE,
+IndexMask::from_indices(selected_poly_indices.as_span(), memory));
copy_face_corner_attributes(attributes,
mesh_in.attributes(),
mesh_out->attributes_for_write(),
@@ -1060,11 +1068,12 @@ static void do_mesh_separation(GeometrySet &geometry_set,
mesh_in.attributes(),
mesh_out->attributes_for_write(),
{ATTR_DOMAIN_POINT, ATTR_DOMAIN_EDGE});
-copy_attributes_based_on_mask(attributes,
-mesh_in.attributes(),
-mesh_out->attributes_for_write(),
-ATTR_DOMAIN_FACE,
-IndexMask(Vector<int64_t>(selected_poly_indices.as_span())));
+copy_attributes_based_on_mask(
+attributes,
+mesh_in.attributes(),
+mesh_out->attributes_for_write(),
+ATTR_DOMAIN_FACE,
+IndexMask::from_indices(selected_poly_indices.as_span(), memory));
copy_face_corner_attributes(attributes,
mesh_in.attributes(),
mesh_out->attributes_for_write(),


@@ -61,7 +61,7 @@ struct IndexAttributes {
/** \name Utility Functions
* \{ */
-static OffsetIndices<int> accumulate_counts_to_offsets(const IndexMask selection,
+static OffsetIndices<int> accumulate_counts_to_offsets(const IndexMask &selection,
const VArray<int> &counts,
Array<int> &r_offset_data)
{
@@ -88,7 +88,7 @@ static OffsetIndices<int> accumulate_counts_to_offsets(const IndexMask selection
/* Utility functions for threaded copying of attribute data where possible. */
template<typename T>
static void threaded_slice_fill(const OffsetIndices<int> offsets,
-const IndexMask selection,
+const IndexMask &selection,
const Span<T> src,
MutableSpan<T> dst)
{
@@ -101,7 +101,7 @@ static void threaded_slice_fill(const OffsetIndices<int> offsets,
}
static void threaded_slice_fill(const OffsetIndices<int> offsets,
-const IndexMask selection,
+const IndexMask &selection,
const GSpan src,
GMutableSpan dst)
{
@@ -140,7 +140,7 @@ static void threaded_id_offset_copy(const OffsetIndices<int> offsets,
/** Create the copy indices for the duplication domain. */
static void create_duplicate_index_attribute(bke::MutableAttributeAccessor attributes,
const eAttrDomain output_domain,
-const IndexMask selection,
+const IndexMask &selection,
const IndexAttributes &attribute_outputs,
const OffsetIndices<int> offsets)
{
@@ -180,7 +180,7 @@ static void copy_stable_id_point(const OffsetIndices<int> offsets,
}
static void copy_attributes_without_id(const OffsetIndices<int> offsets,
-const IndexMask selection,
+const IndexMask &selection,
const AnonymousAttributePropagationInfo &propagation_info,
const eAttrDomain domain,
const bke::AttributeAccessor src_attributes,
@@ -206,7 +206,7 @@ static void copy_attributes_without_id(const OffsetIndices<int> offsets,
*/
static void copy_curve_attributes_without_id(
const bke::CurvesGeometry &src_curves,
-const IndexMask selection,
+const IndexMask &selection,
const OffsetIndices<int> curve_offsets,
const AnonymousAttributePropagationInfo &propagation_info,
bke::CurvesGeometry &dst_curves)
@@ -255,7 +255,7 @@ static void copy_curve_attributes_without_id(
* then loop over the remaining ones point by point, hashing their ids to the new ids.
*/
static void copy_stable_id_curves(const bke::CurvesGeometry &src_curves,
-const IndexMask selection,
+const IndexMask &selection,
const OffsetIndices<int> offsets,
bke::CurvesGeometry &dst_curves)
{
@@ -385,7 +385,7 @@ static void copy_face_attributes_without_id(
const Span<int> vert_mapping,
const Span<int> loop_mapping,
const OffsetIndices<int> offsets,
-const IndexMask selection,
+const IndexMask &selection,
const AnonymousAttributePropagationInfo &propagation_info,
const bke::AttributeAccessor src_attributes,
bke::MutableAttributeAccessor dst_attributes)
@@ -426,7 +426,7 @@ static void copy_face_attributes_without_id(
* `face->edge->vert` mapping would mean creating a 1/1 mapping to allow for it, is it worth it?
*/
static void copy_stable_id_faces(const Mesh &mesh,
-const IndexMask selection,
+const IndexMask &selection,
const OffsetIndices<int> poly_offsets,
const Span<int> vert_mapping,
const bke::AttributeAccessor src_attributes,
@@ -590,7 +590,7 @@ static void duplicate_faces(GeometrySet &geometry_set,
static void copy_edge_attributes_without_id(
const Span<int> point_mapping,
const OffsetIndices<int> offsets,
-const IndexMask selection,
+const IndexMask &selection,
const AnonymousAttributePropagationInfo &propagation_info,
const bke::AttributeAccessor src_attributes,
bke::MutableAttributeAccessor dst_attributes)
@@ -622,7 +622,7 @@ static void copy_edge_attributes_without_id(
* and the duplicate number. This function is used for points when duplicating the edge domain.
*/
static void copy_stable_id_edges(const Mesh &mesh,
-const IndexMask selection,
+const IndexMask &selection,
const OffsetIndices<int> offsets,
const bke::AttributeAccessor src_attributes,
bke::MutableAttributeAccessor dst_attributes)


@@ -20,20 +20,20 @@ static void node_declare(NodeDeclarationBuilder &b)
static Curves *edge_paths_to_curves_convert(
const Mesh &mesh,
-const IndexMask start_verts_mask,
+const IndexMask &start_verts_mask,
const Span<int> next_indices,
const AnonymousAttributePropagationInfo &propagation_info)
{
Vector<int> vert_indices;
Vector<int> curve_offsets;
Array<bool> visited(mesh.totvert, false);
-for (const int first_vert : start_verts_mask) {
+start_verts_mask.foreach_index([&](const int first_vert) {
const int second_vert = next_indices[first_vert];
if (first_vert == second_vert) {
-continue;
+return;
}
if (second_vert < 0 || second_vert >= mesh.totvert) {
-continue;
+return;
}
curve_offsets.append(vert_indices.size());
@@ -55,7 +55,7 @@ static Curves *edge_paths_to_curves_convert(
for (const int vert_in_curve : vert_indices.as_span().take_back(points_in_curve_num)) {
visited[vert_in_curve] = false;
}
-}
+});
if (vert_indices.is_empty()) {
return nullptr;


@@ -21,19 +21,16 @@ static void node_declare(NodeDeclarationBuilder &b)
}
static void edge_paths_to_selection(const Mesh &src_mesh,
-const IndexMask start_selection,
+const IndexMask &start_selection,
const Span<int> next_indices,
MutableSpan<bool> r_selection)
{
const Span<int2> edges = src_mesh.edges();
-Array<bool> selection(src_mesh.totvert, false);
+Array<bool> selection(src_mesh.totvert);
+start_selection.to_bools(selection);
-for (const int start_vert : start_selection) {
-selection[start_vert] = true;
-}
-for (const int start_i : start_selection) {
+start_selection.foreach_index([&](const int start_i) {
int iter = start_i;
while (iter != next_indices[iter] && !selection[next_indices[iter]]) {
if (next_indices[iter] < 0 || next_indices[iter] >= src_mesh.totvert) {
@@ -42,7 +39,7 @@ static void edge_paths_to_selection(const Mesh &src_mesh,
selection[next_indices[iter]] = true;
iter = next_indices[iter];
}
-}
+});
for (const int i : edges.index_range()) {
const int2 &edge = edges[i];
@@ -70,7 +67,7 @@ class PathToEdgeSelectionFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const bke::MeshFieldContext context{mesh, ATTR_DOMAIN_POINT};
fn::FieldEvaluator evaluator{context, mesh.totvert};


@@ -44,7 +44,7 @@ class FaceSetFromBoundariesInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const bke::MeshFieldContext context{mesh, ATTR_DOMAIN_EDGE};
fn::FieldEvaluator evaluator{context, mesh.totedge};
@@ -60,9 +60,8 @@ class FaceSetFromBoundariesInput final : public bke::MeshFieldInput {
polys, mesh.corner_edges(), mesh.totedge, edge_to_face_offsets, edge_to_face_indices);
AtomicDisjointSet islands(polys.size());
-for (const int edge : non_boundary_edges) {
-join_indices(islands, edge_to_face_map[edge]);
-}
+non_boundary_edges.foreach_index(
+[&](const int edge) { join_indices(islands, edge_to_face_map[edge]); });
Array<int> output(polys.size());
islands.calc_reduced_ids(output);


@@ -24,7 +24,7 @@ EvaluateAtIndexInput::EvaluateAtIndexInput(Field<int> index_field,
}
GVArray EvaluateAtIndexInput::get_varray_for_context(const bke::GeometryFieldContext &context,
-const IndexMask mask) const
+const IndexMask &mask) const
{
const std::optional<AttributeAccessor> attributes = context.attributes();
if (!attributes) {


@@ -97,7 +97,7 @@ class EvaluateOnDomainInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const bke::AttributeAccessor attributes = *context.attributes();


@@ -71,22 +71,13 @@ struct AttributeOutputs {
static void save_selection_as_attribute(Mesh &mesh,
const AnonymousAttributeID *id,
const eAttrDomain domain,
-const IndexMask selection)
+const IndexMask &selection)
{
MutableAttributeAccessor attributes = mesh.attributes_for_write();
BLI_assert(!attributes.contains(id));
SpanAttributeWriter<bool> attribute = attributes.lookup_or_add_for_write_span<bool>(id, domain);
/* Rely on the new attribute being zeroed by default. */
BLI_assert(!attribute.span.as_span().contains(true));
-if (selection.is_range()) {
-attribute.span.slice(selection.as_range()).fill(true);
-}
-else {
-attribute.span.fill_indices(selection.indices(), true);
-}
+selection.to_bools(attribute.span);
attribute.finish();
}
@@ -452,16 +443,19 @@ static void extrude_mesh_edges(Mesh &mesh,
if (!edge_offsets.is_single()) {
vert_offsets.reinitialize(orig_vert_size);
bke::attribute_math::DefaultPropagationMixer<float3> mixer(vert_offsets);
-for (const int i_edge : edge_selection) {
+edge_selection.foreach_index([&](const int i_edge) {
const int2 &edge = orig_edges[i_edge];
const float3 offset = edge_offsets[i_edge];
mixer.mix_in(edge[0], offset);
mixer.mix_in(edge[1], offset);
-}
+});
mixer.finalize();
}
-const VectorSet<int> new_vert_indices = vert_indices_from_edges(mesh, edge_selection.indices());
+Vector<int> edge_selection_indices(edge_selection.size());
+edge_selection.to_indices(edge_selection_indices.as_mutable_span());
+const VectorSet<int> new_vert_indices = vert_indices_from_edges<int>(mesh,
+edge_selection_indices);
const IndexRange new_vert_range{orig_vert_size, new_vert_indices.size()};
/* The extruded edges connect the original and duplicate edges. */
@@ -729,10 +723,8 @@ static void extrude_mesh_face_regions(Mesh &mesh,
return;
}
-Array<bool> poly_selection_array(orig_polys.size(), false);
-for (const int i_poly : poly_selection) {
-poly_selection_array[i_poly] = true;
-}
+Array<bool> poly_selection_array(orig_polys.size());
+poly_selection.to_bools(poly_selection_array);
/* Mix the offsets from the face domain to the vertex domain. Evaluate on the face domain above
* in order to be consistent with the selection, and to use the face normals rather than vertex
@@ -741,12 +733,12 @@ static void extrude_mesh_face_regions(Mesh &mesh,
if (!poly_position_offsets.is_single()) {
vert_offsets.reinitialize(orig_vert_size);
bke::attribute_math::DefaultPropagationMixer<float3> mixer(vert_offsets);
-for (const int i_poly : poly_selection) {
+poly_selection.foreach_index([&](const int i_poly) {
const float3 offset = poly_position_offsets[i_poly];
for (const int vert : orig_corner_verts.slice(orig_polys[i_poly])) {
mixer.mix_in(vert, offset);
}
-}
+});
mixer.finalize();
}
@@ -760,11 +752,11 @@ static void extrude_mesh_face_regions(Mesh &mesh,
* Start the size at one vert per poly to reduce unnecessary reallocation. */
VectorSet<int> all_selected_verts;
all_selected_verts.reserve(orig_polys.size());
-for (const int i_poly : poly_selection) {
+poly_selection.foreach_index([&](const int i_poly) {
for (const int vert : orig_corner_verts.slice(orig_polys[i_poly])) {
all_selected_verts.add(vert);
}
-}
+});
/* Edges inside of an extruded region that are also attached to deselected edges. They must be
* duplicated in order to leave the old edge attached to the unchanged deselected faces. */
@@ -898,7 +890,7 @@ static void extrude_mesh_face_regions(Mesh &mesh,
}
/* Connect the selected faces to the extruded or duplicated edges and the new vertices. */
-for (const int i_poly : poly_selection) {
+poly_selection.foreach_index([&](const int i_poly) {
for (const int corner : polys[i_poly]) {
const int i_new_vert = new_vert_indices.index_of_try(corner_verts[corner]);
if (i_new_vert != -1) {
@@ -915,7 +907,7 @@ static void extrude_mesh_face_regions(Mesh &mesh,
corner_edges[corner] = new_inner_edge_range[i_new_inner_edge];
}
}
-}
+});
/* Create the faces on the sides of extruded regions. */
for (const int i : boundary_edge_indices.index_range()) {


@@ -316,7 +316,7 @@ class ImageFieldsFunction : public mf::MultiFunction {
}
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vectors = params.readonly_single_input<float3>(0, "Vector");
MutableSpan<ColorGeometry4f> r_color = params.uninitialized_single_output<ColorGeometry4f>(
@@ -328,23 +328,23 @@ class ImageFieldsFunction : public mf::MultiFunction {
/* Sample image texture. */
switch (interpolation_) {
case SHD_INTERP_LINEAR:
-for (const int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 p = vectors[i];
color_data[i] = image_linear_texture_lookup(*image_buffer_, p.x, p.y, extension_);
-}
+});
break;
case SHD_INTERP_CLOSEST:
-for (const int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 p = vectors[i];
color_data[i] = image_closest_texture_lookup(*image_buffer_, p.x, p.y, extension_);
-}
+});
break;
case SHD_INTERP_CUBIC:
case SHD_INTERP_SMART:
-for (const int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 p = vectors[i];
color_data[i] = image_cubic_texture_lookup(*image_buffer_, p.x, p.y, extension_);
-}
+});
break;
}
@@ -356,9 +356,7 @@ class ImageFieldsFunction : public mf::MultiFunction {
switch (alpha_mode) {
case IMA_ALPHA_STRAIGHT: {
/* #ColorGeometry expects premultiplied alpha, so convert from straight to that. */
-for (int64_t i : mask) {
-straight_to_premul_v4(color_data[i]);
-}
+mask.foreach_index([&](const int64_t i) { straight_to_premul_v4(color_data[i]); });
break;
}
case IMA_ALPHA_PREMUL: {
@@ -371,17 +369,13 @@ class ImageFieldsFunction : public mf::MultiFunction {
}
case IMA_ALPHA_IGNORE: {
/* The image should be treated as being opaque. */
-for (int64_t i : mask) {
-color_data[i].w = 1.0f;
-}
+mask.foreach_index([&](const int64_t i) { color_data[i].w = 1.0f; });
break;
}
}
if (!r_alpha.is_empty()) {
-for (int64_t i : mask) {
-r_alpha[i] = r_color[i].a;
-}
+mask.foreach_index([&](const int64_t i) { r_alpha[i] = r_color[i].a; });
}
}
};


@@ -18,12 +18,11 @@ static void node_declare(NodeDeclarationBuilder &b)
b.add_output<decl::Bool>("Has Neighbor").field_source();
}
-static KDTree_3d *build_kdtree(const Span<float3> positions, const IndexMask mask)
+static KDTree_3d *build_kdtree(const Span<float3> positions, const IndexMask &mask)
{
KDTree_3d *tree = BLI_kdtree_3d_new(mask.size());
-for (const int index : mask) {
-BLI_kdtree_3d_insert(tree, index, positions[index]);
-}
+mask.foreach_index(
+[&](const int index) { BLI_kdtree_3d_insert(tree, index, positions[index]); });
BLI_kdtree_3d_balance(tree);
return tree;
}
@@ -38,13 +37,11 @@ static int find_nearest_non_self(const KDTree_3d &tree, const float3 &position,
static void find_neighbors(const KDTree_3d &tree,
const Span<float3> positions,
-const IndexMask mask,
+const IndexMask &mask,
MutableSpan<int> r_indices)
{
-threading::parallel_for(mask.index_range(), 1024, [&](const IndexRange range) {
-for (const int index : mask.slice(range)) {
-r_indices[index] = find_nearest_non_self(tree, positions[index], index);
-}
+mask.foreach_index(GrainSize(1024), [&](const int index) {
+r_indices[index] = find_nearest_non_self(tree, positions[index], index);
});
}
@@ -62,7 +59,7 @@ class IndexOfNearestFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-const IndexMask mask) const final
+const IndexMask &mask) const final
{
if (!context.attributes()) {
return {};
@@ -87,58 +84,38 @@ class IndexOfNearestFieldInput final : public bke::GeometryFieldInput {
const VArraySpan<int> group_ids_span(group_ids);
VectorSet<int> group_indexing;
-for (const int index : mask) {
+for (const int index : IndexRange(domain_size)) {
const int group_id = group_ids_span[index];
group_indexing.add(group_id);
}
+const int groups_num = group_indexing.size();
-/* Each group ID has two corresponding index masks. One that contains all the points
-* in each group and one that contains all the points in the group that should be looked up
-* (the intersection of the points in the group and `mask`). In many cases, both of these
-* masks are the same or very similar, so there is not enough benefit for a separate mask
-* for the lookups. */
-const bool use_separate_lookup_indices = mask.size() < domain_size / 2;
+IndexMaskMemory mask_memory;
+Array<IndexMask> all_indices_by_group_id(groups_num);
+Array<IndexMask> lookup_indices_by_group_id(groups_num);
-Array<Vector<int64_t>> all_indices_by_group_id(group_indexing.size());
-Array<Vector<int64_t>> lookup_indices_by_group_id;
-if (use_separate_lookup_indices) {
-result.reinitialize(mask.min_array_size());
-lookup_indices_by_group_id.reinitialize(group_indexing.size());
-}
-else {
-result.reinitialize(domain_size);
-}
-const auto build_group_masks = [&](const IndexMask mask,
-MutableSpan<Vector<int64_t>> r_groups) {
-for (const int index : mask) {
-const int group_id = group_ids_span[index];
-const int index_of_group = group_indexing.index_of_try(group_id);
-if (index_of_group != -1) {
-r_groups[index_of_group].append(index);
-}
-}
+const auto get_group_index = [&](const int i) {
+const int group_id = group_ids_span[i];
+return group_indexing.index_of(group_id);
};
-threading::parallel_invoke(
-domain_size > 1024 && use_separate_lookup_indices,
-[&]() {
-if (use_separate_lookup_indices) {
-build_group_masks(mask, lookup_indices_by_group_id);
-}
-},
-[&]() { build_group_masks(IndexMask(domain_size), all_indices_by_group_id); });
+IndexMask::from_groups<int>(
+IndexMask(domain_size), mask_memory, get_group_index, all_indices_by_group_id);
+if (mask.size() == domain_size) {
+lookup_indices_by_group_id = all_indices_by_group_id;
+}
+else {
+IndexMask::from_groups<int>(mask, mask_memory, get_group_index, all_indices_by_group_id);
+}
/* The grain size should be larger as each tree gets smaller. */
const int avg_tree_size = domain_size / group_indexing.size();
const int grain_size = std::max(8192 / avg_tree_size, 1);
-threading::parallel_for(group_indexing.index_range(), grain_size, [&](const IndexRange range) {
-for (const int index : range) {
-const IndexMask tree_mask = all_indices_by_group_id[index].as_span();
-const IndexMask lookup_mask = use_separate_lookup_indices ?
-IndexMask(lookup_indices_by_group_id[index]) :
-tree_mask;
+threading::parallel_for(IndexRange(groups_num), grain_size, [&](const IndexRange range) {
+for (const int group_index : range) {
+const IndexMask &tree_mask = all_indices_by_group_id[group_index];
+const IndexMask &lookup_mask = lookup_indices_by_group_id[group_index];
KDTree_3d *tree = build_kdtree(positions, tree_mask);
find_neighbors(*tree, positions, lookup_mask, result);
BLI_kdtree_3d_free(tree);
@@ -187,7 +164,7 @@ class HasNeighborFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
-const IndexMask mask) const final
+const IndexMask &mask) const final
{
if (!context.attributes()) {
return {};


@@ -30,7 +30,7 @@ class HandlePositionFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
-const IndexMask mask) const final
+const IndexMask &mask) const final
{
const bke::CurvesFieldContext field_context{curves, ATTR_DOMAIN_POINT};
fn::FieldEvaluator evaluator(field_context, &mask);


@@ -17,7 +17,8 @@ class InstanceRotationFieldInput final : public bke::InstancesFieldInput {
public:
InstanceRotationFieldInput() : bke::InstancesFieldInput(CPPType::get<float3>(), "Rotation") {}
-GVArray get_varray_for_context(const bke::Instances &instances, IndexMask /*mask*/) const final
+GVArray get_varray_for_context(const bke::Instances &instances,
+const IndexMask & /*mask*/) const final
{
auto rotation_fn = [&](const int i) -> float3 {
return float3(math::to_euler(math::normalize(instances.transforms()[i])));


@@ -17,7 +17,8 @@ class InstanceScaleFieldInput final : public bke::InstancesFieldInput {
public:
InstanceScaleFieldInput() : bke::InstancesFieldInput(CPPType::get<float3>(), "Scale") {}
-GVArray get_varray_for_context(const bke::Instances &instances, IndexMask /*mask*/) const final
+GVArray get_varray_for_context(const bke::Instances &instances,
+const IndexMask & /*mask*/) const final
{
auto scale_fn = [&](const int i) -> float3 {
return math::to_scale(instances.transforms()[i]);


@@ -64,7 +64,7 @@ class AngleFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const Span<float3> positions = mesh.vert_positions();
const OffsetIndices polys = mesh.polys();
@@ -114,7 +114,7 @@ class SignedAngleFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const Span<float3> positions = mesh.vert_positions();
const Span<int2> edges = mesh.edges();


@@ -26,7 +26,7 @@ class EdgeNeighborCountFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
const Span<int> corner_edges = mesh.corner_edges();
Array<int> face_count(mesh.totedge, 0);


@@ -54,7 +54,7 @@ class EdgeVertsInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
-const IndexMask /*mask*/) const final
+const IndexMask & /*mask*/) const final
{
return construct_edge_verts_gvarray(mesh, vertex_, domain);
}
@@ -112,7 +112,7 @@ class EdgePositionFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_edge_positions_gvarray(mesh, vertex_, domain);
}


@@ -40,7 +40,7 @@ class FaceAreaFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_face_area_varray(mesh, domain);
}


@@ -38,7 +38,7 @@ class PlanarFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const Span<float3> positions = mesh.vert_positions();
const OffsetIndices polys = mesh.polys();


@@ -50,7 +50,7 @@ class FaceNeighborCountFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_neighbor_count_varray(mesh, domain);
}
@@ -91,7 +91,7 @@ class FaceVertexCountFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_vertex_count_varray(mesh, domain);
}


@@ -33,7 +33,7 @@ class IslandFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const Span<int2> edges = mesh.edges();
@@ -77,7 +77,7 @@ class IslandCountFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const Span<int2> edges = mesh.edges();


@@ -44,7 +44,7 @@ class VertexCountFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_vertex_count_gvarray(mesh, domain);
}
@@ -88,7 +88,7 @@ class VertexFaceCountFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_face_count_gvarray(mesh, domain);
}


@@ -100,7 +100,7 @@ class AttributeExistsFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const bool exists = context.attributes()->contains(name_);
const int domain_size = context.attributes()->domain_size(context.domain());


@@ -50,10 +50,10 @@ static void shortest_paths(const Mesh &mesh,
std::priority_queue<VertPriority, std::vector<VertPriority>, std::greater<VertPriority>> queue;
for (const int start_vert_i : end_selection) {
end_selection.foreach_index([&](const int start_vert_i) {
r_cost[start_vert_i] = 0.0f;
queue.emplace(0.0f, start_vert_i);
}
});
while (!queue.empty()) {
const float cost_i = queue.top().first;
@@ -97,7 +97,7 @@ class ShortestEdgePathsNextVertFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const bke::MeshFieldContext edge_context{mesh, ATTR_DOMAIN_EDGE};
fn::FieldEvaluator edge_evaluator{edge_context, mesh.totedge};
@@ -173,7 +173,7 @@ class ShortestEdgePathsCostFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const bke::MeshFieldContext edge_context{mesh, ATTR_DOMAIN_EDGE};
fn::FieldEvaluator edge_evaluator{edge_context, mesh.totedge};


@@ -42,7 +42,7 @@ class SplineCountFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_curve_point_count_gvarray(curves, domain);
}


@@ -20,7 +20,7 @@ class ResolutionFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return curves.adapt_domain(curves.resolution(), ATTR_DOMAIN_CURVE, domain);
}


@@ -97,7 +97,7 @@ class TangentFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_curve_tangent_gvarray(curves, domain);
}


@@ -22,7 +22,7 @@ static void node_declare(NodeDeclarationBuilder &b)
static VArray<bool> select_mesh_faces_by_material(const Mesh &mesh,
const Material *material,
const IndexMask face_mask)
const IndexMask &face_mask)
{
Vector<int> slots;
for (const int slot_i : IndexRange(mesh.totcol)) {
@@ -68,7 +68,7 @@ class MaterialSelectionFieldInput final : public bke::GeometryFieldInput {
}
GVArray get_varray_for_context(const bke::GeometryFieldContext &context,
const IndexMask mask) const final
const IndexMask &mask) const final
{
if (context.type() != GEO_COMPONENT_TYPE_MESH) {
return {};


@@ -36,7 +36,7 @@ class BoundaryFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
const bke::MeshFieldContext face_context{mesh, ATTR_DOMAIN_FACE};
FieldEvaluator face_evaluator{face_context, mesh.totpoly};


@@ -43,7 +43,7 @@ class CornersOfFaceInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask mask) const final
const IndexMask &mask) const final
{
const OffsetIndices polys = mesh.polys();
@@ -63,12 +63,12 @@ class CornersOfFaceInput final : public bke::MeshFieldInput {
const bool use_sorting = !all_sort_weights.is_single();
Array<int> corner_of_face(mask.min_array_size());
threading::parallel_for(mask.index_range(), 1024, [&](const IndexRange range) {
mask.foreach_segment(GrainSize(1024), [&](const IndexMaskSegment segment) {
/* Reuse arrays to avoid allocation. */
Array<float> sort_weights;
Array<int> sort_indices;
for (const int selection_i : mask.slice(range)) {
for (const int selection_i : segment) {
const int poly_i = face_indices[selection_i];
const int index_in_sort = indices_in_sort[selection_i];
if (!polys.index_range().contains(poly_i)) {
@@ -141,7 +141,7 @@ class CornersOfFaceCountInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_FACE) {
return {};


@@ -27,13 +27,6 @@ static void node_declare(NodeDeclarationBuilder &b)
"The number of faces or corners connected to each vertex");
}
static void convert_span(const Span<int> src, MutableSpan<int64_t> dst)
{
for (const int i : src.index_range()) {
dst[i] = src[i];
}
}
class CornersOfVertInput final : public bke::MeshFieldInput {
const Field<int> vert_index_;
const Field<int> sort_index_;
@@ -51,7 +44,7 @@ class CornersOfVertInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask mask) const final
const IndexMask &mask) const final
{
const IndexRange vert_range(mesh.totvert);
Array<int> map_offsets;
@@ -75,13 +68,12 @@ class CornersOfVertInput final : public bke::MeshFieldInput {
const bool use_sorting = !all_sort_weights.is_single();
Array<int> corner_of_vertex(mask.min_array_size());
threading::parallel_for(mask.index_range(), 1024, [&](const IndexRange range) {
mask.foreach_segment(GrainSize(1024), [&](const IndexMaskSegment segment) {
/* Reuse arrays to avoid allocation. */
Array<int64_t> corner_indices;
Array<float> sort_weights;
Array<int> sort_indices;
for (const int selection_i : mask.slice(range)) {
for (const int selection_i : segment) {
const int vert_i = vert_indices[selection_i];
const int index_in_sort = indices_in_sort[selection_i];
if (!vert_range.contains(vert_i)) {
@@ -97,13 +89,10 @@ class CornersOfVertInput final : public bke::MeshFieldInput {
const int index_in_sort_wrapped = mod_i(index_in_sort, corners.size());
if (use_sorting) {
/* Retrieve the connected edge indices as 64 bit integers for #materialize_compressed. */
corner_indices.reinitialize(corners.size());
convert_span(corners, corner_indices);
/* Retrieve a compressed array of weights for each edge. */
sort_weights.reinitialize(corners.size());
all_sort_weights.materialize_compressed(IndexMask(corner_indices),
IndexMaskMemory memory;
all_sort_weights.materialize_compressed(IndexMask::from_indices<int>(corners, memory),
sort_weights.as_mutable_span());
/* Sort a separate array of compressed indices corresponding to the compressed weights.
@@ -115,7 +104,7 @@ class CornersOfVertInput final : public bke::MeshFieldInput {
std::stable_sort(sort_indices.begin(), sort_indices.end(), [&](int a, int b) {
return sort_weights[a] < sort_weights[b];
});
corner_of_vertex[selection_i] = corner_indices[sort_indices[index_in_sort_wrapped]];
corner_of_vertex[selection_i] = corners[sort_indices[index_in_sort_wrapped]];
}
else {
corner_of_vertex[selection_i] = corners[index_in_sort_wrapped];
@@ -162,7 +151,7 @@ class CornersOfVertCountInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};


@@ -33,7 +33,7 @@ class CornerNextEdgeFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_CORNER) {
return {};
@@ -69,7 +69,7 @@ class CornerPreviousEdgeFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_CORNER) {
return {};


@@ -27,13 +27,6 @@ static void node_declare(NodeDeclarationBuilder &b)
"The number of edges connected to each vertex");
}
static void convert_span(const Span<int> src, MutableSpan<int64_t> dst)
{
for (const int i : src.index_range()) {
dst[i] = src[i];
}
}
class EdgesOfVertInput final : public bke::MeshFieldInput {
const Field<int> vert_index_;
const Field<int> sort_index_;
@@ -51,7 +44,7 @@ class EdgesOfVertInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask mask) const final
const IndexMask &mask) const final
{
const IndexRange vert_range(mesh.totvert);
const Span<int2> edges = mesh.edges();
@@ -76,13 +69,12 @@ class EdgesOfVertInput final : public bke::MeshFieldInput {
const bool use_sorting = !all_sort_weights.is_single();
Array<int> edge_of_vertex(mask.min_array_size());
threading::parallel_for(mask.index_range(), 1024, [&](const IndexRange range) {
mask.foreach_segment(GrainSize(1024), [&](const IndexMaskSegment segment) {
/* Reuse arrays to avoid allocation. */
Array<int64_t> edge_indices;
Array<float> sort_weights;
Array<int> sort_indices;
for (const int selection_i : mask.slice(range)) {
for (const int selection_i : segment) {
const int vert_i = vert_indices[selection_i];
const int index_in_sort = indices_in_sort[selection_i];
if (!vert_range.contains(vert_i)) {
@@ -98,13 +90,10 @@ class EdgesOfVertInput final : public bke::MeshFieldInput {
const int index_in_sort_wrapped = mod_i(index_in_sort, edges.size());
if (use_sorting) {
/* Retrieve the connected edge indices as 64 bit integers for #materialize_compressed. */
edge_indices.reinitialize(edges.size());
convert_span(edges, edge_indices);
/* Retrieve a compressed array of weights for each edge. */
sort_weights.reinitialize(edges.size());
all_sort_weights.materialize_compressed(IndexMask(edge_indices),
IndexMaskMemory memory;
all_sort_weights.materialize_compressed(IndexMask::from_indices<int>(edges, memory),
sort_weights.as_mutable_span());
/* Sort a separate array of compressed indices corresponding to the compressed weights.
@@ -117,7 +106,7 @@ class EdgesOfVertInput final : public bke::MeshFieldInput {
return sort_weights[a] < sort_weights[b];
});
edge_of_vertex[selection_i] = edge_indices[sort_indices[index_in_sort_wrapped]];
edge_of_vertex[selection_i] = edges[sort_indices[index_in_sort_wrapped]];
}
else {
edge_of_vertex[selection_i] = edges[index_in_sort_wrapped];
@@ -164,7 +153,7 @@ class EdgesOfVertCountInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_POINT) {
return {};


@@ -29,7 +29,7 @@ class CornerFaceIndexInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_CORNER) {
return {};
@@ -57,7 +57,7 @@ class CornerIndexInFaceInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_CORNER) {
return {};


@@ -37,7 +37,7 @@ class OffsetCornerInFaceFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask mask) const final
const IndexMask &mask) const final
{
const IndexRange corner_range(mesh.totloop);
const OffsetIndices polys = mesh.polys();


@@ -27,7 +27,7 @@ class CornerVertFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
if (domain != ATTR_DOMAIN_CORNER) {
return {};


@@ -59,7 +59,7 @@ class ControlPointNeighborFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
const IndexMask mask) const final
const IndexMask &mask) const final
{
const OffsetIndices points_by_curve = curves.points_by_curve();
const VArray<bool> cyclic = curves.cyclic();
@@ -74,7 +74,7 @@ class ControlPointNeighborFieldInput final : public bke::CurvesFieldInput {
const VArray<int> offsets = evaluator.get_evaluated<int>(1);
Array<int> output(mask.min_array_size());
for (const int i_selection : mask) {
mask.foreach_index([&](const int i_selection) {
const int i_point = std::clamp(indices[i_selection], 0, curves.points_num() - 1);
const int i_curve = parent_curves[i_point];
const IndexRange curve_points = points_by_curve[i_curve];
@@ -83,10 +83,10 @@ class ControlPointNeighborFieldInput final : public bke::CurvesFieldInput {
if (cyclic[i_curve]) {
output[i_selection] = apply_offset_in_cyclic_range(
curve_points, i_point, offsets[i_selection]);
continue;
return;
}
output[i_selection] = std::clamp(offset_point, 0, curves.points_num() - 1);
}
});
return VArray<int>::ForContainer(std::move(output));
}
@@ -114,7 +114,7 @@ class OffsetValidFieldInput final : public bke::CurvesFieldInput {
GVArray get_varray_for_context(const bke::CurvesGeometry &curves,
const eAttrDomain domain,
const IndexMask mask) const final
const IndexMask &mask) const final
{
const VArray<bool> cyclic = curves.cyclic();
const OffsetIndices points_by_curve = curves.points_by_curve();
@@ -129,21 +129,21 @@ class OffsetValidFieldInput final : public bke::CurvesFieldInput {
const VArray<int> offsets = evaluator.get_evaluated<int>(1);
Array<bool> output(mask.min_array_size());
for (const int i_selection : mask) {
mask.foreach_index([&](const int i_selection) {
const int i_point = indices[i_selection];
if (!curves.points_range().contains(i_point)) {
output[i_selection] = false;
continue;
return;
}
const int i_curve = parent_curves[i_point];
const IndexRange curve_points = points_by_curve[i_curve];
if (cyclic[i_curve]) {
output[i_selection] = true;
continue;
return;
}
output[i_selection] = curve_points.contains(i_point + offsets[i_selection]);
};
});
return VArray<bool>::ForContainer(std::move(output));
}


@@ -41,7 +41,7 @@ class PointsFieldContext : public FieldContext {
}
GVArray get_varray_for_input(const FieldInput &field_input,
const IndexMask mask,
const IndexMask &mask,
ResourceScope & /*scope*/) const
{
const bke::IDAttributeFieldInput *id_field_input =


@@ -40,7 +40,7 @@ static void geo_proximity_init(bNodeTree * /*tree*/, bNode *node)
}
static bool calculate_mesh_proximity(const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const Mesh &mesh,
const GeometryNodeProximityTargetType type,
const MutableSpan<float> r_distances,
@@ -90,7 +90,7 @@ static bool calculate_mesh_proximity(const VArray<float3> &positions,
}
static bool calculate_pointcloud_proximity(const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const PointCloud &pointcloud,
MutableSpan<float> r_distances,
MutableSpan<float3> r_locations)
@@ -149,7 +149,7 @@ class ProximityFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &src_positions = params.readonly_single_input<float3>(0,
"Source Position");
@@ -161,7 +161,7 @@ class ProximityFunction : public mf::MultiFunction {
* comparison per vertex, so it's likely not worth it. */
MutableSpan<float> distances = params.uninitialized_single_output<float>(2, "Distance");
distances.fill_indices(mask.indices(), FLT_MAX);
index_mask::masked_fill(distances, FLT_MAX, mask);
bool success = false;
if (target_.has_mesh()) {
@@ -176,10 +176,10 @@ class ProximityFunction : public mf::MultiFunction {
if (!success) {
if (!positions.is_empty()) {
positions.fill_indices(mask.indices(), float3(0));
index_mask::masked_fill(positions, float3(0), mask);
}
if (!distances.is_empty()) {
distances.fill_indices(mask.indices(), 0.0f);
index_mask::masked_fill(distances, 0.0f, mask);
}
return;
}


@@ -1,7 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_index_mask_ops.hh"
#include "DNA_mesh_types.h"
#include "BKE_attribute_math.hh"
@@ -116,7 +114,7 @@ static void node_gather_link_searches(GatherLinkSearchOpParams &params)
}
}
static void raycast_to_mesh(IndexMask mask,
static void raycast_to_mesh(const IndexMask &mask,
const Mesh &mesh,
const VArray<float3> &ray_origins,
const VArray<float3> &ray_directions,
@@ -137,7 +135,7 @@ static void raycast_to_mesh(IndexMask mask,
/* We shouldn't be rebuilding the BVH tree when calling this function in parallel. */
BLI_assert(tree_data.cached);
for (const int i : mask) {
mask.foreach_index([&](const int i) {
const float ray_length = ray_lengths[i];
const float3 ray_origin = ray_origins[i];
const float3 ray_direction = ray_directions[i];
@@ -187,7 +185,7 @@ static void raycast_to_mesh(IndexMask mask,
r_hit_distances[i] = ray_length;
}
}
}
});
}
class RaycastFunction : public mf::MultiFunction {
@@ -214,7 +212,7 @@ class RaycastFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
BLI_assert(target_.has_mesh());
const Mesh &mesh = *target_.get_mesh_for_read();


@@ -16,20 +16,18 @@ namespace blender::nodes {
template<typename T>
void copy_with_checked_indices(const VArray<T> &src,
const VArray<int> &indices,
const IndexMask mask,
const IndexMask &mask,
MutableSpan<T> dst)
{
const IndexRange src_range = src.index_range();
devirtualize_varray2(src, indices, [&](const auto src, const auto indices) {
threading::parallel_for(mask.index_range(), 4096, [&](IndexRange range) {
for (const int i : mask.slice(range)) {
const int index = indices[i];
if (src_range.contains(index)) {
dst[i] = src[index];
}
else {
dst[i] = {};
}
mask.foreach_index(GrainSize(4096), [&](const int i) {
const int index = indices[i];
if (src_range.contains(index)) {
dst[i] = src[index];
}
else {
dst[i] = {};
}
});
});
@@ -37,7 +35,7 @@ void copy_with_checked_indices(const VArray<T> &src,
void copy_with_checked_indices(const GVArray &src,
const VArray<int> &indices,
const IndexMask mask,
const IndexMask &mask,
GMutableSpan dst)
{
bke::attribute_math::convert_to_static_type(src.type(), [&](auto dummy) {
@@ -171,16 +169,14 @@ static const GeometryComponent *find_source_component(const GeometrySet &geometr
template<typename T>
void copy_with_clamped_indices(const VArray<T> &src,
const VArray<int> &indices,
const IndexMask mask,
const IndexMask &mask,
MutableSpan<T> dst)
{
const int last_index = src.index_range().last();
devirtualize_varray2(src, indices, [&](const auto src, const auto indices) {
threading::parallel_for(mask.index_range(), 4096, [&](IndexRange range) {
for (const int i : mask.slice(range)) {
const int index = indices[i];
dst[i] = src[std::clamp(index, 0, last_index)];
}
mask.foreach_index(GrainSize(4096), [&](const int i) {
const int index = indices[i];
dst[i] = src[std::clamp(index, 0, last_index)];
});
});
}
@@ -236,7 +232,7 @@ class SampleIndexFunction : public mf::MultiFunction {
src_data_ = &evaluator_->get_evaluated(0);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<int> &indices = params.readonly_single_input<int>(0, "Index");
GMutableSpan dst = params.uninitialized_single_output(1, "Value");


@@ -17,7 +17,7 @@ namespace blender::nodes {
void get_closest_in_bvhtree(BVHTreeFromMesh &tree_data,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -26,7 +26,7 @@ void get_closest_in_bvhtree(BVHTreeFromMesh &tree_data,
BLI_assert(positions.size() >= r_distances_sq.size());
BLI_assert(positions.size() >= r_positions.size());
for (const int i : mask) {
mask.foreach_index([&](const int i) {
BVHTreeNearest nearest;
nearest.dist_sq = FLT_MAX;
const float3 position = positions[i];
@@ -41,7 +41,7 @@ void get_closest_in_bvhtree(BVHTreeFromMesh &tree_data,
if (!r_positions.is_empty()) {
r_positions[i] = nearest.co;
}
}
});
}
} // namespace blender::nodes
@@ -69,7 +69,7 @@ static void node_init(bNodeTree * /*tree*/, bNode *node)
static void get_closest_pointcloud_points(const PointCloud &pointcloud,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_indices,
const MutableSpan<float> r_distances_sq)
{
@@ -79,7 +79,7 @@ static void get_closest_pointcloud_points(const PointCloud &pointcloud,
BVHTreeFromPointCloud tree_data;
BKE_bvhtree_from_pointcloud_get(&tree_data, &pointcloud, 2);
for (const int i : mask) {
mask.foreach_index([&](const int i) {
BVHTreeNearest nearest;
nearest.dist_sq = FLT_MAX;
const float3 position = positions[i];
@@ -89,14 +89,14 @@ static void get_closest_pointcloud_points(const PointCloud &pointcloud,
if (!r_distances_sq.is_empty()) {
r_distances_sq[i] = nearest.dist_sq;
}
}
});
free_bvhtree_from_pointcloud(&tree_data);
}
static void get_closest_mesh_points(const Mesh &mesh,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_point_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -110,7 +110,7 @@ static void get_closest_mesh_points(const Mesh &mesh,
static void get_closest_mesh_edges(const Mesh &mesh,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_edge_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -124,7 +124,7 @@ static void get_closest_mesh_edges(const Mesh &mesh,
static void get_closest_mesh_looptris(const Mesh &mesh,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_looptri_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -139,7 +139,7 @@ static void get_closest_mesh_looptris(const Mesh &mesh,
static void get_closest_mesh_polys(const Mesh &mesh,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_poly_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -151,15 +151,13 @@ static void get_closest_mesh_polys(const Mesh &mesh,
const Span<int> looptri_polys = mesh.looptri_polys();
for (const int i : mask) {
r_poly_indices[i] = looptri_polys[looptri_indices[i]];
}
mask.foreach_index([&](const int i) { r_poly_indices[i] = looptri_polys[looptri_indices[i]]; });
}
/* The closest corner is defined to be the closest corner on the closest face. */
static void get_closest_mesh_corners(const Mesh &mesh,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_corner_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -172,7 +170,7 @@ static void get_closest_mesh_corners(const Mesh &mesh,
Array<int> poly_indices(positions.size());
get_closest_mesh_polys(mesh, positions, mask, poly_indices, {}, {});
for (const int i : mask) {
mask.foreach_index([&](const int i) {
const float3 position = positions[i];
const int poly_index = poly_indices[i];
@@ -198,7 +196,7 @@ static void get_closest_mesh_corners(const Mesh &mesh,
if (!r_distances_sq.is_empty()) {
r_distances_sq[i] = min_distance_sq;
}
}
});
}
static bool component_is_available(const GeometrySet &geometry,
@@ -251,12 +249,12 @@ class SampleNearestFunction : public mf::MultiFunction {
this->set_signature(&signature_);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &positions = params.readonly_single_input<float3>(0, "Position");
MutableSpan<int> indices = params.uninitialized_single_output<int>(1, "Index");
if (!src_component_) {
indices.fill_indices(mask.indices(), 0);
index_mask::masked_fill(indices, 0, mask);
return;
}


@@ -98,7 +98,7 @@ static void node_gather_link_searches(GatherLinkSearchOpParams &params)
static void get_closest_mesh_looptris(const Mesh &mesh,
const VArray<float3> &positions,
const IndexMask mask,
const IndexMask &mask,
const MutableSpan<int> r_looptri_indices,
const MutableSpan<float> r_distances_sq,
const MutableSpan<float3> r_positions)
@@ -129,7 +129,7 @@ class SampleNearestSurfaceFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &positions = params.readonly_single_input<float3>(0, "Position");
MutableSpan<int> triangle_index = params.uninitialized_single_output<int>(1, "Triangle Index");


@@ -135,7 +135,7 @@ class ReverseUVSampleFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArraySpan<float2> sample_uvs = params.readonly_single_input<float2>(0, "Sample UV");
MutableSpan<bool> is_valid = params.uninitialized_single_output_if_required<bool>(1,
@@ -145,7 +145,7 @@ class ReverseUVSampleFunction : public mf::MultiFunction {
MutableSpan<float3> bary_weights = params.uninitialized_single_output_if_required<float3>(
3, "Barycentric Weights");
for (const int i : mask) {
mask.foreach_index([&](const int i) {
const ReverseUVSampler::Result result = reverse_uv_sampler_->sample(sample_uvs[i]);
if (!is_valid.is_empty()) {
is_valid[i] = result.type == ReverseUVSampler::ResultType::Ok;
@@ -156,7 +156,7 @@ class ReverseUVSampleFunction : public mf::MultiFunction {
if (!bary_weights.is_empty()) {
bary_weights[i] = result.bary_weights;
}
}
});
}
private:


@@ -161,7 +161,7 @@ static const blender::CPPType *vdb_grid_type_to_cpp_type(const VolumeGridType gr
template<typename GridT>
void sample_grid(openvdb::GridBase::ConstPtr base_grid,
const Span<float3> positions,
const IndexMask mask,
const IndexMask &mask,
GMutableSpan dst,
const GeometryNodeSampleVolumeInterpolationMode interpolation_mode)
{
@@ -229,7 +229,7 @@ class SampleVolumeFunction : public mf::MultiFunction {
this->set_signature(&signature_);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArraySpan<float3> positions = params.readonly_single_input<float3>(0, "Position");
GMutableSpan dst = params.uninitialized_single_output(1, "Value");


@@ -238,14 +238,15 @@ static void scale_vertex_islands_on_axis(Mesh &mesh,
BKE_mesh_tag_positions_changed(&mesh);
}
static Vector<ElementIsland> prepare_face_islands(const Mesh &mesh, const IndexMask face_selection)
static Vector<ElementIsland> prepare_face_islands(const Mesh &mesh,
const IndexMask &face_selection)
{
const OffsetIndices polys = mesh.polys();
const Span<int> corner_verts = mesh.corner_verts();
/* Use the disjoint set data structure to determine which vertices have to be scaled together. */
DisjointSet<int> disjoint_set(mesh.totvert);
for (const int poly_index : face_selection) {
face_selection.foreach_index([&](const int poly_index) {
const Span<int> poly_verts = corner_verts.slice(polys[poly_index]);
for (const int loop_index : poly_verts.index_range().drop_back(1)) {
const int v1 = poly_verts[loop_index];
@@ -253,7 +254,7 @@ static Vector<ElementIsland> prepare_face_islands(const Mesh &mesh, const IndexM
disjoint_set.join(v1, v2);
}
disjoint_set.join(poly_verts.first(), poly_verts.last());
}
});
VectorSet<int> island_ids;
Vector<ElementIsland> islands;
@@ -261,7 +262,7 @@ static Vector<ElementIsland> prepare_face_islands(const Mesh &mesh, const IndexM
islands.reserve(face_selection.size());
/* Gather all of the face indices in each island into separate vectors. */
for (const int poly_index : face_selection) {
face_selection.foreach_index([&](const int poly_index) {
const Span<int> poly_verts = corner_verts.slice(polys[poly_index]);
const int island_id = disjoint_set.find_root(poly_verts[0]);
const int island_index = island_ids.index_of_or_add(island_id);
@@ -270,7 +271,7 @@ static Vector<ElementIsland> prepare_face_islands(const Mesh &mesh, const IndexM
}
ElementIsland &island = islands[island_index];
island.element_indices.append(poly_index);
}
});
return islands;
}
@@ -329,16 +330,17 @@ static void scale_faces_uniformly(Mesh &mesh, const UniformScaleFields &fields)
scale_vertex_islands_uniformly(mesh, island, params, get_face_verts);
}
static Vector<ElementIsland> prepare_edge_islands(const Mesh &mesh, const IndexMask edge_selection)
static Vector<ElementIsland> prepare_edge_islands(const Mesh &mesh,
const IndexMask &edge_selection)
{
const Span<int2> edges = mesh.edges();
/* Use the disjoint set data structure to determine which vertices have to be scaled together. */
DisjointSet<int> disjoint_set(mesh.totvert);
for (const int edge_index : edge_selection) {
edge_selection.foreach_index([&](const int edge_index) {
const int2 &edge = edges[edge_index];
disjoint_set.join(edge[0], edge[1]);
}
});
VectorSet<int> island_ids;
Vector<ElementIsland> islands;
@@ -346,7 +348,7 @@ static Vector<ElementIsland> prepare_edge_islands(const Mesh &mesh, const IndexM
islands.reserve(edge_selection.size());
/* Gather all of the edge indices in each island into separate vectors. */
for (const int edge_index : edge_selection) {
edge_selection.foreach_index([&](const int edge_index) {
const int2 &edge = edges[edge_index];
const int island_id = disjoint_set.find_root(edge[0]);
const int island_index = island_ids.index_of_or_add(island_id);
@@ -355,7 +357,7 @@ static Vector<ElementIsland> prepare_edge_islands(const Mesh &mesh, const IndexM
}
ElementIsland &island = islands[island_index];
island.element_indices.append(edge_index);
}
});
return islands;
}
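
Taken together, the hunks above show the most common migration in this commit: range-based `for` loops over an `IndexMask` become `foreach_index` callbacks, and masks are passed as `const IndexMask &` instead of by value, since the new mask is a nested segment structure rather than a flat span of `int64_t`. The following sketch models only the call-site pattern; `MiniIndexMask` and its segment storage are stand-ins invented here, not Blender's actual `blender::index_mask::IndexMask`.

```cpp
#include <cassert>
#include <utility>
#include <vector>

/* Hypothetical stand-in: indices live in small segments instead of one
 * flat array, so iteration goes through a callback rather than plain
 * begin()/end() iterators. */
class MiniIndexMask {
 public:
  explicit MiniIndexMask(std::vector<std::vector<int>> segments)
      : segments_(std::move(segments))
  {
  }

  /* Shaped like IndexMask::foreach_index(): the old loop body becomes
   * the lambda body, as in prepare_face_islands() above. */
  template<typename Fn> void foreach_index(Fn &&fn) const
  {
    for (const std::vector<int> &segment : segments_) {
      for (const int i : segment) {
        fn(i);
      }
    }
  }

 private:
  std::vector<std::vector<int>> segments_;
};

/* Example call site in the migrated style. */
inline int sum_selected(const MiniIndexMask &mask, const std::vector<int> &values)
{
  int sum = 0;
  mask.foreach_index([&](const int i) { sum += values[i]; });
  return sum;
}
```

Because the lambda captures by reference, the migrated code reads almost identically to the old loop; only `break`/`continue` style control flow needs restructuring.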

@@ -109,14 +109,11 @@ static void set_position_in_component(bke::CurvesGeometry &curves,
curves.handle_positions_right_for_write() :
curves.handle_positions_left_for_write();
threading::parallel_for(selection.index_range(), 2048, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
selection.foreach_segment(GrainSize(2048), [&](const IndexMaskSegment segment) {
for (const int i : segment) {
update_handle_types_for_movement(handle_types[i], handle_types_other[i]);
}
});
threading::parallel_for(selection.index_range(), 2048, [&](IndexRange range) {
for (const int i : selection.slice(range)) {
for (const int i : segment) {
bke::curves::bezier::set_handle_position(positions[i],
HandleType(handle_types[i]),
HandleType(handle_types_other[i]),

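The hunk above folds a `threading::parallel_for` plus `selection.slice(range)` pair into `selection.foreach_segment(GrainSize(2048), ...)`; per the commit message, the `foreach_*` methods multi-thread out of the box. The sketch below is a sequential model of that call shape only; `GrainSize`, `IndexMaskSegment`, and `foreach_segment` here are simplified stand-ins, and the real method decides internally how to split segments across threads.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

/* Stand-in for blender::GrainSize: a hint for how many indices one
 * thread should process before the work is worth splitting. */
struct GrainSize {
  std::size_t value;
};

/* Stand-in segment type: a contiguous batch of indices. */
using IndexMaskSegment = std::vector<int>;

/* Sequential model of IndexMask::foreach_segment(). A real
 * implementation would dispatch segment batches larger than the grain
 * size to a thread pool instead of running this plain loop. */
template<typename Fn>
void foreach_segment(const std::vector<IndexMaskSegment> &segments,
                     const GrainSize /*grain_size*/,
                     Fn &&fn)
{
  for (const IndexMaskSegment &segment : segments) {
    fn(segment);
  }
}
```

This is why the call site above no longer needs an explicit `parallel_for`: the grain size travels into the mask method instead of into the threading call.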
@@ -35,7 +35,7 @@ static void set_normal_mode(bke::CurvesGeometry &curves,
evaluator.set_selection(selection_field);
evaluator.evaluate();
const IndexMask selection = evaluator.get_evaluated_selection_as_mask();
curves.normal_mode_for_write().fill_indices(selection.indices(), mode);
index_mask::masked_fill<int8_t>(curves.normal_mode_for_write(), mode, selection);
curves.tag_normals_changed();
}
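
`fill_indices(selection.indices(), mode)` relied on the old mask exposing a flat span of indices; the new mask has no such span, so the hunks above and below switch to `index_mask::masked_fill`. A sketch of the intended semantics, with a plain index vector standing in for the real `IndexMask`:

```cpp
#include <cassert>
#include <vector>

/* Sketch of index_mask::masked_fill(span, value, mask): write `value`
 * at every masked index, leaving all other elements untouched. The
 * std::vector<int> mask is a stand-in for the real IndexMask. */
template<typename T>
void masked_fill(std::vector<T> &span, const T &value, const std::vector<int> &mask)
{
  for (const int i : mask) {
    span[i] = value;
  }
}
```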

@@ -28,7 +28,7 @@ static void node_declare(NodeDeclarationBuilder &b)
b.add_output<decl::Geometry>("Geometry").propagate_all();
}
static void assign_material_to_faces(Mesh &mesh, const IndexMask selection, Material *material)
static void assign_material_to_faces(Mesh &mesh, const IndexMask &selection, Material *material)
{
if (selection.size() != mesh.totpoly) {
/* If the entire mesh isn't selected, and there is no material slot yet, add an empty
@@ -53,7 +53,7 @@ static void assign_material_to_faces(Mesh &mesh, const IndexMask selection, Mate
MutableAttributeAccessor attributes = mesh.attributes_for_write();
SpanAttributeWriter<int> material_indices = attributes.lookup_or_add_for_write_span<int>(
"material_index", ATTR_DOMAIN_FACE);
material_indices.span.fill_indices(selection.indices(), new_material_index);
index_mask::masked_fill(material_indices.span, new_material_index, selection);
material_indices.finish();
}

@@ -26,7 +26,7 @@ static void node_declare(NodeDeclarationBuilder &b)
static void set_computed_position_and_offset(GeometryComponent &component,
const VArray<float3> &in_positions,
const VArray<float3> &in_offsets,
const IndexMask selection)
const IndexMask &selection)
{
MutableAttributeAccessor attributes = *component.attributes_for_write();
@@ -45,7 +45,7 @@ static void set_computed_position_and_offset(GeometryComponent &component,
}
}
}
const int grain_size = 10000;
const GrainSize grain_size{10000};
switch (component.type()) {
case GEO_COMPONENT_TYPE_CURVE: {
@@ -62,16 +62,13 @@ static void set_computed_position_and_offset(GeometryComponent &component,
MutableVArraySpan<float3> out_positions_span = positions.varray;
devirtualize_varray2(
in_positions, in_offsets, [&](const auto in_positions, const auto in_offsets) {
threading::parallel_for(
selection.index_range(), grain_size, [&](const IndexRange range) {
for (const int i : selection.slice(range)) {
const float3 new_position = in_positions[i] + in_offsets[i];
const float3 delta = new_position - out_positions_span[i];
handle_right_attribute.span[i] += delta;
handle_left_attribute.span[i] += delta;
out_positions_span[i] = new_position;
}
});
selection.foreach_index_optimized<int>(grain_size, [&](const int i) {
const float3 new_position = in_positions[i] + in_offsets[i];
const float3 delta = new_position - out_positions_span[i];
handle_right_attribute.span[i] += delta;
handle_left_attribute.span[i] += delta;
out_positions_span[i] = new_position;
});
});
out_positions_span.save();
@@ -90,23 +87,16 @@ static void set_computed_position_and_offset(GeometryComponent &component,
MutableVArraySpan<float3> out_positions_span = positions.varray;
if (positions_are_original) {
devirtualize_varray(in_offsets, [&](const auto in_offsets) {
threading::parallel_for(
selection.index_range(), grain_size, [&](const IndexRange range) {
for (const int i : selection.slice(range)) {
out_positions_span[i] += in_offsets[i];
}
});
selection.foreach_index_optimized<int>(
grain_size, [&](const int i) { out_positions_span[i] += in_offsets[i]; });
});
}
else {
devirtualize_varray2(
in_positions, in_offsets, [&](const auto in_positions, const auto in_offsets) {
threading::parallel_for(
selection.index_range(), grain_size, [&](const IndexRange range) {
for (const int i : selection.slice(range)) {
out_positions_span[i] = in_positions[i] + in_offsets[i];
}
});
selection.foreach_index_optimized<int>(grain_size, [&](const int i) {
out_positions_span[i] = in_positions[i] + in_offsets[i];
});
});
}
out_positions_span.save();
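
The repeated pattern above, `selection.foreach_index_optimized<int>(grain_size, ...)`, connects to the commit goal of avoiding all memory access when iterating over contiguous ranges: the mask can hand the callback a plain counted loop for range segments and an index-lookup loop otherwise. The sketch below shows that two-path dispatch; `RangeOrIndices` is invented here for illustration, and threading and the `GrainSize` argument are omitted.

```cpp
#include <cassert>
#include <vector>

/* Hypothetical segment: either a contiguous range (indices empty) or
 * an explicit list of indices. */
struct RangeOrIndices {
  int range_start = 0;
  int range_size = 0;
  std::vector<int> indices;
};

/* Sketch of the idea behind foreach_index_optimized(): on range
 * segments the loop has no index loads at all, so the compiler can
 * unroll or vectorize it; only irregular segments pay for reading the
 * index array. */
template<typename Fn>
void foreach_index_optimized(const std::vector<RangeOrIndices> &segments, Fn &&fn)
{
  for (const RangeOrIndices &segment : segments) {
    if (segment.indices.empty()) {
      for (int i = segment.range_start; i < segment.range_start + segment.range_size; i++) {
        fn(i); /* Plain counted loop, no memory access for indices. */
      }
    }
    else {
      for (const int i : segment.indices) {
        fn(i); /* General path: indices are read from memory. */
      }
    }
  }
}
```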

@@ -38,7 +38,7 @@ static void geo_triangulate_init(bNodeTree * /*tree*/, bNode *node)
static Mesh *triangulate_mesh_selection(const Mesh &mesh,
const int quad_method,
const int ngon_method,
const IndexMask selection,
const IndexMask &selection,
const int min_vertices)
{
CustomData_MeshMasks cd_mask_extra = {
@@ -52,9 +52,9 @@ static Mesh *triangulate_mesh_selection(const Mesh &mesh,
/* Tag faces to be triangulated from the selection mask. */
BM_mesh_elem_table_ensure(bm, BM_FACE);
for (int i_face : selection) {
selection.foreach_index([&](const int i_face) {
BM_elem_flag_set(BM_face_at_index(bm, i_face), BM_ELEM_TAG, true);
}
});
BM_mesh_triangulate(bm, quad_method, ngon_method, min_vertices, true, nullptr, nullptr, nullptr);
Mesh *result = BKE_mesh_from_bmesh_for_eval_nomain(bm, &cd_mask_extra, &mesh);

@@ -52,7 +52,7 @@ static VArray<float3> construct_uv_gvarray(const Mesh &mesh,
evaluator.evaluate();
geometry::ParamHandle *handle = geometry::uv_parametrizer_construct_begin();
for (const int poly_index : selection) {
selection.foreach_index([&](const int poly_index) {
const IndexRange poly = polys[poly_index];
Array<geometry::ParamKey, 16> mp_vkeys(poly.size());
Array<bool, 16> mp_pin(poly.size());
@@ -76,7 +76,7 @@ static VArray<float3> construct_uv_gvarray(const Mesh &mesh,
mp_uv.data(),
mp_pin.data(),
mp_select.data());
}
});
geometry::uv_parametrizer_construct_end(handle, true, true, nullptr);
geometry::uv_parametrizer_pack(handle, margin, rotate, true);
@@ -110,7 +110,7 @@ class PackIslandsFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_uv_gvarray(mesh, selection_field_, uv_field_, rotate_, margin_, domain);
}

@@ -81,7 +81,7 @@ static VArray<float3> construct_uv_gvarray(const Mesh &mesh,
Array<float3> uv(corner_verts.size(), float3(0));
geometry::ParamHandle *handle = geometry::uv_parametrizer_construct_begin();
for (const int poly_index : selection) {
selection.foreach_index([&](const int poly_index) {
const IndexRange poly = polys[poly_index];
Array<geometry::ParamKey, 16> mp_vkeys(poly.size());
Array<bool, 16> mp_pin(poly.size());
@@ -105,11 +105,13 @@ static VArray<float3> construct_uv_gvarray(const Mesh &mesh,
mp_uv.data(),
mp_pin.data(),
mp_select.data());
}
for (const int i : seam) {
});
seam.foreach_index([&](const int i) {
geometry::ParamKey vkeys[2]{uint(edges[i][0]), uint(edges[i][1])};
geometry::uv_parametrizer_edge_set_seam(handle, vkeys);
}
});
/* TODO: once field input nodes are able to emit warnings (#94039), emit a
* warning if we fail to solve an island. */
geometry::uv_parametrizer_construct_end(handle, fill_holes, false, nullptr);
@@ -153,7 +155,7 @@ class UnwrapFieldInput final : public bke::MeshFieldInput {
GVArray get_varray_for_context(const Mesh &mesh,
const eAttrDomain domain,
const IndexMask /*mask*/) const final
const IndexMask & /*mask*/) const final
{
return construct_uv_gvarray(mesh, selection_, seam_, fill_holes_, margin_, method_, domain);
}

@@ -78,7 +78,7 @@ class Grid3DFieldContext : public FieldContext {
}
GVArray get_varray_for_input(const FieldInput &field_input,
const IndexMask /*mask*/,
const IndexMask & /*mask*/,
ResourceScope & /*scope*/) const
{
const bke::AttributeFieldInput *attribute_field_input =

@@ -502,7 +502,8 @@ static void execute_multi_function_on_value_or_field(
}
else {
/* In this case, the multi-function is evaluated directly. */
mf::ParamsBuilder params{fn, 1};
const IndexMask mask(1);
mf::ParamsBuilder params{fn, &mask};
mf::ContextBuilder context;
for (const int i : input_types.index_range()) {
@@ -519,7 +520,7 @@ static void execute_multi_function_on_value_or_field(
type.value.destruct(value);
params.add_uninitialized_single_output(GMutableSpan{type.value, value, 1});
}
fn.call(IndexRange(1), params, context);
fn.call(mask, params, context);
}
}

@@ -105,19 +105,19 @@ class ColorBandFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float> &values = params.readonly_single_input<float>(0, "Value");
MutableSpan<ColorGeometry4f> colors = params.uninitialized_single_output<ColorGeometry4f>(
1, "Color");
MutableSpan<float> alphas = params.uninitialized_single_output<float>(2, "Alpha");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
ColorGeometry4f color;
BKE_colorband_evaluate(&color_band_, values[i], color);
colors[i] = color;
alphas[i] = color.a;
}
});
}
};

@@ -77,18 +77,18 @@ class CurveVecFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float> &fac = params.readonly_single_input<float>(0, "Fac");
const VArray<float3> &vec_in = params.readonly_single_input<float3>(1, "Vector");
MutableSpan<float3> vec_out = params.uninitialized_single_output<float3>(2, "Vector");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
BKE_curvemapping_evaluate3F(&cumap_, vec_out[i], vec_in[i]);
if (fac[i] != 1.0f) {
interp_v3_v3v3(vec_out[i], vec_in[i], vec_out[i], fac[i]);
}
}
});
}
};
@@ -217,7 +217,7 @@ class CurveRGBFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float> &fac = params.readonly_single_input<float>(0, "Fac");
const VArray<ColorGeometry4f> &col_in = params.readonly_single_input<ColorGeometry4f>(1,
@@ -225,12 +225,12 @@ class CurveRGBFunction : public mf::MultiFunction {
MutableSpan<ColorGeometry4f> col_out = params.uninitialized_single_output<ColorGeometry4f>(
2, "Color");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
BKE_curvemapping_evaluateRGBF(&cumap_, col_out[i], col_in[i]);
if (fac[i] != 1.0f) {
interp_v3_v3v3(col_out[i], col_in[i], col_out[i], fac[i]);
}
}
});
}
};
@@ -337,18 +337,18 @@ class CurveFloatFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float> &fac = params.readonly_single_input<float>(0, "Factor");
const VArray<float> &val_in = params.readonly_single_input<float>(1, "Value");
MutableSpan<float> val_out = params.uninitialized_single_output<float>(2, "Value");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
val_out[i] = BKE_curvemapping_evaluateF(&cumap_, 0, val_in[i]);
if (fac[i] != 1.0f) {
val_out[i] = (1.0f - fac[i]) * val_in[i] + fac[i] * val_out[i];
}
}
});
}
};

@@ -145,7 +145,7 @@ class ClampWrapperFunction : public mf::MultiFunction {
this->set_signature(&fn.signature());
}
void call(IndexMask mask, mf::Params params, mf::Context context) const override
void call(const IndexMask &mask, mf::Params params, mf::Context context) const override
{
fn_.call(mask, params, context);
@@ -154,10 +154,10 @@ class ClampWrapperFunction : public mf::MultiFunction {
/* This has actually been initialized in the call above. */
MutableSpan<float> results = params.uninitialized_single_output<float>(output_param_index);
for (const int i : mask) {
mask.foreach_index_optimized<int>([&](const int i) {
float &value = results[i];
CLAMP(value, 0.0f, 1.0f);
}
});
}
};

@@ -394,7 +394,7 @@ class MixColorFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float> &fac = params.readonly_single_input<float>(0, "Factor");
const VArray<ColorGeometry4f> &col1 = params.readonly_single_input<ColorGeometry4f>(1, "A");
@@ -403,22 +403,21 @@ class MixColorFunction : public mf::MultiFunction {
3, "Result");
if (clamp_factor_) {
for (int64_t i : mask) {
mask.foreach_index_optimized<int64_t>([&](const int64_t i) {
results[i] = col1[i];
ramp_blend(blend_type_, results[i], std::clamp(fac[i], 0.0f, 1.0f), col2[i]);
}
});
}
else {
for (int64_t i : mask) {
mask.foreach_index_optimized<int64_t>([&](const int64_t i) {
results[i] = col1[i];
ramp_blend(blend_type_, results[i], fac[i], col2[i]);
}
});
}
if (clamp_result_) {
for (int64_t i : mask) {
clamp_v3(results[i], 0.0f, 1.0f);
}
mask.foreach_index_optimized<int64_t>(
[&](const int64_t i) { clamp_v3(results[i], 0.0f, 1.0f); });
}
}
};

@@ -111,7 +111,7 @@ class MixRGBFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float> &fac = params.readonly_single_input<float>(0, "Fac");
const VArray<ColorGeometry4f> &col1 = params.readonly_single_input<ColorGeometry4f>(1,
@@ -121,15 +121,13 @@ class MixRGBFunction : public mf::MultiFunction {
MutableSpan<ColorGeometry4f> results = params.uninitialized_single_output<ColorGeometry4f>(
3, "Color");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
results[i] = col1[i];
ramp_blend(type_, results[i], clamp_f(fac[i], 0.0f, 1.0f), col2[i]);
}
});
if (clamp_) {
for (int64_t i : mask) {
clamp_v3(results[i], 0.0f, 1.0f);
}
mask.foreach_index([&](const int64_t i) { clamp_v3(results[i], 0.0f, 1.0f); });
}
}
};

@@ -43,7 +43,7 @@ class SeparateRGBFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<ColorGeometry4f> &colors = params.readonly_single_input<ColorGeometry4f>(0,
"Color");
@@ -51,12 +51,12 @@ class SeparateRGBFunction : public mf::MultiFunction {
MutableSpan<float> gs = params.uninitialized_single_output<float>(2, "G");
MutableSpan<float> bs = params.uninitialized_single_output<float>(3, "B");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
ColorGeometry4f color = colors[i];
rs[i] = color.r;
gs[i] = color.g;
bs[i] = color.b;
}
});
}
};

@@ -43,7 +43,7 @@ class MF_SeparateXYZ : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vectors = params.readonly_single_input<float3>(0, "XYZ");
MutableSpan<float> xs = params.uninitialized_single_output_if_required<float>(1, "X");
@@ -63,11 +63,11 @@ class MF_SeparateXYZ : public mf::MultiFunction {
}
devirtualize_varray(vectors, [&](auto vectors) {
mask.to_best_mask_type([&](auto mask) {
mask.foreach_segment_optimized([&](const auto segment) {
const int used_outputs_num = used_outputs.size();
const int *used_outputs_data = used_outputs.data();
for (const int64_t i : mask) {
for (const int64_t i : segment) {
const float3 &vector = vectors[i];
for (const int out_i : IndexRange(used_outputs_num)) {
const int coordinate = used_outputs_data[out_i];

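In the `MF_SeparateXYZ` hunk above, `mask.to_best_mask_type(...)` becomes `mask.foreach_segment_optimized(...)`: the `auto segment` callback is compiled once per segment kind, so math-style nodes get a tight counted loop when a segment is a contiguous range. Below is a variant-based model of that per-segment dispatch; `MiniRange` and `Segment` are invented here for illustration rather than taken from Blender.

```cpp
#include <cassert>
#include <variant>
#include <vector>

/* Hypothetical iterable range segment. */
struct MiniRange {
  int start = 0;
  int size = 0;

  struct Iterator {
    int value;
    int operator*() const { return value; }
    Iterator &operator++() { ++value; return *this; }
    bool operator!=(const Iterator &other) const { return value != other.value; }
  };
  Iterator begin() const { return {start}; }
  Iterator end() const { return {start + size}; }
};

/* A segment is either a contiguous range or an explicit index list. */
using Segment = std::variant<MiniRange, std::vector<int>>;

/* Sketch of foreach_segment_optimized(): the generic callback is
 * instantiated separately for each alternative, so range segments get
 * their own specialized loop body. */
template<typename Fn>
void foreach_segment_optimized(const std::vector<Segment> &segments, Fn &&fn)
{
  for (const Segment &segment : segments) {
    std::visit(fn, segment);
  }
}
```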
@@ -197,7 +197,7 @@ class BrickFunction : public mf::MultiFunction {
return float2(tint, mortar);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
const VArray<ColorGeometry4f> &color1_values = params.readonly_single_input<ColorGeometry4f>(
@@ -220,7 +220,7 @@ class BrickFunction : public mf::MultiFunction {
const bool store_fac = !r_fac.is_empty();
const bool store_color = !r_color.is_empty();
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float2 f2 = brick(vector[i] * scale[i],
mortar_size[i],
mortar_smooth[i],
@@ -252,7 +252,7 @@ class BrickFunction : public mf::MultiFunction {
if (store_fac) {
r_fac[i] = f;
}
}
});
}
};

@@ -60,7 +60,7 @@ class NodeTexChecker : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
const VArray<ColorGeometry4f> &color1 = params.readonly_single_input<ColorGeometry4f>(
@@ -72,7 +72,7 @@ class NodeTexChecker : public mf::MultiFunction {
params.uninitialized_single_output_if_required<ColorGeometry4f>(4, "Color");
MutableSpan<float> r_fac = params.uninitialized_single_output<float>(5, "Fac");
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
/* Avoid precision issues on unit coordinates. */
const float3 p = (vector[i] * scale[i] + 0.000001f) * 0.999999f;
@@ -81,12 +81,11 @@ class NodeTexChecker : public mf::MultiFunction {
const int zi = abs(int(floorf(p.z)));
r_fac[i] = ((xi % 2 == yi % 2) == (zi % 2)) ? 1.0f : 0.0f;
}
});
if (!r_color.is_empty()) {
for (int64_t i : mask) {
r_color[i] = (r_fac[i] == 1.0f) ? color1[i] : color2[i];
}
mask.foreach_index(
[&](const int64_t i) { r_color[i] = (r_fac[i] == 1.0f) ? color1[i] : color2[i]; });
}
}
};

@@ -63,7 +63,7 @@ class GradientFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
@@ -75,62 +75,57 @@ class GradientFunction : public mf::MultiFunction {
switch (gradient_type_) {
case SHD_BLEND_LINEAR: {
for (int64_t i : mask) {
fac[i] = vector[i].x;
}
mask.foreach_index([&](const int64_t i) { fac[i] = vector[i].x; });
break;
}
case SHD_BLEND_QUADRATIC: {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float r = std::max(vector[i].x, 0.0f);
fac[i] = r * r;
}
});
break;
}
case SHD_BLEND_EASING: {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float r = std::min(std::max(vector[i].x, 0.0f), 1.0f);
const float t = r * r;
fac[i] = (3.0f * t - 2.0f * t * r);
}
});
break;
}
case SHD_BLEND_DIAGONAL: {
for (int64_t i : mask) {
fac[i] = (vector[i].x + vector[i].y) * 0.5f;
}
mask.foreach_index([&](const int64_t i) { fac[i] = (vector[i].x + vector[i].y) * 0.5f; });
break;
}
case SHD_BLEND_RADIAL: {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
fac[i] = atan2f(vector[i].y, vector[i].x) / (M_PI * 2.0f) + 0.5f;
}
});
break;
}
case SHD_BLEND_QUADRATIC_SPHERE: {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
/* Bias a little bit for the case where input is a unit length vector,
* to get exactly zero instead of a small random value depending
* on float precision. */
const float r = std::max(0.999999f - math::length(vector[i]), 0.0f);
fac[i] = r * r;
}
});
break;
}
case SHD_BLEND_SPHERICAL: {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
/* Bias a little bit for the case where input is a unit length vector,
* to get exactly zero instead of a small random value depending
* on float precision. */
fac[i] = std::max(0.999999f - math::length(vector[i]), 0.0f);
}
});
break;
}
}
if (compute_color) {
for (int64_t i : mask) {
r_color[i] = ColorGeometry4f(fac[i], fac[i], fac[i], 1.0f);
}
mask.foreach_index(
[&](const int64_t i) { r_color[i] = ColorGeometry4f(fac[i], fac[i], fac[i], 1.0f); });
}
}
};

@@ -68,7 +68,7 @@ class MagicFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
const VArray<float> &scale = params.readonly_single_input<float>(1, "Scale");
@@ -80,7 +80,7 @@ class MagicFunction : public mf::MultiFunction {
const bool compute_factor = !r_fac.is_empty();
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 co = vector[i] * scale[i];
const float distort = distortion[i];
float x = sinf((co[0] + co[1] + co[2]) * 5.0f);
@@ -148,11 +148,11 @@ class MagicFunction : public mf::MultiFunction {
}
r_color[i] = ColorGeometry4f(0.5f - x, 0.5f - y, 0.5f - z, 1.0f);
}
});
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
r_fac[i] = (r_color[i].r + r_color[i].g + r_color[i].b) * (1.0f / 3.0f);
}
});
}
}
};

@@ -195,7 +195,7 @@ class MusgraveFunction : public mf::MultiFunction {
return signature;
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
auto get_vector = [&](int param_index) -> VArray<float3> {
return params.readonly_single_input<float3>(param_index, "Vector");
@@ -240,34 +240,34 @@ class MusgraveFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = get_w(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
r_factor[i] = noise::musgrave_multi_fractal(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
case 2: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float2 position = float2(pxyz[0], pxyz[1]);
r_factor[i] = noise::musgrave_multi_fractal(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
case 3: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
r_factor[i] = noise::musgrave_multi_fractal(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
@@ -275,13 +275,13 @@ class MusgraveFunction : public mf::MultiFunction {
const VArray<float3> &vector = get_vector(0);
const VArray<float> &w = get_w(1);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float pw = w[i] * scale[i];
const float4 position{pxyz[0], pxyz[1], pxyz[2], pw};
r_factor[i] = noise::musgrave_multi_fractal(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
@@ -297,34 +297,34 @@ class MusgraveFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = get_w(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
r_factor[i] = noise::musgrave_ridged_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
case 2: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float2 position = float2(pxyz[0], pxyz[1]);
r_factor[i] = noise::musgrave_ridged_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
case 3: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
r_factor[i] = noise::musgrave_ridged_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
@@ -332,13 +332,13 @@ class MusgraveFunction : public mf::MultiFunction {
const VArray<float3> &vector = get_vector(0);
const VArray<float> &w = get_w(1);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float pw = w[i] * scale[i];
const float4 position{pxyz[0], pxyz[1], pxyz[2], pw};
r_factor[i] = noise::musgrave_ridged_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
@@ -354,34 +354,34 @@ class MusgraveFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = get_w(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
r_factor[i] = noise::musgrave_hybrid_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
case 2: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float2 position = float2(pxyz[0], pxyz[1]);
r_factor[i] = noise::musgrave_hybrid_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
case 3: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
r_factor[i] = noise::musgrave_hybrid_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
@@ -389,13 +389,13 @@ class MusgraveFunction : public mf::MultiFunction {
const VArray<float3> &vector = get_vector(0);
const VArray<float> &w = get_w(1);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float pw = w[i] * scale[i];
const float4 position{pxyz[0], pxyz[1], pxyz[2], pw};
r_factor[i] = noise::musgrave_hybrid_multi_fractal(
position, dimension[i], lacunarity[i], detail[i], offset[i], gain[i]);
}
});
}
break;
}
@@ -409,34 +409,34 @@ class MusgraveFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = get_w(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
r_factor[i] = noise::musgrave_fBm(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
case 2: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float2 position = float2(pxyz[0], pxyz[1]);
r_factor[i] = noise::musgrave_fBm(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
case 3: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
r_factor[i] = noise::musgrave_fBm(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
@@ -444,13 +444,13 @@ class MusgraveFunction : public mf::MultiFunction {
const VArray<float3> &vector = get_vector(0);
const VArray<float> &w = get_w(1);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float pw = w[i] * scale[i];
const float4 position{pxyz[0], pxyz[1], pxyz[2], pw};
r_factor[i] = noise::musgrave_fBm(
position, dimension[i], lacunarity[i], detail[i]);
}
});
}
break;
}
@@ -465,34 +465,34 @@ class MusgraveFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = get_w(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
r_factor[i] = noise::musgrave_hetero_terrain(
position, dimension[i], lacunarity[i], detail[i], offset[i]);
}
});
}
break;
}
case 2: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float2 position = float2(pxyz[0], pxyz[1]);
r_factor[i] = noise::musgrave_hetero_terrain(
position, dimension[i], lacunarity[i], detail[i], offset[i]);
}
});
}
break;
}
case 3: {
const VArray<float3> &vector = get_vector(0);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
r_factor[i] = noise::musgrave_hetero_terrain(
position, dimension[i], lacunarity[i], detail[i], offset[i]);
}
});
}
break;
}
@@ -500,13 +500,13 @@ class MusgraveFunction : public mf::MultiFunction {
const VArray<float3> &vector = get_vector(0);
const VArray<float> &w = get_w(1);
if (compute_factor) {
for (int64_t i : mask) {
mask.foreach_index([&](const int64_t i) {
const float3 pxyz = vector[i] * scale[i];
const float pw = w[i] * scale[i];
const float4 position{pxyz[0], pxyz[1], pxyz[2], pw};
r_factor[i] = noise::musgrave_hetero_terrain(
position, dimension[i], lacunarity[i], detail[i], offset[i]);
}
});
}
break;
}

@@ -121,7 +121,7 @@ class NoiseFunction : public mf::MultiFunction {
return signature;
}
void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
int param = ELEM(dimensions_, 2, 3, 4) + ELEM(dimensions_, 1, 4);
const VArray<float> &scale = params.readonly_single_input<float>(param++, "Scale");
@@ -141,57 +141,57 @@ class NoiseFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = params.readonly_single_input<float>(0, "W");
if (compute_factor) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
r_factor[i] = noise::perlin_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
-}
+});
}
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float position = w[i] * scale[i];
const float3 c = noise::perlin_float3_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
break;
}
case 2: {
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
if (compute_factor) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float2 position = float2(vector[i] * scale[i]);
r_factor[i] = noise::perlin_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
-}
+});
}
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float2 position = float2(vector[i] * scale[i]);
const float3 c = noise::perlin_float3_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
break;
}
case 3: {
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
if (compute_factor) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
r_factor[i] = noise::perlin_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
-}
+});
}
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 position = vector[i] * scale[i];
const float3 c = noise::perlin_float3_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
break;
}
@@ -199,17 +199,17 @@ class NoiseFunction : public mf::MultiFunction {
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
const VArray<float> &w = params.readonly_single_input<float>(1, "W");
if (compute_factor) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 position_vector = vector[i] * scale[i];
const float position_w = w[i] * scale[i];
const float4 position{
position_vector[0], position_vector[1], position_vector[2], position_w};
r_factor[i] = noise::perlin_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
-}
+});
}
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 position_vector = vector[i] * scale[i];
const float position_w = w[i] * scale[i];
const float4 position{
@@ -217,7 +217,7 @@ class NoiseFunction : public mf::MultiFunction {
const float3 c = noise::perlin_float3_fractal_distorted(
position, detail[i], roughness[i], distortion[i]);
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
break;
}


@@ -238,7 +238,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
return signature;
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
auto get_vector = [&](int param_index) -> VArray<float3> {
return params.readonly_single_input<float3>(param_index, "Vector");
@@ -286,7 +286,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
float2 pos;
@@ -304,7 +304,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
pos = math::safe_divide(pos, scale[i]);
r_position[i] = float3(pos.x, pos.y, 0.0f);
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -318,7 +318,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
float2 pos;
@@ -336,7 +336,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
pos = math::safe_divide(pos, scale[i]);
r_position[i] = float3(pos.x, pos.y, 0.0f);
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -351,7 +351,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
@@ -371,7 +371,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
pos = math::safe_divide(pos, scale[i]);
r_position[i] = float3(pos.x, pos.y, 0.0f);
}
-}
+});
break;
}
}
@@ -390,7 +390,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
noise::voronoi_f1(vector[i] * scale[i],
@@ -406,7 +406,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
if (calc_position) {
r_position[i] = math::safe_divide(r_position[i], scale[i]);
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -420,7 +420,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
noise::voronoi_f2(vector[i] * scale[i],
@@ -436,7 +436,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
if (calc_position) {
r_position[i] = math::safe_divide(r_position[i], scale[i]);
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -451,7 +451,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
@@ -469,7 +469,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
if (calc_position) {
r_position[i] = math::safe_divide(r_position[i], scale[i]);
}
-}
+});
break;
}
}
@@ -491,7 +491,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
float3 col;
@@ -515,7 +515,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
r_w[i] = pos.w;
}
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -532,7 +532,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
float3 col;
@@ -556,7 +556,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
r_w[i] = pos.w;
}
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -574,7 +574,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
@@ -600,7 +600,7 @@ class VoronoiMinowskiFunction : public mf::MultiFunction {
r_w[i] = pos.w;
}
}
-}
+});
break;
}
}
@@ -675,7 +675,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
return signature;
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
auto get_vector = [&](int param_index) -> VArray<float3> {
return params.readonly_single_input<float3>(param_index, "Vector");
@@ -719,7 +719,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float p = w[i] * scale[i];
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
@@ -734,7 +734,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
if (calc_w) {
r_w[i] = safe_divide(r_w[i], scale[i]);
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -747,7 +747,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float p = w[i] * scale[i];
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
@@ -762,7 +762,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
if (calc_w) {
r_w[i] = safe_divide(r_w[i], scale[i]);
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -776,7 +776,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float p = w[i] * scale[i];
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
@@ -793,7 +793,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
if (calc_w) {
r_w[i] = safe_divide(r_w[i], scale[i]);
}
-}
+});
break;
}
}
@@ -811,7 +811,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
float2 pos;
@@ -829,7 +829,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
pos = math::safe_divide(pos, scale[i]);
r_position[i] = float3(pos.x, pos.y, 0.0f);
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -842,7 +842,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
float2 pos;
@@ -860,7 +860,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
pos = math::safe_divide(pos, scale[i]);
r_position[i] = float3(pos.x, pos.y, 0.0f);
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -874,7 +874,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
@@ -894,7 +894,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
pos = math::safe_divide(pos, scale[i]);
r_position[i] = float3(pos.x, pos.y, 0.0f);
}
-}
+});
break;
}
}
@@ -912,7 +912,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
noise::voronoi_f1(vector[i] * scale[i],
@@ -928,7 +928,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
if (calc_position) {
r_position[i] = math::safe_divide(r_position[i], scale[i]);
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -941,7 +941,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_distance = !r_distance.is_empty();
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
noise::voronoi_f2(vector[i] * scale[i],
@@ -957,7 +957,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
if (calc_position) {
r_position[i] = math::safe_divide(r_position[i], scale[i]);
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -972,7 +972,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
{
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
float3 col;
@@ -990,7 +990,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
if (calc_position) {
r_position[i] = math::safe_divide(r_position[i], scale[i]);
}
-}
+});
}
break;
@@ -1013,7 +1013,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
float3 col;
@@ -1037,7 +1037,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
r_w[i] = pos.w;
}
}
-}
+});
break;
}
case SHD_VORONOI_F2: {
@@ -1053,7 +1053,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
float3 col;
@@ -1077,7 +1077,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
r_w[i] = pos.w;
}
}
-}
+});
break;
}
case SHD_VORONOI_SMOOTH_F1: {
@@ -1094,7 +1094,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
const bool calc_color = !r_color.is_empty();
const bool calc_position = !r_position.is_empty();
const bool calc_w = !r_w.is_empty();
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float smth = std::min(std::max(smoothness[i] / 2.0f, 0.0f), 0.5f);
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
@@ -1120,7 +1120,7 @@ class VoronoiMetricFunction : public mf::MultiFunction {
r_w[i] = pos.w;
}
}
-}
+});
break;
}
}
@@ -1183,7 +1183,7 @@ class VoronoiEdgeFunction : public mf::MultiFunction {
return signature;
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
auto get_vector = [&](int param_index) -> VArray<float3> {
return params.readonly_single_input<float3>(param_index, "Vector");
@@ -1213,20 +1213,20 @@ class VoronoiEdgeFunction : public mf::MultiFunction {
switch (feature_) {
case SHD_VORONOI_DISTANCE_TO_EDGE: {
MutableSpan<float> r_distance = get_r_distance(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float p = w[i] * scale[i];
noise::voronoi_distance_to_edge(p, rand, &r_distance[i]);
-}
+});
break;
}
case SHD_VORONOI_N_SPHERE_RADIUS: {
MutableSpan<float> r_radius = get_r_radius(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float p = w[i] * scale[i];
noise::voronoi_n_sphere_radius(p, rand, &r_radius[i]);
-}
+});
break;
}
}
@@ -1239,20 +1239,20 @@ class VoronoiEdgeFunction : public mf::MultiFunction {
switch (feature_) {
case SHD_VORONOI_DISTANCE_TO_EDGE: {
MutableSpan<float> r_distance = get_r_distance(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float2 p = float2(vector[i].x, vector[i].y) * scale[i];
noise::voronoi_distance_to_edge(p, rand, &r_distance[i]);
-}
+});
break;
}
case SHD_VORONOI_N_SPHERE_RADIUS: {
MutableSpan<float> r_radius = get_r_radius(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float2 p = float2(vector[i].x, vector[i].y) * scale[i];
noise::voronoi_n_sphere_radius(p, rand, &r_radius[i]);
-}
+});
break;
}
}
@@ -1265,18 +1265,18 @@ class VoronoiEdgeFunction : public mf::MultiFunction {
switch (feature_) {
case SHD_VORONOI_DISTANCE_TO_EDGE: {
MutableSpan<float> r_distance = get_r_distance(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
noise::voronoi_distance_to_edge(vector[i] * scale[i], rand, &r_distance[i]);
-}
+});
break;
}
case SHD_VORONOI_N_SPHERE_RADIUS: {
MutableSpan<float> r_radius = get_r_radius(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
noise::voronoi_n_sphere_radius(vector[i] * scale[i], rand, &r_radius[i]);
-}
+});
break;
}
}
@@ -1290,20 +1290,20 @@ class VoronoiEdgeFunction : public mf::MultiFunction {
switch (feature_) {
case SHD_VORONOI_DISTANCE_TO_EDGE: {
MutableSpan<float> r_distance = get_r_distance(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
noise::voronoi_distance_to_edge(p, rand, &r_distance[i]);
-}
+});
break;
}
case SHD_VORONOI_N_SPHERE_RADIUS: {
MutableSpan<float> r_radius = get_r_radius(param++);
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float rand = std::min(std::max(randomness[i], 0.0f), 1.0f);
const float4 p = float4(vector[i].x, vector[i].y, vector[i].z, w[i]) * scale[i];
noise::voronoi_n_sphere_radius(p, rand, &r_radius[i]);
-}
+});
break;
}
}


@@ -111,7 +111,7 @@ class WaveFunction : public mf::MultiFunction {
this->set_signature(&signature);
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
const VArray<float> &scale = params.readonly_single_input<float>(1, "Scale");
@@ -125,8 +125,7 @@ class WaveFunction : public mf::MultiFunction {
params.uninitialized_single_output_if_required<ColorGeometry4f>(7, "Color");
MutableSpan<float> r_fac = params.uninitialized_single_output<float>(8, "Fac");
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
float3 p = vector[i] * scale[i];
/* Prevent precision issues on unit coordinates. */
p = (p + 0.000001f) * 0.999999f;
@@ -193,11 +192,11 @@ class WaveFunction : public mf::MultiFunction {
}
r_fac[i] = val;
-}
+});
if (!r_color.is_empty()) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
r_color[i] = ColorGeometry4f(r_fac[i], r_fac[i], r_fac[i], 1.0f);
-}
+});
}
}
};


@@ -96,7 +96,7 @@ class WhiteNoiseFunction : public mf::MultiFunction {
return signature;
}
-void call(IndexMask mask, mf::Params params, mf::Context /*context*/) const override
+void call(const IndexMask &mask, mf::Params params, mf::Context /*context*/) const override
{
int param = ELEM(dimensions_, 2, 3, 4) + ELEM(dimensions_, 1, 4);
@@ -112,45 +112,43 @@ class WhiteNoiseFunction : public mf::MultiFunction {
case 1: {
const VArray<float> &w = params.readonly_single_input<float>(0, "W");
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 c = noise::hash_float_to_float3(w[i]);
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
if (compute_value) {
-for (int64_t i : mask) {
-r_value[i] = noise::hash_float_to_float(w[i]);
-}
+mask.foreach_index(
+[&](const int64_t i) { r_value[i] = noise::hash_float_to_float(w[i]); });
}
break;
}
case 2: {
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 c = noise::hash_float_to_float3(float2(vector[i].x, vector[i].y));
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
if (compute_value) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
r_value[i] = noise::hash_float_to_float(float2(vector[i].x, vector[i].y));
-}
+});
}
break;
}
case 3: {
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 c = noise::hash_float_to_float3(vector[i]);
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
if (compute_value) {
-for (int64_t i : mask) {
-r_value[i] = noise::hash_float_to_float(vector[i]);
-}
+mask.foreach_index(
+[&](const int64_t i) { r_value[i] = noise::hash_float_to_float(vector[i]); });
}
break;
}
@@ -158,17 +156,17 @@ class WhiteNoiseFunction : public mf::MultiFunction {
const VArray<float3> &vector = params.readonly_single_input<float3>(0, "Vector");
const VArray<float> &w = params.readonly_single_input<float>(1, "W");
if (compute_color) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
const float3 c = noise::hash_float_to_float3(
float4(vector[i].x, vector[i].y, vector[i].z, w[i]));
r_color[i] = ColorGeometry4f(c[0], c[1], c[2], 1.0f);
-}
+});
}
if (compute_value) {
-for (int64_t i : mask) {
+mask.foreach_index([&](const int64_t i) {
r_value[i] = noise::hash_float_to_float(
float4(vector[i].x, vector[i].y, vector[i].z, w[i]));
-}
+});
}
break;
}
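Every hunk above applies the same mechanical migration: `call()` takes the mask by `const IndexMask &` instead of by value, and range-based `for` loops over the mask become `foreach_index()` calls with a lambda. A minimal sketch of why the callback style pays off, assuming a simplified range-based mask (`SimpleIndexMask` and `sum_masked` are hypothetical stand-ins for illustration, not Blender API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

/* Hypothetical, simplified stand-in for the new IndexMask: indices are stored
 * as a list of contiguous ranges instead of one flat int64_t array. */
struct SimpleIndexMask {
  struct Range {
    int64_t start;
    int64_t size;
  };
  std::vector<Range> ranges;

  /* Callback-based iteration, mirroring the foreach_index pattern used in the
   * diff. For a contiguous range, no per-element index array is ever read from
   * memory; the loop counter alone produces the indices. */
  template<typename Fn> void foreach_index(Fn &&fn) const
  {
    for (const Range &range : ranges) {
      for (int64_t i = 0; i < range.size; i++) {
        fn(range.start + i);
      }
    }
  }
};

/* Usage in the style of the converted call() methods: the mask is taken by
 * const reference and iterated with a lambda that captures by reference. */
static int64_t sum_masked(const SimpleIndexMask &mask, const std::vector<int64_t> &values)
{
  int64_t sum = 0;
  mask.foreach_index([&](const int64_t i) { sum += values[i]; });
  return sum;
}
```

Because iteration is driven by the mask itself rather than by an iterator at the call site, the real `foreach_index` can also split the underlying segments across threads without any change to the caller's lambda.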