test/source/blender/blenlib/BLI_generic_vector_array.hh
Jacques Lucke 2cfcb8b0b8 BLI: refactor IndexMask for better performance and memory usage
Goals of this refactor:
* Reduce memory consumption of `IndexMask`. The old `IndexMask` uses an
  `int64_t` for each index which is more than necessary in pretty much all
  practical cases currently. Using `int32_t` might still become limiting
  in the future in case we use this to index e.g. byte buffers larger than
  a few gigabytes. We also don't want to template `IndexMask`, because
  that would cause a split in the "ecosystem", or everything would have to
  be implemented twice or templated.
* Allow for more multi-threading. The old `IndexMask` contains a single
  array. This is generally good but makes it hard to fill the mask from
  multiple threads when the final size is not known from the beginning.
  This is commonly the case when e.g. converting an array of bool to an
  index mask. Currently, this kind of code only runs on a single thread.
* Allow for efficient set operations like join, intersect and difference.
  It should be possible to multi-thread those operations.
* It should be possible to iterate over an `IndexMask` very efficiently.
  The most important part of that is to avoid all memory access when iterating
  over continuous ranges. For some core nodes (e.g. math nodes), we generate
  optimized code for the cases of irregular index masks and simple index ranges.

To achieve these goals, a few compromises had to be made:
* Slicing of the mask (at specific indices) and random element access is
  `O(log #indices)` now, but with a low constant factor. It should be possible
  to split a mask into n approximately equally sized parts in `O(n)` though,
  making the time per split `O(1)`.
* Using range-based for loops does not work well when iterating over a nested
  data structure like the new `IndexMask`. Therefore, `foreach_*` functions with
  callbacks have to be used. To avoid extra code complexity at the call site,
  the `foreach_*` methods support multi-threading out of the box.

The new data structure splits an `IndexMask` into an arbitrary number of ordered
`IndexMaskSegment`s. Each segment can contain at most `2^14 = 16384` indices. The
indices within a segment are stored as `int16_t`. Each segment has an additional
`int64_t` offset which allows storing arbitrary `int64_t` indices. This approach
has two main benefits: segments can be processed/constructed individually on
multiple threads without a serial bottleneck, and the memory requirements are
reduced significantly.

For more details see comments in `BLI_index_mask.hh`.

I did a few tests to verify that the data structure generally improves
performance and does not cause regressions:
* Our field evaluation benchmarks take about as long as before. This is to be
  expected because we already made sure that e.g. add node evaluation is
  vectorized. The important thing here is to check that changes to the way we
  iterate over the indices still allow for auto-vectorization.
* Memory usage by a mask is about 1/4 of what it was before in the average case.
  That's mainly caused by the switch from `int64_t` to `int16_t` for indices.
  In the worst case, when the indices are very far apart, the memory requirements
  can be larger than before; however, indices spread that far apart usually imply
  that there aren't many indices in total. In common cases, memory usage can be
  way lower than 1/4 of before, because sub-ranges use static memory.
* For some more specific numbers, I benchmarked `IndexMask::from_bools` in
  `index_mask_from_selection` on 10,000,000 elements at various probabilities for
  `true` at every index:
  ```
  Probability      Old        New
  0              4.6 ms     0.8 ms
  0.001          5.1 ms     1.3 ms
  0.2            8.4 ms     1.8 ms
  0.5           15.3 ms     3.0 ms
  0.8           20.1 ms     3.0 ms
  0.999         25.1 ms     1.7 ms
  1             13.5 ms     1.1 ms
  ```

Pull Request: https://projects.blender.org/blender/blender/pulls/104629
2023-05-24 18:11:41 +02:00

/* SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

/** \file
 * \ingroup bli
 *
 * A `GVectorArray` is a container for a fixed amount of dynamically growing vectors with a
 * generic data type. Its main use case is to store many small vectors with few separate
 * allocations. Using this structure is generally more efficient than allocating each vector
 * separately.
 */

#include "BLI_array.hh"
#include "BLI_generic_virtual_vector_array.hh"
#include "BLI_linear_allocator.hh"

namespace blender {

/* An array of vectors containing elements of a generic type. */
class GVectorArray : NonCopyable, NonMovable {
 private:
  struct Item {
    void *start = nullptr;
    int64_t length = 0;
    int64_t capacity = 0;
  };

  /* Use a linear allocator to pack many small vectors together. Currently, memory from
   * reallocated vectors is not reused. This can be improved in the future. */
  LinearAllocator<> allocator_;

  /* The data type of individual elements. */
  const CPPType &type_;
  /* The size of an individual element. This is inlined from `type_.size()` for easier access. */
  const int64_t element_size_;
  /* The individual vectors. */
  Array<Item> items_;

 public:
  GVectorArray() = delete;

  GVectorArray(const CPPType &type, int64_t array_size);

  ~GVectorArray();

  int64_t size() const
  {
    return items_.size();
  }

  bool is_empty() const
  {
    return items_.is_empty();
  }

  const CPPType &type() const
  {
    return type_;
  }

  void append(int64_t index, const void *value);

  /* Add multiple elements to a single vector. */
  void extend(int64_t index, const GVArray &values);
  void extend(int64_t index, GSpan values);

  /* Add multiple elements to multiple vectors. */
  void extend(const IndexMask &mask, const GVVectorArray &values);
  void extend(const IndexMask &mask, const GVectorArray &values);

  void clear(const IndexMask &mask);

  GMutableSpan operator[](int64_t index);
  GSpan operator[](int64_t index) const;

 private:
  void realloc_to_at_least(Item &item, int64_t min_capacity);
};

/* A non-owning typed mutable reference to a `GVectorArray`. It simplifies access when the type
 * of the data is known at compile time. */
template<typename T> class GVectorArray_TypedMutableRef {
 private:
  GVectorArray *vector_array_;

 public:
  GVectorArray_TypedMutableRef(GVectorArray &vector_array) : vector_array_(&vector_array)
  {
    BLI_assert(vector_array_->type().is<T>());
  }

  int64_t size() const
  {
    return vector_array_->size();
  }

  bool is_empty() const
  {
    return vector_array_->is_empty();
  }

  void append(const int64_t index, const T &value)
  {
    vector_array_->append(index, &value);
  }

  void extend(const int64_t index, const Span<T> values)
  {
    vector_array_->extend(index, values);
  }

  void extend(const int64_t index, const VArray<T> &values)
  {
    vector_array_->extend(index, values);
  }

  MutableSpan<T> operator[](const int64_t index)
  {
    return (*vector_array_)[index].typed<T>();
  }
};

/* A generic virtual vector array implementation for a `GVectorArray`. */
class GVVectorArray_For_GVectorArray : public GVVectorArray {
 private:
  const GVectorArray &vector_array_;

 public:
  GVVectorArray_For_GVectorArray(const GVectorArray &vector_array)
      : GVVectorArray(vector_array.type(), vector_array.size()), vector_array_(vector_array)
  {
  }

 protected:
  int64_t get_vector_size_impl(const int64_t index) const override
  {
    return vector_array_[index].size();
  }

  void get_vector_element_impl(const int64_t index,
                               const int64_t index_in_vector,
                               void *r_value) const override
  {
    type_->copy_assign(vector_array_[index][index_in_vector], r_value);
  }
};

}  // namespace blender