Previously, `VArrayImpl` had separate `materialize` and `materialize_to_uninitialized`
functions. Now both are merged into one with an additional `bool
dst_is_uninitialized` parameter. The same is done for the
`materialize_compressed` method, as well as for `GVArrayImpl`.
While this kind of merging is typically not ideal, it reduces the binary size by
~200 KB while being essentially free performance-wise. The cost of this predictable
boolean check is expected to be negligible even if only very few indices are
materialized. Additionally, in most cases this parameter does not even have to
be checked, because for trivial types it does not matter whether the destination
array is already initialized when it is overwritten.
It saves this much binary size because quite a few implementations are
generated with e.g. `VArray::from_func`, and a lot of code was duplicated
for each instantiation.
This changes only the actual `(G)VArrayImpl`, but not the `VArray` and `GVArray`
API which is typically used to work with virtual arrays.
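A minimal sketch of the merged interface, with hypothetical, simplified signatures (the real methods operate on `IndexMask` selections):
```
#include <cstdint>
#include <new>

/* Hypothetical, simplified version of the merged method: the boolean selects
 * between assignment and construction into uninitialized memory. For trivial
 * types both branches compile to the same copy, so the check can be elided. */
template<typename T> struct VArrayImplSketch {
  virtual ~VArrayImplSketch() = default;
  virtual T get(int64_t index) const = 0;

  virtual void materialize(int64_t size, T *dst, bool dst_is_uninitialized) const
  {
    for (int64_t i = 0; i < size; i++) {
      if (dst_is_uninitialized) {
        new (dst + i) T(this->get(i)); /* Construct into raw memory. */
      }
      else {
        dst[i] = this->get(i); /* Assign over an existing element. */
      }
    }
  }
};
```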
Pull Request: https://projects.blender.org/blender/blender/pulls/145144
It's safer to pass a type so that it can be checked whether `delete` should be
used instead. This also changes a few void pointer casts to `const_cast`, so
that it becomes an error if the data becomes typed.
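A generic sketch of the idea (hypothetical names, not the actual Blender allocation API): passing a typed pointer allows rejecting, at compile time, types that should be deleted rather than freed.
```
#include <cstdlib>
#include <type_traits>

/* Hypothetical typed free function. */
template<typename T> void typed_free(T *ptr)
{
  static_assert(std::is_trivially_destructible_v<T>,
                "types with a non-trivial destructor must be deleted, not freed");
  /* const_cast only adjusts constness; unlike a plain void pointer cast, it
   * cannot silently change the pointee type. */
  std::free(const_cast<std::remove_const_t<T> *>(ptr));
}
```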
Pull Request: https://projects.blender.org/blender/blender/pulls/137404
This makes accessing these properties more convenient. Since we only ever have
const references to `CPPType`, there isn't really a benefit to using methods to
avoid mutation.
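For illustration (placeholder member names, not the exact `CPPType` fields):
```
#include <cstdint>
#include <iostream>

/* Public data members instead of getters: since callers only ever hold a
 * const reference, the members cannot be mutated anyway. */
struct CPPTypeSketch {
  int64_t size = 0;
  int64_t alignment = 0;
};

void print_layout(const CPPTypeSketch &type)
{
  /* `type.size` instead of `type.size()`. */
  std::cout << "size: " << type.size << ", alignment: " << type.alignment << "\n";
}
```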
Pull Request: https://projects.blender.org/blender/blender/pulls/137482
It looks like this specific case never really worked. This wasn't found before,
because in the large majority of cases, execution uses a more optimized code
path instead of this general one.
Pull Request: https://projects.blender.org/blender/blender/pulls/128993
The lack of these functions in the "single trivial value" and "sliced
GVArray" implementations caused some code to fall back to the base
class functions. Those are much slower since they involve a virtual
function call per element. For example, this changed the runtime of
creating a new boolean attribute set to "true" on one million faces
from 3.4 ms to 0.35 ms.
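A simplified sketch of why a specialized `materialize` is faster than the generic fallback (hypothetical class, not the actual Blender implementation):
```
#include <algorithm>
#include <cstdint>

/* Hypothetical "single value" virtual array: the base-class fallback would
 * perform one virtual get() call per element, while the specialized
 * materialize() fills the destination in a single pass. */
struct SingleBoolVArraySketch {
  bool value = true;

  bool get(int64_t /*index*/) const
  {
    return value; /* What the slow per-element fallback would call. */
  }

  void materialize(int64_t size, bool *dst) const
  {
    std::fill_n(dst, size, value); /* One call, no per-element indirection. */
  }
};
```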
Pull Request: https://projects.blender.org/blender/blender/pulls/118161
Along with the 4.1 libraries upgrade, we are bumping the clang-format
version from 8-12 to 17. This affects quite a few files.
If not already the case, you may consider pointing your IDE to the
clang-format binary bundled with the Blender precompiled libraries.
Listing the "Blender Foundation" as copyright holder implied the Blender
Foundation holds copyright to files which may include work from many
developers.
While keeping copyright on headers makes sense for isolated libraries,
Blender's own code may be refactored or moved between files in a way
that makes the per file copyright holders less meaningful.
Copyright references to the "Blender Foundation" have been replaced with
"Blender Authors", with the exception of `./extern/`: since this contains
libraries which are more isolated, any changes to license headers there
can be handled on a case-by-case basis.
Some directories in `./intern/` have also been excluded:
- `./intern/cycles/`: its own `AUTHORS` file is planned.
- `./intern/opensubdiv/`.
An "AUTHORS" file has been added, using the chromium projects authors
file as a template.
Design task: #110784
Ref !110783.
A lot of files were missing a copyright field in the header, and
the Blender Foundation contributed to them in the sense of bug
fixing and general maintenance.
This change makes it explicit that those files are at least
partially copyrighted by the Blender Foundation.
Note that this does not make it so the Blender Foundation is
the only holder of the copyright in those files, and developers
who do not have a signed contract with the foundation still
hold the copyright as well.
Another aspect of this change is using SPDX format for the
header. We already used it for the license specification,
and now we state it for the copyright as well, following the
FAQ:
https://reuse.software/faq/
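For example, a resulting header looks roughly like this (year and license identifier vary per file):
```
/* SPDX-FileCopyrightText: 2023 Blender Foundation
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */
```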
Goals of this refactor:
* Reduce memory consumption of `IndexMask`. The old `IndexMask` uses an
`int64_t` for each index which is more than necessary in pretty much all
practical cases currently. Using `int32_t` might still become limiting
in the future in case we use this to index e.g. byte buffers larger than
a few gigabytes. We also don't want to template `IndexMask`, because
that would cause a split in the "ecosystem", or everything would have to
be implemented twice or templated.
* Allow for more multi-threading. The old `IndexMask` contains a single
array. This is generally good but has the problem that it is hard to fill
from multiple threads when the final size is not known from the beginning.
This is commonly the case when e.g. converting an array of bool to an
index mask. Currently, this kind of code only runs on a single thread.
* Allow for efficient set operations like join, intersect and difference.
It should be possible to multi-thread those operations.
* It should be possible to iterate over an `IndexMask` very efficiently.
The most important part of that is to avoid all memory access when iterating
over continuous ranges. For some core nodes (e.g. math nodes), we generate
optimized code for the cases of irregular index masks and simple index ranges.
To achieve these goals, a few compromises had to be made:
* Slicing of the mask (at specific indices) and random element access is
`O(log #indices)` now, but with a low constant factor. It should be possible
to split a mask into n approximately equally sized parts in `O(n)` though,
making the time per split `O(1)`.
* Using range-based for loops does not work well when iterating over a nested
data structure like the new `IndexMask`. Therefore, `foreach_*` functions with
callbacks have to be used. To avoid extra code complexity at the call site,
the `foreach_*` methods support multi-threading out of the box.
The new data structure splits an `IndexMask` into an arbitrary number of ordered
`IndexMaskSegment`. Each segment can contain at most `2^14 = 16384` indices. The
indices within a segment are stored as `int16_t`. Each segment has an additional
`int64_t` offset which allows storing arbitrary `int64_t` indices. This approach
has the main benefits that segments can be processed/constructed individually on
multiple threads without a serial bottleneck. Also it reduces the memory
requirements significantly.
For more details see comments in `BLI_index_mask.hh`.
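A conceptual sketch of the described layout and the callback-based iteration (greatly simplified; the real types live in `BLI_index_mask.hh`):
```
#include <cstdint>
#include <vector>

/* Hypothetical mirror of the layout: each segment stores sorted 16-bit
 * indices plus a 64-bit offset, so a full index is `offset + indices[i]`. */
struct IndexMaskSegmentSketch {
  int64_t offset = 0;
  std::vector<int16_t> indices; /* At most 2^14 = 16384 entries. */
};

struct IndexMaskSketch {
  std::vector<IndexMaskSegmentSketch> segments;

  /* Callback-based iteration; in the real implementation segments can be
   * processed by multiple threads because they are independent. */
  template<typename Fn> void foreach_index(const Fn &fn) const
  {
    for (const IndexMaskSegmentSketch &segment : segments) {
      for (const int16_t i : segment.indices) {
        fn(segment.offset + i);
      }
    }
  }
};
```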
I did a few tests to verify that the data structure generally improves
performance and does not cause regressions:
* Our field evaluation benchmarks take about as long as before. This is to be
expected because we already made sure that e.g. add node evaluation is
vectorized. The important thing here is to check that changes to the way we
iterate over the indices still allows for auto-vectorization.
* Memory usage by a mask is about 1/4 of what it was before in the average case.
That's mainly caused by the switch from `int64_t` to `int16_t` for indices.
In the worst case, the memory requirements can be larger when there are many
indices that are very far away. However, when they are far away from each other,
that indicates that there aren't many indices in total. In common cases, memory
usage can be way lower than 1/4 of before, because sub-ranges use static memory.
* For some more specific numbers I benchmarked `IndexMask::from_bools` in
`index_mask_from_selection` on 10,000,000 elements at various probabilities for
`true` at every index:
```
Probability Old New
0 4.6 ms 0.8 ms
0.001 5.1 ms 1.3 ms
0.2 8.4 ms 1.8 ms
0.5 15.3 ms 3.0 ms
0.8 20.1 ms 3.0 ms
0.999 25.1 ms 1.7 ms
1 13.5 ms 1.1 ms
```
Pull Request: https://projects.blender.org/blender/blender/pulls/104629
For example
```
OIIOOutputDriver::~OIIOOutputDriver()
{
}
```
becomes
```
OIIOOutputDriver::~OIIOOutputDriver() {}
```
Saves quite some vertical space, which is especially handy for
constructors.
Pull Request: https://projects.blender.org/blender/blender/pulls/105594
This is the conventional way of dealing with unused arguments in C++,
since it works on all compilers.
Regex find and replace: `UNUSED\((\w+)\)` -> `/*$1*/`
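For example, with an arbitrary function:
```
/* Before: */
void foo(int UNUSED(value));
/* After: */
void foo(int /*value*/);
```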
Calling `finish` after writing to generic attributes is currently necessary for
correctness. Previously, this was easy to forget. Now there is a check for this
in debug builds.
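A minimal sketch of how such a debug-only check can work (hypothetical names, not the actual attribute API):
```
#include <cassert>

struct AttributeWriterSketch {
  bool finished = false;

  void finish()
  {
    finished = true; /* In the real API this commits the written data. */
  }

  ~AttributeWriterSketch()
  {
    /* Only active in debug builds: forgetting finish() trips the assert. */
    assert(finished && "finish() must be called after writing");
  }
};
```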
`GSpan` and spans based on virtual arrays were not default constructible
before, which made them hard to use sometimes. It's generally fine for
spans to be empty.
The main thing to keep in mind is that the type pointer in `GSpan` may
be null now. Generally, code receiving spans as input can assume that
the type is not null, but sometimes a null type may be valid. The old #type() method
that returned a reference to the type still exists. It asserts when the
type is null.
This is just a theoretical improvement currently, I won't try to justify
it with some microbenchmark, but it should be better to use the
specialized single and span virtual arrays when slicing a `GVArray`,
since any use of `GVArrayImpl_For_SlicedGVArray` has extra overhead.
Differential Revision: https://developer.blender.org/D15361
This reduces the amount of code, and improves performance a bit by
doing more with fewer virtual method calls.
Differential Revision: https://developer.blender.org/D15293
My benchmark, which spends most of its time preparing function parameters,
takes `250 ms` now, down from `510 ms` before. This is mainly achieved by
doing less unnecessary work and by giving the compiler more inlined
code to optimize.
* Reserve correct vector sizes and use unchecked `append` function.
* Construct `GVArray` parameters directly in the vector, instead of
moving/copying them into the vector afterwards (see the sketch below).
* Inline some constructors, because that allows the compiler to understand
what is happening, resulting in less code.
This probably has a negligible impact on the user experience currently,
because there are other bottlenecks.
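A generic illustration of the first two bullet points, using a standard vector (the actual code uses Blender's own containers and constructs `GVArray` values):
```
#include <string>
#include <vector>

void prepare_params(std::vector<std::string> &params)
{
  /* Reserve the exact size up front, so no reallocation is needed. */
  params.reserve(2);

  /* Construct the elements directly inside the vector instead of building
   * them separately and moving/copying them in afterwards. */
  params.emplace_back("first parameter");
  params.emplace_back("second parameter");
}
```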
Differential Revision: https://developer.blender.org/D15009
Methods which override a base class's virtual methods are expected to
be marked with `override`. This also gives developers a better idea
about what is going on.
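For example:
```
struct Base {
  virtual void run() = 0;
  virtual ~Base() = default;
};

struct Derived : public Base {
  /* `override` makes the compiler verify that this really overrides a
   * virtual method of the base class. */
  void run() override {}
};
```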
This does two things:
* Introduce new `materialize_compressed` methods. Those are used
when the dst array should not have any gaps.
* Add materialize methods in various classes where they were missing
(and therefore caused overhead, because slower fallbacks had to be used).
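A simplified illustration of the difference between the two method families, written as free functions over float data rather than the real virtual methods:
```
#include <cstdint>

/* Writes selected values to their original positions; `dst` keeps gaps. */
void materialize(const int64_t *indices, int64_t n, const float *src, float *dst)
{
  for (int64_t i = 0; i < n; i++) {
    dst[indices[i]] = src[indices[i]];
  }
}

/* Writes selected values densely to dst[0..n); `dst` has no gaps. */
void materialize_compressed(const int64_t *indices, int64_t n, const float *src, float *dst)
{
  for (int64_t i = 0; i < n; i++) {
    dst[i] = src[indices[i]];
  }
}
```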
In a test file from T96282, this commit reduces the runtime of the
delete geometry node from 82 ms to 23 ms, a 3.6x improvement.
Writing to vertex groups in other cases should be faster too.
The largest improvement comes from not writing a new weight
of zero if the vertex is not in the group. This mirrors the behavior
of custom data interpolation in `layerInterp_mdeformvert`.
Other improvements come from using `set_all` for writing
output attributes and implementing that method for vertex groups.
I also implemented `materialize` methods. Though I didn't observe
an improvement from this, I think it's best to remove virtual method
call overhead where it's simple to do so.
The test file for the delete geometry node needs to be updated.
These methods could be parallelized too, but better to do that later.
Differential Revision: https://developer.blender.org/D14420