There are quite a few libraries that depend on dna_type_offsets.h
but got to it by just adding the folder that contains it to their
INC (includes) section, without declaring a dependency on bf_dna in
the LIB section.
This occasionally led to such a lib building before bf_dna and the
header being missing. In CMake this is normally fixed by adding
bf_dna to the LIB section of the lib, but until last week all
libraries in the LIB section were linked as INTERFACE, so adding it
there did not resolve the build issue.
To make things still build, we sprinkled add_dependencies wherever
we needed it to force a build order.
This diff:
- Declares public include folders for the bf_dna target, so no more
  fudging of the INC section is required to get to them.
- Removes all dna related paths from the INC section for all
  libraries.
- Adds an alias target bf::dna to signify it has been updated to
  modern CMake.
- Declares a dependency on bf::dna for all libraries that require it.
- Removes (almost) all calls to add_dependencies for bf_dna.
Future work:
Because of the manual dependency management that was done, there is
now some "clutter": libs depending on bf_dna that realistically
don't need to. For example, bf_intern_opencolorio itself has no
dependency on bf_dna at all; it doesn't need it and doesn't use it.
However, the dna include folder had been added to it in the past,
since bf_blenlib uses dna headers in some of its public headers and
bf_intern_opencolorio does use those blenlib headers.
Given that bf_blenlib now correctly declares the dependency on
bf_dna as public, bf_intern_opencolorio will get the dna header
directory automatically from CMake, so some cleanup could be done
for bf_intern_opencolorio.
Because 99% of the changes in this diff have been automated, it does
not seek to address these issues: there is no easy way to determine
automatically why a certain dependency is in place, and I'd rather
not mix automated and manual labour. A developer will have to make a
pass at this at some later point in time.
There are a few libraries that could not be automatically processed
(e.g. bf_blendthumb) that will also need this manual look-over.
Pull Request: https://projects.blender.org/blender/blender/pulls/109835
Add a quaternion attribute type that will be used in combination with
rotation sockets for geometry nodes to give a more intuitive experience
and better performance when using rotations.
The most interesting part is probably the interpolation; the rest is
the same as the last attribute type addition, 988f23cec3.
We need to interpolate multiple values with different weights.
Based on Sybren's suggestion, this uses the `expmap` methods from
4805a54525 for that.
This also refactors `SimpleMixerWithAccumulationType` to use a
function rather than a cast to convert to the accumulation type.
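A rough sketch of what weighted mixing via the exponential map can look like;
the helper names (`to_expmap`, `from_expmap`), headers, and types here are
assumptions based on the description above, not the exact API of this commit:
```
/* Illustrative sketch only; to_expmap/from_expmap and the exact types are
 * assumed, not the actual API added in this commit. */
#include "BLI_math_quaternion_types.hh"
#include "BLI_math_vector_types.hh"

using namespace blender;

struct ExampleQuaternionMixer {
  /* Accumulate in exponential-map space (a float3), where a weighted sum is
   * well defined, and convert back to a quaternion at the end. */
  float3 accumulated_ = float3(0.0f);
  float total_weight_ = 0.0f;

  void mix_in(const math::Quaternion &value, const float weight)
  {
    accumulated_ += math::to_expmap(value) * weight; /* Assumed helper. */
    total_weight_ += weight;
  }

  math::Quaternion finalize() const
  {
    if (total_weight_ == 0.0f) {
      return math::Quaternion::identity();
    }
    return math::from_expmap(accumulated_ / total_weight_); /* Assumed helper. */
  }
};
```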
See #92967
Pull Request: https://projects.blender.org/blender/blender/pulls/108678
A lot of files were missing a copyright field in the header, even
though the Blender Foundation contributed to them in the form of bug
fixing and general maintenance.
This change makes it explicit that those files are at least
partially copyrighted by the Blender Foundation.
Note that this does not make the Blender Foundation the sole
holder of the copyright in those files; developers who do not
have a signed contract with the foundation still hold the
copyright as well.
Another aspect of this change is using SPDX format for the
header. We already used it for the license specification,
and now we state it for the copyright as well, following the
FAQ:
https://reuse.software/faq/
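For illustration, a header in the new style might look like the following;
the year, copyright holder, and license identifier here are placeholders
rather than values taken from any specific file:
```
/* SPDX-FileCopyrightText: 2023 Blender Foundation
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */
```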
Goals of this refactor:
* Reduce memory consumption of `IndexMask`. The old `IndexMask` uses an
`int64_t` for each index which is more than necessary in pretty much all
practical cases currently. Using `int32_t` might still become limiting
in the future in case we use this to index e.g. byte buffers larger than
a few gigabytes. We also don't want to template `IndexMask`, because
that would cause a split in the "ecosystem", or everything would have to
be implemented twice or templated.
* Allow for more multi-threading. The old `IndexMask` contains a single
array. This is generally good but has the problem that it is hard to fill
from multiple threads when the final size is not known from the beginning.
This is commonly the case when e.g. converting an array of bool to an
index mask. Currently, this kind of code only runs on a single thread.
* Allow for efficient set operations like join, intersect and difference.
It should be possible to multi-thread those operations.
* It should be possible to iterate over an `IndexMask` very efficiently.
The most important part of that is to avoid all memory access when iterating
over contiguous ranges. For some core nodes (e.g. math nodes), we generate
optimized code for the cases of irregular index masks and simple index ranges.
To achieve these goals, a few compromises had to be made:
* Slicing of the mask (at specific indices) and random element access is
`O(log #indices)` now, but with a low constant factor. It should be possible
to split a mask into n approximately equally sized parts in `O(n)` though,
making the time per split `O(1)`.
* Using range-based for loops does not work well when iterating over a nested
data structure like the new `IndexMask`. Therefore, `foreach_*` functions with
callbacks have to be used (see the sketch below). To avoid extra code complexity
at the call site, the `foreach_*` methods support multi-threading out of the box.
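As a rough illustration of that callback style (simplified; the exact method
overloads and the `GrainSize` parameter are assumptions based on the
description above):
```
#include "BLI_index_mask.hh"
#include "BLI_span.hh"

using namespace blender;

static void double_selected(const IndexMask &mask,
                            const Span<float> src,
                            MutableSpan<float> dst)
{
  /* The callback is invoked once per index in the mask. Passing a grain size
   * lets the method split the work over multiple threads internally. */
  mask.foreach_index(GrainSize(4096), [&](const int64_t i) {
    dst[i] = src[i] * 2.0f;
  });
}
```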
The new data structure splits an `IndexMask` into an arbitrary number of ordered
`IndexMaskSegment`s. Each segment can contain at most `2^14 = 16384` indices. The
indices within a segment are stored as `int16_t`. Each segment has an additional
`int64_t` offset which allows storing arbitrary `int64_t` indices. The main
benefit of this approach is that segments can be processed/constructed
individually on multiple threads without a serial bottleneck. It also reduces
the memory requirements significantly.
For more details see comments in `BLI_index_mask.hh`.
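A simplified sketch of that layout (illustrative only; the real
`IndexMaskSegment` in `BLI_index_mask.hh` differs in detail):
```
#include <cstdint>
#include "BLI_span.hh"

/* One segment: small 16-bit indices plus a 64-bit offset, so the full index
 * of entry i is offset + indices[i]. A segment holds at most 2^14 indices. */
struct ExampleIndexMaskSegment {
  int64_t offset;
  blender::Span<int16_t> indices;
};

/* A mask is then an ordered sequence of such segments, which can be
 * constructed and processed per segment on multiple threads. */
```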
I did a few tests to verify that the data structure generally improves
performance and does not cause regressions:
* Our field evaluation benchmarks take about as long as before. This is to be
expected because we already made sure that e.g. add node evaluation is
vectorized. The important thing here is to check that changes to the way we
iterate over the indices still allow for auto-vectorization.
* Memory usage by a mask is about 1/4 of what it was before in the average case.
That's mainly caused by the switch from `int64_t` to `int16_t` for indices.
In the worst case, the memory requirements can be larger when there are many
indices that are very far apart. However, when they are far apart from each
other, that indicates that there aren't many indices in total. In common cases, memory
usage can be way lower than 1/4 of before, because sub-ranges use static memory.
* For some more specific numbers I benchmarked `IndexMask::from_bools` in
`index_mask_from_selection` on 10,000,000 elements at various probabilities for
`true` at every index:
```
Probability    Old        New
0               4.6 ms    0.8 ms
0.001           5.1 ms    1.3 ms
0.2             8.4 ms    1.8 ms
0.5            15.3 ms    3.0 ms
0.8            20.1 ms    3.0 ms
0.999          25.1 ms    1.7 ms
1              13.5 ms    1.1 ms
```
Pull Request: https://projects.blender.org/blender/blender/pulls/104629
The main goal here is to reduce the number of times thread-local data has
to be looked up using e.g. `EnumerableThreadSpecific.local()`. While this
isn't a bottleneck in many cases, it is one when the action performed on the
local data is very short and happens very often (e.g. logging used sockets
during geometry nodes evaluation).
The solution is to simply pass the thread-local data as a parameter to the many
functions that use it, instead of looking it up inside those functions, which
is generally more costly.
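A minimal sketch of the pattern, using TBB's `enumerable_thread_specific` as a
stand-in for `EnumerableThreadSpecific`; `LocalData` and the functions are made
up for illustration:
```
#include <tbb/enumerable_thread_specific.h>

struct LocalData {
  int processed_count = 0;
};

/* Before: every call looks up the thread-local data again. */
static void process_element_slow(tbb::enumerable_thread_specific<LocalData> &all_locals)
{
  LocalData &local = all_locals.local(); /* Repeated lookup in a hot path. */
  local.processed_count++;
}

/* After: the caller looks up the local data once per thread/task and passes
 * it down, so the frequently called function uses it directly. */
static void process_element_fast(LocalData &local)
{
  local.processed_count++;
}
```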
The lazy-function graph executor now only looks up the local data if
it knows that it might be on a new thread, otherwise it uses the local data
retrieved earlier.
Alongside `UserData`, there is `LocalUserData` now. This allows users
of the lazy-function evaluation (such as geometry nodes) to have custom
thread-local data that is passed to all the lazy-functions automatically.
This is used for logging now.
This type will be used to store mesh edges in #106638, but it could
be used for anything else too. This commit adds support for:
- The new type in the Python API
- Editing the type in the edit mode "Attribute Set" operator
- Rendering the type in EEVEE and Cycles for all geometry types
- Geometry nodes attribute interpolation and mixing
- Viewing the type in the spreadsheet and using row filters
The attribute uses the `blender::int2` type in most code, and
the `vec2i` DNA type in C code when necessary. The enum names
are based on `INT32_2D` for consistency with `INT8` and `INT32`.
Pull Request: https://projects.blender.org/blender/blender/pulls/106677
For example
```
OIIOOutputDriver::~OIIOOutputDriver()
{
}
```
becomes
```
OIIOOutputDriver::~OIIOOutputDriver() {}
```
Saves quite a bit of vertical space, which is especially handy for
constructors.
Pull Request: https://projects.blender.org/blender/blender/pulls/105594
Straightforward port. I took the opportunity to remove some C vector
functions (e.g. `copy_v2_v2`).
This makes some changes to DRWView to accommodate the alignment
requirements of the float4x4 type.
This can improve performance in some circumstances when there are
vectorized and/or unrolled loops. I especially noticed that this helps
a lot while working on D16970 (got a 10-20% speedup there by avoiding
running into the non-vectorized fallback loop too often).
In most cases it is currently not used, so always having it there
causes unnecessary overhead. In my test file this change gives
a 2% performance improvement.
Previously, `ParamsBuilder` lazily allocated an array for an
output when it was unused, but the called multi-function
wanted to access it. Now, whether the multi-function supports
an output being unused is part of the signature. This way, the
allocation can happen earlier, when the parameters are built.
The benefit is that this makes all methods of `MFParams`
thread-safe again, removing the need for a mutex.
`std::get` could not be used due to restrictions on macOS.
However, the minimum requirement has since been raised in
{rB597aecc01644f0063fa4545dabadc5f73387e3d3}.
During geometry nodes evaluation some sockets can be determined
to be unused, for example based on the condition input in a switch node.
Once a socket is determined to be unused, that information has to be
propagated backwards through the tree to free any memory that may
have been reserved for those sockets already. This was already happening
before this commit, but in a less ideal way.
Determining that sockets are unused early is good because it helps with
memory reuse and avoids copy-on-write copies caused by shared data.
Now, nodes that are scheduled because an output became unused have
priority over nodes scheduled for other reasons.
This allows auto-vectorization to happen when a multi-function is
evaluated in "materialized" mode, i.e. it is processed in chunks where
all input and output values are stored in contiguous arrays.
It also unifies the handling of input, mutable and output parameters a bit.
Now they can all use temporary buffers in the same way.
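To illustrate what "materialized" evaluation means here, a simplified sketch
(not the actual multi-function code; the chunk size and names are made up):
```
#include <algorithm>
#include <array>
#include <cstdint>

/* Process elements in fixed-size chunks. Inputs are first copied
 * ("materialized") into contiguous local buffers, so the inner loop is a
 * plain loop over arrays that compilers can unroll and auto-vectorize. */
static void add_materialized(const float *a_src, const float *b_src,
                             float *dst, const int64_t size)
{
  constexpr int64_t chunk_size = 512;
  std::array<float, chunk_size> a_buf, b_buf;

  for (int64_t start = 0; start < size; start += chunk_size) {
    const int64_t n = std::min(chunk_size, size - start);
    /* Materialize inputs (in the real code these may be virtual arrays). */
    for (int64_t i = 0; i < n; i++) {
      a_buf[i] = a_src[start + i];
      b_buf[i] = b_src[start + i];
    }
    /* Tight loop over contiguous data: easy to vectorize. */
    for (int64_t i = 0; i < n; i++) {
      dst[start + i] = a_buf[i] + b_buf[i];
    }
  }
}
```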
This simplifies the code enough so that MSVC is able to unroll and
vectorize some multi-functions like simple addition.
The performance improvements are almost as good as the GCC
improvements shown in D16942 (for add and multiply at least).
This mainly helps GCC catch up with Clang in terms of field evaluation
performance in some cases. In some cases this patch can speedup
field evaluation 2-3x (e.g. when there are many float math nodes).
See D16942 for a more detailed benchmark.
This moves all multi-function related code in the `functions` module
into a new `multi_function` namespace. This is similar to how there
is a `lazy_function` namespace.
The main benefit of this is that many type names that were prefixed
with `MF` (for "multi function") can be simplified.
There is also a common shorthand for the `multi_function` namespace: `mf`.
This is also similar to lazy-functions where the shortened namespace
is called `lf`.
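As a sketch, the shorthand amounts to a namespace alias along these lines
(the exact enclosing namespace is an assumption):
```
namespace blender::fn {
namespace multi_function {
/* ... multi-function related types live here after the move ... */
}
/* Shorthand so call sites can write e.g. mf::MultiFunction. */
namespace mf = multi_function;
}  // namespace blender::fn
```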
* `depends_on_context` has not been used for a long time already.
* `param_data_indices` has not been used since rB42b88c008861b6.
* The remaining data is moved to a single `Vector` to avoid
having to do two allocations when the signature size becomes
larger than what fits into the inline buffer.
This avoids a move of the signature after building it. Previously, the value had
to be moved out of `MFSignatureBuilder` in the `build` method.
This also makes the naming a bit less confusing where sometimes
both the `MFSignature` and `MFSignatureBuilder` were referred
to as "signature".
* New `build_mf` namespace for the multi-function builders (a usage sketch follows this list).
* The type name of the created multi-functions is now "private",
i.e. the caller has to use `auto`. This has the benefit that the
implementation can change more freely without affecting
the caller.
* `CustomMF` does not use `std::function` internally anymore.
This reduces some overhead during code generation and at
run-time.
* `CustomMF` now supports single-mutable parameters.
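A sketch of how the new builders might be used; only the `build_mf` namespace is
taken from the text above, while the builder name `SI1_SO` ("single input,
single output") and the header are assumptions:
```
#include "FN_multi_function_builder.hh"

static const blender::fn::MultiFunction &get_add_one_fn()
{
  /* The concrete type of the created multi-function is private, so `auto`
   * has to be used. */
  static auto fn = blender::fn::build_mf::SI1_SO<float, float>(
      "Add One", [](const float value) { return value + 1.0f; });
  return fn;
}
```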
This refactors how devirtualization is done in general and how
multi-functions use it.
* The old `Devirtualizer` class has been removed in favor of a simpler
solution. It is also more general in the sense that it is not coupled
with `IndexMask` and `VArray`. Instead there is a function that has
inputs which control how different types are devirtualized. The
new implementation is currently less general with regard to the number
of parameters it supports. This can be changed in the future, but
does not seem necessary now and would make the code less obvious.
(A simplified sketch of the devirtualization idea follows this list.)
* Devirtualizers for different types are now defined in their respective
headers.
* The multi-function builder works with the `GVArray` stored in `MFParams`
directly now, instead of first converting it to a `VArray<T>`. This reduces
some constant overhead, which makes the multi-function slightly
faster. This is only noticeable when very few elements are processed, though.
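To illustrate the general idea of devirtualization (a generic sketch, not the
removed `Devirtualizer` class or the new function; `SingleAsSpan` is assumed
here): the virtual array is checked for common concrete cases, and a templated
callable is instantiated for each case so the compiler can optimize the inner loop.
```
#include "BLI_virtual_array.hh"

using namespace blender;

/* Call `fn` with a concrete representation of the virtual array when
 * possible, so the hot loop inside `fn` avoids a virtual call per element. */
template<typename T, typename Fn>
void devirtualize_example(const VArray<T> &varray, const Fn &fn)
{
  if (varray.is_span()) {
    fn(varray.get_internal_span()); /* Contiguous memory: vectorizable. */
  }
  else if (varray.is_single()) {
    fn(SingleAsSpan<T>(varray)); /* Every element has the same value. */
  }
  else {
    fn(varray); /* Generic fallback with virtual element access. */
  }
}
```
The caller passes a generic lambda such as `[&](const auto &values) { ... values[i] ... }`,
which gets compiled once per case.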
No functional changes or performance regressions are expected.
The use of `std::variant` allows combining the four vectors
into one, which more closely matches the intent and avoids
a workaround used before.
Note that this uses `std::get_if` instead of `std::get` because
`std::get` is only available since macOS 10.14.
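For example (with illustrative types, not the actual ones in this change):
```
#include <variant>

static float extract_or_zero(const std::variant<int, float> &value)
{
  /* std::get<float>(value) would throw std::bad_variant_access on failure,
   * which is only available from macOS 10.14 on; std::get_if returns a
   * pointer (or null) instead and has no such restriction. */
  if (const float *f = std::get_if<float>(&value)) {
    return *f;
  }
  return 0.0f;
}
```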
Previously, the lifetimes of anonymous attributes were determined by
reference counts which were non-deterministic when multiple threads
are used. Now the lifetimes of anonymous attributes are handled
more explicitly and deterministically. This is a prerequisite for any kind
of caching, because caching the output of nodes that do things
non-deterministically and have "invisible inputs" (reference counts)
doesn't really work.
For more details for how deterministic lifetimes are achieved, see D16858.
No functional changes are expected. Small performance changes are possible
(within a few percent; any larger regression should be reported as a bug).
Differential Revision: https://developer.blender.org/D16858