When a function is executed for many elements (e.g. per point), it is often the case
that some parameters differ for every element while other parameters are
the same for all of them (there are also some less common cases). To simplify writing such
functions, one can use a "virtual array". This is a data structure that has a value
for every index, but might not be stored as an actual array internally. Instead, it
might be just a single value or be computed on the fly. The various trade-offs
involved in using this data structure are documented in `BLI_virtual_array.hh`.
It is called "virtual" because it uses inheritance and virtual methods.
Furthermore, there is a new virtual vector array data structure, which is an array
of vectors. Both these types have corresponding generic variants, which can be used
when the data type is not known at compile time. This is typically the case when
building a somewhat generic execution system. The function system used these virtual
data structures before, but now they are more versatile.
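As a rough sketch of the idea (hypothetical names, not the actual interface in `BLI_virtual_array.hh`), a virtual array can be an abstract class whose concrete implementations are backed by an actual array, by a single value, or by a computation:

```
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

/* Simplified sketch; the real virtual array has many more features,
 * including a generic variant for types that are only known at runtime. */
template<typename T> class VirtualArraySketch {
 public:
  virtual ~VirtualArraySketch() = default;
  virtual int64_t size() const = 0;
  virtual T get(int64_t index) const = 0;
};

/* Every index maps to the same value; no per-element storage is needed. */
template<typename T> class SingleAsVirtualArray : public VirtualArraySketch<T> {
  T value_;
  int64_t size_;

 public:
  SingleAsVirtualArray(T value, int64_t size) : value_(std::move(value)), size_(size) {}
  int64_t size() const override { return size_; }
  T get(int64_t /*index*/) const override { return value_; }
};

/* Backed by an actual contiguous array. */
template<typename T> class SpanAsVirtualArray : public VirtualArraySketch<T> {
  const std::vector<T> &data_;

 public:
  SpanAsVirtualArray(const std::vector<T> &data) : data_(data) {}
  int64_t size() const override { return int64_t(data_.size()); }
  T get(int64_t index) const override { return data_[static_cast<std::size_t>(index)]; }
};
```

A function taking a `VirtualArraySketch<float> &` parameter can then be called with per-element data or with a single shared value, without being written twice.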
I've done this refactor in preparation for the attribute processor and other features of
geometry nodes. I moved the typed virtual arrays to blenlib, so that they can be used
independent of the function system.
One open question for me is whether all the generic data structures (and `CPPType`)
should be moved to blenlib as well. They are well isolated and don't really contain
any business logic. That can be done later if necessary.
Some generic algorithms from the standard library like `std::any_of`
did not work with all container and iterator types. To improve the
situation, this patch adds various type members to containers
and iterators.
Custom iterators for Set, Map and IndexRange now have an iterator
category, which some algorithms require. IndexRange could become
a random access iterator, but adding all the missing methods can
be done when it becomes necessary.
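For context, standard algorithms find these members through `std::iterator_traits`; a hypothetical minimal iterator that works with `std::any_of` looks roughly like this:

```
#include <algorithm>
#include <cstddef>
#include <iterator>

/* Hypothetical iterator over a half-open integer range. The five type members
 * below are what std::iterator_traits (and therefore many algorithms) expect. */
class IntRangeIterator {
  int current_ = 0;

 public:
  using iterator_category = std::forward_iterator_tag;
  using value_type = int;
  using difference_type = std::ptrdiff_t;
  using pointer = const int *;
  using reference = int;

  IntRangeIterator() = default;
  explicit IntRangeIterator(int current) : current_(current) {}

  int operator*() const { return current_; }
  IntRangeIterator &operator++() { current_++; return *this; }
  IntRangeIterator operator++(int) { IntRangeIterator copy = *this; current_++; return copy; }

  friend bool operator==(IntRangeIterator a, IntRangeIterator b) { return a.current_ == b.current_; }
  friend bool operator!=(IntRangeIterator a, IntRangeIterator b) { return !(a == b); }
};

int main()
{
  /* Compiles because the iterator advertises its category and value type. */
  const bool any_negative = std::any_of(
      IntRangeIterator(-5), IntRangeIterator(5), [](int value) { return value < 0; });
  return any_negative ? 0 : 1;
}
```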
This reverts commit 7a34bd7c28.
This broke the Windows build. It can apparently be fixed with the /Zc:preprocessor
flag on Windows, but a Windows dev needs to make that fix.
The commit rB6f63417b500d that made exact boolean work on meshes
with holes (like Suzanne) unfortunately dramatically slowed things
down on other non-manifold meshes that don't have holes and didn't
need the per-triangle insideness test.
This adds a hole_tolerant parameter, false by default, that the user
can enable to get good results on non-manifold meshes with holes.
Leaving this parameter false brings the time on an example with
1.2M triangles from 90 seconds down to 10 seconds.
Instead of returning a raw pointer, `LinearAllocator.construct(...)` now returns
a `destruct_ptr`, which is similar to `unique_ptr` but only calls the destructor
and does not deallocate the memory.
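Conceptually (a sketch, not the exact blenlib definition), such a pointer can be modeled as a `unique_ptr` with a deleter that only runs the destructor, since the linear allocator releases all of its memory at once later:

```
#include <memory>

/* Destroys the object but intentionally does not free the memory. */
template<typename T> struct DestructOnlyDeleter {
  void operator()(T *ptr) const
  {
    ptr->~T();
  }
};

template<typename T> using destruct_ptr_sketch = std::unique_ptr<T, DestructOnlyDeleter<T>>;
```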
The main change is that large allocations are done separately now.
Also, the buffers that small allocations are packed into now have a maximum
size. Using larger buffers does not really provide performance
benefits, but increases wasted memory.
Using `FunctionRef` is better than using `std::function`, templates or C function
pointers in some cases. The trade-offs are explained in more detail in the code documentation.
The following are some of the main benefits of using `FunctionRef`:
* It is convenient to use with all kinds of callables.
* It is cheaper to construct, copy and (possibly) call compared to `std::function`.
* Functions taking a `FunctionRef` as parameter don't need to be defined
in header files (as is usually necessary when using templates).
Differential Revision: https://developer.blender.org/D10476
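As an illustration of the pattern behind `FunctionRef` (a simplified sketch with hypothetical names, not the actual header), a non-owning reference stores only a type-erased pointer to the callable plus a callback that knows how to invoke it, so a function accepting it does not have to be a template:

```
#include <type_traits>
#include <utility>

/* Sketch of a non-owning callable reference. It stores a type-erased pointer
 * to the callable and a function pointer that knows how to invoke it.
 * The referenced callable must outlive the reference. */
template<typename Signature> class FunctionRefSketch;

template<typename Ret, typename... Params> class FunctionRefSketch<Ret(Params...)> {
  void *callable_ = nullptr;
  Ret (*callback_)(void *callable, Params... params) = nullptr;

  template<typename Callable> static Ret callback_fn(void *callable, Params... params)
  {
    return (*static_cast<Callable *>(callable))(std::forward<Params>(params)...);
  }

 public:
  template<typename Callable>
  FunctionRefSketch(Callable &&callable)
      : callable_(const_cast<void *>(static_cast<const void *>(&callable))),
        callback_(callback_fn<std::remove_reference_t<Callable>>)
  {
  }

  Ret operator()(Params... params) const
  {
    return callback_(callable_, std::forward<Params>(params)...);
  }
};

/* This function is not a template itself, so it can be defined in a .cc file. */
static int count_matches(const int *values, const int size, FunctionRefSketch<bool(int)> predicate)
{
  int count = 0;
  for (int i = 0; i < size; i++) {
    if (predicate(values[i])) {
      count++;
    }
  }
  return count;
}
```

A call site can pass a lambda directly, e.g. `count_matches(data, size, [](int v) { return v > 0; })`, because the temporary callable lives for the duration of the call.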
Previously, methods like `Span.drop_front` would crash when more
elements were dropped than are available. While this is most
efficient, it is not very practical in some use cases. Also, other languages
silently clamp such values, so one can easily write incorrect code out of habit.
Now, `Span.drop_front` and similar methods will only crash when n
is negative. Too large values will be clamped down to their maximum
possible value. While this is slightly less efficient, I did not have a case
where this actually mattered yet. If it does matter in the future, we can
add a separate `*_unchecked` method.
This should not change the behavior of existing code.
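A hypothetical sketch of the new clamping behavior (not the actual `BLI_span.hh` code):

```
#include <algorithm>
#include <cassert>
#include <cstdint>

/* Dropping more elements than exist now yields an empty span instead of
 * hitting the out-of-bounds assert; only a negative n still asserts. */
template<typename T> struct SpanSketch {
  const T *data = nullptr;
  int64_t size = 0;

  SpanSketch drop_front(const int64_t n) const
  {
    assert(n >= 0);
    const int64_t new_size = std::max<int64_t>(0, size - n);
    return {data + size - new_size, new_size};
  }
};
```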
Casting pointers from one type to another can change the
value of the pointer in some cases. Therefore, casting a span
that contains pointers of one type to a span that contains
pointers of another type is not generally safe. In practice, this
issue mainly comes up when dealing with classes that have a
vtable.
There are some special cases that are still allowed. For example,
adding const to the pointer does not change the address.
Also, casting to a void pointer is fine.
In cases where implicit conversion is disabled, but one is sure
that the cast is valid, an explicit call of `span.cast<NewType>()`
can be used.
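The following standalone example (hypothetical types) shows the underlying issue: with multiple inheritance, converting a derived pointer to one of its base types adjusts the address, so reinterpreting a whole span of `Derived *` as `BaseB *` would produce wrong pointers:

```
#include <cstdio>

struct BaseA {
  virtual ~BaseA() = default;
  int a = 0;
};
struct BaseB {
  virtual ~BaseB() = default;
  int b = 0;
};
struct Derived : public BaseA, public BaseB {
};

int main()
{
  Derived derived;
  Derived *derived_ptr = &derived;
  /* The implicit conversion adds an offset; the two pointers differ even
   * though they refer to the same object. */
  BaseB *base_ptr = derived_ptr;
  std::printf("%p vs %p\n", static_cast<void *>(derived_ptr), static_cast<void *>(base_ptr));
  return 0;
}
```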
This data structure adds priority queue functionality to an existing array.
The underlying array is not changed. Instead, the priority queue maintains
indices into the original array.
Changing priorities of elements dynamically is supported, but the priority
queue has to be informed of such changes.
This data structure is needed for D9787.
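A rough sketch of the idea with hypothetical names (the real blenlib interface differs): the queue keeps a heap of indices plus a reverse map of heap positions, the caller's array of priorities is never modified, and a priority change only needs a single sift.

```
#include <cstdint>
#include <initializer_list>
#include <utility>
#include <vector>

/* Sketch of a priority queue over an external array of weights. The weights
 * array itself is never modified; the queue only stores indices into it.
 * After the caller changes weights[i], it must call priority_changed(i). */
class IndexPriorityQueueSketch {
  const std::vector<float> &weights_;
  std::vector<int64_t> heap_;      /* heap_[pos] is an index into weights_. */
  std::vector<int64_t> heap_pos_;  /* Inverse map: heap_pos_[index] is its position in heap_. */

  bool higher(const int64_t a, const int64_t b) const
  {
    return weights_[a] > weights_[b];
  }

  void swap_positions(const int64_t pos_a, const int64_t pos_b)
  {
    std::swap(heap_[pos_a], heap_[pos_b]);
    heap_pos_[heap_[pos_a]] = pos_a;
    heap_pos_[heap_[pos_b]] = pos_b;
  }

  void sift_up(int64_t pos)
  {
    while (pos > 0) {
      const int64_t parent = (pos - 1) / 2;
      if (!higher(heap_[pos], heap_[parent])) {
        break;
      }
      swap_positions(pos, parent);
      pos = parent;
    }
  }

  void sift_down(int64_t pos)
  {
    const int64_t size = int64_t(heap_.size());
    while (true) {
      int64_t best = pos;
      for (const int64_t child : {2 * pos + 1, 2 * pos + 2}) {
        if (child < size && higher(heap_[child], heap_[best])) {
          best = child;
        }
      }
      if (best == pos) {
        break;
      }
      swap_positions(pos, best);
      pos = best;
    }
  }

 public:
  explicit IndexPriorityQueueSketch(const std::vector<float> &weights) : weights_(weights)
  {
    const int64_t size = int64_t(weights.size());
    heap_.resize(size);
    heap_pos_.resize(size);
    for (int64_t i = 0; i < size; i++) {
      heap_[i] = i;
      heap_pos_[i] = i;
    }
    for (int64_t i = size / 2 - 1; i >= 0; i--) {
      sift_down(i);
    }
  }

  /* Index (into the weights array) of the element with the highest weight. */
  int64_t peek_top() const
  {
    return heap_[0];
  }

  /* Must be called after weights[index] was changed by the caller. */
  void priority_changed(const int64_t index)
  {
    sift_up(heap_pos_[index]);
    sift_down(heap_pos_[index]);
  }
};
```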
Motivated by `std::string_view` being usable in
constant (compile-time) contexts.
One functional change was needed for StringRef: using
`std::char_traits<char>::length(str)` instead of `strlen`.
Reviewed By: JacquesLucke, LazyDodo
Differential Revision: https://developer.blender.org/D9788
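For illustration (a hypothetical minimal type, not the actual StringRef), `std::char_traits<char>::length` is usable in constant expressions since C++17, which is what allows the length computation in the constructor to happen at compile time:

```
#include <cstddef>
#include <string>

class StringRefSketch {
  const char *data_;
  std::size_t size_;

 public:
  constexpr StringRefSketch(const char *str)
      : data_(str), size_(std::char_traits<char>::length(str))
  {
  }
  constexpr const char *data() const { return data_; }
  constexpr std::size_t size() const { return size_; }
};

/* The length is computed at compile time; strlen would not allow this. */
static_assert(StringRefSketch("blender").size() == 7, "");
```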
It turns out that after the fix to T83196 (rB814b2787cadd) the matrix
to quaternion conversion can produce noncanonical results in large
areas of the rotation space, whereas previously this was limited to
much smaller areas. This in turn causes Swing+Twist math to produce
angles beyond 180 degrees, e.g. outputting a -120..240 range.
This fixes both issues, ensuring that the conversion outputs a canonical
result and that the decomposition canonicalizes its input.
This was reported in chat by @jpbouza.
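For reference, q and -q encode the same rotation; the canonical form can be sketched as picking the representative with a non-negative w component (hypothetical helper, not the actual BLI function):

```
#include <array>

/* Flipping the sign when w < 0 keeps results in one half of quaternion space,
 * so downstream swing/twist math stays within a -180..180 degree range. */
static std::array<float, 4> canonical_quat(std::array<float, 4> q /* w, x, y, z */)
{
  if (q[0] < 0.0f) {
    for (float &component : q) {
      component = -component;
    }
  }
  return q;
}
```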
Modernize loops by using the `for(type variable : container)` syntax.
Some loops were trivial to fix, whereas others required more attention
to avoid semantic changes. I couldn't address all old-style loops, so
this commit doesn't enable the `modernize-loop-convert` rule.
Although Clang-Tidy's auto-fixer prefers to use `auto` for the loop
variable declaration, I made as many declarations as possible explicit.
To me this increases local readability, as you don't need to fully
understand the container in order to understand the loop variable type.
No functional changes.
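A small before/after example (hypothetical function) of the kind of change this involves:

```
#include <vector>

static float sum_old_style(const std::vector<float> &values)
{
  float sum = 0.0f;
  for (int i = 0; i < static_cast<int>(values.size()); i++) {
    sum += values[i];
  }
  return sum;
}

static float sum_range_based(const std::vector<float> &values)
{
  float sum = 0.0f;
  /* Element type spelled out explicitly instead of `auto`. */
  for (const float value : values) {
    sum += value;
  }
  return sum;
}
```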
Replace `typedef` with `using` in C++ code.
In the case of `typedef struct SomeName { ... } SomeName;` I removed the
`typedef` altogether, as this is unnecessary in C++. Such cases have been
rewritten to `struct SomeName { ... };`
No functional changes.
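For example (the names are placeholders):

```
/* Before (C style): */
typedef float Weight;
typedef struct SomeName {
  int value;
} SomeName;

/* After (C++): the struct needs no typedef at all. */
using Weight2 = float;
struct SomeName2 {
  int value;
};
/* The types are suffixed with 2 here only so the snippet compiles as one unit. */
```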
Adjust the threshold for switching from the base case to `trace > 0`,
based on very similar example code from www.euclideanspace.com, to
avoid float precision issues when the trace is close to -1.
Also, remove the conversions to and from double, because using double
here doesn't really have a benefit, especially with the new threshold.
Finally, add quaternion-matrix-quaternion round-trip tests with
full coverage for all 4 branches.
Differential Revision: https://developer.blender.org/D9675
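For context, the branch structure of the referenced www.euclideanspace.com example code looks roughly like this (row-major `m[row][col]` with column-vector convention assumed here; Blender's own matrix layout and function differ):

```
#include <cmath>

/* q is (w, x, y, z). The `trace > 0` branch is the base case; otherwise the
 * largest diagonal element selects one of the three axis-dominant branches. */
static void mat3_to_quat_reference(const float m[3][3], float q[4])
{
  const float trace = m[0][0] + m[1][1] + m[2][2];
  if (trace > 0.0f) {
    const float s = std::sqrt(trace + 1.0f) * 2.0f; /* s = 4 * qw */
    q[0] = 0.25f * s;
    q[1] = (m[2][1] - m[1][2]) / s;
    q[2] = (m[0][2] - m[2][0]) / s;
    q[3] = (m[1][0] - m[0][1]) / s;
  }
  else if (m[0][0] > m[1][1] && m[0][0] > m[2][2]) {
    const float s = std::sqrt(1.0f + m[0][0] - m[1][1] - m[2][2]) * 2.0f; /* s = 4 * qx */
    q[0] = (m[2][1] - m[1][2]) / s;
    q[1] = 0.25f * s;
    q[2] = (m[0][1] + m[1][0]) / s;
    q[3] = (m[0][2] + m[2][0]) / s;
  }
  else if (m[1][1] > m[2][2]) {
    const float s = std::sqrt(1.0f + m[1][1] - m[0][0] - m[2][2]) * 2.0f; /* s = 4 * qy */
    q[0] = (m[0][2] - m[2][0]) / s;
    q[1] = (m[0][1] + m[1][0]) / s;
    q[2] = 0.25f * s;
    q[3] = (m[1][2] + m[2][1]) / s;
  }
  else {
    const float s = std::sqrt(1.0f + m[2][2] - m[0][0] - m[1][1]) * 2.0f; /* s = 4 * qz */
    q[0] = (m[1][0] - m[0][1]) / s;
    q[1] = (m[0][2] + m[2][0]) / s;
    q[2] = (m[1][2] + m[2][1]) / s;
    q[3] = 0.25f * s;
  }
}
```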
The existing hash function didn't work well with Set's method of
masking to the lower bits, because many verts have zeros in the
lower bits.
Also, replaced VectorSet with Set for Vert deduping.
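A sketch of the kind of fix (hypothetical, not the exact hash now used): mixing the bits, e.g. by multiplying with a large odd constant, spreads information into the low bits that the Set's power-of-two masking actually looks at.

```
#include <cstdint>

/* Coordinates quantized to a grid tend to have many zero low bits. A plain
 * XOR of the raw bit patterns keeps those zeros, so a table that masks with
 * (table_size - 1) maps many keys to few slots. Multiplying by a large odd
 * constant mixes high bits down into the low bits. */
static uint64_t hash_vert_sketch(uint64_t x_bits, uint64_t y_bits, uint64_t z_bits)
{
  uint64_t h = x_bits;
  h = h * 0x9E3779B97F4A7C15ull + y_bits;
  h = h * 0x9E3779B97F4A7C15ull + z_bits;
  h = h * 0x9E3779B97F4A7C15ull;
  return h;
}
```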
By using floating point filters, the speed improves by a factor of 2 to 10.
This will help speed up some cases of the Exact Boolean modifier.
Changed the interface of mpq2::isect_seg_seg to not return mu, as it was
not needed, and not calculating it saved about 15% of the time.
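For context, a floating point filter in this sense evaluates a predicate in fast double arithmetic first and only falls back to exact rational arithmetic when the result is too close to zero to be trusted. A hypothetical sketch with a deliberately rough error bound (the real code uses proper error-bound machinery and mpq types):

```
#include <cmath>
#include <optional>

enum class Sign { Negative, Zero, Positive };

/* Floating point filter for the 2D orientation predicate: try fast double
 * arithmetic first; return nothing when the result is too close to zero to
 * be trusted, in which case the caller falls back to exact mpq arithmetic. */
static std::optional<Sign> orient2d_fast(const double a[2], const double b[2], const double c[2])
{
  const double det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
  /* A conservative (and here intentionally rough) bound on the rounding error. */
  const double magnitude = std::abs(b[0] - a[0]) * std::abs(c[1] - a[1]) +
                           std::abs(b[1] - a[1]) * std::abs(c[0] - a[0]);
  const double error_bound = 8.0 * 1e-16 * magnitude;
  if (det > error_bound) {
    return Sign::Positive;
  }
  if (det < -error_bound) {
    return Sign::Negative;
  }
  return std::nullopt; /* Undecidable with doubles; use the exact computation. */
}
```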
The code for determining coplanar clusters had a bug where it would
miss some triangles. The fix for now is to just put triangles into
a cluster if their bounding boxes overlap. This works, but may make
clusters bigger than they have to be. I'll follow this commit
with work on making the CDT routine faster when using exact arithmetic.
Also removed a lot of unused code, and added some new intersect
performance tests.