Cleanup to avoid unnecessary copies of VArray. This
requires ref-qualified overloads of the attribute reader's
dereference operator, as well as some move operator and constructor
overloads in the code.
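A minimal sketch of the pattern (with hypothetical, simplified names; the real attribute reader is a template): an rvalue-qualified dereference overload lets callers move the contained VArray out of a temporary reader instead of copying it.

```cpp
#include <utility>

#include "BLI_virtual_array.hh"

/* Hypothetical, simplified reader; the real attribute reader is templated. */
struct FloatAttributeReader {
  blender::VArray<float> varray;

  /* Lvalue reader: hand out a reference, no copy. */
  const blender::VArray<float> &operator*() const & { return this->varray; }
  /* Rvalue (temporary) reader: move the VArray out instead of copying it. */
  blender::VArray<float> operator*() && { return std::move(this->varray); }
};
```

With that, dereferencing a temporary reader moves the array instead of copying it.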
Pull Request: https://projects.blender.org/blender/blender/pulls/118437
All official Blender platforms use the SIMD code path, with pow() approximations
for 2.4 and 1/2.4 powers. The non-SIMD code path would only be used on other
platforms like PowerPC etc. Make that fallback scalar code path use the same
math approximation for consistency. This is part of #121312.
This also makes the srgb_to_linearrgb_v3_v3 and linearrgb_to_srgb_v3_v3 functions
non-inlined. They are 50-100 CPU instructions, and thus hardly good candidates
for forced inlining into each and every call site.
Also, _bli_math_blend_sse now uses the actual SSE4 blend instruction instead of
doing it in a roundabout way.
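For illustration (a sketch, not the exact Blender code), the roundabout way composes the blend from bitwise operations, while SSE4.1 provides a single instruction for it:

```cpp
#include <smmintrin.h> /* SSE4.1 */

/* Roundabout blend: select `a` where mask bits are set, `b` elsewhere. */
static __m128 blend_bitwise(const __m128 mask, const __m128 a, const __m128 b)
{
  return _mm_or_ps(_mm_and_ps(mask, a), _mm_andnot_ps(mask, b));
}

/* Same selection with the single SSE4.1 blend instruction. Note that blendv
 * looks only at each lane's sign bit; for the all-ones/all-zeros masks
 * produced by comparisons, the two versions are equivalent. */
static __m128 blend_sse4(const __m128 mask, const __m128 a, const __m128 b)
{
  return _mm_blendv_ps(b, a, mask);
}
```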
Pull Request: https://projects.blender.org/blender/blender/pulls/121368
This should get all of the tests to pass on Windows ARM64 platforms.
Sadly it needs to be disabled for the Hydra/USD code, as that currently doesn't play nicely with the new preprocessor. @LazyDodo suggested a USD version update may fix this, which is something I can investigate in due course - right now, let's get daily builds up and working :)
Pull Request: https://projects.blender.org/blender/blender/pulls/121361
This is because sse2neon.h might be used to emulate SSE intrinsics
on the ARM64 architecture, and it uses some preprocessor functionality
which is not available for the C language when using MSVC.
The old-style math file math_matrix.c uses this header, so it needed
to become C++. A simple rename did not work since a new math
utility math_matrix.cc already exists. Following an existing convention,
math_matrix.c is renamed to math_matrix_c.cc. Eventually all the
code should switch to the C++ style math and the C style should be
removed, so it seems reasonable to not mix the old and new styles of
API in the same file.
There should be no functional changes.
Pull Request: https://projects.blender.org/blender/blender/pulls/121335
This integrates the functionality for `parallel_for_weighted` from 9a3ceb79de
into `parallel_for`. This reduces the number of entry points to the threading
API and also makes it easier to build higher level threading primitives. For
example, `IndexMask.foreach_*` may use `parallel_for` if a `GrainSize` is
provided, but can't use `parallel_for_weighted` easily without duplicating a
fair amount of code.
The default behavior of `parallel_for` does not change. However, now one can
optionally pass in `TaskSizeHints` as the last parameter. This can be used to
specify the size of individual tasks relative to each other and relative to the
grain size. This helps with scheduling more equally sized tasks, which generally
improves performance because threads are used more effectively.
One generally does not construct `TaskSizeHints` manually, but calls either
`threading::individual_task_sizes` or `threading::accumulated_task_sizes`. Both
allow specifying individual task sizes, but the latter should be used when the
combined size of consecutive tasks can be computed in O(1) time. This allows
splitting up the work more efficiently. It can often be used in conjunction with
`OffsetIndices`.
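A hedged usage sketch (the exact signatures are inferred from the description above and may differ): schedule per-curve work where each curve's cost is its point count, with O(1) accumulated sizes via `OffsetIndices`.

```cpp
#include "BLI_offset_indices.hh"
#include "BLI_task.hh"

using namespace blender;

/* `points_by_curve[i]` is the point range of curve i (hypothetical input). */
static void process_all_curves(const OffsetIndices<int> points_by_curve)
{
  threading::parallel_for(
      points_by_curve.index_range(),
      512, /* Grain size, here measured in points rather than curves. */
      [&](const IndexRange range) {
        for (const int curve_i : range) {
          /* Per-curve work whose cost scales with its point count. */
          (void)curve_i;
        }
      },
      /* The combined size of consecutive curves is O(1) via the offsets. */
      threading::accumulated_task_sizes(
          [&](const IndexRange range) { return points_by_curve[range].size(); }));
}
```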
Pull Request: https://projects.blender.org/blender/blender/pulls/121127
This PR inverts the return value of `indexed_data_equal` so that the
function returns what one would expect (i.e. if everything is equal it
returns true, if anything is not equal it returns false).
Pull Request: https://projects.blender.org/blender/blender/pulls/121056
Support for building Blender with Clang on Windows on x64 was added
years ago, but given there are no active users, support has crumbled a
bit.
This PR brings the build system back into working order, but upstream
patches in OpenVDB are still required for a successful build; see PR
#120317 for details.
When Blender is built with Clang, the Classroom scene rendered on the
CPU with Cycles sees a 5% reduction in render time on both an
AMD 7700X and an Intel 14900K.
Factors the utility function "does the path contain any hidden file/folder
components?" out of the innards of the UI code (edit_file,
is_hidden_dot_filename) into a BLI function BLI_path_has_hidden_component
and then:
- Adds unit test coverage to it, which uncovered some inconsistencies
- Fixes the behavior inconsistencies:
  - A path component that is just a dot (.) was not considered hidden,
    unless it was the first folder component (now this is fixed).
  - A path component that ended in a tilde (~) was considered hidden,
    unless it was the first folder component (now this is fixed).
- Speeds up the function by not doing several recursive scans over the
  string; instead all the logic is done in a single string scan (see the
  sketch below).
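A minimal single-scan sketch of the idea, assuming the semantics described above (the actual BLI implementation may differ in details such as ".." handling):

```cpp
#include <cstring>

/* True if any path component starts with '.' (except the "." and ".."
 * components) or ends with '~'. One pass over the string, no recursion. */
static bool path_has_hidden_component(const char *path)
{
  size_t comp_start = 0;
  const size_t len = strlen(path);
  for (size_t i = 0; i <= len; i++) {
    const bool at_end = (i == len) || path[i] == '/' || path[i] == '\\';
    if (!at_end) {
      continue;
    }
    const size_t comp_len = i - comp_start;
    const bool is_dot = comp_len == 1 && path[comp_start] == '.';
    const bool is_dotdot = comp_len == 2 && path[comp_start] == '.' &&
                           path[comp_start + 1] == '.';
    if (comp_len > 0 && !is_dot && !is_dotdot) {
      if (path[comp_start] == '.' || path[i - 1] == '~') {
        return true;
      }
    }
    comp_start = i + 1;
  }
  return false;
}
```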
Synthetic HasHiddenComponents_Performance test: 6.0s -> 1.1s for 50M calls
on a Mac M1 Max.
A more real-world test (setup as in #120494): out of the whole
build_catalog_tree time, the time taken by BLI_path_has_hidden_component
drops from 37% down to 28%.
Pull Request: https://projects.blender.org/blender/blender/pulls/120541
Previously, this conversion would often result in invalid quaternions or
hit an assert in `normalized_to_quat_fast`. Performance-wise, it's not super
nice to convert to Euler angles as an intermediate step, but it seems to be
the easiest solution for now. Extracting rotations from matrices should
not be done all that often anyway.
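The intermediate step amounts to something like this sketch (function names as in `blender::math`, but treat the exact calls as an assumption):

```cpp
#include "BLI_math_matrix.hh"
#include "BLI_math_rotation.hh"

using namespace blender;

/* Go through Euler angles instead of converting the matrix directly,
 * avoiding invalid quaternions for not-quite-orthonormal inputs. */
static math::Quaternion quaternion_from_matrix(const float3x3 &mat)
{
  const math::EulerXYZ euler = math::to_euler(mat);
  return math::to_quaternion(euler);
}
```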
Pull Request: https://projects.blender.org/blender/blender/pulls/120568
Commit 8b9743eb40 already made Blender compile with SSE4.2 flags
on the x64 architecture, which kicked in the SSE4 code paths in the
BLI_math_interp functions.
This made them faster, e.g. in the VSE on Windows/Ryzen 5950X, scaling
up an image to 4K resolution:
- Bilinear 5.8ms -> 5.3ms
- Cubic Mitchell 16.3ms -> 15.7ms
This change removes the now-unneeded pre-SSE4 code paths that emulated
_mm_floor_ps, _mm_min_epi32 and _mm_max_epi32.
Additionally, including BLI_simd.h on SSE4 platforms now includes
the necessary SSE4 intrinsics header.
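For reference, the removed fallbacks looked roughly like this sketch (illustrative, not the exact deleted code); with SSE4.1 guaranteed, the single instruction suffices:

```cpp
#include <smmintrin.h> /* SSE4.1: _mm_floor_ps, _mm_min_epi32, _mm_max_epi32 */

/* Pre-SSE4 floor emulation: truncate toward zero, then subtract 1 where
 * truncation rounded up (i.e. for negative non-integers). Breaks for values
 * outside the int32 range. */
static __m128 floor_ps_emulated(const __m128 v)
{
  const __m128 t = _mm_cvtepi32_ps(_mm_cvttps_epi32(v));
  return _mm_sub_ps(t, _mm_and_ps(_mm_cmplt_ps(v, t), _mm_set1_ps(1.0f)));
}

/* With SSE4.1 always available, the emulation is unnecessary: */
static __m128 floor_ps(const __m128 v)
{
  return _mm_floor_ps(v);
}
```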
Pull Request: https://projects.blender.org/blender/blender/pulls/120583
The problem was that `XXH3_128bits` was called on `len` bytes
and not `HashSizeInBytes + len` as before 51f8bf53b2.
This led to more compute context duplicates than one would expect.
I changed the code a little bit to make this mistake less likely in case
the hash function is ever changed to something else.
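In essence, the off-by-length bug above looked like this (a sketch with a hypothetical buffer layout, using the real `XXH3_128bits(data, len)` signature):

```cpp
#include "xxhash.h"

/* Hypothetical layout: the previous hash is stored in front of the new data. */
constexpr size_t HashSizeInBytes = sizeof(XXH128_hash_t);

static XXH128_hash_t chain_hash(const char *buffer, const size_t len)
{
  /* Buggy: hashed only the new `len` bytes, ignoring the previous hash. */
  // return XXH3_128bits(buffer + HashSizeInBytes, len);

  /* Fixed: hash the previous hash together with the new data. */
  return XXH3_128bits(buffer, HashSizeInBytes + len);
}
```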
Due to legacy reasons (`MEdge`), edge calculation was being done with
the idea that edges cannot be temporarily copied. But today, edges are just
`int2`, so using `edge *` instead of `edge` actually made things worse.
And since `OrderedEdge` itself is the same thing as `int2`, it does not
make sense to use a `Map` for edges. So, now edges are in a hash set.
To be able to take the index of an edge, a `VectorSet` is used.
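Conceptually (a sketch of the idea, not the actual mesh code):

```cpp
#include "BLI_ordered_edge.hh"
#include "BLI_vector_set.hh"

using namespace blender;

/* Deduplicate edges and get a stable index per edge from one structure,
 * instead of maintaining a separate Map from edge to index. */
static int64_t edge_index_or_add(VectorSet<OrderedEdge> &edges,
                                 const int v1,
                                 const int v2)
{
  return edges.index_of_or_add(OrderedEdge(v1, v2));
}
```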
The only functional change now is that the original edges will be reordered
as well. This should be okay, being just an unintentional but stable
change of indices.
For edge calculation on a 2'000 x 2'000 x 2'000 cube, the change is around
`3703.47` -> `2911.18` ms.
In order to reduce memory usage, a template parameter is added to
`VectorSet` slots, so they can use a 32-bit instead of a 64-bit index type.
Without that, the performance change is not consistent and might not be
better on a computer with more memory bandwidth.
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/120224
This was creating issues with triangle winding order, since
the resulting matrix was degenerate (0 determinant), which
caused the draw manager to wrongly invert the winding.
This fixes a bug in EEVEE-Next mesh voxelization for volume
rendering (with the accurate method).
The `IndexMask` class already had a static function `from_union`.
This adds two new functions `from_difference` and `from_intersection`
as well as tests for each of them.
It also uses `from_intersection` in two grease pencil utility functions.
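A hedged usage sketch, assuming the same `IndexMaskMemory` pattern as the existing `from_union` and that `from_difference(a, b, ...)` computes the indices in `a` but not in `b`:

```cpp
#include "BLI_index_mask.hh"

using namespace blender::index_mask;

static void example()
{
  IndexMaskMemory memory;
  const IndexMask a = IndexMask::from_indices<int>({1, 2, 3, 5}, memory);
  const IndexMask b = IndexMask::from_indices<int>({2, 3, 4}, memory);

  const IndexMask in_both = IndexMask::from_intersection(a, b, memory); /* {2, 3} */
  const IndexMask a_only = IndexMask::from_difference(a, b, memory);    /* {1, 5} */
  (void)in_both;
  (void)a_only;
}
```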
Pull Request: https://projects.blender.org/blender/blender/pulls/120419
This replaces the compute shader pass for volume material properties
voxelization with a fragment shader that is run only once per pixel.
The fragment shader then executes the nodetree in a loop for each
individual froxel.
The motivations are:
- faster evaluation of homogeneous materials: the nodetree can be evaluated
  once and the properties quickly written for all froxels in a loop.
  This matches Cycles' homogeneous material optimization (except that
  it only considers the first hit).
- no invocations for empty froxels: not restricted to a box dispatch.
- support for more than one material: invocations are per pixel.
- cleaner implementation (no compute-shader-specific paths).
Implementation-wise, this is done by adding a stencil texture when
rendering volumetric objects. It is populated during the occupancy
phase but not directly used there (the stencil test is enabled, but
since we use `imageAtomic` to set the occupancy bits, the fragment
shader is forced to run). The early depth-test is then turned
on for the material properties pass, allowing only one fragment to
be invoked.
This fragment runs the nodetree at the desired frequency: once per
direction (homogeneous), or once per froxel (heterogeneous).
Note that I tried to use the frontmost fragment using a depth-equal
test, but it was failing for some reason on Apple Silicon, producing
flickering artifacts. We might reconsider this frontmost-fragment
approach later, since the result is now face-order dependent when
an object has multiple materials.
Pull Request: https://projects.blender.org/blender/blender/pulls/119439