In the reported file, adding the "resolution" attribute didn't clear
the evaluated positions cache. In some cases, capturing an attribute
on the mesh might just add it to the mesh rather than using an
attribute writer.
Previously, we used precomputed Gaussian fits to the XYZ CMFs, performed
the spectral integration in that space, and then converted the result
to the RGB working space.
That worked because we're only supporting dielectric base layers for
the thin film code, so the inputs to the spectral integration
(reflectivity and phase) are both constant w.r.t. wavelength.
However, this will no longer work for conductive base layers.
We could handle reflectivity by converting to XYZ, but that won't work
for phase since its effect on the output is nonlinear.
Therefore, it's time to do this properly by performing the spectral
integration directly in the RGB primaries. To do this, we need to:
- Compute the RGB CMFs from the XYZ CMFs and XYZ-to-RGB matrix
- Resample the RGB CMFs to be parametrized by frequency instead of wavelength
- Compute the FFT of the CMFs
- Store it as a LUT to be used by the kernel code
However, there are two optimizations we can make:
- Both the resampling and the FFT are linear operations, as is the
XYZ-to-RGB conversion. Therefore, we can resample and Fourier-transform
the XYZ CMFs once, store the result in a precomputed table, and then just
multiply the entries by the XYZ-to-RGB matrix at runtime.
- I've included the Python script used to compute the table under
`intern/cycles/doc/precompute`.
- The reference implementation by the paper authors [1] simply stores the
real and imaginary parts in the LUT, and then computes
`cos(shift)*real + sin(shift)*imag`. However, the real and imaginary parts
are oscillating, so the LUT with linear interpolation is not particularly
good at representing them. Instead, we can convert the table to a
Magnitude/Phase representation, which is much smoother, and do
`mag * cos(phase - shift)` in the kernel (see the sketch after this
list).
- Phase needs to be unwrapped to handle the interpolation decently,
but that's easy.
- This requires an extra trig operation in the kernel in the dielectric case,
but for the conductive case we'll actually save three.
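To make this concrete, here is a minimal NumPy sketch of the
precompute step (illustrative only, not the actual script under
`intern/cycles/doc/precompute`; the sample counts and function names
are made up). Note that the XYZ-to-RGB multiply acts on the complex
entries before the magnitude/phase conversion, which is the only
nonlinear step:

```python
import numpy as np

def precompute_cmf_table(wavelengths, cmf_xyz, xyz_to_rgb,
                         num_samples=1024, num_entries=256):
    # Resample the CMFs to be uniform in frequency (1/wavelength), since
    # the thin-film phase shift is linear in frequency, not wavelength.
    freq = 1.0 / wavelengths
    freq_uniform = np.linspace(freq.min(), freq.max(), num_samples)
    cmf = np.stack([np.interp(freq_uniform, freq[::-1], channel[::-1])
                    for channel in cmf_xyz])

    # FFT of each channel; resampling, the FFT, and the XYZ-to-RGB
    # conversion are all linear, so their order doesn't matter.
    spectrum = xyz_to_rgb @ np.fft.rfft(cmf, axis=1)[:, :num_entries]

    # Store magnitude and *unwrapped* phase instead of real/imaginary
    # parts: both are smooth, so a LUT with linear interpolation
    # represents them well. The kernel evaluates mag * cos(phase - shift).
    return np.abs(spectrum), np.unwrap(np.angle(spectrum), axis=1)
```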
Rendered output is mostly the same, just slightly different because we're
no longer using the Gaussian approximation.
[1] "A Practical Extension to Microfacet Theory for the Modeling of
Varying Iridescence" by Laurent Belcour and Pascal Barla,
https://belcour.github.io/blog/research/publication/2017/05/01/brdf-thin-film.html
Pull Request: https://projects.blender.org/blender/blender/pulls/140944
Detect which volume attribute nodes have a linear mapping to their
usage as density / color / temperature in volume shader nodes, and use
stochastic sampling for them.
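As a tiny self-contained demonstration (plain Python, not Blender
code) of why the linearity check matters: a one-tap stochastic sample
is an unbiased estimate under any linear mapping of the attribute, but
not under a nonlinear one.

```python
import random

values, weights = [0.2, 0.8, 0.5], [0.25, 0.5, 0.25]
exact = sum(v * w for v, w in zip(values, weights))

def one_tap():
    # Pick one tap with probability equal to its weight (weights sum to 1).
    u, c = random.random(), 0.0
    for v, w in zip(values, weights):
        c += w
        if u < c:
            return v
    return values[-1]

def linear(x):
    return 2.0 * x + 1.0

n = 200_000
estimate = sum(linear(one_tap()) for _ in range(n)) / n
assert abs(estimate - linear(exact)) < 0.01  # unbiased under a linear map
# A nonlinear usage, e.g. x**2, would converge to E[x^2] != E[x]^2,
# which is why only linearly-mapped attributes qualify.
```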
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
Stochastically turn a tricubic filter into a trilinear one. This
reduces the number of taps from 64 to 8. It combines ideas from
the "Stochastic Texture Filtering" paper and our previous GPU
sampling of 3D textures (see the sketch after the list below).
This is currently only used in a few places where we know stochastic
interpolation is valid or close enough in practice:
* Principled volume density, color and temperature
* Motion blur velocity
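A minimal sketch of the core trick (hypothetical Python, not the
actual kernel code): the cubic B-spline is the linear (tent) filter
convolved with itself, so jittering the lookup position by a
tent-distributed offset per axis and then doing a single trilinear
fetch gives an unbiased estimate of the tricubic result.

```python
import random

def stochastic_tricubic_position(x, y, z, rng=random.random):
    # The sum of two uniforms on [0, 1) minus 1 is tent-distributed
    # on (-1, 1), matching the linear reconstruction filter.
    def tent():
        return rng() + rng() - 1.0
    # Feed the jittered position to an ordinary trilinear fetch (8 taps);
    # averaging over samples converges to the tricubic (64-tap) result.
    return (x + tent(), y + tent(), z + tent())
```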
On a MacBook Pro M3 with the openvdb_smoke.blend regression test
and cubic sampling, this gives a ~2x speedup for CPU and ~4x speedup
for GPU. However, it also increases noise, usually only a little.
Equal-time renders for this scene show a clear reduction in noise for
both CPU and GPU.
Note we can probably get a bigger speedup with an acceptable noise
trade-off using full stochastic sampling, but that will be
investigated separately.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
All GPU backends now support NanoVDB, using our own kernel-side code
that is easily portable. This simplifies kernel and device code.
Volume bounds are now built from the NanoVDB grid instead of OpenVDB,
to avoid having to keep around the OpenVDB grid after loading.
While this reduces memory usage, it does have a performance impact,
particularly for the Cubic filter. That will be addressed by
another commit.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
This commit fixes an issue where GPU compositor tests would always
run with the default GPU backend: Metal on macOS and OpenGL on other
operating systems.
Now tests run on the configured GPU backend, allowing Vulkan GPU
compositor tests to actually use the Vulkan backend.
Pull Request: https://projects.blender.org/blender/blender/pulls/141447
This is a basic armature deformation test for #141535 using Lattices instead of
Mesh as the target object type. Lattice deformation was briefly broken, which is
caught by this test.
The test adds the general-purpose `unit_test_compare` function to
lattice object data. It only compares lattice point counts and
positions for now; more data can be added later if necessary.
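A hypothetical usage sketch from a test script (the keyword argument
and object names are assumptions for illustration; for meshes the
equivalent function returns the string "Same" on success):

```python
import bpy

# Compare deformed lattice data against the expected result; only point
# counts and positions are checked for now.
result = bpy.data.lattices["Lattice"].unit_test_compare(
    lattice=bpy.data.lattices["LatticeExpected"])
assert result == "Same", result
```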
The `MeshTest` class did not support lattice object types yet, so it
needed some changes. The Curves case was already supported, but only by
full conversion to mesh data, without actually using the
`unit_test_compare` function specific to curves geometry. This is
unchanged, because applying constructive modifiers on curves does not
work; if it were not for this limitation, the test could do actual
curves comparisons now.
For lattice support, the `MeshTest` class comparison function has been
generalized to all supported object data types. It runs the appropriate
`unit_test_compare` API function and validation where supported (only
meshes at this point).
Pull Request: https://projects.blender.org/blender/blender/pulls/141546
The mesh importer was only checking for animated positions, velocities,
and primvars when determining if a cache modifier needed to be used.
Extend this to crease values (and normals) too.
The added test ensures a base level of coverage here.
Related to: #141633
Pull Request: https://projects.blender.org/blender/blender/pulls/141643
Adds a new file for testing the Multires Apply Base operator.
This is not included with the other modifier tests as it is a bit of an
exception - the modifier is not applied at the end for comparison
between the expected and actual results.
Pull Request: https://projects.blender.org/blender/blender/pulls/141571
When beveling a vertex with only 2 connected edges, the resulting
geometry was not selected.
Resolve by not only selecting the faces created by the bevel operation,
but also selecting the vertices created.
Note: this applies the same change as before; the LFS data has been
manually pushed.
Ref !139691
When beveling a vertex with only 2 connected edges, the resulting
geometry was not selected.
Resolve by not only selecting the faces created by the bevel operation,
but also selecting the vertices created.
Ref !139691
This commit enforces a minimum number of frames in the EEVEE
performance tests (`tests/performance/tests/eevee.py`). This way,
test results are more reliable for .blend files with a low number of
animation frames.
Pull Request: https://projects.blender.org/blender/blender/pulls/141290
**Problem Description**
Blender's current mesh data layout often lacks spatial coherence,
causing performance bottlenecks during BVH construction for sculpting
and painting operations. Each time a BVH is built, the system must
recompute spatial partitioning and vertex groupings from scratch,
leading to redundant calculations and suboptimal memory access patterns.
**Proposed Solution**
This patch implements pre-computed spatial organization of mesh data
through a new `mesh_apply_spatial_organization()` function that:
- Reorders vertices and faces based on spatial locality using recursive
spatial partitioning.
- Stores pre-computed MeshGroup hierarchies in MeshRuntime for reuse.
- Enables the BVH system to bypass expensive spatial computation when
pre-organized data is available.
This approach separates the expensive spatial computation from more
frequent BVH rebuilds, providing sustained performance improvements
across multiple sculpting operations.
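As a rough illustration of the reordering, here is a kd-tree-style
median-split sketch in Python/NumPy (the actual C++ implementation and
the stored `MeshGroup` hierarchy differ):

```python
import numpy as np

def spatial_order(positions, leaf_size=64):
    # Return an index array that reorders `positions` (an N x 3 array)
    # so that spatially nearby elements end up nearby in memory.
    def split(idx, axis):
        if len(idx) <= leaf_size:
            return [idx]
        # Partition around the median along the current axis, then
        # recurse, cycling through x/y/z like a kd-tree build.
        order = idx[np.argsort(positions[idx, axis])]
        mid = len(order) // 2
        next_axis = (axis + 1) % 3
        return split(order[:mid], next_axis) + split(order[mid:], next_axis)

    return np.concatenate(split(np.arange(len(positions)), 0))
```

The recorded splits are what the BVH build can later reuse instead of
recomputing the spatial partitioning from scratch.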
**Limitations**
- Requires manual invocation (occurs automatically only during remesh
operations).
- Additional memory overhead for storing MeshGroup metadata.
- One-time computational cost during initial organization.
- Spatial group references are not yet stored in files.
**User Interface**
The feature is accessible via a new "Reorder Mesh Spatially" operator in
the Mesh Data Properties panel under the Geometry Data section. Users
can invoke it manually when needed, or it will be applied automatically
during quadriflow and voxel remesh operations. The operator provides
feedback confirming successful spatial reordering.
Pull Request: https://projects.blender.org/blender/blender/pulls/139536
This commit adds a test for the situation in which an AOV pass is used
on an object with a fully or semi-transparent material, either using
the transparent BSDF directly, or mixing it with some other material.
Pull Request: https://projects.blender.org/blender/blender/pulls/141068
It was applying the bounds clamp to each object individually,
which changed spatial relationships between objects. Make the logic
match what used to be done in the Python OBJ importer back in the day.
Pull Request: https://projects.blender.org/blender/blender/pulls/141446
Prior to this commit, when a paint stroke was executed directly
instead of being processed via the modal handler, the
`paint_brush_update` function was not called. This method handles
initialization of some temporary stroke data inside
`UnifiedPaintSettings`, which is used by dyntopo when performing edge
collapse.
This caused a divide-by-zero with certain uninitialized settings when
using a brush with dyntopo enabled and calling the operator from the
Python API (e.g. from unit testing), resulting in nonsensical
deformations.
There are a number of weak points with the current design:
* This issue was only exposed because of the refactor to the
`UnifiedPaintSettings`, indicating that despite these values being
runtime-only, they were still persisted in some cases in .blend files.
* The data stored as individual stroke steps is not sufficient to
reconstruct a paint stroke given a list of screen-space locations, and
this data is populated outside of the common `stroke` callbacks.
Both of the above issues are wider reaching than this PR is intended to
fix.
This commit ensures that `paint_brush_update` is called in the `exec`
codepath and updates the related test image.
Pull Request: https://projects.blender.org/blender/blender/pulls/141314
`node_bilateralblur.blend` covered a specific case where the
determinator was a single value. It was renamed to
`node_bilateralblur_det_single_value.blend`, and a single-value input
test for the image as well as a standard bilateral blur test for
CPU/GPU were added.
Coverage:
- Function: 66.67% -> 100%
- Line: 39.10% -> 100%
- Region: 45.71% -> 100%
- Branch: 22.22% -> 88.89%
Pull Request: https://projects.blender.org/blender/blender/pulls/141315
Add a performance test for subdividing a multiresolution mesh from
level 2 to 3. This test ends with a total fine vertex count of
approximately 10 million, similar to the stroke and BVH tests for
multires.
Pull Request: https://projects.blender.org/blender/blender/pulls/141168
This commit moves Curves and Grease Pencil to use `AttributeStorage`
instead of `CustomData`, except for vertex groups. This PR mostly
involves extending the changes from the above commit for point clouds
to generalize to other geometry types.
This is mostly straightforward, though a couple of non-trivial places
of note are the joining of Grease Pencil objects (`merge_attributes`),
the "default render fallback" UV for curves objects, which was
previously unused at the UI level and just ended up being the first
attribute, and the `update_curve_types()` call in the curves
versioning function.
Similar to:
- fa03c53d4a
- f74e304b00
Part of #122398.
Pull Request: https://projects.blender.org/blender/blender/pulls/140936
The issue was that the depsgraph was not rebuilt, so the driver node
stuck around. This then crashed when the depsgraph was evaluated.
The reason this wasn't caught in the unit tests was that the depsgraph
was not updated between creating and removing the data.
Pull Request: https://projects.blender.org/blender/blender/pulls/141272
Test for armature deform modifier settings that are not yet covered by other tests:
- Vertex Group vs. Envelope deformation and both combined.
- Vertex group masking ('vertex_group' setting of the modifier)
- Inverted vertex group masking
- Preserve Volume (dual quaternions)
- Vertex Group/Envelope influence mixing (bone option)
- B-Bone deformation
- Multi-modifier mixing
Each case has a new test/expected mesh pair in the modifiers test.
Pull Request: https://projects.blender.org/blender/blender/pulls/141054
In a recent commit (1), light leaking was accidentally introduced on
some objects making use of normal maps (#139870).
This commit adds a test inspired by the file in that report to avoid
issues like this recurring in the future.
(1) a6015e1411
Pull Request: https://projects.blender.org/blender/blender/pulls/141065
This addresses issue #120949 by moving compositor tests to reflect the
grouping used when adding a new node.
This PR only moves the relevant single tests and their renders into
matching directories. Folders such as 'multi-node setups' and
'pixel nodes' were not changed.
Pull Request: https://projects.blender.org/blender/blender/pulls/139757
Files saved in 4.5 with a compositing node tree containing a Normal
node with an animated 'Dot' input crashed in 5.0.
A test case with animated dot and normal inputs was added as well.
Pull Request: https://projects.blender.org/blender/blender/pulls/140908
This test replaces the existing multires modifier test with one that
subdivides the mesh and then compares the result. The existing one has
little purpose, as it applies a modifier with 0 subdivision levels.
Pull Request: https://projects.blender.org/blender/blender/pulls/140567
The `apply_modifiers` property of the `RunTest` class overrides
all of the test-level `apply_modifier` properties. This prevents
tests from manually specifying when a modifier is applied and forces
the modifier to be applied immediately after it is added.
The vast majority of tests do not override the `apply_modifier`
property; the primary use case for this property is to work in
combination with the `do_compare` property to allow examining the
corresponding .blend file when debugging test failures.
This commit simplifies the settings by removing this parameter. It now
only disables applying the modifier if `do_compare` is set to False.
Pull Request: https://projects.blender.org/blender/blender/pulls/140893
When a benchmark test was failing, there was very little info available
to investigate it. Now report the stdout/stderr generated by the failing
command.
This patch removes the Texture node from the compositor, which was based
on the legacy Internal Textures system in Blender. The main motivation
for removing this node is as follows:
- Procedural texturing nodes that previously existed in shading and
geometry nodes are now supported in the compositor. These cover 95% of
what was previously possible and even add new possibilities like
Gabor, Bricks, and various improvements to existing texture types.
- The old texture system did not support GPU evaluation, so it was
always computed and cached on the CPU, which causes bad performance,
especially for interactive use in the viewport compositor, while the
new nodes are fully GPU-accelerated and do not require any caching.
- The Texture node didn't support node-based textures, so it was never
fully supported, and so far we had a warning about that.
- The general direction in Blender is to remove the old texture system,
and the compositor was one of the last main users of it. 5.0 is thus
the ideal time to remove such use.
- The Texture node was always and still is a source of bugs, since it
relies on proper tagging for cache invalidation and updates, which is
so far not perfect. It also suffers from UI/UX issues, since it needs
to be adjusted from the properties panel, which can break if there are
other texture nodes in the context.
This is a breaking change and no versioning was attempted since:
1. It is impossible to get the same results as before due to the use of
different random number generators, so any versioning would just give us
the general look.
2. The Texture node supports a lot of possible configurations. For
instance, each general texture can have many options for the basis type,
and each basis type might have multiple options. So versioning all of
that would take a lot of time, code, and effort.
Pull Request: https://projects.blender.org/blender/blender/pulls/140545