This includes a new list structure type and socket shape, a node
to create lists, a node to retrieve values from lists, and a node to
retrieve the length of lists. It also implements multi-function support
so that function nodes work on lists.
There are three nodes included in this PR.
- **List** Creates a list of elements with a given size. The values
are computed with a field that can use the index as an input.
- **Get List Item** A field node that retrieves an element from a
list at a given index. The index input is dynamic, so if the input
is a list, the output will be a list too.
- **List Length** Just gives the length of a list.
When a function node is used with multiple list inputs, the shorter
lists are repeated to extend them to the length of the longest.
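To illustrate the repetition rule, here is a minimal Python sketch; it is
illustrative only, not the actual multi-function implementation:

```python
from itertools import cycle, islice

# Illustrative sketch of the repetition rule described above: shorter
# list inputs cycle until every input matches the longest list.
def map_multi(fn, *lists):
    longest = max(len(l) for l in lists)
    extended = (islice(cycle(l), longest) for l in lists)
    return [fn(*args) for args in zip(*extended)]

# A hypothetical "Add" function node over a 4-element and a 2-element list:
print(map_multi(lambda a, b: a + b, [1, 2, 3, 4], [10, 20]))
# -> [11, 22, 13, 24]
```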
The list nodes and structure type are hidden behind an experimental
feature until we can be sure they're useful for an actual use case.
Pull Request: https://projects.blender.org/blender/blender/pulls/140679
Add the new Curves datablock type to the IO report. This was created
for the new hair system but is more generally used for any type of curve
generated within geometry nodes.
The report itself is mostly just a dump of all available attributes for
the object.
Pull Request: https://projects.blender.org/blender/blender/pulls/142925
Use Principled BSDF instead of Diffuse BSDF. This is a breaking change but
likely does not affect many scenes significantly. It adds some specularity
when an object does not have a material assigned.
Fix #142538
Pull Request: https://projects.blender.org/blender/blender/pulls/142703
HDR video files are properly read into Blender, and can be rendered out
of Blender.
HDR video reading / decoding:
- Two flavors of HDR are recognized, based on color related video
metadata: "PQ" (Rec.2100 Perceptual Quantizer, aka SMPTE 2084) and
"HLG" (Rec.2100 Hybrid-Log-Gamma, aka ARIB STD B67). Both are read
effectively into floating point images, and their color space
transformations are done through OpenColorIO.
- The OCIO config shipped in Blender has been extended to contain
Rec.2100-PQ and Rec.2100-HLG color spaces.
- Note that if you already had an HDR video in the sequencer or a movie
  clip, it would have looked "incorrect" previously, and it will continue
  to look incorrect, since it already has the "wrong" color space assigned
  to it. Either re-add it (which should assign the correct color space),
  or manually change the color space to the PQ or HLG one as needed.
HDR video writing / encoding:
- For H.265 and AV1 the video encoding options now display the HDR mode.
Similar to reading, there are PQ and HLG HDR mode options.
- Reference white is assumed to be 100 nits.
- YUV uses "full" ("PC/jpeg") color range.
- No mastering display metadata is written into the video file, since
generally that information is not known inside Blender.
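As a rough illustration of the PQ mode mentioned above, here is the standard
SMPTE ST 2084 encoding curve in Python; this is illustrative only, not the
OCIO transform Blender actually uses:

```python
# Sketch of the Rec.2100 PQ (SMPTE ST 2084) encoding curve, showing how
# linear light maps to PQ code values.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = nits / 10000.0  # PQ is parametrized by absolute luminance, up to 10000 nits
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

# With reference white at 100 nits, scene-linear 1.0 encodes to roughly the
# middle of the PQ range, leaving headroom for highlights.
print(pq_encode(100.0))    # ~0.508
print(pq_encode(10000.0))  # 1.0
```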
More details and screenshots in the PR.
Co-authored-by: Sergey Sharybin <sergey@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/120033
Interpolation between neighboring quaternions needs to take the
shortest path. The Python FBX importer was doing this; due to an
oversight it was missed in the new importer.
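A minimal Python sketch of the shortest-path fix (illustrative, not the
actual importer code):

```python
import numpy as np

# q and -q encode the same rotation, so before interpolating a keyframe
# sequence, flip any quaternion whose dot product with its predecessor is
# negative; interpolation then takes the shortest path.
def make_compatible(quats):
    quats = np.asarray(quats, dtype=float).copy()
    for i in range(1, len(quats)):
        if np.dot(quats[i - 1], quats[i]) < 0:
            quats[i] = -quats[i]
    return quats

print(make_compatible([[1, 0, 0, 0], [-0.999, 0.01, 0, 0]]))
# The second key is flipped to [0.999, -0.01, 0, 0].
```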
Pull Request: https://projects.blender.org/blender/blender/pulls/142659
This rename creates separation between USD primvars/Blender geometry
attributes, which are not controlled with this setting, and the
concept of USD attributes+userProperties/Blender custom object
properties, which this option deals with.
Ref #134012
Pull Request: https://projects.blender.org/blender/blender/pulls/142301
Reordering layer nodes by drag and drop, move up/down, adding a new layer,
etc. swapped layer attributes with the wrong layers. This was due to a
mistake in the `new_by_old_map` map: its values are used as indices into
the source array, while its keys are indices into the destination array.
The expected behavior is to copy an attribute from its old position
(`layer_i_old`) in the source array to its new position (`layer_i_new`)
in the destination array. See: `array_utils::gather`.
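For reference, a NumPy sketch of the gather semantics (illustrative; the
real code is the C++ `array_utils::gather`):

```python
import numpy as np

# For every destination index, fetch the source element the map points at:
# dst[layer_i_new] = src[layer_i_old].
src = np.array(["attr_A", "attr_B", "attr_C"])
new_by_old_map = np.array([2, 0, 1])  # destination index -> old (source) index
dst = src[new_by_old_map]
print(dst)  # ['attr_C' 'attr_A' 'attr_B']
```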
Noticed during !141772.
Pull Request: https://projects.blender.org/blender/blender/pulls/141935
Cycles automatically tries to decide whether the camera ray should be a
surface or a volume + surface camera ray by checking whether the
scene contains a volumetric material, and if it does, whether it is near
where the camera rays are expected to spawn. This step is done
during scene initialization.
With the OSL camera, it is impractical to predict where the
camera rays will spawn during scene initialization, which makes it
impractical to predict whether the OSL camera ray will spawn near a
volumetric object. So this commit marks all OSL cameras as
"inside a volume", leading to the spawning of volume + surface camera
rays for OSL cameras whenever the scene contains a volumetric material.
This leads to increased render times ranging between 1% - 5% in scenes
that use an OSL camera and contain a volumetric object far away from
the camera. Every other scene should see no performance impact.
Testing was done on an AMD Ryzen 9 5950X and an NVIDIA GeForce RTX 4090.
Pull Request: https://projects.blender.org/blender/blender/pulls/142036
This fixes the anonymous attribute lifetime inferencing for the closure zone and
the evaluate closure node. It also adds new regression tests for cases where a
closure outputs a field.
Pull Request: https://projects.blender.org/blender/blender/pulls/141925
This fixes 4 bugs all conspiring to make the referenced SubD scenario
quite broken:
- When manually creating a MeshSequenceCache, the reading of edge and
vertex crease data was skipped. Fixed by reading crease data inside
the common `read_mesh` method.
- When importing an Alembic with animated edge or vertex crease data, a
MeshSequenceCache modifier was not being added to the object. This was
due to not checking the relevant crease properties and required adding
a specialized `has_animations` function.
- When importing animated vertex crease data, a duplicate `vertex_crease`
attribute would be created, breaking the animation. Fixed by using the
attribute API rather than custom data.
- The MeshSequenceCache scenario would call into the Alembic Mesh reader
which ended up referencing deallocated stack memory for the
ImportSettings. In release builds this would cause sporadic failures
because the value of `blender_archive_version_prior_44` would be
random. There was already a very old TODO for this, and we finally
needed to address it.
A new test was added which exports animated creases on two meshes and
re-imports them back in. It verifies that each mesh gets a
MeshSequenceCache modifier added and also ensures the values of the
creases are correct for all frames.
Pull Request: https://projects.blender.org/blender/blender/pulls/141646
These two tests cover the extended boundary option for the blur node.
One with single value input and another with variable input.
They mirror the non-extended gaussian tests.
Coverage:
- Function: 86.36% -> 90.91%
- Line: 91.95% -> 96.90%
- Region: 83.51% -> 96.63%
- Branch: 84.00% -> 95.65%
Pull Request: https://projects.blender.org/blender/blender/pulls/141528
Some parts of the UI (e.g., the boolean node) had renamed the 'fast'
solver to be the 'float' solver, since with the advent of the manifold
solver, it is no longer really right to characterize the float one
as the (sole) fast one. This commit finishes the job.
It does have the effect of changing the string needed within the
Python API to select the float solver, so this is a (minor)
API-breaking change.
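For scripts, the change looks roughly like this; a sketch, assuming the
enum identifier follows the UI rename from 'FAST' to 'FLOAT':

```python
import bpy

# Hypothetical migration for a script that selects the float boolean solver.
obj = bpy.context.object
mod = obj.modifiers.new(name="Boolean", type='BOOLEAN')
# mod.solver = 'FAST'  # old identifier, no longer accepted (assumption)
mod.solver = 'FLOAT'   # new identifier after the rename (assumption)
```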
In this report, adding the "resolution" attribute didn't clear the
evaluated positions cache. In some cases, capturing an attribute on
the mesh might just add the attribute to the mesh rather than using
an attribute writer.
Previously, we used precomputed Gaussian fits to the XYZ CMFs, performed
the spectral integration in that space, and then converted the result
to the RGB working space.
That worked because we're only supporting dielectric base layers for
the thin film code, so the inputs to the spectral integration
(reflectivity and phase) are both constant w.r.t. wavelength.
However, this will no longer work for conductive base layers.
We could handle reflectivity by converting to XYZ, but that won't work
for phase since its effect on the output is nonlinear.
Therefore, it's time to do this properly by performing the spectral
integration directly in the RGB primaries. To do this, we need to:
- Compute the RGB CMFs from the XYZ CMFs and XYZ-to-RGB matrix
- Resample the RGB CMFs to be parametrized by frequency instead of wavelength
- Compute the FFT of the CMFs
- Store it as a LUT to be used by the kernel code
However, there are two optimizations we can make:
- Both the resampling and the FFT are linear operations, as is the
XYZ-to-RGB conversion. Therefore, we can resample and Fourier-transform
the XYZ CMFs once, store the result in a precomputed table, and then just
multiply the entries by the XYZ-to-RGB matrix at runtime.
- I've included the Python script used to compute the table under
`intern/cycles/doc/precompute`.
- The reference implementation by the paper authors [1] simply stores the
real and imaginary parts in the LUT, and then computes
`cos(shift)*real + sin(shift)*imag`. However, the real and imaginary parts
are oscillating, so the LUT with linear interpolation is not particularly
good at representing them. Instead, we can convert the table to
Magnitude/Phase representation, which is much smoother, and do
`mag * cos(phase - shift)` in the kernel.
- Phase needs to be unwrapped to handle the interpolation decently,
but that's easy.
- This requires an extra trig operation in the kernel in the dielectric case,
but for the conductive case we'll actually save three.
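A small NumPy sketch of the magnitude/phase idea (illustrative, not the
kernel or table-generation code):

```python
import numpy as np

# Placeholder complex LUT entries standing in for the Fourier-transformed CMFs.
fft_cmf = np.fft.rfft(np.linspace(0.0, 1.0, 64) ** 2)

# Naive LUT: real and imaginary parts oscillate, so linear interpolation
# between entries represents them poorly.
real, imag = fft_cmf.real, fft_cmf.imag

# Smoother LUT: magnitude plus unwrapped phase (unwrapping removes the
# 2*pi jumps so interpolation behaves).
mag = np.abs(fft_cmf)
phase = np.unwrap(np.angle(fft_cmf))

# Both forms evaluate to the same thing in the kernel:
shift = 0.7
naive = np.cos(shift) * real + np.sin(shift) * imag
smooth = mag * np.cos(phase - shift)
assert np.allclose(naive, smooth)
```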
Rendered output is mostly the same, just slightly different because we're
no longer using the Gaussian approximation.
[1] "A Practical Extension to Microfacet Theory for the Modeling of
Varying Iridescence" by Laurent Belcour and Pascal Barla,
https://belcour.github.io/blog/research/publication/2017/05/01/brdf-thin-film.html
Pull Request: https://projects.blender.org/blender/blender/pulls/140944
Detect which volume attribute nodes have a linear mapping to their usage
as density / color / temperature in volume shader nodes, and use stochastic
sampling for them.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
Stochastically turn a tricubic filter into a trilinear one. This
reduces the number of taps from 64 to 8. It combines ideas from
the "Stochastic Texture Filtering" paper and our previous GPU
sampling of 3D textures.
This is currently only used in a few places where we know stochastic
interpolation is valid or close enough in practice.
* Principled volume density, color and temperature
* Motion blur velocity
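A 1D sketch of the idea (illustrative, not the kernel code), using the
standard decomposition of a cubic B-spline filter into two linear taps:

```python
import numpy as np

rng = np.random.default_rng(7)

def bspline_weights(f):
    # Cubic B-spline weights for the 4 texels around a sample with fraction f.
    w0 = (1 - f) ** 3 / 6
    w1 = (3 * f**3 - 6 * f**2 + 4) / 6
    w2 = (-3 * f**3 + 3 * f**2 + 3 * f + 1) / 6
    w3 = f**3 / 6
    return w0, w1, w2, w3

def linear_tap(tex, p):
    # One hardware-style linear lookup at continuous position p.
    i = int(np.floor(p))
    f = p - i
    return (1 - f) * tex[i] + f * tex[i + 1]

def cubic(tex, x):
    # Deterministic cubic filter expressed as two weighted linear taps.
    i, f = int(np.floor(x)), x - int(np.floor(x))
    w0, w1, w2, w3 = bspline_weights(f)
    g0, g1 = w0 + w1, w2 + w3
    return (g0 * linear_tap(tex, i - 1 + w1 / g0)
            + g1 * linear_tap(tex, i + 1 + w3 / g1))

def stochastic_cubic(tex, x):
    # Pick ONE of the two linear taps with probability proportional to its
    # weight: unbiased, halving the taps per axis (64 -> 8 in 3D) at the
    # cost of some noise.
    i, f = int(np.floor(x)), x - int(np.floor(x))
    w0, w1, w2, w3 = bspline_weights(f)
    g0, g1 = w0 + w1, w2 + w3
    if rng.random() < g0:
        return linear_tap(tex, i - 1 + w1 / g0)
    return linear_tap(tex, i + 1 + w3 / g1)

tex = rng.random(16)
x = 7.3
estimate = np.mean([stochastic_cubic(tex, x) for _ in range(100000)])
print(cubic(tex, x), estimate)  # the estimate converges to the cubic value
```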
On a MacBook Pro M3 with the openvdb_smoke.blend regression test
and cubic sampling, this gives a ~2x speedup for CPU and ~4x speedup
for GPU. However, it also increases noise, usually only a little. Equal
time renders for this scene show a clear reduction in noise for both
CPU and GPU.
Note we can probably get a bigger speedup with acceptable noise trade-off
using full stochastic sampling, but will investigate that separately.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
All GPU backends now support NanoVDB, using our own kernel side code
that is easily portable. This simplifies kernel and device code.
Volume bounds are now built from the NanoVDB grid instead of OpenVDB,
to avoid having to keep around the OpenVDB grid after loading.
While this reduces memory usage, it does have a performance impact,
particularly for the Cubic filter. That will be addressed by
another commit.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
This commit fixes an issue where GPU compositor tests would always run
with the default GPU backend: Metal on macOS and OpenGL on other
operating systems.
Now tests correctly run on the defined GPU backend, allowing Vulkan
GPU compositor tests to correctly run with the Vulkan GPU backend.
Pull Request: https://projects.blender.org/blender/blender/pulls/141447
This is a basic armature deformation test for #141535 using Lattices instead of
Mesh as the target object type. Lattice deformation was briefly broken, which is
caught by this test.
The test adds the general-purpose `unit_test_compare` function to lattice object
data. It only compares lattice point counts and positions for now; more data can
be added later if necessary.
The `MeshTest` class did not support lattice object types yet, so it needed
some changes. The Curves case was already supported, but only by full
conversion to mesh data, without actually using the `unit_test_compare`
function specific to curves geometry. This is unchanged, because applying
constructive modifiers on curves does not work. If it were not for this
limitation, the test could do actual curves comparisons now.
For lattice support the `MeshTest` class comparison function has been
generalized to all supported object data types. It runs the appropriate
`unit_test_compare` API function and validation where supported (only meshes at
this point).
Pull Request: https://projects.blender.org/blender/blender/pulls/141546
The mesh importer was only checking for animated positions, velocities,
and primvars when determining if a cache modifier needed to be used.
Extend this to crease values (and normals) too.
The added test ensures a base level of coverage here.
Related to: #141633
Pull Request: https://projects.blender.org/blender/blender/pulls/141643
Adds a new file for testing the Multires Apply Base operator.
This is not included with the other modifier tests as it is a bit of an
exception - the modifier is not applied at the end for comparison
between the expected result and actual result.
Pull Request: https://projects.blender.org/blender/blender/pulls/141571
When beveling a vertex with only 2 connected edges, the resulting
geometry was not selected.
Resolve by not only selecting the faces created by the bevel operation,
but also selecting the vertices created.
Note: this applies the same change as before; the LFS data has been
manually pushed.
Ref !139691
When beveling a vertex with only 2 connected edges, the resulting
geometry was not selected.
Resolve by not only selecting the faces created by the bevel operation,
but also selecting the vertices created.
Ref !139691
This commit adds a minimum number of frames tested in the EEVEE
performance tests (`tests/performance/tests/eevee.py`). This way,
test results are more reliable for .blend files with a low number of
animation frames.
Pull Request: https://projects.blender.org/blender/blender/pulls/141290
**Problem Description**
Blender's current mesh data layout often lacks spatial coherence,
causing performance bottlenecks during BVH construction for sculpting
and painting operations. Each time a BVH is built, the system must
recompute spatial partitioning and vertex groupings from scratch,
leading to redundant calculations and suboptimal memory access patterns.
**Proposed Solution**
This patch implements pre-computed spatial organization of mesh data
through a new `mesh_apply_spatial_organization()` function that:
- Reorders vertices and faces based on spatial locality using recursive
spatial partitioning.
- Stores pre-computed MeshGroup hierarchies in MeshRuntime for reuse.
- Enables the BVH system to bypass expensive spatial computation when
pre-organized data is available.
This approach separates the expensive spatial computation from more
frequent BVH rebuilds, providing sustained performance improvements
across multiple sculpting operations.
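A minimal Python sketch of this kind of recursive median-split reordering;
illustrative only, the actual `mesh_apply_spatial_organization()` differs
in detail:

```python
import numpy as np

def spatial_order(points, leaf_size=32):
    # Recursively split point indices at the median along the widest axis,
    # so nearby points end up with nearby indices.
    order = []

    def split(idx):
        if len(idx) <= leaf_size:
            order.extend(idx)
            return
        pts = points[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # widest axis
        mid = len(idx) // 2
        part = idx[np.argpartition(pts[:, axis], mid)]       # median split
        split(part[:mid])
        split(part[mid:])

    split(np.arange(len(points)))
    return np.array(order)

# Vertices that are close in space become close in index order, so a later
# BVH build can take contiguous index ranges as its spatial groups.
verts = np.random.rand(1000, 3)
new_to_old = spatial_order(verts)
reordered_verts = verts[new_to_old]
```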
**Limitations**
- Requires manual invocation (occurs automatically only during remesh
operations).
- Additional memory overhead for storing MeshGroup metadata.
- One-time computational cost during initial organization.
- Spatial group references are not yet stored in files.
**User Interface**
The feature is accessible via a new "Reorder Mesh Spatially" operator in
the Mesh Data Properties panel under the Geometry Data section. Users
can invoke it manually when needed, or it will be applied automatically
during quadriflow and voxel remesh operations. The operator provides
feedback confirming successful spatial reordering.
Pull Request: https://projects.blender.org/blender/blender/pulls/139536
This commit adds a test for the situation in which an AOV pass is used
on an object with a fully or semi-transparent material, either using the
transparent BSDF directly, or mixing it with some other material.
Pull Request: https://projects.blender.org/blender/blender/pulls/141068