Similar to the Microfacet Closures, the Principled BSDF Sheen closure is
added at a high weight but typically results in fairly low values.
Therefore, the default weight is a bad indicator of importance.
The fix here is the same as it was back then for Microfacets:
Compute an average weight using the normal as the half-vector
and use it to scale down the sample weight and the albedo channel.
In addition to drastically improving denoising of materials with
sheen when using the new Denoising node, this can also reduce noise
on such materials considerably.
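As a rough illustration of the idea (the names and the exact weighting below are illustrative, not the actual Cycles code): evaluate the sheen term with the shading normal standing in for the half-vector and use the result to scale down both the sample weight and the reported albedo.

```
#include <algorithm>

struct float3 { float x, y, z; };

static float dot(const float3 &a, const float3 &b)
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Schlick-style sheen term, peaking at grazing angles. */
static float schlick_fresnel(float u)
{
  float m = std::clamp(1.0f - u, 0.0f, 1.0f);
  return (m * m) * (m * m) * m; /* m^5 */
}

/* Approximate the closure's average reflectance by using N as the
 * half-vector, so cos(theta_d) is roughly dot(N, I). The returned value can
 * then scale the closure weight and the albedo channel. */
static float average_sheen_weight(const float3 &N, const float3 &I)
{
  float cos_NI = dot(N, I);
  if (cos_NI <= 0.0f) {
    return 0.0f;
  }
  return schlick_fresnel(cos_NI) * cos_NI;
}
```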
This aligns with the VFX reference platform 2020 along with the decision
to stick to Python 3.7, see T68774.
Blosc was downgraded to 1.5 as recommended by the OpenVDB documentation.
IlmBase and OpenEXR are now built together with CMake rather than separately
using autoconf.
Differential Revision: https://developer.blender.org/D6593
Cache file loading for mesh and particle files now works through the direct update_structures functions. The final cache mode now only bakes the most essential files and is therefore no longer resumable.
Embree's local intersection routine was not prepared
for local intersections without per-object BVH.
Now it should be able to handle any kind of local
intersection, such as AO, bevel and SSS.
Differential Revision: https://developer.blender.org/D6602
It's not certain this fixes the issue since I can't reproduce the crash, but
the code was wrong in any case.
Thanks to Ray Molenkamp and Anonymous for finding this.
Seems that the previous fix didn't work in all cases: Debian's build
environment didn't fully detect endianness, possibly due to a typo,
possibly due to differences between the various environments.
Using define magic from a more battle-tested project seems a safe
way to go.
There are more changes than just PPC since the upstream commit contains
full re-synchronization of all defines.
This commit updates numaapi to the latest library version from upstream.
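A minimal sketch of what such compiler-define based endianness detection can look like (the macros below are common GCC/Clang ones; the actual defines synchronized from upstream may differ):

```
#if defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__) && \
    (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
#  define LOCAL_BIG_ENDIAN 1
#elif defined(__BIG_ENDIAN__) || defined(_BIG_ENDIAN) || \
    defined(__ppc__) || defined(__PPC__) || defined(__powerpc__)
#  define LOCAL_BIG_ENDIAN 1
#else
#  define LOCAL_BIG_ENDIAN 0
#endif
```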
Commit baeb11826b switched memory
allocation for the motion transform to use CUDA directly, instead of going
through abstractions. But no CUDA context was set active before those calls
were made, so they failed. This fixes that by binding a context beforehand.
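A minimal sketch of the fix, assuming plain CUDA driver API calls (the function and variable names are illustrative, not the actual Cycles code):

```
#include <cuda.h>

bool alloc_motion_transform(CUcontext ctx, CUdeviceptr *ptr, size_t size)
{
  /* Make the context current on this thread before any cuMem* call. */
  if (cuCtxPushCurrent(ctx) != CUDA_SUCCESS) {
    return false;
  }

  bool ok = (cuMemAlloc(ptr, size) == CUDA_SUCCESS);

  /* Restore the previously active context. */
  CUcontext previous;
  cuCtxPopCurrent(&previous);
  return ok;
}
```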
Commit rB7e61e597253f3ca75f2fb86a57212ca750ffbbe8 broke viewport rendering when not displaying as half floats.
This fixes that by actually using the `scale` parameter that is passed into `film_map` and also de-duplicates code around it.
Differential Revision: https://developer.blender.org/D6557
OptiX always uses record-all behavior for transparent shadow rays, but did not check
whether the maximum number of hits exceeded the shadow hit stack. This fixes that.
transform_direction() can't handle parameters in constant address space.
Creating a local copy of the parameter satisfies the OpenCL compiler.
CUDA and CPU compilers should be able to optimize this away I hope.
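Sketch of the workaround in plain C++ (in the OpenCL kernel the source pointer would live in the __constant address space; the types and names here are simplified stand-ins):

```
struct Transform {
  float m[12];
};

struct float3 {
  float x, y, z;
};

float3 transform_direction(const Transform *t, float3 d)
{
  return {t->m[0] * d.x + t->m[1] * d.y + t->m[2] * d.z,
          t->m[4] * d.x + t->m[5] * d.y + t->m[6] * d.z,
          t->m[8] * d.x + t->m[9] * d.y + t->m[10] * d.z};
}

float3 transformed_dir(const Transform *constant_tfm, float3 dir)
{
  /* Copy out of the (conceptually __constant) source into a private/stack
   * variable, then pass the private pointer on. CUDA and CPU compilers can
   * optimize the extra copy away. */
  Transform tfm = *constant_tfm;
  return transform_direction(&tfm, dir);
}
```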
In transform_motion_decompose, successive quaternion pairs are checked to be
aligned such that their interpolation is rotation through the shortest angle
between them. If not, the first in the pair was flipped. This can cause
problems for sequences of more than 2 quaternions, since flipping the first
in a pair might misalign the previous pair, if unlucky.
Instead, this change flips the second in the pair, which is safe when
iterating forwards.
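An illustrative sketch of the forward-iterating alignment pass (not the actual Cycles code):

```
#include <vector>

struct Quat {
  float w, x, y, z;
};

static float quat_dot(const Quat &a, const Quat &b)
{
  return a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
}

static void align_quaternions(std::vector<Quat> &q)
{
  for (size_t i = 1; i < q.size(); i++) {
    /* q and -q represent the same rotation; pick the sign that keeps the
     * interpolation from q[i - 1] to q[i] on the shortest arc. Flipping the
     * second element never invalidates pairs that were already processed. */
    if (quat_dot(q[i - 1], q[i]) < 0.0f) {
      q[i].w = -q[i].w;
      q[i].x = -q[i].x;
      q[i].y = -q[i].y;
      q[i].z = -q[i].z;
    }
  }
}
```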
Differential Revision: https://developer.blender.org/D6537
This patch adds support for the OptiX denoiser as an alternative to the existing NLM denoiser in Cycles. It's re-using the same denoising architecture based on tiles and therefore implicitly also works with multiple GPUs.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D6395
The problem is described in a comment in the change.
Short version: If a pass was used as a divide_type but also requested
explicitly (e.g. diffuse color), it was added to the passes list
twice because the names of the two requests didn't match.
Then, when searching for the pass to divide by, the wrong one (not
the one that the kernel was writing to) was picked.
Cycles needs some smaller updates so that the up-res smoke wavelet noise and the liquid mesh speed vector export work correctly.
Reviewed By: mont29
Maniphest Tasks: T59995
Differential Revision: https://developer.blender.org/D3854
A collection of smaller changes that are required in the /blender/source files. A lot of them are also due to variable renaming.
Reviewed By: sergey
Maniphest Tasks: T59995
Differential Revision: https://developer.blender.org/D3855
Files from /intern/mantaflow handle the communication between core Blender code and Mantaflow itself. It's the bridge to communicate with Manta's Python functions.
Code from /intern/mantaflow/intern/strings/ is pure Manta code and would likely need less attention in the review.
Reviewed By: sergey
Maniphest Tasks: T59995
Differential Revision: https://developer.blender.org/D3851
The ImageTextureNode incorrectly used the tile slot encoding for box-
mapped textures as well. Since box-mapping always generates UVs that
lie in the 1001 tile, there's no need to support tiles here.
The code checked for the presence of more than one tile before
substituting the tile number into the filename, so if a one-tile
UDIM was used (or all but one tile were culled), the substitution
was skipped and as a result the file was not found.
With this change, the code explicitly tracks whether substitution
is required, avoiding this problem.
This also fixes another problem: The Environment texture never
does substitution since it doesn't support UDIMs, but previously the
syncing code still inserted the placeholder into the filename if the
user selected a tiled background image.
This patch contains the work that I did during my week at the Code Quest - adding support for tiled images to Blender.
With this patch, images now contain a list of tiles. By default, this just contains one tile, but if the source type is set to Tiled, the user can add additional tiles. When acquiring an ImBuf, the tile to be loaded is specified in the ImageUser.
Therefore, code that is not yet aware of tiles will just access the default tile as usual.
The filenames of the additional tiles are derived from the original filename according to the UDIM naming scheme - the filename contains an index that is calculated as (1001 + 10*<y coordinate of the tile> + <x coordinate of the tile>), where the x coordinate never goes above 9.
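A small worked example of that naming scheme (the helper below is purely illustrative):

```
/* The UDIM tile number encodes the tile's grid position. */
int udim_tile_number(int tile_x, int tile_y) /* tile_x in [0, 9] */
{
  return 1001 + 10 * tile_y + tile_x;
}

/* e.g. the tile at (x=3, y=2) maps to 1024, so "texture.<UDIM>.png"
 * resolves to "texture.1024.png". */
```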
Internally, the various tiles are stored in a cache just like sequences. When acquired for the first time, the code will try to load the corresponding file from disk. Alternatively, a new operator can be used to initialize the tile similar to the New Image operator.
The following features are supported so far:
- Automatic detection and loading of all tiles when opening the first tile (1001)
- Saving all tiles
- Adding and removing tiles
- Filling tiles with generated images
- Drawing all tiles in the Image Editor
- Viewing a tiled grid even if no image is selected
- Rendering tiled images in Eevee
- Rendering tiled images in Cycles (in SVM mode)
- Automatically skipping loading of unused tiles in Cycles
- 2D texture painting (also across tiles)
- 3D texture painting (also across tiles; the only limitation is that individual faces cannot cross tile borders)
- Assigning custom labels to individual tiles (drawn in the Image Editor instead of the ID)
- Different resolutions between tiles
There still are some missing features that will be added later (see T72390):
- Workbench engine support
- Packing/Unpacking support
- Baking support
- Cycles OSL support
- many other Blender features that rely on images
Thanks to Brecht for the review and to all who tested the intermediate versions!
Differential Revision: https://developer.blender.org/D3509
With the upcoming light group passes, the clamping must be more fine-grained
for them to sum up correctly to the combined pass.
This also has the advantage that if one light is particularly noisy, it does
not diminish the contribution from other lights which do not need as much
clamping.
Clamp values on existing scenes will need to be tweaked to get similar results;
there is no automatic conversion possible that would give the same results as
before.
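A rough sketch of the difference, with illustrative names: clamp each light's contribution before accumulating instead of clamping the final sum, so per-light passes still add up to the combined pass.

```
#include <algorithm>
#include <vector>

float accumulate_clamped(const std::vector<float> &light_contributions,
                         float clamp_value)
{
  float combined = 0.0f;
  for (float contribution : light_contributions) {
    /* Clamp per light instead of clamping the final sum; one noisy light no
     * longer forces clamping of the others. */
    combined += std::min(contribution, clamp_value);
  }
  return combined;
}
```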
Implemented by Lukas, with tweaks by Brecht.
Part of D4837
In the current OpenCL implementation we had a work-around for platforms
that don't support NULL pointers. We used to replace all NULLs and
empty arrays with a pointer to a single byte on the OpenCL Device.
During investigation of {T65924} it was asked to remove this work-around
for testing. This change improves the render times.
SCENE | BEFORE | AFTER
--------------------+--------+-------
bmw27 | 108 | 89
barbershop_interior | 867 | 673
classroom | 270 | 173
fishy_cat | 244 | 196
koro | 249 | 207
pavillon_barcelona | 582 | 414
Note that this change does not fix T65924; it just improves the
rendering performance for OpenCL. We haven't tested this patch on all
platforms, so we should keep an eye on the tracker.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D6391
Custom render passes are added in the Shader AOVs panel in the view layer
settings, with a name and data type. In shader nodes, an AOV Output node
is then used to output either a value or color to the pass.
Arbitrary names can be used for these passes, as long as they don't conflict
with built-in passes that are enabled. The AOV Output node can be used in both
material and world shader nodes.
Implemented by Lukas, with tweaks by Brecht.
Differential Revision: https://developer.blender.org/D4837
This adds compaction support for OptiX acceleration structures, which reduces the device memory footprint in a post step after building. Depending on the scene this can reduce the amount of used device memory quite a bit and even improve performance (a smaller acceleration structure improves cache usage). It's only enabled for background renders, to keep acceleration structure builds fast in the viewport.
Also fixes a bug in the memory management for OptiX acceleration structures: These were held in a dynamic vector of 'device_memory' instances and used the mem_alloc/mem_free functions. However, those keep track of memory instances in the 'cuda_mem_map' via pointers to 'device_memory' (which works fine everywhere else since those are never copied/moved). But in the case of the vector, it may decide to reallocate at some point, which invalidates those pointers and would result in some nasty accesses to invalid memory. So it is not actually safe to move a 'device_memory' object and therefore this removes the move operator overloads again.
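A simplified sketch of the pitfall (the types below only mimic the real ones):

```
#include <cstddef>
#include <map>
#include <vector>

struct device_memory {
  void *device_pointer = nullptr;
  size_t size = 0;
};

std::vector<device_memory> as_mem;           /* acceleration structure buffers */
std::map<device_memory *, int> cuda_mem_map; /* tracks allocations by address */

void example()
{
  as_mem.push_back(device_memory{});
  cuda_mem_map[&as_mem.back()] = 0; /* store the element's address */

  /* A later push_back may reallocate the vector's storage; every address
   * previously stored in cuda_mem_map is now dangling. Hence device_memory
   * must not be moved once registered, and the move operators were removed. */
  as_mem.push_back(device_memory{});
}
```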
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D6369
Previously the Noise and Wave texture nodes would use noise functions within a [0, 1]
range for distortion effects. We either added or subtracted noise from the
coordinates, never both at the same time. This led to the texture drastically
shifting along the diagonal axis of a plane / cube. This behavior makes the
Distortion input hard to control or animate. Driving it with another texture is
also of limited use, since the diagonal shifting is very apparent.
This was fixed by offsetting the noise function to a signed range and making it
zero-centered. This way noise is uniformly added and subtracted from coordinates.
The texture pattern sticks to the main coordinates, which makes it much easier to control.
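A minimal sketch of the remapping, with a placeholder noise function (illustrative only, not the actual node code):

```
#include <cmath>

/* Placeholder fractional hash, only here to make the sketch compile; it
 * stands in for the existing [0, 1] noise. */
static float noise01(float x)
{
  float s = std::sin(x * 12.9898f) * 43758.5453f;
  return s - std::floor(s);
}

/* Zero-centered "signed" noise in [-1, 1]. */
static float snoise(float x)
{
  return 2.0f * noise01(x) - 1.0f;
}

static float distorted_coordinate(float x, float distortion)
{
  /* Distortion is now added and subtracted symmetrically around x, so the
   * pattern no longer drifts diagonally as the Distortion input grows. */
  return x + distortion * snoise(x);
}
```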
This change is not strictly backwards compatible; there is versioning to ensure
the scale of the distortion remains similar, but the particular pattern can be
a little different.
Differential Revision: https://developer.blender.org/D6177
Modes: Linear interpolation (default), stepped linear, smoothstep and smootherstep.
This also includes an additional option for the **Clamp node** to switch between **Min Max** (default) and **Range** mode.
This was needed to allow clamping when **To Max** is less than **To Min**.
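A small sketch of the two clamp behaviours as described above (function names are illustrative):

```
#include <algorithm>

/* "Min Max": literal clamp; with max_v < min_v the result collapses to max_v. */
float clamp_min_max(float value, float min_v, float max_v)
{
  return std::min(std::max(value, min_v), max_v);
}

/* "Range": clamp to the interval between the two values regardless of their
 * order, so an inverted range (To Max less than To Min) still clamps. */
float clamp_range(float value, float min_v, float max_v)
{
  return (min_v <= max_v) ? std::min(std::max(value, min_v), max_v) :
                            std::min(std::max(value, max_v), min_v);
}
```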
Reviewed By: JacquesLucke, brecht
Differential Revision: https://developer.blender.org/D5827