Similar to the Microfacet Closures, the Principled BSDF Sheen closure is
added at a high weight but typically results in fairly low values.
Therefore, the default weight is a bad indicator of importance.
The fix here is the same as the one applied to the Microfacet closures:
Compute an average weight using the normal as the half-vector
and use it to scale down the sample weight and the albedo channel.
In addition to drastically improving the denoising of materials with
sheen when using the new Denoising node, this can also reduce noise
on such materials considerably.
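A minimal C++ sketch of the idea; the names and the estimator below are
illustrative placeholders, not the actual Cycles code:

```cpp
#include <cmath>

struct float3 {
  float x, y, z;
};

/* Placeholder estimator standing in for evaluating the sheen term with
 * the normal as the half-vector; the real closure math is more involved. */
static float sheen_avg_weight(float roughness)
{
  return std::fmin(1.0f, 0.25f * roughness);
}

struct SheenBsdf {
  float3 weight;       /* per-channel closure weight, typically near white */
  float sample_weight; /* scalar weight used when picking a closure */
};

static void sheen_setup(SheenBsdf &bsdf, float roughness, float3 &denoising_albedo)
{
  const float avg = sheen_avg_weight(roughness);
  /* Scale both the sample weight and the albedo contribution so they
   * reflect the closure's actual low contribution rather than its
   * nominal, near-white weight. */
  bsdf.sample_weight *= avg;
  denoising_albedo.x += bsdf.weight.x * avg;
  denoising_albedo.y += bsdf.weight.y * avg;
  denoising_albedo.z += bsdf.weight.z * avg;
}
```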
The Principled BSDF uses Microfacet closures that include a fresnel term,
which are a special case since their weight tends to be near white even
if their average contribution is fairly low.
The sample weight is scaled by the average fresnel weight to account for
this, but the denoising albedo was still using the unscaled weight.
This was fine for the original denoiser, but apparently OIDN can't handle
the resulting albedo pass well. Therefore, this commit adds the described
scaling to the albedo pass contribution as well.
This problem was described in T69770.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D6289
This patch allows the Voronoi node to operate in 1D, 2D, and 4D space.
It also adds a Randomness input to control the randomness of the texture.
Additionally, it adds three new modes of operation:
- Smooth F1: A smooth version of F1 Voronoi with no discontinuities.
- Distance To Edge: Returns the distance to the edges of the cells.
- N-Sphere Radius: Returns the radius of the n-sphere inscribed in
the cells. In other words, it is half the distance between the
closest feature point and the feature point closest to it.
And it removes the following three modes of operation:
- F3.
- F4.
- Cracks.
The Distance metric is now called Euclidean, and it computes the actual
Euclidean distance, as opposed to the old method of computing the squared
Euclidean distance.
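For illustration, here is a sketch of a 2D Smooth F1 lookup in C++. The
hash and the polynomial smooth-minimum formulation are stand-ins, not
necessarily the ones used by the node:

```cpp
#include <cmath>

/* Toy hash producing per-cell feature-point offsets in [0, 1). */
static void hash_cell(int x, int y, float *ox, float *oy)
{
  unsigned int h = (unsigned int)x * 73856093u ^ (unsigned int)y * 19349663u;
  h = h * 1664525u + 1013904223u;
  *ox = (float)(h & 0xffffu) / 65535.0f;
  *oy = (float)((h >> 16) & 0xffffu) / 65535.0f;
}

/* Smooth F1: replace the hard minimum over feature-point distances with a
 * smooth minimum, removing the discontinuities of plain F1. A smoothness
 * near zero blends back toward plain F1. */
static float voronoi_smooth_f1_2d(float px, float py, float smoothness, float randomness)
{
  const int cx = (int)std::floor(px);
  const int cy = (int)std::floor(py);
  const float k = std::fmax(smoothness, 1e-4f);
  float dist = 8.0f;
  for (int j = -1; j <= 1; j++) {
    for (int i = -1; i <= 1; i++) {
      float ox, oy;
      hash_cell(cx + i, cy + j, &ox, &oy);
      const float fx = (float)(cx + i) + ox * randomness - px;
      const float fy = (float)(cy + j) + oy * randomness - py;
      /* Actual Euclidean distance, not the old squared distance. */
      const float d = std::sqrt(fx * fx + fy * fy);
      /* Polynomial smooth minimum of the running distance and d. */
      const float h = std::fmin(std::fmax(0.5f + 0.5f * (dist - d) / k, 0.0f), 1.0f);
      dist = dist + (d - dist) * h - k * h * (1.0f - h);
    }
  }
  return dist;
}
```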
This breaks backward compatibility in many ways, including the base case.
Reviewers: brecht, JacquesLucke
Differential Revision: https://developer.blender.org/D5743
This avoids artifacts for bump mapping and diffuse BSDFs, where the bump
normal deviates far from the actual normal.
Differential Revision: https://developer.blender.org/D5399
Before, there were two options: pasting to the original layer, called
"Paste", and pasting to the active layer, called "Paste & Merge".
Now pasting to the active layer is the default, and "Paste & Merge" has
been renamed to "Paste".
The old "Paste" is now called "Paste by Layer" and is no longer the default.
Note: Minor edits to add icons not present in Differential revision.
Differential Revision: https://developer.blender.org/D5591
This is a physically-based, easy-to-use shader for rendering hair and fur,
with controls for melanin, roughness and randomization.
Based on the paper "A Practical and Controllable Hair and Fur Model for
Production Path Tracing".
Implemented by Leonardo E. Segovia and Lukas Stockner as part of Google
Summer of Code 2018.
There is one legitimate place in the code where memcpy was used as an
optimization trick. It was needed for older versions of GCC, but it
should now be re-evaluated to check whether the trick still helps.
In other places it was somewhat lazy programming to zero out all
object members. That is absolutely unsafe: the moment a less trivial
class is used as a member in such an object, things will break.
Other cases were using memcpy into an object which comes from an
external library. We don't control that object, we cannot guarantee it
will always be safe for such memory tricks, and debugging bugs caused
by such low-level access is far from fun.
Ideally we would use more proper C++, but that needs to be done with
great care, including benchmarks of each change. For now, do the
annoying but simple cast to void*.
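A small self-contained example of the problem and the workaround; the
struct names are made up, and the warning in question is GCC's
-Wclass-memaccess:

```cpp
#include <cstring>
#include <string>

struct Trivial {
  int flags;
  float matrix[16];
};

struct NotTrivial {
  int flags;
  std::string name; /* non-trivial member: zeroing its bytes breaks it */
};

void example()
{
  Trivial a;
  /* Fine: the type is trivially copyable, zeroing all bytes is valid. */
  std::memset(&a, 0, sizeof(a));

  NotTrivial b;
  (void)b;
  /* Broken: this would trample std::string's internal bookkeeping.
   * std::memset(&b, 0, sizeof(b)); */

  /* The "annoying but simple" workaround: casting to void* silences the
   * compiler warning (GCC's -Wclass-memaccess) in places where the
   * access is believed safe, e.g. structs from external libraries. */
  std::memset((void *)&a, 0, sizeof(a));
}
```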
Roughness baking previously defaulted to 1.0 for all diffuse materials,
now we also bake roughness values of Oren-Nayar and Principled Diffuse.
Differential Revision: https://developer.blender.org/D3115
We now continue transparent paths after diffuse/glossy/transmission/volume
bounces are exceeded. This avoids unexpected boundaries in volumes with
transparent boundaries. It is also required for MIS to work correctly with
transparent surfaces, as we also continue through these in shadow rays.
The main visible change is that volumes will now be lit by the background
even with volume bounces set to 0, the same as surfaces.
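A sketch of the changed termination logic, with illustrative names rather
than the actual kernel code:

```cpp
/* Transparent bounces are checked only against their own limit, so a
 * path can keep passing through transparent surfaces after the other
 * bounce limits are hit, just as shadow rays already do. */
struct PathState {
  int bounce;             /* diffuse/glossy/transmission/volume bounces */
  int transparent_bounce; /* transparent bounces, counted separately */
};

static bool path_terminate(const PathState &s, bool is_transparent,
                           int max_bounce, int max_transparent_bounce)
{
  if (is_transparent)
    return s.transparent_bounce >= max_transparent_bounce;
  return s.bounce >= max_bounce;
}
```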
Fixes T53914 and T54103.
It is basically brute force volume scattering within the mesh, implemented
as part of the SSS code for better performance. The main difference from
actual volume scattering is that we assume the boundaries are diffuse and
that all lighting comes through this boundary from outside the volume.
This gives much more accurate results for thin features and low density.
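To make the approach concrete, here is a heavily simplified, hypothetical
sketch of such a walk: scalar extinction, isotropic scattering, and a
stubbed-out boundary query standing in for real ray tracing:

```cpp
#include <cmath>
#include <random>

struct float3 {
  float x, y, z;
};

static float3 add(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float3 mul(float3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

/* Placeholder: a real kernel ray traces the mesh; a negative return value
 * here means "no boundary hit along this direction". */
static float distance_to_boundary(float3 /*p*/, float3 /*dir*/)
{
  return -1.0f;
}

/* Walk through the interior: sample exponential free-flight distances and
 * scatter isotropically until a step crosses the boundary, where lighting
 * is then gathered as if through a diffuse surface lit from outside. */
static float3 random_walk_exit(float3 p, float3 dir, float sigma_t, std::mt19937 &rng)
{
  std::uniform_real_distribution<float> u(0.0f, 1.0f);
  for (int bounce = 0; bounce < 256; bounce++) {
    const float t = -std::log(1.0f - u(rng)) / sigma_t; /* free flight */
    const float t_exit = distance_to_boundary(p, dir);
    if (t_exit >= 0.0f && t_exit < t)
      return add(p, mul(dir, t_exit)); /* the walk leaves the mesh here */
    p = add(p, mul(dir, t));
    /* Isotropic phase function: uniform direction on the sphere. */
    const float z = 1.0f - 2.0f * u(rng);
    const float r = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    const float phi = 6.2831853f * u(rng);
    dir = {r * std::cos(phi), r * std::sin(phi), z};
  }
  return p; /* terminate a walk that never found the boundary */
}
```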
Some challenges remain however:
* Significantly more noisy than BSSRDF. Adding Dwivedi sampling may help
here, but it is still unclear how much it helps in real-world cases.
* Due to this being a volumetric method, geometry like eyes or mouth can
darken the skin on the outside. We may be able to reduce this effect,
or users can compensate for it by reducing the scattering radius in
such areas.
* Sharp corners are quite bright. This matches actual volume rendering
and the results of some other renderers, but perhaps not real-world
objects.
Differential Revision: https://developer.blender.org/D3054
Previously we stored each color channel in its own closure, which was
convenient for sampling a closure and channel together. But this doesn't
work so well for algorithms where we want to render multiple color
channels together.
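Schematically, and with hypothetical type names, the change in layout
looks like this:

```cpp
struct float3 {
  float x, y, z;
};

/* Before: one closure per color channel; picking a closure also picked
 * the channel, which was convenient for sampling. */
struct BssrdfChannelClosure {
  float weight; /* single channel */
  float radius; /* scattering radius for that channel */
};

/* After: one closure carries all three channels, so algorithms that want
 * to render the color channels together can evaluate them at once. */
struct BssrdfClosure {
  float3 weight;
  float3 radius;
};
```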
This can be enabled in the Film panel, with an option to control the
transmission roughness below which glass becomes transparent.
Differential Revision: https://developer.blender.org/D2904
The SSS profile was giving different results for sharpness values of 1.0
and 0.999, even though they were expected to be very close to each other.
This profile will probably be removed in the future in favor of the more
physically-based Burley, but for the time being there is nothing wrong
with fixing the existing code.
In fact this was an existing issue when exceeding the number of available
closures, but it's more common now that we set the number to 0 for shadows
and emission.
The algorithm averages normals from nearby surfaces. It uses the same
sampling strategy as BSSRDFs, casting rays along the normal and two
orthogonal axes, and combining the samples with MIS.
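A rough sketch of that combination, with a placeholder probe() standing in
for the actual BSSRDF-style ray casting:

```cpp
struct float3 {
  float x, y, z;
};

static float3 add(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float3 mul(float3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

/* Placeholder probe: the real code casts a ray along one of three axes
 * (the normal and two orthogonal tangents), reporting a nearby hit's
 * normal plus the pdfs all three strategies assign to that hit. */
static bool probe(float3 p, int axis, float radius, float3 *hit_normal, float pdf[3])
{
  (void)p; (void)axis; (void)radius; (void)hit_normal; (void)pdf;
  return false;
}

/* One-sample MIS over the three axes using the balance heuristic; the
 * caller normalizes the accumulated normal. */
static float3 bevel_normal(float3 p, float radius, int num_samples)
{
  float3 sum = {0.0f, 0.0f, 0.0f};
  for (int s = 0; s < num_samples; s++) {
    const int axis = s % 3;
    float3 n;
    float pdf[3];
    if (probe(p, axis, radius, &n, pdf)) {
      /* Balance heuristic: weight w = pdf[axis] / (pdf[0]+pdf[1]+pdf[2]);
       * the estimator contributes n * w / pdf[axis] = n / sum(pdf). */
      sum = add(sum, mul(n, 1.0f / (pdf[0] + pdf[1] + pdf[2])));
    }
  }
  return sum;
}
```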
The main concern here is that we are introducing raytracing inside
shader evaluation, which could be quite bad for GPU performance and
stack memory usage. In practice it doesn't seem so bad though.
Note that using this feature can easily slow down renders by 20%, and
that if you care about performance then it's better to use a bevel
modifier. Mainly this is useful for baking, and for cases where the
mesh topology makes it difficult for the bevel modifier to work well.
Differential Revision: https://developer.blender.org/D2803
With a Titan Xp, reduces path trace local memory from 1092MB to 840MB.
Benchmark performance was within 1% with both RX 480 and Titan Xp.
Original patch was implemented by Sergey.
Differential Revision: https://developer.blender.org/D2249
The order of evaluation of function arguments is unspecified in C++, and
the order differed between these compilers. This was causing regression
tests to give different results between Linux and macOS.
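A minimal standalone example of the pitfall and the usual fix:

```cpp
#include <cstdio>

static int counter = 0;
static int next_value() { return counter++; }

int main()
{
  /* Unspecified: the compiler may evaluate these arguments in either
   * order, so this can print "0 1" or "1 0" depending on the compiler. */
  std::printf("%d %d\n", next_value(), next_value());

  /* Fix: force a defined order by evaluating into locals first. */
  const int a = next_value();
  const int b = next_value();
  std::printf("%d %d\n", a, b);
  return 0;
}
```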
If there was any specularity in the Principled BSDF, it would get a sampling
weight of one regardless of its actual impact.
This commit makes Cycles estimate the contribution of the component and adjust
the weighting accordingly, which greatly improves the noise characteristics of
the Principled BSDF in many cases.
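As an illustration (not the actual Cycles code), the estimate can be
thought of along these lines, using a Schlick-style fresnel as a stand-in:

```cpp
#include <cmath>

/* Schlick approximation standing in for the actual fresnel code. */
static float schlick_fresnel(float cos_theta, float f0)
{
  const float m = std::fmax(0.0f, 1.0f - cos_theta);
  const float m2 = m * m;
  return f0 + (1.0f - f0) * m2 * m2 * m;
}

struct MicrofacetBsdf {
  float sample_weight;
};

/* Estimate the lobe's actual contribution instead of assuming full
 * strength: with the normal standing in for the half-vector, the fresnel
 * value at N.I gives a cheap average-reflectance estimate. */
static void specular_setup(MicrofacetBsdf &bsdf, float f0, float cos_NI)
{
  bsdf.sample_weight *= schlick_fresnel(cos_NI, f0);
}
```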
Note that this commit might slightly change the brightness of areas when using
MultiGGX and high roughnesses, but the new brightness is more accurate and
closer to the result of Branched Path Tracing. See T51836 for details.
Differential Revision: https://developer.blender.org/D2677
The PDF of the MultiGGX sampling is approximated by the single-scattering
GGX term as well as a scaled diffuse term that makes up for the energy in
the multiple-scattering component that is missed by GGX.
However, there were two problems with the glossy terms: the diffuse term
missed a normalization factor, and the single-scattering term was not
properly scaled down based on the albedo estimate.
The glass term was completely wrong and has been rewritten. It uses the fresnel
factor to weight reflection vs. refraction and uses the glossy MultiGGX model
for reflection.
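Schematically, and with illustrative names only, the corrected pdf has
this shape:

```cpp
/* Fresnel weights reflection vs. refraction; the single-scattering term
 * is scaled down by the albedo estimate, with a diffuse term making up
 * for the missed multiple-scattering energy. */
static float glass_pdf(float fresnel, float pdf_reflect_multiggx,
                       float pdf_refract_single, float pdf_diffuse,
                       float refract_albedo)
{
  const float pdf_refract = refract_albedo * pdf_refract_single +
                            (1.0f - refract_albedo) * pdf_diffuse;
  return fresnel * pdf_reflect_multiggx + (1.0f - fresnel) * pdf_refract;
}
```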
For refraction, the correct single-scattering term is now used, and a new
albedo approximation is used that was derived by evaluating GGX albedo for
roughnesses from 0 to 1 and IORs from 1 to 3 and fitting numerical
approximations to it. The resulting model has a mean relative error of 9e-5,
but could probably be simplified without losing noticeable accuracy in the
final render.
The improved PDFs help with glossy highlights (due to better light sampling vs.
closure sampling MIS) and fix the situation described in T51836 where mixing
MultiGGX with other closures (as it happens in e.g. the Principled
BSDF) causes incorrect darkening.
The implementation originally handled four different cases:
Regular glossy, glass, metallic fresnel glossy and diffuse.
However, only the first two are actually used currently. Therefore, this
commit removes the other two, which allows the code to be simplified.
Additionally, due to the Principled BSDF, the function arguments are now
identical for glossy and glass, which makes it possible to get rid of
some ugly #ifdefs.
This commit contains the first part of the new Cycles denoising option,
which filters the resulting image using information gathered during rendering
to get rid of noise while preserving visual features as well as possible.
To use the option, enable it in the render layer options. The default settings
fit a wide range of scenes, but the user can tweak individual settings to
control the tradeoff between a noise-free image, image details, and calculation
time.
Note that the denoiser may still change in the future and that some features
are not implemented yet. The most important missing feature is animation
denoising, which uses information from multiple frames at once to produce a
flicker-free and smoother result. These features will be added in the future.
Finally, thanks to all the people who supported this project:
- Google (through the GSoC) and Theory Studios for sponsoring the development
- The authors of the papers I used for implementing the denoiser (more details
on them will be included in the technical docs)
- The other Cycles devs for feedback on the code, especially Sergey for
mentoring the GSoC project and Brecht for the code review!
- And of course the users who helped with testing, reported bugs, and
pointed out things that could and/or should work better!