This option was already unselectable in the UI and is treated as GGX
with zero roughness. When building the shader graph, we only converted
a closure to `SHARP` when the Filter Glossy option was not used and the
roughness was below a certain threshold. The benefit was that we could
avoid calling `bsdf_eval()` or return early in some cases, but the
thresholds varied across files.
This patch removes `SHARP` closures altogether and instead checks
whether the roughness value after blurring is below a global threshold
`BSDF_ROUGHNESS_THRESH`, in which case the flag `SD_BSDF_HAS_EVAL` is
not set. The global threshold is set to `5e-7f`, because smaller
thresholds seem to have caused problems in the past (c6aa0217ac). Also
removes a bunch of functions, variables and arguments that only existed
because we converted closures under certain conditions.
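As a rough illustration (a simplified sketch, not the actual kernel code;
the struct and flag handling are stand-ins), the check amounts to not
advertising `eval()` for effectively-sharp closures:
```
/* Sketch: below BSDF_ROUGHNESS_THRESH the closure behaves like a perfect
 * mirror, so bsdf_eval() would always return zero and the SD_BSDF_HAS_EVAL
 * flag is simply not set. */
#define BSDF_ROUGHNESS_THRESH 5e-7f

enum { SD_BSDF = 1 << 0, SD_BSDF_HAS_EVAL = 1 << 1 };

struct MicrofacetBsdf {
  float alpha_x, alpha_y; /* roughness after Filter Glossy blurring */
};

static int bsdf_microfacet_setup_flags(const MicrofacetBsdf *bsdf)
{
  if (bsdf->alpha_x < BSDF_ROUGHNESS_THRESH && bsdf->alpha_y < BSDF_ROUGHNESS_THRESH) {
    return SD_BSDF; /* effectively sharp: no eval */
  }
  return SD_BSDF | SD_BSDF_HAS_EVAL;
}
```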
Pull Request: https://projects.blender.org/blender/blender/pulls/109902
So far, each closure in Cycles was either diffuse, glossy or
transmissive, and its color and contributions were assigned
to the corresponding direct/indirect/color passes.
However, now that Glass is a single closure, that is no longer enough:
glass has both a glossy and a transmissive component.
Therefore, this commit adds support for splitting the contributions of
the Glass closure between the two types.
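A minimal sketch of the idea (the helper and the reflectance estimate are
illustrative, not the actual implementation): the closure's weight is
divided between the two passes according to how much of it reflects
versus refracts.
```
/* Sketch: split a glass closure's contribution between the glossy and
 * transmission passes using an average reflectance estimate in [0, 1]. */
struct PassContribution {
  float glossy;
  float transmission;
};

static PassContribution split_glass_contribution(float weight, float reflectance_estimate)
{
  PassContribution out;
  out.glossy = weight * reflectance_estimate;                /* reflected share */
  out.transmission = weight * (1.0f - reflectance_estimate); /* refracted share */
  return out;
}
```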
For 4.0, we might want to also use this for Principled Hair, since it
technically has both types as well, but that would change the existing
result, so it's not part of 3.6 yet.
While the multiscattering GGX code is cool and solves the darkening problem at higher roughnesses, it's also currently buggy, hard to maintain and often impractical to use due to the higher noise and render time.
In practice, though, having the exactly correct directional distribution is not that important as long as the overall albedo is correct and we a) don't get the darkening effect and b) do get the saturation effect at higher roughnesses.
This can simply be achieved by adding a second lobe (https://blog.selfshadow.com/publications/s2017-shading-course/imageworks/s2017_pbs_imageworks_slides_v2.pdf) or scaling the single-scattering GGX lobe (https://blog.selfshadow.com/publications/turquin/ms_comp_final.pdf). Both approaches require the same precomputation and produce outputs of comparable quality, so I went for the simple albedo scaling since it's easier to implement and more efficient.
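A minimal sketch of the scaling variant (the table lookup here is a
placeholder for the precomputed data described in the references):
```
/* Sketch of single-scattering energy compensation: scale the GGX lobe by
 * 1 / E_ss, where E_ss is the precomputed directional albedo of the
 * single-scattering lobe. */
static float lookup_E_ss(float cos_theta, float roughness)
{
  /* Placeholder: the real code reads a precomputed 2D table indexed by
   * view angle and roughness. */
  (void)cos_theta;
  (void)roughness;
  return 1.0f;
}

static float energy_compensation(float cos_theta, float roughness)
{
  const float E_ss = lookup_E_ss(cos_theta, roughness);
  /* E_ss is the energy the single-scattering lobe preserves; dividing by it
   * restores the energy lost to multiple scattering. */
  return (E_ss > 0.0f) ? 1.0f / E_ss : 1.0f;
}
```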
Overall, the results are pretty good: All scenarios that I tested (Glossy BSDF, Glass BSDF, Principled BSDF with metallic or transmissive = 1) pass the white furnace test (a material with pure-white color in front of a pure-white background should be indistinguishable from the background if it preserves energy), and the overall albedo for non-white materials matches that produced by the real multi-scattering code (with the expected saturation increase as the roughness increases).
In order to produce the precomputed tables, the PR also includes a utility that computes them. This is not built by default, since there's no reason for a user to run it (it only makes sense for documentation/reproducibility purposes and when making changes to the microfacet models).
Pull Request: https://projects.blender.org/blender/blender/pulls/107958
For example
```
OIIOOutputDriver::~OIIOOutputDriver()
{
}
```
becomes
```
OIIOOutputDriver::~OIIOOutputDriver() {}
```
This saves quite a bit of vertical space, which is especially handy for
constructors.
Pull Request: https://projects.blender.org/blender/blender/pulls/105594
The name #ensure_valid_reflection seems to indicate that the resulting
reflection must be valid, whereas in reality it only ensures validity
for specular reflections. The new name matches the behavior better.
This function checks whether the shading normal would result in an invalid reflection into the lower hemisphere; if so, it raises the shading normal just enough that the specular reflection lies above the surface. This is a trick to prevent dark regions at grazing angles caused by normal/bump maps. However, the specular direction is not a good representation for a diffuse material, so applying this function sometimes brightens the result too much and causes unexpected results. This patch therefore applies the function only to glossy materials.
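Schematically, the change gates the fix on the closure type (a simplified
sketch; the stub stands in for the real normal adjustment):
```
struct float3 {
  float x, y, z;
};

static float3 ensure_valid_specular_reflection(float3 Ng, float3 I, float3 N)
{
  /* Stub: the real function raises N just enough that the reflection of I
   * about N lies above the surface defined by the geometric normal Ng. */
  (void)Ng;
  (void)I;
  return N;
}

static float3 shading_normal_for_closure(float3 Ng, float3 I, float3 N, bool is_glossy)
{
  /* Diffuse closures keep the mapped normal; the raised normal only makes
   * sense for the specular reflection direction. */
  return is_glossy ? ensure_valid_specular_reflection(Ng, I, N) : N;
}
```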
Pull Request: #105776
Actually both potential roots lie in the interval [0, 1], so the
function ended up checking both roots all the time.
The new implementation explains why only one of the roots is valid; it
saves two square roots and a bunch of other computations.
This commit implements three OSL microfacet closures that are needed to support
MaterialX: dielectric_bsdf, conductor_bsdf and generalized_schlick_bsdf.
Internally these map to existing microfacet closures; only the Fresnel term is
different.
Currently, we use the closure type to encode the type of microfacet distribution
(GGX/Beckmann/Sharp/MultiGGX), the lobes we're interested in
(Reflection/Refraction/both) AND the Fresnel type (None or Principled v1).
This results in the mess of dozens of options that we currently have. Since
adding Principled v2 and the MaterialX OSL closures will involve adding more
Fresnel types, this clearly doesn't scale.
But since the earlier Fresnel rework (D17101), the Fresnel type only matters
in one place now. This allows us to significantly clean up the closure type
handling. To do this, MicrofacetBsdfs now store their Fresnel type separately,
and instead of a single MicrofacetExtra we have one struct per Fresnel type
(unless no extra data is needed).
Further, instead of having one _setup() function per combination, the Fresnel
setup is also split into separate functions. This decouples the implementation
of new Fresnel terms from most of the microfacet logic, and makes adding one a
very simple and clean operation.
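The shape of the refactor, sketched with simplified stand-in types (the
names are illustrative, not the actual kernel declarations):
```
/* Sketch: one extra-data struct per Fresnel type instead of a single
 * MicrofacetExtra, plus a Fresnel tag stored on the BSDF. */
enum MicrofacetFresnelType {
  FRESNEL_NONE,
  FRESNEL_DIELECTRIC,
  FRESNEL_CONDUCTOR,
  FRESNEL_GENERALIZED_SCHLICK,
};

struct FresnelConductor {
  float n[3], k[3]; /* complex index of refraction */
};

struct MicrofacetBsdf {
  float alpha_x, alpha_y;
  MicrofacetFresnelType fresnel_type;
  void *fresnel; /* points at the matching struct above, if any */
};

/* Fresnel setup decoupled from the distribution setup: first set up the
 * plain microfacet closure, then attach a Fresnel term with its own
 * setup function. */
static void bsdf_microfacet_setup_fresnel_conductor(MicrofacetBsdf *bsdf,
                                                    FresnelConductor *fresnel)
{
  bsdf->fresnel_type = FRESNEL_CONDUCTOR;
  bsdf->fresnel = fresnel;
}
```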
This commit replaces the current Glass approach, where Glass is a virtual closure
that gets replaced with a Glossy and a Refractive closure, with a combined
closure that handles Fresnel after sampling the microfacet normal. That way, the
Fresnel term is more accurate, since it accounts for the microfacet normal
rather than the shading normal.
Also updates the BSDF sampling to use a 3D sampler: we need two dimensions
to pick the microfacet normal and a third to pick between reflection and
refraction. This can also be used to get rid of the LCG in the Principled
Hair BSDF, which means we can remove it altogether once MultiGGX is gone.
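A simplified sketch of that sampling step (standalone types; `Vec3` and
the helpers are stand-ins for the kernel's math types, while the Fresnel
formula is the standard unpolarized dielectric one):
```
#include <cmath>

struct Vec3 {
  float x, y, z;
};

static float dot(Vec3 a, Vec3 b)
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Exact dielectric Fresnel for unpolarized light; cos_i = dot(wi, H). */
static float fresnel_dielectric(float cos_i, float eta)
{
  const float sin2_t = (1.0f - cos_i * cos_i) / (eta * eta);
  if (sin2_t >= 1.0f) {
    return 1.0f; /* total internal reflection */
  }
  const float cos_t = std::sqrt(1.0f - sin2_t);
  const float r_s = (cos_i - eta * cos_t) / (cos_i + eta * cos_t);
  const float r_p = (eta * cos_i - cos_t) / (eta * cos_i + cos_t);
  return 0.5f * (r_s * r_s + r_p * r_p);
}

/* After the first two sample dimensions picked the microfacet normal H,
 * the third dimension picks reflection vs. refraction using the Fresnel
 * term evaluated with H (not the shading normal). */
static bool sample_glass_reflects(Vec3 wi, Vec3 H, float eta, float rand_z)
{
  const float F = fresnel_dielectric(dot(wi, H), eta);
  return rand_z < F; /* true: reflect, false: refract */
}
```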
Also, "sharp" is now supported as a microfacet distribution in OSL, and 2
is supported as the "refract" argument to microfacet() in order to get glass.
The proper fix (bb9eb262d4) caused compilation problems with HIP, so we're
delaying it until 3.6.
To fix the original bug report (#104586), this is a quick workaround that'll
hopefully not upset the compiler.
Pull Request #104723
- Rename roughness variables for more clarity - before, the SVM/OSL code would
set s and v to the linear roughness values, and the setup function would
overwrite them with the distribution parameters. This actually caused a bug in
the albedo code, since it intended to use the linear roughness value but ended
up getting the remapped value.
- Deduplicate the evaluation and sample functions. Most of their code is the
same, only the middle part is different.
- Changed the albedo computation to return the sum of the intensities of the
four BSDF lobes (see the sketch after this list). Previously, the code applied
the inverse of the color->sigma mapping from the paper - this returns the color
specified in the node, but for very dark hair (e.g. when using the Melanin
controls) the result is extremely low (e.g. 0.000001) despite the hair still
reflecting a significant amount of light (since the R lobe is independent of
sigma). This causes issues with the light component passes, so this change
fixes #104586.
- There are quite a few computations at the start of the evaluation function
that are needed for sampling, evaluation and albedo computation, but only
depend on the view direction. Therefore, just precompute them - we still have
space in PrincipledHairExtra after all.
- Fix a tiny bug - the direction sampling code did not account for the R lobe
roughness modifier.
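A schematic version of the new albedo computation (`Ap` stands in for the
per-lobe attenuations the evaluation already computes; simplified to one
channel):
```
/* Sketch: report the hair albedo as the sum of the four lobe intensities
 * (R, TT, TRT, TRRT) instead of inverting the color->sigma mapping. The R
 * lobe contributes even for very dark sigma, which is why the old inverse
 * mapping under-reported the reflectance. */
static float hair_albedo(const float Ap[4])
{
  float albedo = 0.0f;
  for (int i = 0; i < 4; i++) {
    albedo += Ap[i];
  }
  return albedo;
}
```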
Pull Request #104669
This is both a cleanup and a preparation for the Principled v2 changes.
Notable changes:
- Clearcoat weight is now folded into the closure weight, there's no reason
to track this separately.
- There's a general-purpose helper for computing a Closure's albedo, which is
currently used by the denoising albedo and diffuse/gloss/transmission color
passes (a sketch follows this list).
- The d/g/t color passes didn't account for closure albedo before, this means
that e.g. metallic shaders with Principled v2 now have their color texture
included in the glossy color pass. Also fixes T104041 (sheen albedo).
- Instead of precomputing and storing the albedo during shader setup, compute
it when needed. This is technically redundant since we still need to compute
it on shader setup to adjust the sample weight, but the operation is cheap
enough that freeing up the storage seems worth it.
- Future changes (Principled v2) are easier to integrate since the Fresnel
handling isn't all over the place anymore.
- Fresnel handling in the Multiscattering GGX code is still ugly, but since
removing that entirely is the next step, putting effort into cleaning it up
doesn't seem worth it.
- Apart from the d/g/t color passes, no changes to render results are expected.
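A schematic of the shared albedo helper mentioned above (simplified
single-channel types; the per-closure factor is a placeholder):
```
/* Sketch: one helper computes a closure's albedo on demand; the denoising
 * albedo and the d/g/t color passes both call it instead of reading a
 * value cached at shader setup. */
struct ShaderClosure {
  int type;
  float weight; /* single channel for brevity */
};

static float closure_albedo(const ShaderClosure *sc)
{
  /* Most closures just return their weight; others (e.g. sheen, microfacet
   * closures with tinted Fresnel) would fold their reflectance estimate
   * into this factor, switching on sc->type. */
  const float factor = 1.0f;
  return sc->weight * factor;
}
```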
Differential Revision: https://developer.blender.org/D17101
Based on "Sampling the GGX Distribution of Visible Normals" by Eric Heitz
(https://jcgt.org/published/0007/04/01/).
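For reference, the sampling routine given in the paper, transcribed into
standalone C++ (`Vec3` and the vector helpers are minimal stand-ins for
the kernel's math types):
```
#include <cmath>

struct Vec3 {
  float x, y, z;
};

static Vec3 normalize(Vec3 v)
{
  const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return {v.x / len, v.y / len, v.z / len};
}

static Vec3 cross(Vec3 a, Vec3 b)
{
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

/* VNDF sampling for GGX (Heitz 2018). Ve is the view direction in local
 * space (z = normal), U1/U2 are uniform random numbers in [0, 1), and
 * alpha_x/alpha_y are the roughness parameters. Returns the sampled
 * microfacet normal Ne. */
static Vec3 sample_ggx_vndf(Vec3 Ve, float alpha_x, float alpha_y, float U1, float U2)
{
  const float kPi = 3.14159265358979f;
  /* Section 3.2: transform the view direction to the hemisphere configuration. */
  const Vec3 Vh = normalize({alpha_x * Ve.x, alpha_y * Ve.y, Ve.z});
  /* Section 4.1: orthonormal basis around Vh. */
  const float lensq = Vh.x * Vh.x + Vh.y * Vh.y;
  const Vec3 T1 = lensq > 0.0f ?
                      Vec3{-Vh.y / std::sqrt(lensq), Vh.x / std::sqrt(lensq), 0.0f} :
                      Vec3{1.0f, 0.0f, 0.0f};
  const Vec3 T2 = cross(Vh, T1);
  /* Section 4.2: sample the projected area. */
  const float r = std::sqrt(U1);
  const float phi = 2.0f * kPi * U2;
  const float t1 = r * std::cos(phi);
  float t2 = r * std::sin(phi);
  const float s = 0.5f * (1.0f + Vh.z);
  t2 = (1.0f - s) * std::sqrt(1.0f - t1 * t1) + s * t2;
  /* Section 4.3: reproject onto the hemisphere. */
  const float t3 = std::sqrt(std::fmax(0.0f, 1.0f - t1 * t1 - t2 * t2));
  const Vec3 Nh = {t1 * T1.x + t2 * T2.x + t3 * Vh.x,
                   t1 * T1.y + t2 * T2.y + t3 * Vh.y,
                   t1 * T1.z + t2 * T2.z + t3 * Vh.z};
  /* Section 3.4: transform the normal back to the ellipsoid configuration. */
  return normalize({alpha_x * Nh.x, alpha_y * Nh.y, std::fmax(0.0f, Nh.z)});
}
```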
Also, this removes the lambdaI computation from the Beckmann sampling code and
just recomputes it below. We already need to recompute for two other cases
(GGX and clearcoat), so this makes the code more consistent.
In terms of performance, I don't expect a notable impact, since the earlier
computation was also non-trivial, and while it was probably slightly more
accurate, I'd argue that being consistent between evaluation and sampling is
more important than absolute numerical accuracy anyway.
Differential Revision: https://developer.blender.org/D17100
This gives results closer to what I've seen in papers and other renderers when
using the code to precompute albedos later (to replace MultiGGX).
It's usually a tiny difference; the only case where I've seen it matter is
in the `shader/node_group_float.blend` test - but that's a (single-scatter) GGX
closure with 0.9 roughness, so it's not too surprising. In any case, the new
result looks closer to Eevee, so that's good I guess.
Differential Revision: https://developer.blender.org/D17099
The lookup table method on CPU and the numerical root finding method on
GPU give quite different results. This commit deletes the Beckmann lookup
table and uses numerical root finding on all devices. For the numerical
root finding, a combined bisection-Newton method with precision control
is used.
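A generic sketch of such a safeguarded iteration (not the exact kernel
code; assumes a continuous, monotonically increasing function on the
bracket with f(lo) <= 0 <= f(hi)):
```
#include <cmath>

/* Sketch of a combined bisection-Newton root finder: take the Newton step
 * when it stays inside the current bracket, otherwise bisect. The
 * tolerance provides the precision control. */
static float bisection_newton(float (*f)(float), float (*df)(float),
                              float lo, float hi, float tol)
{
  float x = 0.5f * (lo + hi);
  for (int i = 0; i < 64; i++) {
    const float fx = f(x);
    /* Keep the root bracketed. */
    if (fx < 0.0f) {
      lo = x;
    }
    else {
      hi = x;
    }
    /* Newton step, falling back to bisection when it leaves the bracket
     * (or the derivative vanishes). */
    const float d = df(x);
    float x_next = (d != 0.0f) ? x - fx / d : 0.5f * (lo + hi);
    if (x_next <= lo || x_next >= hi) {
      x_next = 0.5f * (lo + hi);
    }
    if (std::fabs(x_next - x) < tol) {
      return x_next;
    }
    x = x_next;
  }
  return x;
}
```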
Differential Revision: https://developer.blender.org/D17050
wi is the viewing direction and wo is the illumination direction. Under this notation, BSDF sampling always samples from wi and outputs wo, which is consistent with most papers and with Mitsuba. This order is reversed compared to PBRT, although PBRT also traces from the camera.
Materials now have an enum to set the emission sampling method, to be
either None, Auto, Front, Back or Front & Back. This replaces the
previous "Multiple Importance Sample" option.
Auto is the new default; it uses a heuristic that estimates the emitted
light intensity to determine whether the mesh should be considered as a
light for sampling. Shaders sometimes have a bit of emission, but treating
them as a light source is not worth the memory/performance overhead.
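What the Auto heuristic can look like, schematically (the estimate and
threshold here are illustrative placeholders, not the actual values):
```
/* Sketch: only register the mesh as a light when the estimated average
 * emission of its shader is significant. */
static bool use_emission_sampling_auto(float emission_estimate_avg)
{
  const float kAutoEmissionThreshold = 1e-4f; /* illustrative value */
  return emission_estimate_avg > kAutoEmissionThreshold;
}
```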
The Front/Back settings are not important yet, but will help when a
light tree is added. In that case, setting emission to Front only on
closed meshes can help ignore emission from the mesh interior that
does not contribute anything.
Includes contributions by Brecht Van Lommel and Alaska.
Ref T77889