In Cycles, the convention is that reflection vs. refraction are classified
based on the hemisphere defined by the *shading* normal (N).
In general, closure code uses the shading normal for most operations,
as is expected, since using the geometric normal (Ng) would break normal
maps and smooth shading.
However, there are two places that use Ng: On the one hand, BSDF sampling
functions generally reject reflections that fall below the Ng hemisphere, since
they'd intersect the geometry when tracing the bounce. This is required, and
we can't do much about it.
On the other hand, the Microfacet evaluation code also checked that the ray
is in the same hemisphere w.r.t. both shading and geometric normal.
Theoretically, this is the right thing to do, since sampling and evaluation code
are supposed to be consistent. However, doing so breaks smooth shading, since
now direct light evaluation near the terminator will sometimes be rejected.
This didn't cause problems in practice because of another inconsistency: While
the parameter of the eval functions was named Ng, the caller actually provided
N (unclear whether by mistake or as a hacky workaround to the terminator).
When this was fixed in 063a9e89, users quickly reported issues with the shadow
terminator, so it was reverted to the hacky inconsistency in 1c50dd8b.
So, let's clean this mess up properly. If we don't want to do the Ng hemisphere
check in _eval, then instead of passing in a misleading value that ends up
making it a no-op, just remove the check. After all, the other closures don't
perform it either.
This way, we avoid the mislabeled Ng, we get rid of the special case for
microfacets, and the shadow terminator continues to be fine.
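As a rough sketch of the resulting behavior (illustrative only; names and
signatures are not the actual kernel API, and `float3`/`dot` are assumed
helpers):

```
/* _sample keeps the geometric-normal check: a "reflection" below the Ng
 * hemisphere would immediately re-intersect the geometry when traced. */
bool sample_direction_valid(const float3 Ng, const float3 wo)
{
  return dot(Ng, wo) > 0.0f;
}

/* _eval now classifies reflection vs. refraction purely by the shading
 * normal N, like the other closures; the extra Ng rejection is gone. */
bool eval_is_reflection(const float3 N, const float3 wo)
{
  return dot(N, wo) > 0.0f;
}
```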
Technically, we still have the _sample vs. _eval mismatch. However, this is just
unavoidable, and is irrelevant in practice: For a strongly directional light
that makes the shadow terminator noticeable, the MIS weights will be massively
in favor of eval, to the point that it doesn't really matter what sample does.
To support this argument: You can actually reproduce a broken shadow terminator
in pretty much every Cycles version going back to 2011 by just setting up a
small intense mesh emitter, turning off MIS on it to disable _eval, and then
rendering a diffuse smooth-shaded sphere with >100000 samples so that the
fireflies resolve into somewhat consistent lighting.
If nobody has complained about this affecting all closures for 11 years,
I guess it's fine.
Pull Request: https://projects.blender.org/blender/blender/pulls/138632
Previously, the spell checker ignored text in single quotes; however,
this meant incorrect spelling was ignored in text where it shouldn't
have been.
In cases where single quotes were used for literal strings
(such as variables, code & compiler flags),
replace these with back-ticks.
In cases where they were used for UI labels,
replace these with double quotes.
In cases where they were used to reference symbols,
replace them with doxygen's symbol link syntax (leading hash).
Apply some spelling corrections & tweaks (for check_spelling_* targets).
The new correction avoids washed out areas near the shadow terminator,
preserving more detail from normal and bump maps.
It implements the method from the paper "A Microfacet-Based Shadowing
Function to Solve the Bump Terminator Problem" by Alejandro Conty Estevez,
Pascal Lecocq, and Clifford Stein.
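Roughly, the paper's shadowing term treats the divergence between the
shading and geometric normals as a microfacet roughness and applies a
Smith-style G1 to the light direction. A sketch (not the exact kernel
code; `float3`/`dot` are assumed helpers):

```
float bump_shadowing_term(const float3 Ng, const float3 N, const float3 wo)
{
  const float cos_d = fminf(fabsf(dot(Ng, N)), 1.0f);
  const float tan2_d = (1.0f - cos_d * cos_d) / (cos_d * cos_d);
  /* Heuristic roughness derived from the normal divergence, clamped to [0, 1]. */
  const float alpha2 = fminf(fmaxf(0.125f * tan2_d, 0.0f), 1.0f);
  const float cos_i = fmaxf(fabsf(dot(Ng, wo)), 1e-6f);
  const float tan2_i = (1.0f - cos_i * cos_i) / (cos_i * cos_i);
  /* Smith GGX G1 for the light direction. */
  return 2.0f / (1.0f + sqrtf(1.0f + alpha2 * tan2_i));
}
```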
Pull Request: https://projects.blender.org/blender/blender/pulls/135380
This implements three improvements to the energy preservation and albedo
scaling logic, which help the Principled BSDF pass the white-furnace test
when using the coat layers at high roughness.
Specifically, at roughness 0.3, the albedo scaling brings it from 60% at
the edge to 95%, and with the energy preservation it's 99.8%.
Pull Request: https://projects.blender.org/blender/blender/pulls/134620
To align better with the pixel and reduce the samples needed.
The paper was using gamma because the Jacobian |d_gamma/d_h| approaches
infinity at the boundaries, but it seems that clamping at 0.999 is
enough for numerical stability.
In practice I did not notice a change in the noise level, but it
simplifies the range computation and renders faster due to the reduced
sample count.
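For context, assuming the usual hair parameterization \(h=\sin\gamma\):
\[\left|\frac{\mathrm{d}\gamma}{\mathrm{d}h}\right|=\frac{1}{\sqrt{1-h^{2}}},\]
which diverges as \(|h|\to 1\); clamping \(|h|\) at 0.999 bounds the
Jacobian at roughly 22.4.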
Co-authored-by: Olivier Maury <omaury@meta.com>
Ref: !129616
Pull Request: https://projects.blender.org/blender/blender/pulls/134130
The oren_nayar_diffuse_bsdf closure in OSL had two issues:
- It broke when used with roughness of zero
- It only used the provided albedo for energy compensation, so it still
required the user to multiply by the albedo
Therefore, this handles the zero roughness corner case and includes
the albedo in the closure weight.
This makes no difference when using the closure through the Diffuse
or Principled BSDF nodes, only for custom OSL shaders.
Pull Request: https://projects.blender.org/blender/blender/pulls/133597
It is possible that the valid range computed from `theta` and the range
visible to the current pixel do not overlap. In this case the hair
section has no contribution.
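A minimal sketch of the resulting early-out (interval names hypothetical,
`zero_spectrum` assumed):

```
/* If the range valid for theta and the range visible to the pixel are
 * disjoint, skip this hair section entirely. */
if (valid_max < visible_min || valid_min > visible_max) {
  return zero_spectrum(); /* no contribution */
}
```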
Pull Request: https://projects.blender.org/blender/blender/pulls/133337
In Cycles, the subsurface scattering shader will switch to a
diffuse shader under a few different conditions to improve performance
and reduce noise.
This commit removes the "switch back to diffuse if the scattering
radius is less than a quarter of a pixel" optimization because in some
scenes, this can result in noticeable lines as the shader transitions
between subsurface scattering and diffuse.
Pull Request: https://projects.blender.org/blender/blender/pulls/133245
The check was `misc-const-correctness`, combined with
`readability-isolate-declaration` as suggested by the docs.
Temporarily, clang-format's "QualifierAlignment: Left" setting was used
to get consistency with the prevailing order of keywords.
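For illustration, the kind of rewrite these two checks produce:

```
/* Before: */
int x = f(), y = g();

/* After misc-const-correctness + readability-isolate-declaration: */
const int x = f();
const int y = g();
```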
Pull Request: https://projects.blender.org/blender/blender/pulls/132361
* Use .empty() and .data()
* Use nullptr instead of 0
* No else after return
* Simple class member initialization
* Add override for virtual methods
* Include C++ instead of C headers
* Remove some unused includes
* Use default constructors
* Always use braces
* Consistent names in definition and declaration
* Change typedef to using
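A small after-the-cleanup sketch touching several of these points (names
hypothetical):

```
#include <vector>

using FloatVec = std::vector<float>; /* was: typedef std::vector<float> FloatVec; */

struct Cache {
  float *data = nullptr; /* simple member initialization; was set in a constructor */

  float *lookup(const FloatVec &v)
  {
    if (v.empty()) { /* was: v.size() == 0; always use braces */
      return nullptr; /* was: return 0 */
    }
    return data; /* was wrapped in an else after the return */
  }
};
```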
Pull Request: https://projects.blender.org/blender/blender/pulls/132361
The reason for this probably was the const nature of the shader data.
However, this is counter-intuitive, as it potentially leads
to multiple BSDFs re-using the same LCG state.
Pull Request: https://projects.blender.org/blender/blender/pulls/132456
The previous implementation for 5 < d < 50 was taken from the main
paper; fits for smaller sizes are found in the supplemental material.
They are less forward-scattering.
Pull Request: https://projects.blender.org/blender/blender/pulls/130234
`CLOSURE_WEIGHT_CUTOFF` avoids allocating a closure when its weight is
too small. This makes sense for surface closures, but for volume closures
the contribution also depends on the object size/ray length, so such a
cutoff seems arbitrary and was causing problems in atmospheric scattering.
Therefore, remove the cutoff for volumes and just make sure the weight is
positive.
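A sketch of the changed allocation test (control flow simplified, helper
names assumed):

```
/* Surfaces keep the cutoff; for volumes, a tiny weight can still matter
 * over a long ray, so only non-positive weights are rejected. */
const float sample_weight = reduce_max(fabs(weight));
const bool skip = is_volume ? !(sample_weight > 0.0f) :
                              (sample_weight < CLOSURE_WEIGHT_CUTOFF);
if (skip) {
  return NULL; /* closure is not allocated */
}
```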
Pull Request: https://projects.blender.org/blender/blender/pulls/131696
The cause is numerical issues with `fast_sinf()`. While fixing
`fast_sinf()` would ultimately fix the problem, it involves more
complications in other code paths, and it is safer to clamp the
integration range anyway.
Pull Request: https://projects.blender.org/blender/blender/pulls/131689
Turns out that with `-fassociative-math`, GCC turns
`(1.0f - cos_NH2) + alpha2 * cos_NH2` into
`cos_NH2 * (alpha2 - 1.0f) + 1.0f`.
Not sure why since the operation count is the same, but if alpha2 is very
small, `alpha2 - 1.0f` will be exactly -1.0f, which then causes issues.
Luckily, having `one_minus_cos_NH2` as its own variable appears to be
enough to make GCC keep the original formulation.
Just to be safe, I've also used `one_minus_cos_NH2` in the other branch
to hopefully reduce the chance of it being folded in again. This also
turns a division into a reciprocal, which is in theory slightly faster.
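In code, the change is roughly the following (`denom` is a hypothetical
name for where the expression is used):

```
/* Keeping the subtraction in a named variable prevents GCC (with
 * -fassociative-math) from folding the expression into
 * cos_NH2 * (alpha2 - 1.0f) + 1.0f, which loses the tiny alpha2
 * contribution when cos_NH2 is close to 1. */
const float one_minus_cos_NH2 = 1.0f - cos_NH2;
const float denom = one_minus_cos_NH2 + alpha2 * cos_NH2;
```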
Pull Request: https://projects.blender.org/blender/blender/pulls/130469
Draine phase function sampling internally uses Henyey-Greenstein and
Rayleigh sampling for degenerate cases, but the sampling pattern was
different between Draine and Rayleigh. This commit effectively replaces
`rand` with `1 - rand` in Rayleigh sampling.
Pull Request: https://projects.blender.org/blender/blender/pulls/129261
When `g == 0`, the Draine phase function from
https://doi.org/10.1051/0004-6361/202142437 simplifies to
\[\Phi(\theta)=\frac{3}{4\pi(3+\alpha)}(1+\alpha\cos^2\theta).\]
Similar to Rayleigh sampling in https://doi.org/10.1364/JOSAA.28.002436,
inverting the CDF of the marginal density function amounts to solving
\[\cos^3\theta+a\cos\theta+b=0,\]
with
\[a=\frac{3}{\alpha},\quad b=\frac{3+\alpha}{\alpha}(2\xi_1-1),\]
which has only one real root since \(\alpha > 0\).
Cardano's formula then yields the sampling technique
\[\cos\theta=u-\frac{1}{\alpha u},\quad u=\sqrt[3]{-\frac{b}{2}+\sqrt{\frac{b^{2}}{4}+\frac{1}{\alpha^{3}}}}.\]
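A direct transcription of this sampling step (a sketch, not necessarily
how the kernel spells it):

```
#include <cmath>

/* Sample cos(theta) for the g == 0 Draine phase function via Cardano. */
float draine_g0_sample_cos_theta(const float alpha, const float xi)
{
  const float b = (3.0f + alpha) / alpha * (2.0f * xi - 1.0f);
  const float u = std::cbrt(-0.5f * b +
                            std::sqrt(0.25f * b * b + 1.0f / (alpha * alpha * alpha)));
  const float cos_theta = u - 1.0f / (alpha * u);
  return std::fmin(std::fmax(cos_theta, -1.0f), 1.0f); /* guard rounding */
}
```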
Pull Request: https://projects.blender.org/blender/blender/pulls/129259
- Deduplicate Fisheye projection code
- Replace spherical/cartesian conversions with shared helpers
- Replace transforms from/to local coordinate systems with shared helpers
The main type of repeated transform that's not covered here is `to/from_coords`, but with separate values for xy and z (e.g. BSDFs that already computed `dot(wi, N)` earlier, so they only need `dot(wi, X)` and `dot(wi, Y)` later). Could also be replaced, but it would feel weirdly specific for a helper function.
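The shared helpers referred to above are essentially of this shape
(signatures illustrative; `float3`/`dot`/`make_float3` assumed):

```
float3 to_local(const float3 v, const float3 X, const float3 Y, const float3 Z)
{
  return make_float3(dot(v, X), dot(v, Y), dot(v, Z));
}

float3 from_local(const float3 v, const float3 X, const float3 Y, const float3 Z)
{
  return X * v.x + Y * v.y + Z * v.z;
}
```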
Pull Request: https://projects.blender.org/blender/blender/pulls/125999
Previously, Cycles only supported the Henyey-Greenstein phase function for volume scattering.
While HG is flexible and works for a wide range of effects, sometimes a more physically accurate
phase function may be needed for realism.
Therefore, this adds three new phase functions to the code, plus a Mie
approximation built from Draine and HG:
- Rayleigh: for particles with a size below the wavelength of light; mostly atmospheric scattering.
- Fournier-Forand: for realistic underwater scattering.
- Draine: fairly specific on its own (mostly for interstellar dust), but useful for the next entry.
- Mie: approximates Mie scattering in water droplets using a mix of the Draine and HG phase functions.
These phase functions can be combined using Mix nodes as usual.
Co-authored-by: Lukas Stockner <lukas@lukasstockner.de>
Pull Request: https://projects.blender.org/blender/blender/pulls/123532
The same random number was used for sampling the color channel at each
step, which leads to bias. Fixed by rescaling the random number.
Another possibility would be to scramble `rng_offset` and use a new
random number each time, similar to subsurface scattering, but
rescaling the random number should be faster than computing a new one,
and is favorable here since precision is not very important.
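Rescaling works like this (a sketch with hypothetical names): once the
random number has fallen into one channel's sub-interval of the CDF, that
sub-interval is mapped back to [0, 1) so the number can be reused.

```
/* Pick a channel from a 3-entry CDF (cdf[2] == 1, strictly increasing)
 * and rescale rand for reuse. */
int pick_channel_and_rescale(float &rand, const float cdf[3])
{
  float lo = 0.0f;
  for (int i = 0; i < 2; i++) {
    if (rand < cdf[i]) {
      rand = (rand - lo) / (cdf[i] - lo); /* rescale back to [0, 1) */
      return i;
    }
    lo = cdf[i];
  }
  rand = (rand - lo) / (1.0f - lo);
  return 2;
}
```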
Pull Request: https://projects.blender.org/blender/blender/pulls/127454
This decreases BSDF_ROUGHNESS_SQ_THRESH so that the microfacet
roughness has a cutoff at much lower values and fixes a precision
issue in the bsdf_sample code that prevented this previously.
Pull Request: https://projects.blender.org/blender/blender/pulls/125919
Fix a NaN that can appear when rendering glossy materials with low
roughness, due to a division by zero in `bsdf_D`.
Thank you to Weizhen for the fix after my incorrect
first attempt.
Pull Request: https://projects.blender.org/blender/blender/pulls/125756
Setting this option to a value above zero replaces the Lambertian diffuse term
with the modified energy-preserving Oren-Nayar BSDF, which matches the OpenPBR
behavior.
Pull Request: https://projects.blender.org/blender/blender/pulls/123616
This multiscattering term comes from the OpenPBR specification and nicely
preserves energy while correctly modeling increased saturation at high
roughness.
Preparation for adding a diffuse roughness option to the Principled BSDF.
To me, the difference in output and computation seems small enough to
not need an enum for the old behavior.
Note that this also switches sampling to cosine-weighted; in my tests this
gives lower noise. I also checked doing MIS between cosine and uniform,
using the A term as a weight for how often to use cosine (since that term
is Lambertian diffuse), but always using cosine was better.
A nice consequence of that is that you don't get a huge noise jump when
going from 0.0 to 0.01 roughness.
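For reference, the standard cosine-weighted hemisphere construction this
refers to (local frame with the normal along Z; not the kernel's exact
code):

```
#include <cmath>

struct float3 { float x, y, z; }; /* stand-in for the kernel's float3 */

/* Map two uniform samples to a direction with pdf = cos(theta) / pi. */
float3 sample_cos_hemisphere(const float u1, const float u2, float *pdf)
{
  const float kPi = 3.14159265358979f;
  const float r = std::sqrt(u1);
  const float phi = 2.0f * kPi * u2;
  const float cos_theta = std::sqrt(std::fmax(0.0f, 1.0f - u1));
  *pdf = cos_theta / kPi;
  return {r * std::cos(phi), r * std::sin(phi), cos_theta};
}
```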
Pull Request: https://projects.blender.org/blender/blender/pulls/123345
When the ray exceeds `max_bounce`, we do not allocate any closure at
the intersection. However, the Ray Portal BSDF still added the `SD_BSDF`
flag, resulting in undefined behavior in
`integrate_surface_bsdf_bssrdf_bounce()`.
This part of the code was similar to the Transparent BSDF; however, the
Transparent closure was still allocated in this case.
To fix the undefined behavior, add `SD_BSDF` flag only when the Ray
Portal closure was allocated.
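Sketched, the fix looks like this (simplified from the actual closure
setup; the exact allocation call is assumed):

```
ccl_private RayPortalBsdf *bsdf = (ccl_private RayPortalBsdf *)bsdf_alloc(
    sd, sizeof(RayPortalBsdf), weight);

if (bsdf) {
  /* Only set the flag when the closure was actually allocated; past
   * max_bounce, bsdf_alloc() returns NULL and no flag must be added. */
  sd->flag |= SD_BSDF;
}
```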
This is an implementation of thin film iridescence in the Principled BSDF based on "A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence".
There are still several open topics that are left for future work:
- Currently, the thin film only affects dielectric Fresnel, not metallic. Properly specifying thin films on metals requires a proper conductive Fresnel term with complex IOR inputs; any attempt to hack it into the F82 model we currently use for the Principled BSDF is fundamentally flawed. In the future, we'll add a node for proper conductive Fresnel, including thin films.
- The F0/F90 control is not very elegantly implemented right now. It fundamentally works, but enabling thin film while using a Specular Tint causes a jump in appearance since the models integrate it differently. Then again, thin film interference is a physical effect, so of course a non-physical tweak doesn't play nicely with it.
- The white point handling is currently quite crude. In short: The code computes XYZ values of the reflectance spectrum, but we'd need the XYZ values of the product of the reflectance spectrum and the neutral illuminant of the working color space. Currently, this is addressed by just dividing by the XYZ values of the illuminant, but it would be better to do a proper chromatic adaptation transform or to use the proper reference curves for the working space instead of the XYZ curves from the paper.
Pull Request: https://projects.blender.org/blender/blender/pulls/118477