This option was already unselectable in the UI and is treated as GGX
with zero roughness. When building the shader graph, we only convert a
closure to `SHARP` when the Filter Glossy option is not used and the
roughness is below a certain threshold. The benefit is that we can avoid
calling `bsdf_eval()` or return early in some cases, but the thresholds
vary across files.
This patch removes `SHARP` closures altogether, and checks if the
roughness value is below a global threshold `BSDF_ROUGHNESS_THRESH`
after blurring, in which case the flag `SD_BSDF_HAS_EVAL` is not set.
The global threshold is set to `5e-7f` because thresholds smaller than
that seem to have caused problems in the past (c6aa0217ac). Also removes
a bunch of functions, variables and arguments that were only there
because we converted closures under certain conditions.
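A minimal sketch of the resulting logic, using the constant from this commit but simplified stand-ins for the surrounding kernel code:

```cpp
#include <cstdio>

/* Global threshold from this commit. */
#define BSDF_ROUGHNESS_THRESH 5e-7f

enum { SD_BSDF_HAS_EVAL = 1 << 0 }; /* simplified flag definition */

/* Hypothetical stand-in for the closure registration step: the check runs
 * on the roughness value after blurring (e.g. by Filter Glossy), so no
 * per-file thresholds and no separate SHARP closure type are needed. */
int bsdf_flag_for_roughness(float roughness_after_blur)
{
  int flag = 0;
  if (roughness_after_blur > BSDF_ROUGHNESS_THRESH) {
    flag |= SD_BSDF_HAS_EVAL; /* worth calling bsdf_eval() */
  }
  return flag;
}

int main()
{
  printf("%d %d\n",
         bsdf_flag_for_roughness(0.0f),  /* effectively sharp: no eval flag */
         bsdf_flag_for_roughness(0.1f)); /* rough: eval flag set */
  return 0;
}
```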
Pull Request: https://projects.blender.org/blender/blender/pulls/109902
The Voronoi distance output is clamped at 8, which is apparent for distance
metrics like Minkowski with low exponents.
This patch fixes that by setting the initial distance of the search loop to
FLT_MAX instead of 8. For the Smooth F1 variant, the "h" parameter is set to 1
for the first iteration via a sentinel value, effectively ignoring the initial
distance and using the distance computed in the first iteration instead.
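A 1D sketch of the changed search loop (the real node is multi-dimensional and uses a proper hash; `hash_offset` here is a stand-in):

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

/* Stand-in for the per-cell point hash used by the real node. */
static float hash_offset(int cell)
{
  return 0.5f + 0.5f * sinf(cell * 12.9898f);
}

/* Smooth F1 in 1D: the search starts at FLT_MAX instead of 8, and the very
 * first iteration uses h = 1 (signalled by smooth_dist == FLT_MAX), so the
 * huge initial value never leaks into the smooth minimum. */
static float voronoi_smooth_f1(float coord, float smoothness)
{
  const int cell = (int)floorf(coord);
  const float local = coord - (float)cell;
  float smooth_dist = FLT_MAX;
  for (int i = -2; i <= 2; i++) {
    const float point = hash_offset(cell + i) + (float)i;
    const float dist = fabsf(point - local);
    /* Note: smoothness == 0 would divide by zero here; see the next fix. */
    const float h =
        (smooth_dist == FLT_MAX) ?
            1.0f :
            fminf(fmaxf(0.5f + 0.5f * (smooth_dist - dist) / smoothness, 0.0f), 1.0f);
    /* Smooth minimum: mix toward the new distance, with a correction term. */
    smooth_dist = smooth_dist + h * (dist - smooth_dist) - smoothness * h * (1.0f - h);
  }
  return smooth_dist;
}

int main()
{
  printf("%f\n", voronoi_smooth_f1(0.37f, 0.2f));
  return 0;
}
```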
Pull Request: https://projects.blender.org/blender/blender/pulls/109286
The Voronoi Smooth F1 mode breaks when Smoothness is 0 in OSL. This is
due to a division by zero in the shader.
To fix this, standard F1 is used when Smoothness is 0.
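A sketch of the guard, with stub functions standing in for the real OSL shader code:

```cpp
/* Stubs standing in for the real Voronoi implementations. */
static float voronoi_f1(float coord)
{
  return coord; /* placeholder */
}
static float voronoi_smooth_f1(float coord, float smoothness)
{
  return coord / smoothness; /* would divide by zero for smoothness == 0 */
}

/* The fix: a Smoothness of 0 would divide by zero inside the smooth
 * minimum, so fall back to standard F1 in that case. */
float voronoi_smooth_f1_safe(float coord, float smoothness)
{
  if (smoothness == 0.0f) {
    return voronoi_f1(coord);
  }
  return voronoi_smooth_f1(coord, smoothness);
}

int main()
{
  return (int)voronoi_smooth_f1_safe(0.5f, 0.0f);
}
```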
Pull Request: https://projects.blender.org/blender/blender/pulls/109255
Fractal noise is the idea of evaluating the same noise function multiple times with
different input parameters on each layer and then mixing the results. The individual
layers are usually called octaves.
The number of layers is controlled with a "Detail" slider.
The "Lacunarity" input controls a factor by which each successive layer gets scaled.
The existing Noise node already supports fractal noise. Now the Voronoi Noise node
supports it as well. The node also has a new "Normalize" property that ensures that
the output values stay in a [0.0, 1.0] range. That is, except for the F2 feature,
where in rare cases the output may be outside that range even with "Normalize" turned on.
How the individual octaves are mixed depends on the feature and output socket (see the sketch after this list):
- F1/Smooth F1/F2:
- Distance/Color output:
The individual Distance/Color octaves are first multiplied by a factor of
`Roughness ^ (#layers - 1.0)` then added together to create the final output.
- Position output:
Each Position octave gets linearly interpolated with the combined output of the
previous octaves. The Roughness input serves as an interpolation factor with
0.0 resulting in only using the combined output of the previous octaves and
1.0 resulting in only using the current highest octave.
- Distance to Edge:
- Distance output:
The Distance octaves are mixed exactly like the Position octaves for F1/Smooth F1/F2.
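A simplified 1D sketch of the Distance octave mixing described above (the real node also handles fractional Detail values and the other outputs):

```cpp
#include <cmath>
#include <cstdio>

/* Stand-in for a single Voronoi Distance octave. */
static float voronoi_distance(float coord)
{
  return fabsf(coord - roundf(coord));
}

/* Octave n is weighted by Roughness^n, octaves are summed, and "Normalize"
 * divides by the maximum possible amplitude to keep the output near [0, 1]. */
static float fractal_voronoi_distance(
    float coord, int layers, float roughness, float lacunarity, bool normalize)
{
  float amplitude = 1.0f;
  float max_amplitude = 0.0f;
  float scale = 1.0f;
  float sum = 0.0f;
  for (int n = 0; n < layers; n++) {
    sum += voronoi_distance(coord * scale) * amplitude;
    max_amplitude += amplitude;
    amplitude *= roughness; /* successive octaves contribute less */
    scale *= lacunarity;    /* and sample the noise at a finer scale */
  }
  return normalize ? sum / max_amplitude : sum;
}

int main()
{
  printf("%f\n", fractal_voronoi_distance(3.7f, 3, 0.5f, 2.0f, true));
  return 0;
}
```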
It should be noted that Voronoi Noise is a relatively slow noise function, especially
at higher dimensions. Increasing the "Detail" makes it even slower. Therefore, when
optimizing a scene one should consider trying to use simpler noise functions instead
of Voronoi if the final result is close enough.
Pull Request: https://projects.blender.org/blender/blender/pulls/106827
So far, each closure in Cycles was either diffuse OR glossy OR
transmissive, and its color and contributions were assigned
to the corresponding direct/indirect/color passes.
However, now that Glass is a single closure, that is no longer enough:
glass has both a glossy and a transmissive component.
Therefore, this commit adds support for splitting contributions from
the Glass closure between the two types.
For 4.0, we might want to also use this for Principled Hair since it
also technically has both types, but that would be a change from
the existing result so it's not part of 3.6 yet.
The input socket of the Image Texture node is connected with the UV output
of the Texture Coordinate node by default; the latter reads the geometry UV,
which is not available for lights because they have no real geometry.
The current implementation simply retrieves the UV from shader data.
Pull Request: https://projects.blender.org/blender/blender/pulls/108691
This is added so that existing texturing setups with point and spot
lights keep working as before. Some people use the Normal socket of the
Texture Coordinate node for texturing lights; however, the Normal there
is actually the incoming light direction and should be corrected. Using
the Parametric socket from the Geometry node plus a world-to-object
normal transform with the Vector Transform node delivers the same result
as using the Normal socket of the Texture Coordinate node.
Currently, only the normal transformation works for lights, because that
is the only place where we fetch the light transform properly. This is
confusing behaviour, but testing whether the object is a lamp in all
relevant functions could hurt performance. A more proper solution would
be to turn lights into real objects, which is planned for the future.

Pull Request: https://projects.blender.org/blender/blender/pulls/108666
While the multiscattering GGX code is cool and solves the darkening problem at higher roughnesses, it's also currently buggy, hard to maintain and often impractical to use due to the higher noise and render time.
In practice, though, having the exact correct directional distribution is not that important as long as the overall albedo is correct and we a) don't get the darkening effect and b) do get the saturation effect at higher roughnesses.
This can simply be achieved by adding a second lobe (https://blog.selfshadow.com/publications/s2017-shading-course/imageworks/s2017_pbs_imageworks_slides_v2.pdf) or scaling the single-scattering GGX lobe (https://blog.selfshadow.com/publications/turquin/ms_comp_final.pdf). Both approaches require the same precomputation and produce outputs of comparable quality, so I went for the simple albedo scaling since it's easier to implement and more efficient.
Overall, the results are pretty good: All scenarios that I tested (Glossy BSDF, Glass BSDF, Principled BSDF with metallic or transmissive = 1) pass the white furnace test (a material with pure-white color in front of a pure-white background should be indistinguishable from the background if it preserves energy), and the overall albedo for non-white materials matches that produced by the real multi-scattering code (with the expected saturation increase as the roughness increases).
In order to produce the precomputed tables, the PR also includes a utility that computes them. This is not built by default, since there's no reason for a user to run it (it only makes sense for documentation/reproducibility purposes and when making changes to the microfacet models).
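A sketch of the albedo-scaling idea; `ggx_E` is a hypothetical stand-in for a lookup into the precomputed directional-albedo tables:

```cpp
#include <cstdio>

struct Color {
  float r, g, b;
};

/* Hypothetical stand-in for the precomputed single-scattering GGX albedo
 * E(cos_theta, alpha); the real values come from the baked tables. */
static float ggx_E(float cos_theta, float alpha)
{
  return 1.0f - 0.5f * alpha * (1.0f - cos_theta); /* fake, for illustration */
}

/* Energy compensation by albedo scaling: multiply the single-scattering
 * lobe by 1/E so the total reflectance integrates to 1 (white furnace),
 * instead of simulating the extra bounces explicitly. */
static Color energy_compensated_weight(Color weight, float cos_theta, float alpha)
{
  const float scale = 1.0f / ggx_E(cos_theta, alpha);
  return {weight.r * scale, weight.g * scale, weight.b * scale};
}

int main()
{
  const Color w = energy_compensated_weight({0.8f, 0.8f, 0.8f}, 0.5f, 0.9f);
  printf("%f %f %f\n", w.r, w.g, w.b);
  return 0;
}
```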
Pull Request: https://projects.blender.org/blender/blender/pulls/107958
Used to be https://archive.blender.org/developer/D17123.
Internally these already use the same code path anyway, so there's no point in maintaining two distinct nodes.
The obvious approach would be to add Anisotropy controls to the Glossy BSDF node and remove the Anisotropic BSDF node. However, that would break forward compatibility, since older Blender versions don't know how to handle the Anisotropy input on the Glossy BSDF node.
Therefore, this commit technically removes the Glossy BSDF node, uses versioning to replace existing instances with the Anisotropic BSDF node, and renames that node to "Glossy BSDF".
That way, when you open a file saved in the new version in an older Blender, all the nodes show up as Anisotropic BSDF nodes and render correctly.
This is a bit ugly internally since we need to preserve the old `idname` which now no longer matches the UI name, but that's not too bad.
Also removes the "Sharp" distribution option and replaces it with GGX, sets Roughness to zero and disconnects any input to the Roughness socket.
Pull Request: https://projects.blender.org/blender/blender/pulls/104445
In some cases, comments at the end of control statements were wrapped
onto new lines, which made them read as if they applied to the next line
instead of the (now) previous line.
Relocate comments to the previous line or, in some cases, to the end of
the line (before the brace) to avoid confusion.
Note that in quite a few cases these blocks didn't read well even before
MultiLine was used, as comments after the brace caused wrapping across
multiple lines in a way that didn't follow the formatting used
everywhere else.
There were two issues here preventing the proper display of the IES
files in question.
The primary one was that these lights are actually vertical. Their
profiles point upwards, from 90deg to 180deg, but our parser was
incorrectly trying to adjust them to start at 0deg.
Lastly, the files in question ended with the parser in the `eof`
state - they are "missing" the final carriage return that other IES
files tend to have, though other viewers don't seem to mind. Replace the
`eof` check with a better one that indicates whether any parsing errors
occurred along the way.
Pull Request: https://projects.blender.org/blender/blender/pulls/107320
This was caused by floating point differences between importance
sampling and texture evaluation, which disagreed on whether or not a
ray lies within the sun disc.
* Use the same input values for geographical_to_direction() in
sky_radiance_nishita() and kernel_data.background.sun.
* Adjust the mathematical operations in pdf_uniform_cone() to match
  sky_radiance_nishita().
Pull Request: https://projects.blender.org/blender/blender/pulls/106764
The name #ensure_valid_reflection seems to indicate that the resulting
reflection must be valid, whereas in reality it only ensures validity
for specular reflections. The new name matches the behavior better.
This function checks whether the shading normal would result in an invalid reflection into the lower hemisphere; if so, the function raises the shading normal just enough so that the specular reflection lies above the surface. This is a trick to prevent dark regions at grazing angles caused by normal/bump maps. However, the specular direction is not a good representation for a diffuse material, and applying this function sometimes brightens the result too much and causes unexpected results. This patch applies the function only to glossy materials instead.
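A rough sketch of the behavior; the blend loop below is only an illustration, not the kernel's actual minimal-adjustment math:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 {
  float x, y, z;
};
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 normalize(Vec3 a) { return scale(a, 1.0f / sqrtf(dot(a, a))); }
static Vec3 reflect(Vec3 I, Vec3 N) { return add(scale(N, 2.0f * dot(I, N)), scale(I, -1.0f)); }

/* If reflecting the view direction I (pointing away from the surface)
 * around the shading normal N would land below the geometric surface Ng,
 * nudge N back toward Ng until the specular reflection is valid. */
static Vec3 ensure_valid_specular_reflection(Vec3 Ng, Vec3 I, Vec3 N)
{
  if (dot(reflect(I, N), Ng) >= 0.0f) {
    return N; /* reflection already above the surface */
  }
  for (float t = 0.1f; t <= 1.0f; t += 0.1f) {
    const Vec3 Nt = normalize(add(scale(N, 1.0f - t), scale(Ng, t)));
    if (dot(reflect(I, Nt), Ng) >= 0.0f) {
      return Nt;
    }
  }
  return Ng; /* fully fall back to the geometric normal */
}

int main()
{
  const Vec3 N = ensure_valid_specular_reflection(
      {0.0f, 0.0f, 1.0f}, normalize({0.9f, 0.0f, 0.1f}), normalize({-0.5f, 0.0f, 0.5f}));
  printf("%f %f %f\n", N.x, N.y, N.z);
  return 0;
}
```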
Pull Request: #105776
Currently, we use the closure type to encode the type of microfacet distribution
(GGX/Beckmann/Sharp/MultiGGX), the lobes we're interested in
(Reflection/Refraction/both) AND the Fresnel type (None or Principled v1).
This results in the mess of dozens of options that we currently have. Since
adding Principled v2 and the MaterialX OSL closures will involve adding more
Fresnel types, this clearly doesn't scale.
But, since the earlier Fresnel rework (D17101), the Fresnel type only matters
in one place now. This allows us to significantly clean up the closure type
handling. To do this, MicrofacetBsdfs now separately store their Fresnel type,
and instead of a single MicrofacetExtra we have one struct per Fresnel type
(unless no extra data is needed).
Further, instead of having one _setup() function per combination, the Fresnel
setup is also split into separate functions. This decouples the implementation
of new Fresnel terms from most of the Microfacet logic, and makes it a very
simple and clean operation.
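A sketch of the described layout (names illustrative, not the actual Cycles declarations):

```cpp
#include <cstdio>

enum MicrofacetFresnelType { FRESNEL_NONE, FRESNEL_PRINCIPLED_V1 };

/* The closure type now only encodes the distribution and the lobes;
 * the Fresnel type is stored separately on the BSDF. */
struct MicrofacetBsdf {
  float alpha_x, alpha_y, ior;
  MicrofacetFresnelType fresnel_type;
  void *fresnel; /* per-Fresnel-type extra data, if any */
};

/* One extra struct per Fresnel type, instead of a single MicrofacetExtra. */
struct FresnelPrincipledV1 {
  float cspec0[3];
};

/* One setup function per Fresnel type, decoupled from the microfacet logic. */
static void bsdf_microfacet_setup_fresnel_principledv1(MicrofacetBsdf *bsdf,
                                                       FresnelPrincipledV1 *fresnel)
{
  bsdf->fresnel_type = FRESNEL_PRINCIPLED_V1;
  bsdf->fresnel = fresnel;
}

int main()
{
  MicrofacetBsdf bsdf = {0.1f, 0.1f, 1.45f, FRESNEL_NONE, nullptr};
  FresnelPrincipledV1 fresnel = {{1.0f, 0.8f, 0.6f}};
  bsdf_microfacet_setup_fresnel_principledv1(&bsdf, &fresnel);
  printf("fresnel type: %d\n", (int)bsdf.fresnel_type);
  return 0;
}
```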
This commit replaces the current Glass approach, where Glass is a virtual closure
that gets replaced with a Glossy and a Refractive closure, with a combined
closure that handles Fresnel after sampling the microfacet. That way, the Fresnel
term is more accurate since it accounts for the microfacet normal, not the
shading normal.
Also updates BSDF sampling to use a 3D sampler, since we need two
dimensions to pick the microfacet normal and then a third dimension to pick
reflection/refraction. This can also be used to get rid of the LCG in the
Principled Hair BSDF, which means we can remove it altogether once MultiGGX is
gone.
Also, "sharp" is now supported as a microfacet distribution in OSL, and 2
is supported as the "refract" argument to microfacet() in order to get glass.
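A sketch of the reflect/refract decision; the Fresnel helper below is a Schlick-style stand-in, not the exact dielectric term Cycles uses:

```cpp
#include <cmath>
#include <cstdio>

/* Schlick-style stand-in for the exact dielectric Fresnel term. */
static float fresnel_approx(float cos_HI, float eta)
{
  float f0 = (eta - 1.0f) / (eta + 1.0f);
  f0 *= f0;
  const float m = 1.0f - fabsf(cos_HI);
  return f0 + (1.0f - f0) * m * m * m * m * m;
}

/* With the combined Glass closure, rand_x/rand_y pick the microfacet
 * normal H, and the third sample dimension picks reflection vs. refraction
 * using the Fresnel term of H rather than of the shading normal. */
static bool glass_sample_is_reflection(float rand_z, float cos_HI, float eta)
{
  return rand_z < fresnel_approx(cos_HI, eta); /* reflect with probability F */
}

int main()
{
  /* cos_HI would come from the microfacet normal sampled with (rand_x, rand_y). */
  printf("%s\n", glass_sample_is_reflection(0.3f, 0.7f, 1.45f) ? "reflect" : "refract");
  return 0;
}
```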
- Rename roughness variables for more clarity - before, the SVM/OSL code would
set s and v to the linear roughness values, and the setup function would
overwrite them with the distribution parameters. This actually caused a bug in
the albedo code, since it intended to use the linear roughness value, but ended
up getting the remapped value.
- Deduplicate the evaluation and sample functions. Most of their code is the
same; only the middle part is different.
- Changed albedo computation to return the sum of the intensities of the four
BSDF lobes. Previously, the code applied the inverse of the color->sigma
mapping from the paper - this returns the color specified in the node, but
for very dark hair (e.g. when using the Melanin controls) the result is
extremely low (e.g. 0.000001) despite the hair still reflecting a significant
amount of light (since the R lobe is independent of sigma). This causes issues
with the light component passes, so this change fixes #104586.
- There are quite a few computations at the start of the evaluation function that
are needed for sampling, evaluation and albedo computation, but only depend on
the view direction. Therefore, just precompute them - we still have space in
PrincipledHairExtra after all.
- Fix a tiny bug - the direction sampling code did not account for the R lobe
roughness modifier.
Pull Request #104669
This is both a cleanup and a preparation for the Principled v2 changes.
Notable changes:
- Clearcoat weight is now folded into the closure weight, there's no reason
to track this separately.
- There's a general-purpose helper for computing a Closure's albedo, which is
currently used by the denoising albedo and diffuse/gloss/transmission color
passes.
- The d/g/t color passes didn't account for closure albedo before; this means
that e.g. metallic shaders with Principled v2 now have their color texture
included in the glossy color pass. Also fixes T104041 (sheen albedo).
- Instead of precomputing and storing the albedo during shader setup, compute
it when needed. This is technically redundant since we still need to compute
it on shader setup to adjust the sample weight, but the operation is cheap
enough that freeing up the storage seems worth it.
- Future changes (Principled v2) are easier to integrate since the Fresnel
handling isn't all over the place anymore.
- Fresnel handling in the Multiscattering GGX code is still ugly, but since
removing that entirely is the next step, putting effort into cleaning it up
doesn't seem worth it.
- Apart from the d/g/t color passes, no changes to render results are expected.
Differential Revision: https://developer.blender.org/D17101
Cycles ignores the size of spot lights, so the illuminated area doesn't match the gizmo. This patch resolves the discrepancy.
| Before (Cycles) | After (Cycles) | Eevee |
|{F14200605}|{F14200595}|{F14200600}|
This is done by scaling the ray direction by the size of the cone. The implementation of `spot_light_attenuation()` in `spot.h` matches `spot_attenuation()` in `lights_lib.glsl`.
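A loose sketch of the idea, with illustrative names and a simple clamped ramp rather than the exact kernel math:

```cpp
#include <cmath>
#include <cstdio>

/* The local ray direction is scaled by the cone size on the cross axes
 * before the angular falloff is evaluated, so the lit area follows the
 * scaled gizmo. */
static float spot_attenuation(float dir_x, float dir_y, float dir_z,
                              float size_x, float size_y,
                              float cos_half_spot_angle, float spot_smooth)
{
  dir_x /= size_x; /* scale the direction by the cone size */
  dir_y /= size_y;
  const float len = sqrtf(dir_x * dir_x + dir_y * dir_y + dir_z * dir_z);
  const float cos_angle = dir_z / len;
  /* Smooth falloff between the spot edge and the blend region. */
  const float t = (cos_angle - cos_half_spot_angle) / spot_smooth;
  return fminf(fmaxf(t, 0.0f), 1.0f);
}

int main()
{
  printf("%f\n", spot_attenuation(0.2f, 0.1f, 0.97f, 1.0f, 1.0f, 0.9f, 0.05f));
  return 0;
}
```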
**Test file**:
{F14200728}
Differential Revision: https://developer.blender.org/D17129
The background evaluation samples the sky discretely, so if the sun is
too small, it can be missed in the evaluation. To solve this, the sun is
ignored during the background evaluation and its contribution is
computed separately.
This makes it possible to use `texture` and `texture3d` in custom
OSL shaders with a constant image file name as argument on the
GPU, where previously texturing was only possible through Cycles
nodes.
For constant file name arguments, OSL calls
`OSL::RendererServices::get_texture_handle()` with the file name
string to convert it into an opaque handle for use on the GPU.
That is now used to load the respective image file using the Cycles
image manager and generate an SVM handle that can be used on
the GPU. Some care is necessary as the renderer services class is
shared across multiple Cycles instances, whereas the Cycles image
manager is local to each.
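A sketch of the handle caching described above (types heavily simplified; the real code goes through `OSL::RendererServices::get_texture_handle()` and the Cycles image manager):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

/* One handle map per Cycles instance: the renderer services object is
 * shared across instances, but the image manager (and therefore the SVM
 * slots) is local to each, so the mapping must not be global. */
struct TextureHandleMap {
  std::unordered_map<std::string, uint64_t> slots;
  uint64_t next_slot = 1;

  /* Resolve a constant file name to an opaque handle encoding an SVM slot. */
  uint64_t get_texture_handle(const std::string &filename)
  {
    const auto it = slots.find(filename);
    if (it != slots.end()) {
      return it->second; /* image already loaded */
    }
    /* The real code loads the file via the image manager here. */
    const uint64_t slot = next_slot++;
    slots.emplace(filename, slot);
    return slot;
  }
};

int main()
{
  TextureHandleMap scene;
  printf("%llu %llu\n",
         (unsigned long long)scene.get_texture_handle("wood.png"),
         (unsigned long long)scene.get_texture_handle("wood.png"));
  return 0;
}
```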
Maniphest Tasks: T101222
Differential Revision: https://developer.blender.org/D17032
`wi` is the viewing direction and `wo` is the illumination direction. Under this convention, BSDF sampling always samples from `wi` and outputs `wo`, which is consistent with most papers and with Mitsuba. The order is the reverse of PBRT's, although PBRT also traces from the camera.
When rendering in the viewport (or probably on instanced objects, but I didn't
test that), emissive objects whose scale is negative give the wrong value on the
"backfacing" input when multiple importance sampling is enabled.
The underlying problem was a corner case in how normal transformation is handled,
which is generally a bit messy.
From what I can tell, the pattern appears to be (see the sketch after this list):
- If you first transform vertices to world space and then compute the normal from
them (as triangle light sampling, MNEE and the light tree do), you need to flip
whenever the transform has negative scale regardless of whether the transform
has been applied
- If you compute the normal in object space and then transform it to world space
(as the regular shader_setup_from_ray path does), you only need to flip if the
transform was already applied and was negative
- If you get the normal from a local intersection result (as bevel and SSS do),
you only need to flip if the transform was already applied and was negative
- If you get the normal from vertex normals, you don't need to do anything since
the host-side code does the flip for you (arguably it'd be more consistent to
do this in the kernel as well, but meh, not worth the potential slowdown)
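The rules above, restated as a sketch (flag names illustrative):

```cpp
struct Vec3 {
  float x, y, z;
};
static Vec3 neg(Vec3 v) { return {-v.x, -v.y, -v.z}; }

/* Vertices were transformed to world space first (triangle light sampling,
 * MNEE, light tree): flip whenever the transform has negative scale,
 * regardless of whether it has been applied. */
static Vec3 normal_from_world_space_vertices(Vec3 n, bool tfm_negative)
{
  return tfm_negative ? neg(n) : n;
}

/* Normal computed in object space or from a local intersection
 * (shader_setup_from_ray, bevel, SSS): flip only if the transform was
 * already applied and is negative. */
static Vec3 normal_from_object_space(Vec3 n, bool tfm_negative, bool tfm_applied)
{
  return (tfm_applied && tfm_negative) ? neg(n) : n;
}

int main()
{
  const Vec3 n = normal_from_object_space({0.0f, 0.0f, 1.0f}, true, true);
  return n.z < 0.0f ? 0 : 1;
}
```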
So, this patch fixes the logic in the triangle emission code.
Also, it turns out that the MNEE code had the same problem and was also
misbehaving in the viewport on negative-scale objects; this is fixed now too.
Differential Revision: https://developer.blender.org/D16952
Expands the Color Mix nodes with a new Exclusion mode.
Similar to Difference but produces less contrast.
Requested by Pierre Schiller @3D_director and
@OmarSquircleArt on Twitter.
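For reference, a sketch of the per-channel formula, assuming the common Exclusion definition:

```cpp
#include <cstdio>

/* Exclusion per channel: a + b - 2ab. Like Difference, but the cross term
 * softens the result, giving less contrast. Blended with the factor like
 * the other mix modes. */
static float mix_exclusion(float fac, float a, float b)
{
  const float ex = a + b - 2.0f * a * b;
  return a + fac * (ex - a);
}

int main()
{
  printf("%f\n", mix_exclusion(1.0f, 0.6f, 0.3f)); /* 0.6 + 0.3 - 0.36 = 0.54 */
  return 0;
}
```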
Differential Revision: https://developer.blender.org/D16543
The distinction existed for legacy reasons, to make it easy to port the Embree
intersection code without affecting the main vector types. However, we are now
using SIMD for these types as well, so there is no good reason to keep the
distinction.
Also more consistently pass these vector types by value in inline functions.
Previously this was partially done for functions used by Metal to avoid having
to add address space qualifiers; it is simpler to do it everywhere.
Also removes function declarations from the vector math headers, as they serve
no real purpose.
Differential Revision: https://developer.blender.org/D16146
* Return roughness and IOR for BSDF sampling
* Add functions to query IOR and label for given BSDF
* Default IOR to 1.0 instead of 0.0 for BSDFs that don't use it
* Ensure pdf >= 0.0 in case of numerical precision issues
Ref T92571, D15286
The multi-dimensional Sobol pattern required us to carefully use as few
dimensions as possible, as quality degrades in higher dimensions. Now that we
have two sampling patterns that are at least as good, there is no need to keep
it around, and the implementation can be simplified.
Differential Revision: https://developer.blender.org/D15788
This patch is a response to T92588 and is implemented
as a Function/Shader node.
This node has support for Float, Vector and Color data types.
For Vector it supports uniform and non-uniform mixing.
For Color it now has the option to remove factor clamping.
It replaces the Mix RGB node for Shader and Geometry node trees.
As discussed in T96219, this patch converts existing nodes
in .blend files. The old node is still available in the
Python API but hidden from the menus.
Reviewed By: HooglyBoogly, JacquesLucke, simonthommes, brecht
Maniphest Tasks: T92588
Differential Revision: https://developer.blender.org/D13749
The calculation was revised to address two issues:
* Discontinuities occurring when detail was a non-integer greater than 2.
* Levels of detail in the interval [0,1) repeating the levels of detail in
the interval [1,2).
This fixes Cycles, Eevee and geometry nodes.
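A sketch of one way to keep the result continuous across non-integer Detail values (an assumed simplification, not the exact revised formula): sum the integer octaves, then blend in the next octave by the fractional remainder.

```cpp
#include <cmath>
#include <cstdio>

/* Stand-in for a single noise octave in [0, 1]. */
static float noise_layer(float coord)
{
  return 0.5f + 0.5f * sinf(coord);
}

static float fractal_noise(float coord, float detail)
{
  float scale = 1.0f, amp = 1.0f, sum = 0.0f, max_amp = 0.0f;
  const int octaves = (int)floorf(detail);
  for (int i = 0; i <= octaves; i++) {
    sum += noise_layer(coord * scale) * amp;
    max_amp += amp;
    amp *= 0.5f;
    scale *= 2.0f;
  }
  const float rmd = detail - floorf(detail);
  if (rmd != 0.0f) {
    /* Blend the normalized sums with and without one extra octave, so the
     * output changes smoothly as Detail crosses integer values. */
    const float sum2 = sum + noise_layer(coord * scale) * amp;
    return (1.0f - rmd) * (sum / max_amp) + rmd * (sum2 / (max_amp + amp));
  }
  return sum / max_amp;
}

int main()
{
  printf("%f %f\n", fractal_noise(1.3f, 2.0f), fractal_noise(1.3f, 2.5f));
  return 0;
}
```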
Differential Revision: https://developer.blender.org/D15785
* Store compact ray differentials in ShaderData and compute full differentials
on demand. This reduces register pressure on the GPU.
* Remove BSDF differential code that was effectively doing nothing as the
differential orientation was discarded when making it compact.
This gives a 1-5% speedup with RTX A6000 + OptiX in our benchmarks, with the
bigger speedups in simpler scenes.
Renders appear to be identical except for the Both displacement option that
does both displacement and bump.
Differential Revision: https://developer.blender.org/D15677
These replace float3 and packed_float3 in various places in the kernel where a
spectral color representation will be used in the future. That representation
will require more than 3 channels and conversion to/from RGB. The kernel code
was refactored to remove the assumption that Spectrum and RGB colors are the
same thing.
There are no functional changes, Spectrum is still a float3 and the conversion
functions are no-ops.
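A sketch of the current state (simplified; the real code uses Cycles' own float3 type):

```cpp
struct float3 {
  float x, y, z;
};

/* Spectrum is currently just an alias for float3... */
using Spectrum = float3;

/* ...and the conversion functions are no-ops, so render results are
 * unchanged until a real spectral representation is added. */
inline Spectrum rgb_to_spectrum(float3 rgb) { return rgb; }
inline float3 spectrum_to_rgb(Spectrum s) { return s; }

int main()
{
  const Spectrum s = rgb_to_spectrum({0.2f, 0.4f, 0.8f});
  return s.x > 0.0f ? 0 : 1;
}
```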
Differential Revision: https://developer.blender.org/D15535