This is the first step of moving the create infos
back inside shader sources.
All info files are now treated as source files.
However, they are not considered in the include tree
yet. This will come in a follow-up PR.
Each shader source file now generates a `.info` file
containing only the create info declarations.
This renames all info files so that they do not
conflict with their previous, copied (non-generated)
versions.
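For illustration, a hedged sketch of the intended layout, reusing the existing create info macro style (the guard, macro names, and file names here are assumptions, not the actual mechanism): the declaration lives directly in the shader source, and only that block ends up in the generated `.info` file.

```cpp
/* my_shader_frag.glsl -- sketch only. */

/* Create info declaration, extracted into the generated `.info` file. */
GPU_SHADER_CREATE_INFO(my_shader)
FRAGMENT_SOURCE("my_shader_frag.glsl")
DO_STATIC_COMPILATION()
GPU_SHADER_CREATE_END()

/* Regular shader code follows and is not part of the extraction. */
void main() {}
```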
Pull Request: https://projects.blender.org/blender/blender/pulls/146676
This implements the design detailed in #135935.
A new per object property called `Shadow Terminator Normal Offset` is
introduced to shift the shadowed position along the shading normal.
The amount of shift is defined in object space on the object datablock.
This amount is modulated by the facing ratio to the light. Faces
already facing the light will get no offset. This avoids most light
leaking artifacts.
In case of multiple shading normals, the normal used for the shift
is arbitrary. Note that this is the same behavior as for the other biases.
The magnitude of the bias is controlled by `Shadow Terminator Normal Offset`.
The number of faces affected by the bias is controlled using
`Shadow Terminator Geometry Offset`, just like in Cycles.
Tweaking the `Shadow Terminator Geometry Offset` helps avoid excessive
shadow distortion on surfaces with bump mapping.
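As a rough illustration of the facing-ratio modulation, here is a minimal sketch with hypothetical names (stand-in `float3` type included), not the actual implementation:

```cpp
#include <algorithm>

struct float3 {
  float x, y, z;
};

float dot(float3 a, float3 b)
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}
float3 operator*(float3 v, float s)
{
  return {v.x * s, v.y * s, v.z * s};
}
float3 operator+(float3 a, float3 b)
{
  return {a.x + b.x, a.y + b.y, a.z + b.z};
}

/* Shift the shadow-test position P along the shading normal N, modulated by
 * the facing ratio to the light direction L (both assumed normalized).
 * Surfaces fully facing the light get no offset at all. */
float3 shadow_terminator_offset(float3 P, float3 N, float3 L, float normal_offset)
{
  float facing = std::max(dot(N, L), 0.0f);
  return P + N * (normal_offset * (1.0f - facing));
}
```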
Cycles properties are copied from the Cycles object datablock to the
Blender datablock. This breaks the Python API for Cycles.
The defaults are set to no bias because:
- There is no good default. The best value depends on the geometry.
- The best value might depend on real-time displacement.
- Any bias will introduce light leaking on surfaces that do not need it.
- There is an additional cost to enabling it, proportional
to the number of pixels on screen using it.
Pull Request: https://projects.blender.org/blender/blender/pulls/136935
Do this only when applicable.
This allows better compile-time checking in the C++ shader compilation.
Moreover, this allows having `constexpr` in code shared between
C++ and GLSL.
After investigation, the `const` keyword in GLSL has the same
semantics as in C/C++.
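A minimal sketch of the idea, assuming a hypothetical `GPU_SHADER` guard and constant in a shared header:

```cpp
/* shared_constants.hh -- sketch; guard and constant are hypothetical. */
#ifdef GPU_SHADER
/* GLSL has no `constexpr`, but its `const` has the same semantics. */
#  define constexpr const
#endif

constexpr int LIGHT_CHUNK = 256;

#ifndef GPU_SHADER
/* Compile-time checking is only available on the C++ side. */
static_assert(LIGHT_CHUNK % 64 == 0, "Chunk size must stay wave-aligned");
#endif
```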
Rel #137333 and #137446
Pull Request: https://projects.blender.org/blender/blender/pulls/137497
This unifies the C++ and GLSL codebase style.
The GLSL types are still present in the backend compatibility
layers to support Python shaders. However, the C++
shader compilation layer omits them to enforce
correct type usage.
Note that this is going to break pretty much all in-flight
PRs that target shader code.
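For illustration only, with stand-in type definitions and `float3`/`float4` assumed as the C++-side names:

```cpp
/* Stand-in definitions for illustration. */
struct float3 {
  float x, y, z;
};
struct float4 {
  float x, y, z, w;
};

/* GLSL style, now confined to the backend compatibility layers:
 *   vec4 color = vec4(1.0, 1.0, 1.0, 0.5);
 * Unified C++/GLSL style used in shader sources: */
float4 color = {1.0f, 1.0f, 1.0f, 0.5f};
```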
Rel #137261
Pull Request: https://projects.blender.org/blender/blender/pulls/137369
There are actually already some literals with the `f` suffix
in our shader codebase, and we never had a problem in
the past 5 years (or even 8 years).
So I think it is safe to do, and it improves convergence of code styles.
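For reference, the `f` suffix is part of the GLSL grammar, so a literal like this compiles on both sides:

```cpp
/* Valid in both C++ and GLSL. */
const float bias_eps = 1e-4f;
```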
Pull Request: https://projects.blender.org/blender/blender/pulls/137352
This allows storing the full object ID inside a `uint32`
buffer, which makes per-object data accessible in deferred
passes and avoids storing object data inside the Gbuffer.
This data is only written if needed.
This required modifying the subpass input implementation
for all backends to be able to bind layered textures.
This currently works because only layer 0 is bound to the
framebuffer. This is fragile, but I don't see a good built-in way
to fix it.
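A CPU-side sketch of the lookup path (buffer names and layout are hypothetical stand-ins for the GPU resources):

```cpp
#include <cstdint>
#include <vector>

/* Per-object data that would otherwise be packed into the Gbuffer
 * (e.g. light linking bits, SSS parameters). */
struct ObjectInfos {
  uint32_t light_linking_bits;
};

std::vector<uint32_t> object_id_buf;       /* One uint32 per pixel. */
std::vector<ObjectInfos> object_infos_buf; /* Indexed by object ID. */

/* Deferred passes fetch per-object data through the ID buffer. Only valid
 * for pixels where the prepass actually wrote the ID (storage is
 * conditional, with a dummy buffer when not needed). */
ObjectInfos deferred_object_infos(size_t pixel)
{
  return object_infos_buf[object_id_buf[pixel]];
}
```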
Rel #135935
#### Tasks
- [x] Replace light linking bits in Gbuffer
- [x] Replace Object ID in GBuffer for SSS
- [x] Conditional storage
- [x] Dummy storage if not needed
Pull Request: https://projects.blender.org/blender/blender/pulls/136428
This improves the situations where the shading normal is far from the
best shadow bias direction. This is particularly noticeable on low poly
meshes with smooth normals.
This patch fixes the issue by storing a quantized version of the
geometric normal in the Gbuffer.
Only 6 bits are used (each axis uses 2 bits). This is stored inside the
gbuffer header since it is always available and has some spare bits to
store this data.
This quantization is only done if the error introduced by using the
shading normal is higher than the error from using the quantized normal.
This means that flat shaded surfaces will not have any quantization
artifacts.
For smooth shaded surfaces, the quantization is only effective if the
shading normal differs significantly (by more than ~20°)
from the geometric normal.
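As an illustration of the footprint, here is a minimal sketch of a 2-bit-per-axis encoding (the actual bit layout inside the gbuffer header may differ):

```cpp
#include <cmath>
#include <cstdint>

struct float3 {
  float x, y, z;
};

/* Pack a unit normal using 2 bits per axis (6 bits total). Each component
 * in [-1, 1] is rounded to one of 4 levels. */
uint32_t normal_pack_6bit(float3 n)
{
  auto quantize = [](float v) -> uint32_t {
    int level = (int)std::lround((v * 0.5f + 0.5f) * 3.0f);
    level = level < 0 ? 0 : (level > 3 ? 3 : level);
    return (uint32_t)level;
  };
  return quantize(n.x) | (quantize(n.y) << 2) | (quantize(n.z) << 4);
}

float3 normal_unpack_6bit(uint32_t packed)
{
  auto dequantize = [](uint32_t bits) -> float {
    return (float)bits * (2.0f / 3.0f) - 1.0f;
  };
  float3 n = {dequantize(packed & 3u),
              dequantize((packed >> 2u) & 3u),
              dequantize((packed >> 4u) & 3u)};
  /* Renormalize: the quantized levels never yield a zero vector. */
  float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
  return {n.x / len, n.y / len, n.z / len};
}
```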
The attached blendfile contains an example Material that shows
how the quantization is done. This was used to find the threshold
value with the least amount of error.
To compensate for the quantization error, we increase the normal bias by
~20%, which stays subpixel if the shadow texel density is high enough.
This also changes the forward shading pipeline to use the geometric
normal for bias.
The first Light closure normal is now used for the attenuation function,
since it is the most representative of the final shading. Because this
normal is inverted for transmission closures, it has to be negated
in the attenuation computation.
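A minimal sketch of that selection (names hypothetical):

```cpp
struct float3 {
  float x, y, z;
};

/* The attenuation uses the first Light closure normal; for transmission
 * closures the stored normal points away from the light-facing side, so it
 * is negated here. */
float3 attenuation_normal(float3 closure_N, bool is_transmission)
{
  return is_transmission ? float3{-closure_N.x, -closure_N.y, -closure_N.z} :
                           closure_N;
}
```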
Pull Request: https://projects.blender.org/blender/blender/pulls/136136