Blender grid rendering interprets voxel transforms in such a way that the voxel
values are located at the center of each voxel. This is inconsistent with OpenVDB,
where the values are located at the lower corners for the purpose of sampling
and related algorithms.
While it is possible to offset grids when communicating with the OpenVDB
library, this is error-prone and does not add any major advantage.
Every time a grid is passed to OpenVDB we currently have to take care to
transform by half a voxel to ensure correct sampling weights are used that match
the density displayed by the viewport rendering.
This patch changes volume grid generation, conversion, and rendering code so
that grid transforms match the corner-located values in OpenVDB.
- The volume primitive cube node aligns the grid transform with the location of
the first value, which is now also the same as the min/max bounds input of
the node.
- Mesh<->Grid conversion no longer requires offsetting the grid transform and
mesh vertices respectively by 0.5 voxels.
- Texture space for viewport rendering is offset by half a voxel, so that it
covers the same area as before and voxel centers remain at the same texture
space locations.
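As a sanity check of the convention change (illustrative notation, with uniform
voxel size \(h\) and grid origin \(o\)): a cell-centered interpretation places the
value at index \(i\) at
\[x_{\text{center}}(i) = o + (i + \tfrac{1}{2})h = (o + \tfrac{1}{2}h) + ih,\]
so the same field is reproduced under OpenVDB's corner-located convention by
shifting the transform origin by half a voxel, which is exactly the offset the
conversion code no longer has to apply by hand.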
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/138449
It works with the beta we are using to build Blender 4.5, but the official
release is a bit different. This fix was tested to work with OSL 1.14.7.
Thanks to Paul Zander for finding the OSL commit that led to this.
Pull Request: https://projects.blender.org/blender/blender/pulls/144715
It gives a ~1.2x speed-up on the CPU and a ~1.5x speed-up on the GPU (tested
with Metal on an M2 Ultra).
Individual samples are noisier, but equal time renders are mostly
better.
Note that volume emission renders differently than before.
Pull Request: https://projects.blender.org/blender/blender/pulls/144451
Guide the probability of scattering in, or transmitting through, the volume.
This is only applied to primary rays.
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
The distance sampling is mostly based on weighted delta tracking from
[Monte Carlo Methods for Volumetric Light Transport Simulation]
(http://iliyan.com/publications/VolumeSTAR/VolumeSTAR_EG2018.pdf).
The recursive Monte Carlo estimation of the Radiative Transfer Equation is
\[\langle L \rangle=\frac{\bar T(x\rightarrow y)}{\bar p(x\rightarrow y)}(L_e+\sigma_s L_s + \sigma_n L),\]
where \(\bar T(x\rightarrow y) = e^{-\bar\sigma\Vert x-y\Vert}\) is the
majorant transmittance between points \(x\) and \(y\), and
\(\bar p(x\rightarrow y) = \bar\sigma e^{-\bar\sigma\Vert x-y\Vert}\) is the
probability of sampling point \(y\) from point \(x\) following an
exponential distribution.
At each recursive step, we randomly pick one of the two events
proportional to their weights:
* If \(\xi < \frac{\sigma_s}{\sigma_s+\vert\sigma_n\vert}\), we sample a
scattering event and evaluate \(L_s\).
* Otherwise, no real collision happens and we continue the recursive
process.
The emission \(L_e\) is evaluated at each step.
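As a rough, self-contained sketch of this estimator (grayscale, with stand-in
callbacks for density, emission and in-scattering; none of these names are the
actual Cycles kernel API):

```cpp
#include <cmath>
#include <random>

/* Hypothetical stand-ins for the volume callbacks; not the Cycles kernel API. */
struct Medium {
  float (*sigma_t)(float t);   /* extinction (density) along the ray */
  float (*emission)(float t);  /* L_e */
  float (*inscatter)(float t); /* estimate of L_s, e.g. from light sampling */
  float albedo;                /* sigma_s = albedo * sigma_t */
  float majorant;              /* sigma_bar, ideally >= sigma_t everywhere */
};

float delta_track(const Medium &m, float tmin, float tmax, std::mt19937 &rng)
{
  std::uniform_real_distribution<float> u(0.0f, 1.0f);
  float L = 0.0f, weight = 1.0f, t = tmin;
  for (;;) {
    /* Sample the next tentative collision from the exponential distribution
     * p_bar(t) = sigma_bar * exp(-sigma_bar * t), so T_bar / p_bar = 1 / sigma_bar. */
    t -= std::log(1.0f - u(rng)) / m.majorant;
    if (t >= tmax) {
      break; /* left the volume; the surface/background term is omitted here */
    }
    const float sigma_t = m.sigma_t(t);
    const float sigma_s = m.albedo * sigma_t;
    const float sigma_n = m.majorant - sigma_t; /* null density, may be negative */
    weight /= m.majorant;        /* T_bar / p_bar */
    L += weight * m.emission(t); /* L_e is evaluated at every step */
    /* One-sample estimate of sigma_s * L_s + sigma_n * L: pick one of the two
     * events proportional to its weight. */
    const float denom = sigma_s + std::fabs(sigma_n);
    if (denom == 0.0f) {
      break; /* purely absorbing at the majorant: no scattering or null weight left */
    }
    const float p_scatter = sigma_s / denom;
    if (u(rng) < p_scatter) {
      L += weight * (sigma_s / p_scatter) * m.inscatter(t);
      break;
    }
    weight *= sigma_n / (1.0f - p_scatter); /* null collision: keep marching */
  }
  return L;
}
```

When `sigma_n` is negative (the real density exceeds the majorant), the
null-collision weight flips sign, which keeps the estimator unbiased at the
cost of extra variance.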
This also removes some unused volume settings from the UI:
* "Max Steps" is removed, because the step size is automatically specified
by the volume octree. There is a hard-coded threshold `VOLUME_MAX_STEPS`
to prevent numerical issues.
* "Homogeneous" is automatically detected during density evaluation
An option "Unbiased" is added to the UI. When enabled, densities above
the majorant are clamped.
Since we sample the same light for distance sampling and equiangular
sampling, the sample is invalid anyway, so just avoid sampling direct
light for distance sampling too.
This fits better with the way normal and displacement maps are typically
combined. Previously the displaced normal was mixed with the undisplaced
tangent, which was broken behavior.
Additionally, the undisplaced_N and undisplaced_tangent attributes must now
always be used to get undisplaced coordinates. The regular N and tangent
attributes now always include displacement.
Ref #142022
Pull Request: https://projects.blender.org/blender/blender/pulls/143109
The Principled BSDF has a ton of inputs, and the previous SVM code just always
allocated stack space for all of them. This results in many additional
NODE_VALUE_x SVM nodes, which slow down execution.
However, this is not really needed for two reasons:
- First, many inputs are only used conditionally. For example, if the
subsurface weight is zero, none of the other subsurface inputs are used.
- Many of the inputs have a "usual" value that they will have in most
materials, so if they happen to have that value we can just indicate that
by not allocating space for them.
This is a bit similar to the standard "pack the fixed value and provide
a stack offset if there's a link" pattern, except that the fixed value
is a constant in the code and we allocate a NODE_VALUE_x if a different
fixed value is used.
Therefore, this PR re-implements the parameter packing in a more efficient way:
- If we can determine that a component is disabled, all conditional inputs are
disconnected (to avoid generating upstream nodes).
- If we can determine that a component is disabled, we skip allocating all
conditional inputs on the stack.
- The inputs for which a reasonable "usual" value exists are changed to
respect that, and to only be allocated if they differ.
- param1 and param2 (which are fixed-value-packed as on all BSDF nodes) are
used to store IOR and roughness, which have a decent chance to be fixed
values.
- The parameter packing is more aggressive about using uchar4, which makes it
possible to get rid of two SVM nodes while still storing the same inputs.
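A rough sketch of that allocation rule (hypothetical compiler-side code, not
the actual Cycles `SVMCompiler` API):

```cpp
/* Hypothetical sketch of the "usual value" packing pattern described above.
 * Names and layout are illustrative only. */
struct ShaderInput {
  bool linked;      /* is there an upstream node connected? */
  float value;      /* constant value when not linked */
  int stack_offset; /* SVM stack slot, or SVM_STACK_INVALID */
};

constexpr int SVM_STACK_INVALID = 255;

/* Allocate a stack slot only if the input is linked or differs from the value
 * the kernel will assume by default (the "usual" value). */
int stack_assign_if_nondefault(ShaderInput &input, float usual_value, int &next_slot)
{
  if (!input.linked && input.value == usual_value) {
    /* Nothing to emit: the kernel falls back to the compiled-in constant. */
    return SVM_STACK_INVALID;
  }
  if (input.stack_offset == SVM_STACK_INVALID) {
    input.stack_offset = next_slot++;
    /* For an unlinked but non-default value, a NODE_VALUE_x node would be
     * emitted here to write `input.value` into the slot. */
  }
  return input.stack_offset;
}
```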
The result is a considerable speedup in scenes that make heavy use of the
Principled BSDF:
| Scene | CPU speedup | OptiX speedup |
| --- | --- | --- |
| attic | 5% | 9% |
| bistro | 5% | 8% |
| junkshop | 5% | 10% |
| monster | 3% | 4% |
| spring | 1% | 6% |
Pull Request: https://projects.blender.org/blender/blender/pulls/143910
This PR adds a new `fresnel_conductor_polarized` function, which calculates reflectance and phase shift (if requested) for both parallel and perpendicular polarized light. This is needed for applying thin film iridescence to conductors (see !141131).
For consistency, this PR also makes `fresnel_conductor` call `fresnel_conductor_polarized` instead of using a fast approximation of the Fresnel equations that is inaccurate at lower n and k values. This will change the output of some Metallic BSDF renders using Physical Conductor and prevent discrepancies when enabling thin film iridescence.
I didn't do any rigorous performance testing, but from timing the functions outside of Blender, `fresnel_conductor_polarized` is significantly slower than the approximation, between 1.5-3x depending on the compiler. This makes sense because it has three square roots and the approximation has none. In some informal tests with metallic_multiggx_physical.blend modified to have more spheres, the new renders took around 1-2% longer on both CPU and GPU.
There are some avoidable inefficiencies in this approach of just calling `fresnel_conductor_polarized`:
- one of the three square roots could be saved since `fresnel_conductor` never needs the phase shift and there are simplifications possible when only calculating the reflectance
- there are several unnecessary multiplications by 1.0 since `fresnel_conductor` uses relative IOR and `fresnel_conductor_polarized` doesn't, though those could get optimized out if inlined
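For reference, a textbook-style sketch of the polarized conductor Fresnel terms using `std::complex` (incidence from vacuum; this is only an illustration, not the Cycles kernel code, which avoids complex arithmetic):

```cpp
#include <cmath>
#include <complex>

struct FresnelPolarized {
  float R_s, R_p;     /* reflectance, perpendicular (s) and parallel (p) */
  float phi_s, phi_p; /* phase shift on reflection, s and p */
};

/* cos_theta_i: cosine of the incident angle; n + i*k: complex IOR of the conductor.
 * The sign of the phase depends on the chosen IOR/time convention. */
FresnelPolarized fresnel_conductor_polarized_sketch(float cos_theta_i, float n, float k)
{
  const std::complex<float> eta(n, k);
  const float sin2_theta_i = 1.0f - cos_theta_i * cos_theta_i;
  /* Snell's law with a complex IOR gives a complex transmitted cosine. */
  const std::complex<float> cos_theta_t = std::sqrt(1.0f - sin2_theta_i / (eta * eta));
  /* Amplitude reflection coefficients. */
  const std::complex<float> r_s =
      (cos_theta_i - eta * cos_theta_t) / (cos_theta_i + eta * cos_theta_t);
  const std::complex<float> r_p =
      (eta * cos_theta_i - cos_theta_t) / (eta * cos_theta_i + cos_theta_t);
  FresnelPolarized f;
  f.R_s = std::norm(r_s); /* |r|^2 */
  f.R_p = std::norm(r_p);
  f.phi_s = std::arg(r_s);
  f.phi_p = std::arg(r_p);
  return f;
}
```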
Pull Request: https://projects.blender.org/blender/blender/pulls/143903
This replaces `stack_assign` with `stack_assign_if_linked`, which should save a few SVM nodes for constant parameters.
Running benchmarks (all scenes in the benchmark repo, 3 runs, median value for each) shows 1.0% improvement on CPU and 1.5% on OptiX. Not huge, but fairly consistent (all between -0.2% and 3.0%).
Pull Request: https://projects.blender.org/blender/blender/pulls/143404
This change puts all the block size macros in the same common header, so
they can be included in host-side code without also pulling in the
kernels defined in the device headers that previously contained these
values.
This change also removes a magic number used to enqueue a kernel, which
happened to agree with the GPU_PARALLEL_SORT_BLOCK_SIZE macro.
Pull Request: https://projects.blender.org/blender/blender/pulls/143646
`Eavg` can still be 1 for very small roughness, causing division by zero
when computing `Ems`.
A roughness of 1e-5 gives an `Eavg` of 0.999998, which seems reasonable.
Pull Request: https://projects.blender.org/blender/blender/pulls/143637
Add new "Linear 3D Curves" option in the Curves panel in the render
properties. This renders curves as linear segments rather than smooth
curves, for faster render time at the cost of accuracy.
On NVIDIA Blackwell GPUs, this can give a 6x speedup compared to smooth
curves, due to hardware acceleration. On NVIDIA Ada there is still
a 3x speedup, and CPU and other GPU backends will also render this
faster.
A difference from smooth curves is that these have end caps, as this
was simpler to implement and they are usually helpful anyway.
In the future this functionality will also be used to properly support
CURVE_TYPE_POLY on the new curves object.
Pull Request: https://projects.blender.org/blender/blender/pulls/139735
Update use_shading, use_camera and the shading system pointers in the same
location, so that when the render is interrupted they are in a consistent state.
The added null pointer checks are not strictly needed, but are there just
in case it goes out of sync for another reason.
Pull Request: https://projects.blender.org/blender/blender/pulls/143467
`sd->dPdu`, `sd->dPdv`, `sd->du` and `sd->dv` are computed from
`sd->dP` by constructing a local frame, so both results are the same, subject to some numerical differences.
This avoids constructing the local frame again, so it might be faster.
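For illustration, a generic branchless orthonormal-basis construction of the
kind used to build such a local frame (a standalone sketch, not necessarily
the exact Cycles helper):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

/* Build two unit vectors orthogonal to the unit vector n and to each other. */
void make_orthonormals(const float3 &n, float3 *a, float3 *b)
{
  const float sign = std::copysign(1.0f, n.z);
  const float c = -1.0f / (sign + n.z);
  const float d = n.x * n.y * c;
  *a = float3{1.0f + sign * n.x * n.x * c, sign * d, -sign * n.x};
  *b = float3{d, sign + n.y * n.y * c, -n.y};
}
```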
It makes it possible to implement tricks based on knowledge of whether the
path ever came through a portal or not, and even something more advanced
based on the number of portals.
The main current objective is stroke shading: the stroke shader
uses the Ray Portal BSDF to place the ray at the center of the stroke and
point it in the direction of the surface it is generated for. This
gives the stroke a single color which matches the shading of the original
object. For this use case to work, the ray bounced from the original
surface should ignore the strokes, which is now possible by using the
Portal Depth input and mixing with the Transparent BSDF. It also
helps to make shading look better when there are multiple stroke
layers.
The solution of using a portal depth is chosen over a single flag for
various reasons:
- The last time we looked into it, it was a bit tricky to implement
as a flag because we are running out of bits.
- It feels like a more flexible solution, even though it is a bit
hard to come up with a 100% compelling setup for it.
- It would need to be slightly different from the current "Is Foo"
flags, and be more of an "Is Portal Descendant" or something.
An extra uint16 is added to the state to count the portal depth,
but it is only allocated for scenes that use Ray Portal BSDF.
Portal BSDF still increments the Transparent bounce, as some "limiting"
factor is required so that the ray does not keep getting moved to
different places in the scene indefinitely.
Ref #125213
Pull Request: https://projects.blender.org/blender/blender/pulls/143107
When "generated" is required via Attribute Node, it is not available for
World and Point Cloud.
Make OSL match the SVM behavior to use the object coordinates
(see `svm/attribute.h`),
Pull Request: https://projects.blender.org/blender/blender/pulls/143198
Previously with adaptive subdivision this happened to work with the N
attribute, but that was not meant to be undisplaced. This adds a new
undisplaced_N attribute specifically for this purpose.
For backwards compatibility in Blender 4.5, this also keeps N undisplaced.
But that will be changed in 5.0.
Pull Request: https://projects.blender.org/blender/blender/pulls/142090
Currently MetalRT renders always use extended limits, which is needed to correctly render scenes where the max primitive count can exceed 2^28 or the instance count can exceed 2^24. This patch adopts Metal best practices of only enabling this flag if it is needed.
This PR is similar to #133364, but there are some notable differences:
1) The old PR made an overly optimistic assumption that all the relevant visibility bits could be squeezed into 8 bits. This new PR adopts the same approach that OptiX takes of using 8 bits as a primary HW filter, and checking the full 32-bit mask inside the SW intersection handler.
2) The limits are scanned in `Scene::update_kernel_features`, and given to the device by a new `set_bvh_limits` method which returns true if the BVH and kernels need to be reloaded.
Pull Request: https://projects.blender.org/blender/blender/pulls/142401
When `delta` is significantly larger than the difference between `tmin`
and `tmax`, the precision of `theta_a` and `theta_b` reduces, resulting
in banding artefacts.
For equiangular sampling, we use uniform sampling to fix this case.
For the light tree, we use the identity
atan(a) + atan(b) = atan2(a + b, 1 - a*b).
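As a small illustration (hypothetical helper, not the actual kernel function):

```cpp
#include <cmath>

/* Computes atan(a) + atan(b) in a single call. atan2 resolves the branch when
 * a * b > 1, and when a and b have opposite signs and similar magnitudes the
 * sum a + b is formed before the arctangent, avoiding the cancellation between
 * two separately rounded angles. */
inline float atan_sum(float a, float b)
{
  return std::atan2(a + b, 1.0f - a * b);
}
```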
Pull Request: https://projects.blender.org/blender/blender/pulls/142845
The code of the "oneapi_load_kernels" function before this modification
was loading kernels and compiling them, if needed, for all devices in
the associated GPU context. This makes sense for a single-GPU scenario, as
well as for multiple identical GPUs, but when users render with several
different GPUs, the previous implementation would compile the kernels for
all devices once per device, unnecessarily doing the same work multiple
times. Because of this, the implementation is changed so that each device
now compiles only its own kernels, ensuring that no unnecessary work is
done.
No render performance changes are expected.
This improves the existing scene specialisation mechanism by replacing "kernel_data.kernel_features" with a function constant. It doesn't cause any additional compilation requests, but allows the backend compiler to eliminate more dead code. An additional compiler hint is provided for dead-stripping "volume_stack_enter_exit" which results in slightly faster rendering of non-volumetric scenes.
Pull Request: https://projects.blender.org/blender/blender/pulls/142235
Previously, we used precomputed Gaussian fits to the XYZ CMFs, performed
the spectral integration in that space, and then converted the result
to the RGB working space.
That worked because we're only supporting dielectric base layers for
the thin film code, so the inputs to the spectral integration
(reflectivity and phase) are both constant w.r.t. wavelength.
However, this will no longer work for conductive base layers.
We could handle reflectivity by converting to XYZ, but that won't work
for phase since its effect on the output is nonlinear.
Therefore, it's time to do this properly by performing the spectral
integration directly in the RGB primaries. To do this, we need to:
- Compute the RGB CMFs from the XYZ CMFs and XYZ-to-RGB matrix
- Resample the RGB CMFs to be parametrized by frequency instead of wavelength
- Compute the FFT of the CMFs
- Store it as a LUT to be used by the kernel code
However, there are two optimizations we can make:
- Both the resampling and the FFT are linear operations, as is the
XYZ-to-RGB conversion. Therefore, we can resample and Fourier-transform
the XYZ CMFs once, store the result in a precomputed table, and then just
multiply the entries by the XYZ-to-RGB matrix at runtime.
- I've included the Python script used to compute the table under
`intern/cycles/doc/precompute`.
- The reference implementation by the paper authors [1] simply stores the
real and imaginary parts in the LUT, and then computes
`cos(shift)*real + sin(shift)*imag`. However, the real and imaginary parts
are oscillating, so the LUT with linear interpolation is not particularly
good at representing them. Instead, we can convert the table to
Magnitude/Phase representation, which is much smoother, and do
`mag * cos(phase - shift)` in the kernel (see the identity below).
- Phase needs to be unwrapped to handle the interpolation decently,
but that's easy.
- This requires an extra trig operation in the kernel in the dielectric case,
but for the conductive case we'll actually save three.
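The magnitude/phase form follows from a standard identity: writing
\(\text{real} = m\cos\varphi\) and \(\text{imag} = m\sin\varphi\),
\[\cos(s)\,\text{real} + \sin(s)\,\text{imag} = m(\cos s\cos\varphi + \sin s\sin\varphi) = m\cos(\varphi - s),\]
so storing \((m, \varphi)\) instead of \((\text{real}, \text{imag})\) reproduces
the same result with a single cosine of the unwrapped phase minus the shift.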
Rendered output is mostly the same, just slightly different because we're
no longer using the Gaussian approximation.
[1] "A Practical Extension to Microfacet Theory for the Modeling of
Varying Iridescence" by Laurent Belcour and Pascal Barla,
https://belcour.github.io/blog/research/publication/2017/05/01/brdf-thin-film.html
Pull Request: https://projects.blender.org/blender/blender/pulls/140944
Supporting this on the Metallic BSDF will require some extra work,
and on the Glossy BSDF it doesn't make much sense conceptually
(for that kind of shader setup, we'll want to support layering in SVM),
but the Glass BSDF just needs to be hooked up, so we might as well do that.
Pull Request: https://projects.blender.org/blender/blender/pulls/140832
Detect which volume attribute nodes have a linear mapping to their usage
as density / color / temperature in volume shader nodes, and use stochastic
sampling for them.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908
Stochastically turn a tricubic filter into a trilinear one. This
reduces the number of taps from 64 to 8. It combines ideas from
the "Stochastic Texture Filtering" paper and our previous GPU
sampling of 3D textures.
This is currently only used in a few places where we know stochastic
interpolation is valid or close enough in practice.
* Principled volume density, color and temperature
* Motion blur velocity
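A hypothetical per-axis sketch of the idea (illustrative names and weights,
not the actual Cycles kernel code): a cubic B-spline filter can be written as
two linear lookups per axis, and picking one of them stochastically in
proportion to its weight is what collapses the 4x4x4 = 64 taps down to the
8 taps of a single trilinear lookup.

```cpp
/* Given the in-voxel fraction `frac` in [0, 1) along one axis and a uniform
 * random number `u`, return the offset (relative to the base voxel index) at
 * which a single linear lookup should be placed. Choosing the "left" or
 * "right" linear lookup with probability proportional to its combined weight
 * is an unbiased estimate of the full cubic B-spline filter along that axis. */
inline float stochastic_bspline_offset(float frac, float u)
{
  const float t = frac, t2 = t * t, t3 = t2 * t;
  /* Cubic B-spline weights of the taps at integer offsets -1, 0, 1, 2. */
  const float w0 = (1.0f - 3.0f * t + 3.0f * t2 - t3) * (1.0f / 6.0f);
  const float w1 = (4.0f - 6.0f * t2 + 3.0f * t3) * (1.0f / 6.0f);
  const float w2 = (1.0f + 3.0f * t + 3.0f * t2 - 3.0f * t3) * (1.0f / 6.0f);
  const float w3 = t3 * (1.0f / 6.0f);
  const float g0 = w0 + w1; /* total weight of the left pair (offsets -1, 0) */
  /* A linear lookup at offset -1 + w1/g0 evaluates to (w0*v[-1] + w1*v[0]) / g0,
   * and likewise for the right pair, so picking one side with probability g0
   * leaves the expected value equal to the deterministic cubic result. */
  return (u < g0) ? (-1.0f + w1 / g0) : (1.0f + w3 / (w2 + w3));
}
```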
On a MacBook Pro M3 with the openvdb_smoke.blend regression test
and cubic sampling, this gives a ~2x speedup for CPU and ~4x speedup
for GPU. However, it also increases noise, usually only a little. Equal
time renders for this scene show a clear reduction in noise for both
CPU and GPU.
Note we can probably get a bigger speedup with acceptable noise trade-off
using full stochastic sampling, but will investigate that separately.
Pull Request: https://projects.blender.org/blender/blender/pulls/132908