The Map UV and Displace nodes produce unexpected outputs. That's because
the derivatives computed for anisotropic filtering were computed in the
sampler's space, while they should be in texel space, as expected by the
textureGrad function.
The Corner Pin and Plane Track Deform nodes always return a single color
that appears to be the average of the input. That's because the
derivatives were computed in the sampler's space, while they should be
in texel space. The resulting large derivatives meant that the
textureGrad function would always sample the lowest MIP level, hence the
constant average color.
This might be an issue with other uses of textureGrad in the compositor,
so their use should be investigated.
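For illustration, converting a derivative from normalized sampler coordinates
to texel space amounts to scaling by the texture size. This is a minimal
sketch with placeholder names, not the actual compositor code:

```cpp
/* Placeholder 2D vector type. */
struct float2 {
  float x, y;
};

static float2 normalized_to_texel_space(const float2 &derivative, int width, int height)
{
  /* A derivative expressed per normalized coordinate unit spans the whole
   * texture, so scaling by the texture size expresses it per texel. */
  return {derivative.x * float(width), derivative.y * float(height)};
}
```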
The Blur node takes too long to execute even in a simple configuration.
That's because the CPU compositor uses variable-size blurring even when
the size input is constant. So ensure the size input is actually
variable before using variable-size blurring.
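A minimal sketch of that check, with hypothetical names (SizeInput,
blur_constant_size, blur_variable_size) rather than the actual compositor API:

```cpp
struct SizeInput {
  bool is_constant;
  float constant_size;
};

static void blur_constant_size(float /*size*/) { /* Cheap fixed-size path. */ }
static void blur_variable_size(const SizeInput & /*size*/) { /* Expensive per-pixel path. */ }

static void blur(const SizeInput &size_input)
{
  /* Variable-size blurring is much slower, so only use it when the size
   * input actually varies per pixel. */
  if (size_input.is_constant) {
    blur_constant_size(size_input.constant_size);
  }
  else {
    blur_variable_size(size_input);
  }
}
```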
This is because sse2neon.h might be used to emulate SSE intrinsics
on the ARM64 architecture, and it uses some preprocessor features
that are not available for the C language when using MSVC.
The old-style math file math_matrix.c uses this header, so it needed
to become C++. A simple rename did not work since the new math
utility math_matrix.cc already exists. Following an existing
convention, math_matrix.c is renamed to math_matrix_c.cc. Eventually
all the code should switch to the C++ style math API and the C style
should be removed, so it seems reasonable not to mix the old and new
style APIs in the same file.
There should be no functional changes.
Pull Request: https://projects.blender.org/blender/blender/pulls/121335
This patch implements the Fast Gaussian blur mode for the Realtime
Compositor. This is a faster but less accurate implementation of
Gaussian blur.
This is implemented as a recursive Gaussian blur algorithm based on the
general method outlined in the following paper:
Hale, Dave. "Recursive gaussian filters." CWP-546 (2006).
In particular, based on the table in Section 5 Conclusion, for very low
radius blur, we use a direct separable Gaussian convolution. For medium
blur radius, we use the fourth order IIR Deriche filter based on the
following paper:
Deriche, Rachid. Recursively implementating the Gaussian and its
derivatives. Diss. INRIA, 1993.
For high radius blur, we use the fourth order IIR Van Vliet filter based
on the following paper:
Van Vliet, Lucas J., Ian T. Young, and Piet W. Verbeek. "Recursive
Gaussian derivative filters." Proceedings. Fourteenth International
Conference on Pattern Recognition (Cat. No. 98EX170). Vol. 1. IEEE,
1998.
That's because direct convolution is faster and more accurate for very
low radii, the Deriche filter is more accurate for medium blur radii,
and the Van Vliet filter is more accurate for high blur radii. The
criteria suggested by the paper are sigma thresholds of 3 and 32 for the
Deriche and Van Vliet filters respectively, which we apply on the larger
of the two dimensions.
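A sketch of that selection logic, using hypothetical names; the exact boundary
handling at the thresholds is an assumption:

```cpp
enum class FastGaussianMethod { DirectConvolution, Deriche, VanVliet };

static FastGaussianMethod choose_method(float sigma_x, float sigma_y)
{
  /* Judge on the larger of the two dimensions, as described above. */
  const float sigma = sigma_x > sigma_y ? sigma_x : sigma_y;
  if (sigma < 3.0f) {
    return FastGaussianMethod::DirectConvolution;
  }
  if (sigma < 32.0f) {
    return FastGaussianMethod::Deriche;
  }
  return FastGaussianMethod::VanVliet;
}
```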
Both the Deriche and Van Vliet filters are numerically unstable for high
blur radius. So we decompose the Van Vliet filter into a parallel bank
of smaller second order filters based on the method of partial fractions
discussed in the book:
Oppenheim, Alan V. Discrete-time signal processing. Pearson Education
India, 1999.
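As an illustration of that parallel decomposition, assuming the four poles
form two distinct complex conjugate pairs; the symbols are generic
placeholders, not the actual Van Vliet coefficients:

```latex
% A fourth order all-pole filter whose poles form two distinct complex
% conjugate pairs splits, via partial fractions, into a parallel sum of
% two real second order sections.
H(z) = \frac{1}{\prod_{i=1}^{2} (1 - p_i z^{-1})(1 - \bar{p}_i z^{-1})}
     = \frac{c_1 + d_1 z^{-1}}{1 + b_{1,1} z^{-1} + b_{1,2} z^{-2}}
     + \frac{c_2 + d_2 z^{-1}}{1 + b_{2,1} z^{-1} + b_{2,2} z^{-2}}
```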
We leave the Deriche filter as is since it is only used for low radii
anyway.
Compared to the CPU implementation, this implementation is more accurate
but less numerically stable, since the CPU implementation uses doubles,
which are not feasible on the GPU.
The only change of behavior between the CPU implementation and this one
is that this implementation uses the same radius, so Fast Gaussian will
match normal Gaussian, while the CPU implementation has a radius that is
1.5x the size of normal Gaussian. A patch to change the CPU behavior is
#121211.
Pull Request: https://projects.blender.org/blender/blender/pulls/120431
This patch matches the size of the Fast Gaussian mode of blur with the
standard Gaussian mode. The sigma value was computed as half the radius,
while it should be a third of the radius, since Blender's Gaussian
function is truncated at 3 standard deviations of the unit Gaussian.
The patch includes versioning to adjust the size of existing files.
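A minimal sketch of the corrected relation, with an illustrative function
name:

```cpp
/* The kernel is truncated at 3 standard deviations, so a blur radius covers
 * 3 * sigma. */
static float sigma_from_radius(float radius)
{
  return radius / 3.0f; /* Previously computed as radius / 2.0f. */
}
```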
Pull Request: https://projects.blender.org/blender/blender/pulls/121211
Blender crashes when a File Output node is created while the Use Preview
option of the render output settings is enabled. This is a regression
introduced in 931c188ce5, where the image format of the node was
initialized from the scene image settings, so the preview option was
carried over, even though it is neither supported nor exposed to the
user in the File Output node. To fix this, ensure the file output code
ignores previews.
The compositor used to have a feature that would calculate tiles for the
viewer based on a custom order. Since the removal of the tile based
compositor, this code is unused.
Pull Request: https://projects.blender.org/blender/blender/pulls/121176
Blender crashes when the user scales up an image to huge sizes. This is
due to integer overflow when doing memory allocations. So we fix this by
using larger integers when doing math on memory allocations and
indexing.
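A sketch of the kind of fix involved, using illustrative names rather than the
actual allocation code:

```cpp
#include <cstdint>
#include <cstdlib>

static float *allocate_image(int width, int height, int channels)
{
  /* With plain int arithmetic, width * height * channels can exceed INT_MAX
   * for huge images and wrap around before reaching the allocator. Promote
   * to 64-bit integers before multiplying. */
  const int64_t pixel_count = int64_t(width) * int64_t(height);
  const size_t buffer_size = size_t(pixel_count) * size_t(channels) * sizeof(float);
  return static_cast<float *>(std::malloc(buffer_size));
}
```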
Note that while this solves the random crashes, users scaling up images
to huge sizes will face things like the OOM killer activating or, at
best, significant slowdowns due to swapping.
It is not clear how to handle such cases, but something like a global
maximum size option that is set to 16k by default might be worth adding.
Pull Request: https://projects.blender.org/blender/blender/pulls/120921
The File Output node writes single-element values as full images in 4.1,
while such values were skipped in 4.0. This included invalid outputs,
for instance, when the Render Layers node does not have a result for the
selected view layer, which would then just write an image with an
arbitrary color.
To fix this, we detect single element values and skip writing file
outputs for them.
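A minimal sketch of that check, with hypothetical types; the actual File
Output code differs:

```cpp
struct Result {
  bool is_single_value;
};

static bool should_write_output(const Result &result)
{
  /* A single-value result carries no actual image data, for example when the
   * Render Layers node has no render for the selected view layer, so writing
   * it would only produce an image filled with an arbitrary color. */
  return !result.is_single_value;
}
```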
Pull Request: https://projects.blender.org/blender/blender/pulls/120749
This patch replaces the is_set_operation flag with the
is_constant_operation flag to allow input constants to propagate
through the node tree using the constant folder.
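A sketch of the folding rule this enables, with hypothetical types rather than
the actual compositor classes:

```cpp
#include <vector>

struct Operation {
  bool is_constant_operation;
  std::vector<Operation *> inputs;
};

static bool can_constant_fold(const Operation &operation)
{
  /* An operation can be replaced by a constant when all of its inputs are
   * themselves constant operations, which input constants now are. */
  for (const Operation *input : operation.inputs) {
    if (!input->is_constant_operation) {
      return false;
    }
  }
  return true;
}
```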
This PR adds a context function to consider all
buffer bindings obsolete. This is in order to
track missing binds and invalid lingering states
across `draw::Pass`es.
The functions `GPU_storagebuf_debug_unbind_all`
and `GPU_uniformbuf_debug_unbind_all` do nothing
more than resetting the internal debug slot bits
to zero. This is what the OpenGL backend does, as
it doesn't track the bindings themselves.
Other backends might have other ways to detect
missing bindings. If not, they should be
implemented separately anyway.
I renamed the function to `debug_unbind_all` to
denote that it actually does something related to
debugging.
This also adds an SSBO binding check for OpenGL,
as it was also missing.
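A hypothetical usage sketch based on the description above; the zero-argument
signatures and the exact call sites are assumptions, and the pass functions
are placeholders:

```cpp
/* Functions described above; the signatures are assumed. */
void GPU_uniformbuf_debug_unbind_all();
void GPU_storagebuf_debug_unbind_all();

/* Placeholder passes. */
void draw_first_pass();
void draw_second_pass();

void submit_passes()
{
  draw_first_pass();

  /* Consider every buffer binding obsolete, so a later pass that forgets to
   * rebind its resources trips the debug binding check instead of silently
   * reusing a lingering state. */
  GPU_uniformbuf_debug_unbind_all();
  GPU_storagebuf_debug_unbind_all();

  draw_second_pass();
}
```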
#### Future
This error checking logic is pretty much backend
agnostic. While it would be nice to move it to the
`gpu::Context` level, we don't have the resources
for that now.
Pull Request: https://projects.blender.org/blender/blender/pulls/120716
The compositor execute functions have a `rendering` argument to specify
if the compositor is executing as part of the render pipeline. But the
render context argument is null if we are not rendering, so the
`rendering` argument is redundant and can be removed.
Additionally, we no longer use use_file_output as a hack to detect
rendering.
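A minimal sketch of the simplification, with RenderContext used as a
placeholder name:

```cpp
/* Forward declaration of a placeholder render context type. */
struct RenderContext;

static bool is_rendering(const RenderContext *render_context)
{
  /* The render context is only non-null when executing as part of the render
   * pipeline, so the separate flag can be derived instead of passed around. */
  return render_context != nullptr;
}
```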
Pull Request: https://projects.blender.org/blender/blender/pulls/120659
This patch improves the interactivity of the GPU compositor for
interactive node tree edits by waiting on GPU work to finish to support
more granular canceling.
This does have a performance penalty, but it does not affect final
rendering, and it seems worth it because even though evaluation is
slower overall, it feels more responsive to users.
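A conceptual sketch of the evaluation loop this implies, with placeholder
types; the real evaluator is structured differently:

```cpp
#include <atomic>
#include <vector>

struct Operation {
  void submit_to_gpu() {}
};

static void wait_for_gpu() {}

static void evaluate(std::vector<Operation> &operations, const std::atomic<bool> &is_canceled)
{
  for (Operation &operation : operations) {
    operation.submit_to_gpu();

    /* Waiting here costs some CPU/GPU overlap, but it creates a point where
     * a cancel request from an interactive edit can be observed. */
    wait_for_gpu();

    if (is_canceled.load()) {
      return;
    }
  }
}
```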
Pull Request: https://projects.blender.org/blender/blender/pulls/120656
This patch adds support for inter-operation canceling. Note, however,
that canceling will only actually take place in certain circumstances,
because operations are already submitted to the GPU by that point and
can't be canceled.
However, for operations that do CPU<->GPU transfers, like OIDN denoise,
which is one of the most expensive operations, this will actually cancel
the evaluation and greatly improve interactivity.
Pull Request: https://projects.blender.org/blender/blender/pulls/119917
The Vector Blur implementation slightly differs between CPU and GPU due
to different precision in the sqrt2 constant, so use the float variant
on the CPU to make it match the GPU more closely.
This patch ports the GPU Vector Blur node to the CPU, which is in turn
ported from EEVEE. This is a breaking change since it produces different
motion blur results that are more similar to EEVEE's motion blur.
Further, the Curved, Minimum, and Maximum options were removed on the
user level since they are not used in the new implementation.
There are no significant changes to the code, except in the max velocity
computation as well as the velocity dilation passes. The GPU code uses
atomic indirection buffers, while the CPU runs single threaded for the
dilation pass, since it is a fast pass anyway. However, we impose
artificial constraints on the precision of the dilation process for
compatibility with the atomic implementation.
There are still tiny differences between CPU and GPU that I haven't been
able to solve, but I shall solve them in a later patch.
Pull Request: https://projects.blender.org/blender/blender/pulls/120135
Set the w component to 1 in the Float to Vector implicit conversion for
the Realtime Compositor. That's because the Vector Blur node expects a
vector for the velocity input, but the CPU compositor fakes it as a
color input, which assumes a fourth component of 1.
Retain the alpha channel in the Color to Vector implicit conversion for
the Realtime Compositor. That's because speed passes are exposed as
color sockets, so connecting them to a node like Vector Blur would lose
the y component of the next-frame velocity vectors.
The CPU compositor handles that by faking vector sockets as color
sockets internally, while the Realtime Compositor does not do that.
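A sketch of the two conversions, using a placeholder float4 type rather than
the actual compositor math types:

```cpp
struct float4 {
  float x, y, z, w;
};

static float4 float_to_vector(float value)
{
  /* Fill the fourth component with 1, matching what the CPU compositor
   * assumes when faking vectors as colors. */
  return {value, value, value, 1.0f};
}

static float4 color_to_vector(const float4 &color)
{
  /* Retain the alpha channel so four-component speed passes are not
   * truncated. */
  return color;
}
```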
This patch unifies the Defocus node between the CPU and GPU compositors.
Both nodes now use a variable-size bokeh kernel that is always odd-sized
to guarantee a center pixel. Further, the CPU implementation now
properly handles half pixel offsets when doing interpolation, and always
sets the threshold to zero, similar to the GPU implementation.
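A minimal sketch of the odd-size guarantee, with an illustrative function
name:

```cpp
static int bokeh_kernel_size(int radius)
{
  /* 2 * radius + 1 is always odd, so the kernel has an exact center pixel. */
  return 2 * radius + 1;
}
```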
This patch unifies the CPU and GPU implementation for the Bokeh Image
node. The bokeh is now evaluated at the center of pixels and allows
different sizes for the output kernel.
The Sun Beams node is off by half a pixel. That's because we add a half
pixel offset to the initial coordinates, but then sample the texture
with the half pixel offset applied again. To fix this, use the
texture_bilinear_extend utility to sample the image, which takes care of
the half pixel offset.
The Lens Distortion node is off by half a pixel because its normalized
coordinates were at the pixel corners as opposed to their centers, so
this patch changes the behavior to the latter.
The Plane Track and Corner Pin nodes are off by half a pixel because
their mask is computed at the pixel corners as opposed to their centers,
so this patch changes the behavior to the latter.
The Directional Blur node is off by half a pixel because it transforms
the pixels at their corners as opposed to their centers, so this patch
changes the behavior to the latter.
The Movie Distortion node is off by half a pixel because it evaluates
the distortion at the corners of pixels as opposed to their centers, so
this patch changes the behavior to the latter.
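A sketch of the pixel-center convention these fixes converge on, with
illustrative names:

```cpp
/* Placeholder 2D vector type. */
struct float2 {
  float x, y;
};

static float2 pixel_center_coordinates(int x, int y, int width, int height)
{
  /* A pixel at integer position (x, y) is evaluated at its center rather
   * than its corner by adding half a pixel before normalizing. */
  return {(float(x) + 0.5f) / float(width), (float(y) + 0.5f) / float(height)};
}
```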
The compositor does not correctly free, wrap, or allocate images. That's
because the resetting mechanism didn't reset all members. This patch
resets all members and only retains the important ones.
This patch refactors the backdrop offset to be stored as a float instead
of an int and to be stored in the image runtime structure instead of the
image itself.
Pull Request: https://projects.blender.org/blender/blender/pulls/119877