We render the linear distance to the light into an R32 texture and store it as an octahedral projection inside a 2D texture array.
This makes the sampling function much simpler and avoids edge artifacts.
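For context, here is a minimal C sketch of the standard octahedral mapping this relies on (names and layout are illustrative, not the actual Blender code): a unit direction is projected onto the octahedron and unfolded into [0,1]^2, so a single 2D lookup per array layer replaces a cube map fetch.

  #include <math.h>

  typedef struct {
    float u, v;
  } OctUV;

  static OctUV octahedral_encode(float x, float y, float z)
  {
    /* Project the unit direction onto the octahedron |x|+|y|+|z| = 1. */
    float inv_l1 = 1.0f / (fabsf(x) + fabsf(y) + fabsf(z));
    float u = x * inv_l1;
    float v = y * inv_l1;
    /* Fold the lower hemisphere over the upper one. */
    if (z < 0.0f) {
      float fu = (1.0f - fabsf(v)) * (u >= 0.0f ? 1.0f : -1.0f);
      float fv = (1.0f - fabsf(u)) * (v >= 0.0f ? 1.0f : -1.0f);
      u = fu;
      v = fv;
    }
    /* Remap from [-1, 1] to [0, 1] texture coordinates. */
    OctUV r = {u * 0.5f + 0.5f, v * 0.5f + 0.5f};
    return r;
  }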
For example, when using a radius of 1, only 9 pixels (due to weighting maybe
even fewer) will be used, but the transform code may still decide to use a
5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the number of dimensions to a third of the
number of pixels. For a radius of 3 or more this doesn't change anything, but
for 1 and 2 it can prevent fireflies and/or negative values from being produced.
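In code terms, the cap amounts to something like the following hedged C sketch (variable names are illustrative, not the actual denoiser code):

  /* Limit the fit rank to a third of the contributing pixel count,
   * e.g. 9 pixels at radius 1 allow at most a 3-dimensional fit. */
  static int clamp_transform_rank(int rank, int num_pixels)
  {
    int max_rank = num_pixels / 3;
    return (rank < max_rank) ? rank : max_rank;
  }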
Once again, numerical instabilities were causing the Cholesky decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
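A hedged C sketch of the fallback, with cholesky_solve_fit standing in for the actual solver (names and signatures are assumptions, not the real kernel code):

  #include <stdbool.h>

  /* Assumed helper: returns false when the Cholesky decomposition of
   * the (regularized) design matrix fails. */
  bool cholesky_solve_fit(const float *XtWX, const float *XtWy, int rank, float *result);

  float filtered_pixel(const float *XtWX, const float *XtWy, int rank, float nlm_value)
  {
    float fitted;
    if (!cholesky_solve_fit(XtWX, XtWy, rank, &fitted)) {
      /* Numerical instability: use the basic NLM-filtered pixel instead
       * of increasing the diagonal correction further. */
      return nlm_value;
    }
    return fitted;
  }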
Shader stages need to agree on interpolation qualifiers. Apparently implicit smooth (the default) and explicit smooth are considered different by some GLSL compilers. Found by @letterrip on Linux + Intel.
Follow-up to 941e739d70
prefer vector math over scalar
prefer * over /
shorten vec3(x, x, x) to vec3(x)
use clamp, max, etc. instead of custom logic
declare loop vars as part of for loop
spacing
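A C-flavored sketch of a few of these points (the actual changes are in GLSL, where vec3(x) additionally replaces vec3(x, x, x) and vector math replaces per-component scalar math; the function here is purely illustrative):

  #include <math.h>

  static float half_average_clamped(const float *values, int n)
  {
    float sum = 0.0f;
    /* Loop variable declared as part of the for loop. */
    for (int i = 0; i < n; i++) {
      /* max() instead of hand-rolled branching. */
      sum += fmaxf(values[i], 0.0f);
    }
    /* Prefer * over / for constant factors. */
    return (sum / (float)n) * 0.5f;
  }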
vertexColor output was not being written --> shader failed to link --> assert hit while setting that shader's uniforms.
Vertex attribs are smooth by default, so I shortened the declaration.
@fclem or @dfelinto: is color = 0 ok here?
This still has a couple of issues:
* Instancing is not working when multiple particle systems use the same
primitive. Only the last particle system to be drawn with a particular
primitive shows up.
* Because of colors being passed as uniforms with static variables, the
  color of the collection of the last object to be evaluated is used for
  all particles being displayed.
Also, note that while this is being drawn in the clay engine, it might
be moved to the object mode later instead.
Part of T51378
This was introduced by a removed line in
rB0eb32ab22809d9c0c41bfff5082f58b1a7aa9965
If we want to change the value to a different default, it should be done
in a separate commit.
We only keep this as a way to get GPU_stubs to run, in case we want to do a
thorough cleanup of the codebase and want code using legacy calls to
fail to build.
There is no more point in keeping those around. ES20 may need special casing
when/if we dabble with it again. Meanwhile there is no point in polluting the
code with this.
(Ghost still has a reference to the PROFILE, but that's reasonable.)
I wouldn't mind switching fully to Google style, but I am against mixing
two different styles in the same project. So just stick to the brace on
a new line after a function definition.
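Sketched in C, the agreed convention looks like this (example code only):

  /* Opening brace on a new line after a function definition,
   * on the same line everywhere else. */
  static int square_magnitude(int x)
  {
    if (x < 0) {
      x = -x;
    }
    return x * x;
  }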
There were following issues with ccl_restrict_ptr:
- We already had ccl_restrict for all platforms.
- It was secretly adding a `const` qualifier to the declaration,
  which is quite weird since a non-const pointer can also be
  declared as restricted.
- We never use foo_ptr or FooPtr type definitions in Blender,
  so it's not clear why we should introduce such a thing here.
- It is semantically wrong to fold the pointer into the restrict
  macro: const is part of the type, not part of the hint to the
  compiler that some pointer is never aliased. (See the sketch
  after this list.)
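A minimal C sketch of the intended usage, assuming a simplified definition of ccl_restrict (the actual Cycles headers map it per compiler): the macro carries only the no-aliasing hint, and const stays an explicit part of the pointer type where it applies.

  #define ccl_restrict restrict /* C99 keyword; real headers use __restrict etc. */

  void accumulate(const float *ccl_restrict input,
                  float *ccl_restrict output,
                  int n)
  {
    for (int i = 0; i < n; i++) {
      output[i] += input[i];
    }
  }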
The denoise commit introduced kernel_write_result(), which saves light passes,
so there is no need to call both kernel_write_result() and
kernel_write_light_passes() from the split kernel.
Weirdly enough, kernel_write_result() does not take care of debug passes.
The issue is coming from some weird semi-finished canvas feature, which
was remapping coordinates without applying any differential to the sampling
ellipse (in fact, there is no ellipse; the sampling is always a single
pixel).
The whole thing is just weak in the compositor; for now, just bring the behavior
back to how it was prior to the optimization (multithreading) commit.