This function is not performance-critical, but I prefer the branch-free code, and it needs no hack to appease gcc.
Follow-up to recent 23035cf46f and f637145450.
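The commit doesn't quote the function itself; as a purely hypothetical
illustration of the branch-free style (made-up example):

  /* Branchy version: two conditional jumps. */
  int sign_branchy(int x)
  {
    if (x > 0) return 1;
    if (x < 0) return -1;
    return 0;
  }

  /* Branch-free version: the comparisons compile to flag-setting
   * instructions instead of jumps. */
  int sign_branchfree(int x)
  {
    return (x > 0) - (x < 0);
  }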
Since we already have a rather advanced PovRay exporter, it makes
sense to also display the generated 'code' nicely.
Patch by Maurice Raybaud (@mauriceraybaud), thanks!
Cleanup (mostly styling) by @mont29.
Brightness/contrast node was changing color but did not modify alpha
or ensure colors are premultiplied on the output. This was giving
artifacts later on unless alpha was manually converted.
The compositor is supposed to work in premultiplied alpha (except in
some real corner cases), so it makes sense to ensure premultiplied
alpha after the brightness/contrast node.
This is now done as an option enabled by default, so we:
(a) Keep compatibility with old files.
(b) Have correct behavior for newly created files.
Later on we can get rid of this option.
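A minimal sketch of the idea, with a simplified linear formula
standing in for the real brightness/contrast math (hypothetical
names, not the actual node code):

  /* Input and output are premultiplied RGBA: divide alpha out before
   * the color math, then multiply it back in so downstream nodes
   * receive premultiplied values again. */
  void brightcontrast_premul(float color[4], float bright, float contrast)
  {
    const float alpha = color[3];
    for (int i = 0; i < 3; i++) {
      float c = (alpha > 0.0f) ? color[i] / alpha : color[i];
      c = c * contrast + bright; /* simplified stand-in formula */
      color[i] = c * alpha;      /* re-premultiply */
    }
  }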
We were looping over all vgroups in the destination mesh and doing a
string comparison, for every vgroup of every vertex of the merged
mesh! Crazy!
Now we simply create a temporary mapping of vgroup indices, which
seriously simplifies things (and gives a significant speedup when
merging huge meshes with lots of vgroups: in a quick and dirty test,
vgroup merging went from 120ms to less than 5ms, 25 times faster!).
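Roughly, the change amounts to the following (hypothetical structures
and names, not the actual code):

  #include <stdlib.h>
  #include <string.h>

  /* Stand-ins for the real vgroup structures. */
  typedef struct { char name[64]; } VGroup;
  typedef struct { int def_nr; float weight; } Weight;

  static int find_vgroup(const VGroup *groups, int count, const char *name)
  {
    for (int i = 0; i < count; i++)
      if (strcmp(groups[i].name, name) == 0)
        return i;
    return -1;
  }

  /* One strcmp scan per *group* instead of per *vertex weight*.
   * Assumes the merge code already added every source group to the
   * destination mesh. */
  void remap_weights(Weight *weights, int totweight,
                     const VGroup *src, int src_count,
                     const VGroup *dst, int dst_count)
  {
    /* Build the temp index mapping once... */
    int *map = (int *)malloc(sizeof(int) * src_count);
    for (int i = 0; i < src_count; i++)
      map[i] = find_vgroup(dst, dst_count, src[i].name);

    /* ...then per-weight work is a single array lookup. */
    for (int w = 0; w < totweight; w++)
      weights[w].def_nr = map[weights[w].def_nr];

    free(map);
  }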
The root of the issue here was that two stupid modifiers could create
named vgroup CD layers (the vgroup editing ones... shame on me :") ).
Fixed that, and added some versioning code to also fix 'corrupted'
blend files created by those so far.
Re-loading a module seems to invalidate memory pointers, which gives
an error on the next kernel call.
Not sure how to move memory pointers from one CUDA module to another,
so for now simply disable kernel re-loading for CUDA devices. Not
ideal, but better than a failing render.
The feature-selective option for CUDA is not an official feature
anyway.
`screen_findedge()` is not expected to return NULL in that case, but
checking for it does not hurt (we do it at all its other call sites
anyway); better than crashing.
For some reason GCC-6 successfully compiles a test program with
-Wno-implicit-fallthrough passed via the command line: it just
silently ignores unknown arguments that start with -Wno-.
The issue is that if some other warning happens in the code, GCC
will then also complain about the unknown -Wno- argument which is
not supported by the current GCC version.
This produces misleading warnings about an unknown command line
argument whenever any other warning happens in code from extern/.
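The behavior is easy to reproduce with any file that triggers some
other warning:

  /* warn.c */
  int main(void)
  {
    int unused; /* -Wall: unused variable */
    return 0;
  }

  /*
   * gcc -Wno-this-does-not-exist warn.c
   *   -> compiles silently, so a feature test concludes the flag
   *      is supported.
   *
   * gcc -Wall -Wno-this-does-not-exist warn.c
   *   -> warns about 'unused', and only then also complains about
   *      the unrecognized '-Wno-this-does-not-exist' option.
   */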
The goal is to make most of the API independent of OpenGL, Vulkan, or any other backend.
We can remove the default case from ElementList_size because IndexType covers only index types, not those plus *everything else* the way GLenum does.
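A sketch of the pattern with made-up names (the real enum differs):

  typedef enum {
    INDEX_U8,
    INDEX_U16,
    INDEX_U32,
  } IndexType;

  static unsigned index_size(IndexType type)
  {
    switch (type) {
      case INDEX_U8:  return 1;
      case INDEX_U16: return 2;
      case INDEX_U32: return 4;
      /* No default needed: the enum covers only index types, so
       * -Wswitch can flag a forgotten case. A GLenum would force a
       * default to swallow everything else. */
    }
    return 0; /* unreachable */
  }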
Volume shaders without anything connected to the surface output are treated
as if they had a transparent BSDF as the surface shader in Cycles, so the
denoiser should skip feature pass writing for them just as it does with an
actual transparent BSDF.
If the central pixel is an outlier, the denoiser is supposed to predict its
value from the surrounding pixels. However, in some cases the confidence
interval test would reject every single surrounding pixel, which leaves the
model fitting with no data to work with.
We render the linear distance to the light into an R32 texture and store it as an octahedral projection inside a 2D texture array.
This makes the sampling function much simpler and free of edge artifacts.
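For reference, the standard octahedral mapping looks roughly like
this (a CPU-side sketch; the real code lives in the shaders):

  #include <math.h>

  /* Fold a unit direction onto the octahedron and unwrap it into
   * [0,1]^2, which indexes one layer of the 2D texture array. */
  static void octahedral_encode(const float dir[3], float uv[2])
  {
    const float sum = fabsf(dir[0]) + fabsf(dir[1]) + fabsf(dir[2]);
    float x = dir[0] / sum;
    float y = dir[1] / sum;
    if (dir[2] < 0.0f) {
      /* Lower hemisphere: fold the corners inward. */
      const float ox = (1.0f - fabsf(y)) * (x >= 0.0f ? 1.0f : -1.0f);
      const float oy = (1.0f - fabsf(x)) * (y >= 0.0f ? 1.0f : -1.0f);
      x = ox;
      y = oy;
    }
    uv[0] = x * 0.5f + 0.5f;
    uv[1] = y * 0.5f + 0.5f;
  }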
- Some arguments were inappropriately tagged as unused
using the (void)foo semantic.
Only use that semantic in tricky cases, when something
needs to be ignored in release builds or something is
dependent on tricky ifndef policy.
For the rest of the cases just use the void foo(int /*bar*/)
semantic, which ensures the variable is not used. This solves
confusion and code running out of sync with later
development (see the sketch after this list).
- Used the proper unused semantic for some arguments.
- Added braces to make the code easier to follow; tricky
indentation with ifdef, uh.
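The two idioms side by side (hypothetical functions):

  #include <assert.h>

  /* Tricky case: the value is only inspected in debug builds, so
   * release builds need an explicit "ignore". */
  void validate(int value)
  {
    (void)value;
    assert(value >= 0);
  }

  /* Everything else: comment out the name. Any later attempt to use
   * the argument is a compile error, so the annotation can't go
   * stale. */
  void callback(int /*value*/)
  {
  }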
For example, when using a radius of 1, only 9 pixels (maybe even
fewer, due to weighting) will be used, but the transform code may
still decide to use a 5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the number of dimensions to a third
of the pixel count. For a radius of 3 or more this doesn't change
anything, but for 1 and 2 it can prevent fireflies and/or negative
values from being produced.
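In effect, something like this (hypothetical names):

  #include <algorithm>

  /* Cap the fit's dimensionality by the number of contributing
   * pixels: radius 1 (9 pixels) drops to at most a 3-dimensional
   * fit, while radius 3 and up (49+ pixels) is unaffected. */
  static int clamp_rank(int rank, int num_pixels)
  {
    return std::min(rank, num_pixels / 3);
  }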
Once again, numerical instabilities are causing the Cholesky decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
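A self-contained sketch of the failure detection, assuming a plain
row-major Cholesky (the real solver is more involved):

  #include <math.h>

  /* In-place Cholesky of an n x n matrix A; returns false when A is
   * not numerically positive definite. */
  static bool cholesky_decompose(float *A, int n)
  {
    for (int i = 0; i < n; i++) {
      for (int j = 0; j <= i; j++) {
        float sum = A[i * n + j];
        for (int k = 0; k < j; k++)
          sum -= A[i * n + k] * A[j * n + k];
        if (i == j) {
          if (sum <= 0.0f)
            return false; /* failed -> caller uses the NLM fallback */
          A[i * n + i] = sqrtf(sum);
        }
        else {
          A[i * n + j] = sum / A[j * n + j];
        }
      }
    }
    return true;
  }

The caller would then keep the fitted result only when this returns
true, and the NLM-filtered pixel otherwise.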
Shader stages need to agree about interpolation qualifiers. Apparently implicit smooth (the default) and explicit smooth are considered different by some GLSL compilers. Found by @letterrip on Linux + Intel.
Follow-up to 941e739d70
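Roughly, the fix makes both stages spell the qualifier the same way
(hypothetical snippet, with the GLSL embedded as strings):

  /* Explicit on both stages, so no compiler can treat "default
   * smooth" and "explicit smooth" as a mismatch. */
  const char *vertex_src =
      "smooth out vec4 finalColor;\n"
      /* ... */;
  const char *fragment_src =
      "smooth in vec4 finalColor;\n"
      /* ... */;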