Currently, the Render mode of the GPU Cryptomatte node extracts the
Cryptomatte layers based on information in the RenderResult of the
scene. This means the node will not work if no RenderResult exists,
which is typically the case for the viewport compositor, and especially
after #123378.
To fix this, we simply acquire the passes directly from the appropriate
view layer based on the node's layer name. The render compositor context
implementation will handle the extraction from the RenderResult, while
the viewport compositor will just return the DRW passes.
Pull Request: https://projects.blender.org/blender/blender/pulls/123817
Extensions with a manifest that can't be parsed caused an exception
in the add-ons UI.
Account for errors when loading the manifest, falling back to dummy
values & show a warning that the extension's manifest could not be parsed.
Changes to an extension's manifest weren't accounted for.
This was particularly a problem for "System" extensions which aren't
intended to be managed inside Blender, however the problem existed for
any changes made outside of Blender.
Now enabled extensions are checked on startup to ensure:
- They are compatible with Blender.
- The Python wheels are synchronized.
Resolves #123645.
Details:
- Any extension incompatibilities prevent the add-on being enabled
with a message printing the reason for it being disabled.
- Incompatible add-ons are kept enabled in the preferences to avoid
losing their own preferences and to allow an upgrade to restore
compatibility.
- To avoid slowing down Blender's startup:
- Checks are skipped when no extensions are enabled
(as is the case for `--factory-startup` & running tests).
- Compatibility data is cached so that in the common case
the cache is loaded and all enabled extensions `stat` their
manifests to detect changes without having to parse them.
- The cache is re-generated if any extensions change or the
Blender/Python version changes.
- Compatibility data is updated:
- On startup (when needed).
- On an explicit "Refresh Local"
(mainly for developers who may edit the manifest).
- When refreshing extensions after install/uninstall etc.
since an incompatible extension may become compatible
after an update.
- When reloading preferences.
- Additional info is shown when `--debug-python` is enabled,
if there are ever issues with the extension compatibility cache
generation not working as expected.
- The behavior for Python wheels has changed so they are only set up
when the extension is enabled. This was done to simplify startup
checks and has the benefit that an installed but disabled extension
never runs code - as the ability to install wheels means it could
have been imported from other scripts. It also means users can disable
an extension to avoid wheel version conflicts.
This does add the complication, however, that enabling an add-on
which is an extension must first ensure its wheels are set up.
See `addon_utils.extensions_refresh(..)`.
See code-comments for further details.
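The `stat`-based cache check described above can be sketched as follows. This is a minimal sketch, not the actual `addon_utils` implementation: `manifest_changed` and the `(mtime_ns, size)` cache tuple are hypothetical names, but they illustrate how a manifest change can be detected without parsing it.

```python
import os


def manifest_changed(manifest_path, cached_stat):
    """Return True if the manifest file changed since the cache was built.

    `cached_stat` is a hypothetical (mtime_ns, size) tuple stored in the
    compatibility cache; only a `stat` call is needed, no TOML parsing.
    """
    try:
        st = os.stat(manifest_path)
    except OSError:
        return True  # A missing manifest counts as changed.
    return (st.st_mtime_ns, st.st_size) != cached_stat
```

Only when this check reports a change would the cache need to be re-generated by actually parsing the manifests.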
This was caused by the denoiser of the slight out of focus
pass having too much influence at low CoC values.
Making the transition start at 0.5 fixes the issue.
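A minimal sketch of such a transition, with Python standing in for the shader code: `denoise_influence` is a hypothetical weight (not the actual EEVEE depth-of-field implementation) that stays at zero below a CoC of 0.5 and ramps up smoothly from there.

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)


def denoise_influence(coc):
    """Hypothetical denoiser blend factor: no influence below 0.5 CoC,
    full influence from 1.0 upward, matching the moved transition start."""
    return smoothstep(0.5, 1.0, abs(coc))
```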
Fixes #123822
This happened in scenes with high light count and
with some local light being culled.
The culled lights indices would still be processed
and load undefined data. This undefined data
might be interpreted as a sunlight and go into
the cascade setup loop with an undefined number
of levels.
This created loops of 1 billion iterations per thread
which triggered the TDR on Windows.
The fix is to skip the culled light indices.
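The fix can be illustrated with a minimal sketch, with Python standing in for the shader code; `LIGHT_NO_ID` and `gather_visible_lights` are hypothetical names:

```python
LIGHT_NO_ID = -1  # Hypothetical sentinel marking a culled light slot.


def gather_visible_lights(light_indices, lights):
    """Skip culled slots instead of loading undefined data (which could
    otherwise be misread as a sunlight with a huge cascade level count)."""
    visible = []
    for index in light_indices:
        if index == LIGHT_NO_ID:
            continue  # Culled light: nothing to process.
        visible.append(lights[index])
    return visible
```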
Fixes #123413
Fixes #123190
This writes the `init_time` curve attribute and the `delta_time`
point attribute.
These are used by the build modifier in the "natural drawing speed"
mode to mimic the actual speed at which the stroke was drawn.
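How such attributes could drive the build modifier can be sketched as follows (a hypothetical helper, not the actual modifier code): a point becomes visible once the accumulated drawing time has passed.

```python
def points_visible_at(init_time, delta_times, elapsed):
    """Number of points of a stroke visible at time `elapsed`.

    `init_time` is the curve's start time, `delta_times` holds the
    per-point `delta_time` values.
    """
    t = init_time
    visible = 0
    for dt in delta_times:
        t += dt  # Accumulate the time at which this point was drawn.
        if t > elapsed:
            break  # This point (and all later ones) not yet drawn.
        visible += 1
    return visible
```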
Pull Request: https://projects.blender.org/blender/blender/pulls/123899
The new brushes don't update `SculptSession::orig_cos` (which is good, it's
not necessary since it's just a copy of the active shape key data or the original
mesh's vertex positions). To fix the problem with the undo system, access those
two arrays directly instead. Once all the uses of the "proxy" system are removed,
`orig_cos` can be removed too.
Pull Request: https://projects.blender.org/blender/blender/pulls/123922
Avoid copying positions and normals from their source arrays. This is
simplified by using separate loops for the original data and accumulate
cases. I observed a performance improvement in the typical benchmark file
of about 13%, from 0.54s to 0.48s for a brush stroke affecting most of a
6 million vertex grid.
Avoid the need to access mesh data for every node which will become more
important if we decide to make nodes smaller. Also makes some possible
performance improvements simpler.
The function direction_to_fisheye_lens_polynomial computes the inverse of
fisheye_lens_polynomial_to_direction.
Previously the function worked almost correctly if all parameters except k_0
and k_1 were zero (in that case it was correct except for flipping the x-axis).
I replaced the fixed-point iteration (?) by Newton's method and implemented a
test to make sure it works correctly with a wider range of parameter sets.
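The inversion by Newton's method can be sketched like this: a generic one-dimensional solver with a hypothetical `invert_polynomial` helper, not the actual Cycles implementation.

```python
def invert_polynomial(coeffs, target, x0=0.0, iterations=20):
    """Newton's method sketch: solve f(x) = target where
    f(x) = sum(coeffs[i] * x**i)."""
    x = x0
    for _ in range(iterations):
        f = sum(c * x**i for i, c in enumerate(coeffs))
        df = sum(i * c * x**(i - 1) for i, c in enumerate(coeffs) if i > 0)
        if df == 0.0:
            break  # Stationary point: Newton step undefined.
        x -= (f - target) / df
    return x
```

Unlike fixed-point iteration, Newton's method converges quadratically near the root, which is why it copes with a wider range of parameter sets.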
Pull Request: https://projects.blender.org/blender/blender/pulls/123737
Part of #118145.
Similar to 851505752f.
This doesn't affect performance much at all, the vast majority of time
in typical scenarios is spent modifying topology anyway. It does significantly
ease development though, because now we can just do slight modifications
on the "new" version of brush code instead also creating a refactored "old"
version.
This fixes a kernel crash on NVIDIA GP100.
This splits the tilemap finalize shader into a separate shader
and moves the valid tile gather into its own loop.
Simplifying the shader seems to avoid the issue, but the
cause of the issue remains unknown.
Pull Request: https://projects.blender.org/blender/blender/pulls/123850
When the preview is downscaled and the transformation origin is not
the center of the image, an unexpected offset occurred. This happened
because a single matrix combined the image downscaling (so it fits
into the preview) with the user-defined scale. When the origin was not
the center of the image, this resulted in an incorrect offset.
Solved by splitting the single matrix in
`sequencer_image_crop_transform_matrix` into two matrices. The first
matrix just centers the image and scales it to the expected size. The
second matrix performs the rest of the transform operations. This code
is a bit easier to read as well.
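The split can be illustrated with a minimal sketch (hypothetical helpers, rotation omitted for brevity): the first matrix centers the image and scales it to preview size, the second applies the user transform, and their product gives the full transform.

```python
def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]


def center_scale_matrix(image_w, image_h, preview_scale):
    """First matrix: center the image and scale it to the preview size."""
    return [[preview_scale, 0.0, -image_w * preview_scale / 2.0],
            [0.0, preview_scale, -image_h * preview_scale / 2.0],
            [0.0, 0.0, 1.0]]


def user_transform_matrix(scale, tx, ty):
    """Second matrix: the user-defined transform (scale + translation)."""
    return [[scale, 0.0, tx],
            [0.0, scale, ty],
            [0.0, 0.0, 1.0]]
```

With this split, the user scale is applied around the (already centered) image origin, so the image center always lands exactly at the user translation regardless of the preview downscale.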
Pull Request: https://projects.blender.org/blender/blender/pulls/123776