A couple of edit mesh operators are still using "VCols" terminology,
which should be Color Attributes now. This just renames text seen
in redo panels. Internally it's still called VCols.
Pull Request: https://projects.blender.org/blender/blender/pulls/120075
This was meant to be the same as `BLI_rct*_is_empty`
but wasn't, because `less_or_equal_than` was
effectively doing a logical "and" when it should have
been doing a logical "or".
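A minimal sketch of the difference, using an illustrative `rct`-like struct rather than the actual Blender types:

```cpp
/* A rectangle is empty when EITHER axis has no extent, so the
 * per-component comparisons must be combined with a logical "or". */
struct rct {
  float xmin, xmax, ymin, ymax;
};

/* Buggy: "and" only reports empty when BOTH axes collapse. */
bool is_empty_and(const rct &r)
{
  return r.xmax <= r.xmin && r.ymax <= r.ymin;
}

/* Fixed: "or" reports empty as soon as one axis collapses. */
bool is_empty_or(const rct &r)
{
  return r.xmax <= r.xmin || r.ymax <= r.ymin;
}
```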
This acts in multiple phases:
- A shader scans the whole clipmap after tilemap finalize to gather
where valid tiles from lower LODs are available, and writes the page
location and LOD offset at invalid tile locations.
- At sampling time we add the LOD offset before the pixel page
modulo operation. This offset is equal to the number of pages of
the **sampling** LOD needed to get the same modulo result as the
**sampled** LOD (see the sketch after this list).
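A hypothetical sketch of that sampling-time wrap (plain C++ mirroring the shader logic; names and layout are assumptions, not EEVEE's actual code):

```cpp
/* The scan phase stores, per invalid tile, an offset in sampling-LOD
 * pages; adding it before the wrap makes the sampling LOD's modulo
 * result coincide with the coarser sampled LOD that holds valid data. */
int page_wrap(int tile_coord, int lod_offset, int tiles_per_side)
{
  /* Modulo that also handles negative tile coordinates. */
  const int wrapped = (tile_coord + lod_offset) % tiles_per_side;
  return wrapped < 0 ? wrapped + tiles_per_side : wrapped;
}
```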
The whole thing is very tricky, so I added a lot of unit tests.
This has no use for now but the system is needed to implement:
- Shadows from Volumetrics at lower cost & memory footprint.
- Fixing soft shadow artifacts.
These will be implemented in separate PRs.
Pull Request: https://projects.blender.org/blender/blender/pulls/120031
The simplify modifier uses
`bke::curves_copy_point_selection` but didn't
build the `points_to_keep` index mask correctly.
This fixes the indices in the index mask and also
optimizes the edge cases of removing all points or
keeping all points.
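A generic sketch of the edge-case handling (plain C++ with illustrative types, not the modifier's actual code):

```cpp
#include <vector>

std::vector<float> copy_kept_points(const std::vector<float> &points,
                                    const std::vector<int> &points_to_keep)
{
  if (points_to_keep.empty()) {
    return {}; /* Removing all points: nothing to copy. */
  }
  if (points_to_keep.size() == points.size()) {
    return points; /* Keeping all points: pass the input through. */
  }
  /* General case: gather only the selected points. */
  std::vector<float> result;
  result.reserve(points_to_keep.size());
  for (const int i : points_to_keep) {
    result.push_back(points[i]);
  }
  return result;
}
```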
The Perlin noise algorithms suffer from precision issues when a coordinate
is greater than about 250000.
To fix this, the Perlin noise texture is repeated every 100000 on each axis.
This causes discontinuities every 100000; however, at such scales this
usually shouldn't be noticeable.
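A minimal sketch of the wrapping (the period comes from the description above; this isn't the exact shader code):

```cpp
#include <cmath>

/* Fold each coordinate into [0, 100000) so the noise input stays in a
 * float-precision-friendly range, at the cost of a seam every period. */
float wrap_noise_coord(float x)
{
  const float period = 100000.0f;
  return x - period * std::floor(x / period);
}
```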
Pull Request: https://projects.blender.org/blender/blender/pulls/119884
Generally sculpt uses a combination of data from the original,
deformed, and final evaluated meshes. Keeping track of all that
is confusing, and using a more specific variable name helps a bit.
Instead of only referencing an existing one. This will be used for the
collection exporter and presets, to make sure the operator instance
stays alive long enough for the preset to be applied.
Pull Request: https://projects.blender.org/blender/blender/pulls/120034
The issue was introduced by b35831ad6c.
Since that commit `tree->boundary` will always be non-nullptr, even
when the target mesh has no boundaries. Some code was still relying
on the fact that `tree->boundary != nullptr` means the mesh has a
boundary.
Update the shrinkwrap code for this, avoiding access past array
boundaries.
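An illustrative sketch of the corrected check (the structs and field names here are hypothetical stand-ins):

```cpp
#include <vector>

struct BoundaryData {
  std::vector<int> edges; /* Empty when the mesh has no boundary. */
};

struct ShrinkwrapTree {
  BoundaryData *boundary = nullptr; /* Now always allocated. */
};

bool mesh_has_boundary(const ShrinkwrapTree &tree)
{
  /* The old `tree.boundary != nullptr` test is always true now; ask the
   * data itself so empty arrays are never indexed. */
  return tree.boundary && !tree.boundary->edges.empty();
}
```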
Pull Request: https://projects.blender.org/blender/blender/pulls/120054
Removal of confirmation dialogs for the following five operators. For
each of these, the UI module felt that they are either very explicit
actions and/or easily undone.
* ARMATURE_OT_separate (Move selected bones to a separate armature)
* CURVE_OT_separate (Move selected points to a new object)
* OBJECT_OT_vertex_parent_set (Parent selected objects to selected vertices)
* OBJECT_OT_parent_no_inverse_set (Make Parent without inverse correction)
* FILE_OT_directory_new (Create New Directory)
Pull Request: https://projects.blender.org/blender/blender/pulls/120036
The active key was accessed to check if the key-blocks were empty,
but the variable was then shadowed by the iterator.
Check if the key-blocks list is empty instead.
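A generic sketch of the shadowing pattern (illustrative types, not the actual code):

```cpp
#include <vector>

struct KeyBlock {};

void process(KeyBlock *active_kb, std::vector<KeyBlock> &key_blocks)
{
  KeyBlock *kb = active_kb; /* Used as a proxy for "list is non-empty"... */
  if (kb == nullptr) {
    return;
  }
  for (KeyBlock &kb : key_blocks) { /* ...but shadowed by the iterator. */
    (void)kb;
  }
  /* Fix: test `key_blocks.empty()` directly instead of the active key. */
}
```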
This PR adds the *Line Hide* tool and the corresponding
`PAINT_OT_hide_show_line_gesture` operator to Sculpt Mode.
*Line Hide* supports common modal functionality including:
* Snapping to angles
* Flipping the selection area
* Moving the selection area
Addresses one of the tools in #80390
Pull Request: https://projects.blender.org/blender/blender/pulls/119671
Also access the evaluated deform mesh with a function rather than
directly from object runtime data. The goal is to make it easier to use
implicit sharing for these meshes and to improve overall const
correctness.
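A hedged sketch of the accessor pattern (all names here are illustrative, not the actual Blender API):

```cpp
struct Mesh;

struct ObjectRuntime {
  const Mesh *mesh_deform_eval = nullptr;
};

struct Object {
  ObjectRuntime *runtime = nullptr;
};

/* A single accessor instead of direct runtime access keeps const
 * correctness in one place and leaves room to later return an
 * implicitly-shared mesh. */
const Mesh *object_deform_mesh_eval_get(const Object &object)
{
  return object.runtime->mesh_deform_eval;
}
```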
See b99c1abc3a for more information about how using fewer
threads for just copying data can improve performance. In my simple
test file with mesh data re-uploaded every frame, this improved
performance from 23.5 FPS to 25.5 FPS (almost 9%).
After bace4c9a29, the vertex position and vertex normal VBOs
are split. The `overlay_paint_point` shader depends on the normals VBO
because the selection is stored in the `.w` component of the vector.
SubdivCCG is null for the evaluated mesh in the render depsgraph
because of the `!for_render` check in `MOD_multires.cc`. But the PBVH
type is still `PBVH_GRIDS`. That's a weird inconsistency that ideally
wouldn't happen, but probably isn't simple to change. The simplest and
most obviously harmless fix is to just check whether the list of PBVH
nodes to update is empty.
Recently an internal compiler error has been popping up for folks,
stemming from our MatBase matrix `operator<<`.
My guess is that the nested fold-expression (coming from `unroll`) and
the lambda are causing MSVC to become very upset in some instances.
Regardless of the actual cause, using simple for loops results in less
generated code, and the use of `unroll` isn't required since these output
operators are mainly for debugging.
Unfortunately I've been unable to reproduce it in simpler contexts to
report it upstream.
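For context, a minimal stand-in for the pattern (illustrative; not Blender's exact `unroll` implementation):

```cpp
#include <cstddef>
#include <iostream>
#include <utility>

/* Fold-expression based `unroll`: the lambda plus the nested fold is the
 * suspected MSVC ICE trigger. */
template<typename Fn, size_t... I>
void unroll_impl(Fn fn, std::index_sequence<I...>)
{
  (fn(I), ...);
}

template<int N, typename Fn> void unroll(Fn fn)
{
  unroll_impl(fn, std::make_index_sequence<N>());
}

/* The workaround's shape: a plain loop is all a debug-print operator
 * needs, and it generates less code. */
template<int N> void print_rows(const float (&rows)[N])
{
  for (int i = 0; i < N; i++) {
    std::cout << rows[i] << '\n';
  }
}

int main()
{
  const float rows[3] = {1.0f, 2.0f, 3.0f};
  unroll<3>([&](size_t i) { std::cout << rows[i] << '\n'; });
  print_rows(rows);
}
```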
Pull Request: https://projects.blender.org/blender/blender/pulls/119982
The issue seems to be caused by recent NVidia drivers, which will
crash if an OptiX context is created prior to the OpenGL context on Linux.
This happens with drivers 545.29.06, 550.54.14, and 550.67 with kernel 6.8
on Ubuntu 24.04, and also happens with NVidia driver 550.67 on Gentoo.
While it does seem the issue is likely on the driver side, the timeline
for it being resolved there is unknown. Until then it is possible to
apply a workaround on Blender's side, which initializes the GPU context
prior to rendering with an external render engine when it is known the
GPU context will be needed later on.
Note this is not a complete fix, because in general it's still possible to
render a frame without grease pencil and then one with it, in which case
OptiX will still be initialized first. But we don't want to always
initialize OpenGL, and we can't predict future operations.
Pull Request: https://projects.blender.org/blender/blender/pulls/120026
For some reason, the vertex paint modal operator
was calling `CTX_data_ensure_evaluated_depsgraph` on
every event update. For obvious reasons, this caused
performance to go down drastically in bigger files,
specifically in one of the Blender Studio files.
The fix uses `CTX_data_depsgraph_pointer` instead to
get the `Depsgraph` pointer, avoiding a full re-evaluation
of the scene.
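The change in a nutshell (sketch; `C` is the `bContext` available in the modal callback):

```cpp
/* Before: may trigger a full scene evaluation on every modal event.
 *   Depsgraph *depsgraph = CTX_data_ensure_evaluated_depsgraph(C); */

/* After: just fetch the existing pointer; no evaluation is forced. */
Depsgraph *depsgraph = CTX_data_depsgraph_pointer(C);
```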
No functional changes.
The function `get_keyframe_values` existed twice:
once in a legacy version that also did NLA logic,
and once in a new implementation that only returned keyframe values.
This PR removes the legacy function and moves the NLA logic to a separate function.
Pull Request: https://projects.blender.org/blender/blender/pulls/119798
The issue was that the `PointerRNA` passed to `BKE_animsys_get_nla_keyframing_context`
needs to point to an `ID`, which wasn't the case when keying bones.
That is because internally the `FCurve` path is used to resolve the property.
This can only work from the `ID`, because the `FCurve` path is always stored relative to it.
While the function doesn't fail when the path can't be resolved, the resulting
context won't actually do the remapping when passed to `BKE_animsys_nla_remap_keyframe_values` later.
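A sketch of the fix's shape (assumes Blender's RNA API; surrounding arguments are omitted):

```cpp
/* Build the pointer from the owning ID, not from the pose bone, so the
 * F-Curve path (stored relative to the ID) can be resolved. */
PointerRNA id_ptr = RNA_id_pointer_create(&ob->id);
/* `&id_ptr` is then what gets passed to
 * `BKE_animsys_get_nla_keyframing_context(...)`. */
```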
Pull Request: https://projects.blender.org/blender/blender/pulls/120008
Handling animation of GPv3 in itself is relatively straightforward; it's
mainly a matter of duplicating animdata into the new GreasePencil ID.
In case some properties need to be remapped, this will be done in a
similar way as e.g. GP object's modifier animation for Object-level
animation.
The complex and ugly part of this PR is the need to move animation
from GPdata to Object level for some properties. This PR tackles the
'layer adjustments to modifiers' aspect (i.e. adjustments on tint and
thickness).
Known current limitations of this GPdata-to-Object animation move:
* NLA is not supported (i.e. if an NLA in legacy GP data controls these
adjustment animations, it won't be converted to Object-level NLA
controlling the matching modifier settings).
* Driver targets are not handled either, i.e. in case a driver uses
data from legacy GPdata as input, it will be left as-is (this is
true for all anim handling currently).
* There is no adjustment of animated values (e.g. the thickness
adjustment values would need to be divided by 2000).
Most of these limitations can be addressed at some point, depending on
how critical they are to support. This would have a cost (in time and
code complexity) though.
Pull Request: https://projects.blender.org/blender/blender/pulls/119214