Previously this checked the actually active screen, which led to the issue
that `context.screen` could no longer be overridden (#108763), something
that was possible with the old context override method.
Now the screen can be overridden, keeping the window & workspace
consistent.
Ref !114269.
Co-authored-by: Andrej730 <azhilenkov@gmail.com>
This was reported for UV editing, but e.g. some modeling operations
were affected as well.
In 295bc1249a, a listener for `NC_GEOM` > `ND_DATA` was added to the
Outliner in order to update when renaming bones (c4622c405e).
Since we are only interested in the naming part (no need to update the
Outliner otherwise), the notifier/listener combo was made more specific
by including the `NA_RENAME` action here.
Since this was a very general thing to listen to, other operations might
have relied on this to properly update, but having checked many things,
I could spot only one case where an Outliner update was missing after
the initial change and that was adding images.
This case was added separately now.
Pull Request: https://projects.blender.org/blender/blender/pulls/115799
After previous commits, there is a new function to copy a BMesh
custom data block that doesn't go through the "find common layers
between two formats" code. This is *much* faster when there is a
large number of layers with the same type, since that code is
currently quadratic. It may not be noticeable in many simpler
setups though.
Related to #115776
Overload the attribute copy function for each element type, avoiding
the switch between different abstraction levels. The two extra arguments
besides the meshes and elements were constant, so the resulting
logic can be inlined as well.
When the BMesh source and result arguments are the same, restore
performance lost by 9175d9b7c2, which made copying layers
have quadratic time complexity. When we know the custom data format
is the same between the source and result, the copying can be much
simpler, so it's worth specializing this case. There is still more
to be done, because often we only know that the two meshes are the
same at runtime. A followup commit will add that check.
The quadratic runtime means performance is fine for low layer counts,
and terrible with higher layer counts. For example, in my testing with
47 boolean attributes, copying 250k vertices went from 2.3 seconds to
316 ms.
The implementation uses a new CustomData function that copies an entire
BMesh custom data block, called by a function in the BMesh module
overloaded for every BMesh element type. The latter handles the extra
data like flags, normals, and material indices.
Related to #115776
For now it has the same implementation as the function that allows
passing separate source and destination custom data formats. But
copying to the same format can potentially be much simpler.
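A minimal sketch of the difference, using hypothetical simplified types rather than the actual CustomData/BMesh API: the generic path searches the other format for a matching layer for every layer, which is quadratic in the layer count, while the same-format path can copy layers by index.

```cpp
#include <vector>

/* Hypothetical simplified layer: a type identifier plus the raw bytes of one
 * element's data for that layer. */
struct Layer {
  int type;
  std::vector<char> data;
};

/* Generic path: for every destination layer, search the source format for a
 * layer of a matching type -> O(layers^2). */
void copy_block_generic(const std::vector<Layer> &src, std::vector<Layer> &dst)
{
  for (Layer &dst_layer : dst) {
    for (const Layer &src_layer : src) {
      if (src_layer.type == dst_layer.type) {
        dst_layer.data = src_layer.data;
        break;
      }
    }
  }
}

/* Specialized path: the formats are known to be identical, so layers can be
 * copied by index -> O(layers). */
void copy_block_same_format(const std::vector<Layer> &src, std::vector<Layer> &dst)
{
  for (size_t i = 0; i < src.size(); i++) {
    dst[i].data = src[i].data;
  }
}
```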
A recent refactor to viewport navigation caused a regression where the
"Orbit Around Selected" logic impacted the "Free" NDOF navigation logic.
Resolve by disabling orbit around selection with "Free" navigation.
Report #115815 shows a case where the calculation of rows and columns
should also include the category name lines. The code was calculating
where to break without including these, so the result is too short if
there is a category.
Pull Request: https://projects.blender.org/blender/blender/pulls/115822
Apparently this problem has existed at least since b6b61681ef.
When converting pointers to generic structs in
`transform_convert_action.cc`, elements of type `TransData` and
`TransData2D` must align by maintaining synchronized indices within
their respective arrays.
Despite both arrays having equivalent lengths, some `TransData2D`
instances were omitted when they came from the types `ANIMTYPE_GPLAYER`
or `ANIMTYPE_MASKLAYER`.
This misalignment resulted in `TransData` elements not properly
corresponding with their `TransData2D` counterparts.
A potential fix could be advancing `td2d` for each
`ANIMTYPE_GPLAYER` or `ANIMTYPE_MASKLAYER` occurrence. This approach,
while introducing blank `TransData2D` entries, would preserve index
alignment with `TransData`.
However, I opted for a less workaround-centric approach by converting
`tGPFtransdata` elements into `TransData2D`. This solution, albeit
slightly fragile due to the lack of a dedicated member in `TransData2D`
for integer value flushing, appears better than allowing blank
`TransData2D` fields.
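A simplified sketch of the index-alignment requirement, with hypothetical element and struct members rather than the actual conversion code: both arrays must advance in lockstep, and skipping the `TransData2D` side for some element types is what broke the correspondence.

```cpp
struct TransData {
  float *val; /* value being transformed */
};

struct TransData2D {
  float loc[2]; /* flushed back after the transform */
};

struct Element {
  float value;
};

/* Hypothetical conversion: td and td2d point into two parallel arrays of the
 * same length; every element that fills a TransData must also fill (or at
 * least advance past) the matching TransData2D so indices stay in sync. */
void convert_elements(Element *elements, int elements_num,
                      TransData *td, TransData2D *td2d)
{
  for (int i = 0; i < elements_num; i++, td++, td2d++) {
    td->val = &elements[i].value;
    /* Before the fix, elements from GP/mask layers filled td but left td2d
     * untouched and un-advanced, so later td/td2d pairs no longer referred
     * to the same element. */
    td2d->loc[0] = elements[i].value;
    td2d->loc[1] = 0.0f;
  }
}
```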
For context, see 6d09fa3577. Overall, these values were still
written in some cases, but never used. Nowadays the viewer node and
attribute overlays give even better answers to these needs.
Also use const arguments, move a null check from the callback to the
PBVH function, and reorganize the PBVH code to be in a consistent
place in the file and to simplify the logic.
Move the contents of `ANIM_bone_collections.h` into its C++
`ANIM_bone_collections.hh` sibling. Blender is sufficiently C++ by now
that we can do without the C header.
No functional changes.
The multithreaded algorithm works by atomically assigning each face's
group ID to the surrounding edges. If the ID for the edge differs from
the one set previously, the edge becomes a boundary.
Using the edge to face topology map was also tested, but it wasn't
faster, and given the large memory usage of the map, the increased
complexity of this algorithm was considered worthwhile.
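A minimal sketch of the idea, with hypothetical names and without the actual parallelism or mesh data structures: each face writes its group ID to its edges with an atomic compare-exchange, and an edge that sees two different IDs is marked as a boundary.

```cpp
#include <atomic>
#include <vector>

constexpr int UNASSIGNED = -1;
constexpr int BOUNDARY = -2;

void mark_group_boundaries(const std::vector<std::vector<int>> &face_edges,
                           const std::vector<int> &face_group_ids,
                           const int edges_num,
                           std::vector<bool> &r_edge_is_boundary)
{
  std::vector<std::atomic<int>> edge_group(edges_num);
  for (std::atomic<int> &value : edge_group) {
    value.store(UNASSIGNED, std::memory_order_relaxed);
  }

  /* In the real code this loop runs in parallel over faces; the atomic
   * compare-exchange makes the per-edge writes safe without locking. */
  for (int face = 0; face < int(face_edges.size()); face++) {
    const int group = face_group_ids[face];
    for (const int edge : face_edges[face]) {
      int expected = UNASSIGNED;
      if (edge_group[edge].compare_exchange_strong(expected, group)) {
        continue; /* First face to touch this edge. */
      }
      if (expected != group) {
        edge_group[edge].store(BOUNDARY);
      }
    }
  }

  r_edge_is_boundary.resize(edges_num);
  for (int edge = 0; edge < edges_num; edge++) {
    r_edge_is_boundary[edge] = (edge_group[edge].load() == BOUNDARY);
  }
}
```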
The speed improvement for the attached example file is listed in the table:
| Cube resolution | Main | PR |
| -- | -- | -- |
| 20x20x20 | `71920 ns` | `97400 ns` |
| 100x100x100 | `1.27 ms` | `1.17 ms` |
| 500x500x500 | `79.37 ms` | `23.16 ms` |
| 1000x1000x1000 | `520.31 ms` | `142.21 ms` |
Pull Request: https://projects.blender.org/blender/blender/pulls/115138
The whole logic of selecting a parent bone's tip when selecting a bone
should only happen when they are actually **connected**
(BONE_CONNECTED), same as when picking a bone in the UI
(`ED_armature_edit_select_pick_bone`).
Pull Request: https://projects.blender.org/blender/blender/pulls/115663
This caused a crash further down the line where the code expected geometry
instances after calling `ensure_geometry_instances`, which seems reasonable
to assume.
When framing a single keyframe in the Graph Editor,
the padding on the Y axis wasn't applied correctly.
This meant that the viewport would zoom in too far,
making it hard to zoom out again.
Pull Request: https://projects.blender.org/blender/blender/pulls/115792
The issue was that the IK constraint wasn't set up correctly,
in such a way that the bone that had the constraint had no parent
(refer to the file in the bug report).
This led to a crash when trying to get the tip of the chain because
`pchan_tip->parent` was a `nullptr`.
Fix it by only getting the parent if it's not a `nullptr`.
This has the side effect of not being able to move the bone in the test file.
Caused by: 0c2afa7c17
Pull Request: https://projects.blender.org/blender/blender/pulls/115788
Before this PR, whenever the frame changed during scrubbing,
the whole outliner tree would be rebuilt.
This rebuilding did not happen when playing back.
This was a major bottleneck, especially on scenes with many objects.
As far as I can tell there is no need to do that,
since there is no function that changes the
scene structure when the frame is changed.
The only way to do that is to add a Python handler
for `frame_change_post`, but that also fires when playing back.
The performance gains can be quite significant.
In a test scene with 32,000 cubes, the `wm_draw_update` function
goes from ~60 ms to ~30 ms.
Pull Request: https://projects.blender.org/blender/blender/pulls/114845
This PR adds the drawing placement modes from GPv2.
The drawing placement defines the depth (origin, view, surface, etc.) and a plane (view, cursor, xz, etc.).
This introduces a new helper class `DrawingPlacement` that does all of the internals to find the correct projection and just exposes a simple function to project from screen space to the drawing plane/surface.
Note: Drawing on other strokes currently doesn't work, because GPv3 can't be rendered to image yet. We use the depth buffer of the grease pencil render result to find the right depth.
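A rough sketch of the kind of interface this describes, with hypothetical enums and a placeholder projection rather than the actual class in the source: the placement settings are resolved once, and a single method performs the screen-space projection.

```cpp
#include <array>

using float2 = std::array<float, 2>;
using float3 = std::array<float, 3>;

/* Hypothetical settings mirroring the modes mentioned above. */
enum class PlacementDepth { Origin, View, Surface };
enum class PlacementPlane { View, Cursor, XZ };

class DrawingPlacement {
 public:
  DrawingPlacement(PlacementDepth depth, PlacementPlane plane)
      : depth_(depth), plane_(plane)
  {
  }

  /* Project a screen-space coordinate onto the configured plane or surface.
   * A real implementation would use the view/projection matrices and, for
   * surface placement, the depth buffer mentioned in the note above. */
  float3 project(const float2 &screen_co) const
  {
    /* Placeholder: drop onto a fixed plane. */
    return {screen_co[0], screen_co[1], 0.0f};
  }

 private:
  PlacementDepth depth_;
  PlacementPlane plane_;
};
```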
Pull Request: https://projects.blender.org/blender/blender/pulls/115602
This is in preparation for eventual hierarchical bone collections.
The motivation here is that this will allow us to efficiently specify
children as an index range, which would be inefficient with a listbase
due to the list traversal overhead incurred for index-based lookups.
We're still saving to blend files as a listbase for forward compatibility
with Blender 4.0, but storing as an array at runtime for efficient indexing.
This should not result in any user-visible changes.
Pull Request: https://projects.blender.org/blender/blender/pulls/115354
The compositor texture node produced artifacts in its Color output
due to an uninitialized alpha channel. Set the alpha channel to
1.0 if it was not written to, fixing the issue.
Pull Request: https://projects.blender.org/blender/blender/pulls/115768
Resolve a regression in [0], which replaced a null check with an
identity comparison. This wasn't correct, as the null check indicated
whether a transform was applied or not, and situations that require a
transform could still have an identity matrix.
Resolve by checking if the matrix that would transform the vertices is an
identity matrix, which is a reliable way to know if any transformation is needed.
Ref: !115736
[0]: e24c7f1954
Suppress false positive Valgrind warnings which flooded the output.
- BLI_mempool alloc/free & iteration.
- Set alignment padding bytes at the end of MEM_* allocations
as "defined", since these otherwise cause many false positive warnings
in blend file writing and MEMFILE comparisons.
- Set MEM_* allocations as undefined when `--debug-memory`
is passed in to account for debug initialization.
- Initialize pad bytes in TextLine allocations.
The range check was applied before padding up; in the unlikely case
that sizes (INT_MAX - 2) or (INT_MAX - 1) were passed in,
the value would overflow into a negative integer.
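A minimal illustration of the overflow, with a hypothetical padding value rather than the actual code: when the range check runs before the padding is added, sizes close to INT_MAX wrap into a negative value.

```cpp
#include <climits>

constexpr int PAD = 2; /* hypothetical padding added to the requested size */

/* Buggy order: the check passes, then the padding overflows for sizes like
 * INT_MAX - 1 (signed overflow, which is undefined behavior). */
int padded_size_unchecked(int size)
{
  if (size < 0) {
    return -1;
  }
  return size + PAD;
}

/* Fixed order: reject sizes that would overflow once padded. */
int padded_size_checked(int size)
{
  if (size < 0 || size > INT_MAX - PAD) {
    return -1;
  }
  return size + PAD;
}
```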
- Support passing in unterminated C-strings when clamped by the size
argument.
- Pair the string and its size arguments together in IDP_NewStringMaxSize.
- Remove redundant size check which made it seem as if the string
might not be null terminated.
- Replace clamping the result of strlen(..) with BLI_strnlen,
to avoid calculating the length past the size checked.
- Add doc-string for unclamped string creation.