`BKE_nlastrip_find_active()` and `update_active_strip()` now do a more
thorough search for the active strip, by also recursing into
meta-strips.
There were some assumptions in the NLA code that the active strip is
contained in the active track. This assumption turned out to be false:
when there is a meta-strip, the active strip can be inside it, and
then it's not contained in `active_track.strips` directly.
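A minimal sketch of the recursive lookup, assuming Blender's `NlaStrip` layout (meta-strips store their children in a nested `strips` listbase; the helper name itself is illustrative):

```c
/* Sketch only: find the active strip, recursing into meta-strips. */
static NlaStrip *find_active_strip_recursive(ListBase *strips)
{
  LISTBASE_FOREACH (NlaStrip *, strip, strips) {
    if (strip->flag & NLASTRIP_FLAG_ACTIVE) {
      return strip;
    }
    if (strip->type == NLASTRIP_TYPE_META) {
      /* Meta-strips contain a nested list of strips. */
      NlaStrip *active = find_active_strip_recursive(&strip->strips);
      if (active != NULL) {
        return active;
      }
    }
  }
  return NULL;
}
```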
Apart from the above, there are other situations in which the track
pointed to by `AnimData::act_track` does *not* contain the active strip
(even indirectly).
Both these cases can happen when the transform system is moving a strip
that's in tweak mode. Entering tweak mode doesn't just search for the
active track/strip, but also falls back to the last-selected ones. This
means that the track/strip pointers & flags can be out of sync with
what's being tweaked/transformed. Because of this, the assumption that
there is any relation between "active strip" and "active track" has been
removed from the `update_active_strip()` function.
All this searching could be prevented by passing the `AnimData` to the
code that duplicates tracks & strips. That way, when the active
track/strip is duplicated, the `AnimData` pointers can be updated as well.
That would require more refactoring and thus could introduce other bugs,
so that's left for later.
Also fixed an unreported issue of incorrect interpolation of thickness.
Reviewed By: Aleš Jelovčan (frogstomp), Antonio Vazquez (antoniov)
Differential Revision: https://developer.blender.org/D15005
The final normalization step of sculpt normal calculation iterates over
all unique vertices in each node and marks them as done. However,
storing the done mask in a bitmap meant that multiple threads could
write to a single byte at the same time, because the bits for separate
threads could be stored in the same byte. This is not thread-safe.
Fixing this issue seems to improve performance as well. First I tested
just clearing the entire bitmap after the normal calculation. Then I
tested using an array of booleans instead, which turned out to be
slightly better and simplifies the code a bit.
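For illustration, the difference between the two layouts (the function names here are mine, not the actual sculpt code):

```c
#include <stdbool.h>

/* Racy: setting a bit is a read-modify-write of the containing byte, so two
 * threads marking different vertices can still touch the same byte. */
static void mark_done_bitmap(unsigned char *bitmap, int vert)
{
  bitmap[vert >> 3] |= (unsigned char)(1 << (vert & 7));
}

/* Safe for disjoint vertex sets: every vertex owns a whole byte, so threads
 * never write to the same memory location. */
static void mark_done_bools(bool *done, int vert)
{
  done[vert] = true;
}
```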
I tested on a Ryzen 3800x, on an 8 million polygon subdivided
Suzanne by using the grab brush with a radius large enough to
affect most of the mesh.
|         | Original | Clear Entire Bitmap | Boolean Array |
| ------- | -------- | ------------------- | ------------- |
| Time    | 67.9 ms  | 59.1 ms             | 57.9 ms       |
| Speedup | 1.0x     | 1.15x               | 1.17x         |
Differential Revision: https://developer.blender.org/D14985
Existing code for the `Move` operator, and some `Collections` panel
operations (Object properties) was absolutely not override-safe, and
sometimes not even linked-data safe.
That max of `10000` levels of recursion was a typo (it should have
been `1000`), but even that is way too high: a typical sane situation
should not lead to more than a few tens of levels, so reduce the max
level to 200.
Also improve the error message with more context info about the issue.
Found while investigating issues for the Blender Studio's Heist production.
In some cases (when there is an evaluated curve), the conversion code
would try to free the evaluated data-block twice, because freeing the
object would free it from `data_eval` and then the data-block was freed
again explicitly. Now check if the data-block is stored in `data_eval`
before freeing `object.data` manually. This is another area that's made
more complex by the fact that we change the meaning of `object.data`
for evaluated objects. The solution is more complicated than it should
be, but it works whether or not an evaluated mesh or curve exists.
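Roughly, the guard looks like this (a sketch with simplified names, assuming `data_eval` lives in the object's runtime struct):

```c
/* Sketch: if object->data is the evaluated data-block, freeing the object
 * already frees it via data_eval; freeing it here too would double-free. */
static void free_object_data_once(Object *object)
{
  if (object->data != object->runtime.data_eval) {
    BKE_id_free(NULL, object->data);
  }
  object->data = NULL;
}
```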
Sometimes integers are mixed using float weights. In those cases
the mixed float result has to be converted back to an integer.
Previously, this was done using a simple cast, which was unexpected
because e.g. 14.999 would be cast to 14 instead of 15.
Now, the values are rounded properly.
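A minimal illustration of the change (values only, not the actual mixing code):

```c
#include <math.h>

static int mix_to_int_old(float value)
{
  return (int)value; /* truncates: 14.999f -> 14 */
}

static int mix_to_int_new(float value)
{
  return (int)roundf(value); /* rounds: 14.999f -> 15 */
}
```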
Unfortunately, this can affect existing files without a good option
for versioning. Fortunately, very few files seem to depend on the
details of the old behavior.
Differential Revision: https://developer.blender.org/D14892
This makes changes to the opensubdiv module to support additional vertex data
besides the vertex position, which is smoothly interpolated the same way. This
is different from varying data, which is interpolated linearly.
Fixes T96596: wrong generated texture coordinates with GPU subdivision. In that
bug, lazy subdivision would not interpolate orcos.
Later on, this implementation can also be used to remove the modifier stack
mechanism where modifiers are evaluated a second time for orcos, which is messy
and inefficient. But that's a riskier change; this is just the part needed to
fix the bug in 3.2.
Differential Revision: https://developer.blender.org/D14973
Previously, the depsgraph assumed that every node tree might contain
a reference to a video. This resulted in noticeable overhead when there
was no video.
Checking whether a node tree contained a video was relatively expensive
to do in the depsgraph. It is cheaper now due to the structure of the new
node tree updater.
This also adds an additional run-time field to `bNodeTree` (there are
quite a few already). We should move those to a separate run-time
struct, but not as part of a bug fix.
Differential Revision: https://developer.blender.org/D14957
When making a liboverride of an empty collection, this collection (the
root of the override hierarchy) would get untagged by the code checking
for collections that do not need to be overridden, leading to the root
collection not being overridden, and later to a NULL pointer being used.
The problem is that depsgraph evaluation happens before the OpenGL context
is initialized, and so modifier evaluation happens without GPU subdivision.
Later the `BKE_subsurf_modifier_can_do_gpu_subdiv` test in the draw code gives
a different result.
This just checks if the mesh has information for GPU subdivision in the draw
code, and, if so, uses it. This is only set if the test for supported GPU
subdivision passed during modifier evaluation.
Additionally it may be good to perform OpenGL context initialization earlier
so background render can take advantage of GPU subdivision, but this is more
complicated.
Differential Revision: https://developer.blender.org/D14969
Moving `Text.as_string()` from Python to C [0] added an extra new-line,
causing a round-trip from_string/to_string to add a new-line.
This also broke the undo test `test_undo.text_editor_simple` in
`../lib/tests/ui_simulate/run.py`.
[0]: 231eac160e
The 'OVERRIDE_HIDDEN' extra collection would often be mistakenly added
to a linked collection, which is totally forbidden and guaranteed to
crash on undo/redo.
Reworked the code instantiating that extra collection in a more generic
and hopefully more robust way.
We really need to fix how unprojected radius (scene unit) works.
What happened is that the paint code updated the brush's normal radius
with the current unprojected pixel radius, which was then
used by the texture brush's tiled mode.
To fix this, I just cached the pixel radius at stroke start in
`UnifiedPaintSettings->start_pixel_radius`.
This was due to using the `BKE_scene_has_object` function, which uses the
cache of bases of the view layers; those do not have entries for the
content of excluded collections... Now use
`BKE_collection_has_object_recursive` instead.
OpenGL implementations have different maximum numbers of SSBOs per shader
stage. This patch checks that the number of compute-stage SSBO blocks meets
our requirements for GPU Subdiv before enabling it.
Note that some platforms allow more SSBO bindings than blocks per stage.
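A sketch of the distinction, assuming an illustrative `GPU_SUBDIV_REQUIRED_SSBOS` constant; the point is to query per-stage blocks, not bindings:

```c
#include <stdbool.h>
/* OpenGL 4.3+ headers assumed for GLint, the enum and glGetIntegerv(). */

#define GPU_SUBDIV_REQUIRED_SSBOS 12 /* illustrative number */

static bool gpu_subdiv_ssbo_limit_ok(void)
{
  GLint max_compute_blocks = 0;
  glGetIntegerv(GL_MAX_COMPUTE_SHADER_STORAGE_BLOCKS, &max_compute_blocks);
  /* GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS may be larger, but a single stage
   * can only use this many blocks, so this is the limit that matters. */
  return max_compute_blocks >= GPU_SUBDIV_REQUIRED_SSBOS;
}
```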
Code was not really designed to handle corrupted (e.g. local ID) root
hierarchies; now it should handle those invalid cases better and restore
a proper sane situation as best as possible.
Fixes crashes with some corrupted files from Blender studio.
`TEMPERATURE` type was also missing, not only the new-ish
`TIME_ABSOLUTE` one...
Added a static assert on the size of the `bpyunits_ucategories_items`
array, and a comment on the anonymous `B_UNIT_` enum, in the hope this
won't happen again in the future.
Support merging UVs that share the same vertex and are very close when
applying modifiers.
This is needed to prevent UVs from becoming "detached", which can happen when
applying the subdivision surface modifier.
This regression was caused by [0], which removed the selection threshold for
nearby coordinates. While restoring the UV selection threshold could be
done, some selection operations that walk around connected UV fans
wouldn't behave in a deterministic way (such as select shortest path).
There are also other cases where UVs may be compared without a
threshold, such as tangent calculation and exporters, which have their own
logic for handling UVs.
Also resolves T86896, T89903.
[0]: b88dd3b8e7
Reviewed By: sergey
Ref D14841
Similar issue/solution as in rB5188c14718c5 from this Monday actually;
there may be more of those still lurking around... Quite surprising they
all get reported now, as this behavior has been in Blender for years.
Fix regression described in T97799.
Apply layer transform and layer parenting to all visible frames, i.e. active frame + onion skinning frames.
Reviewed By: #grease_pencil, antoniov
Maniphest Tasks: T97799
Differential Revision: https://developer.blender.org/D14829
When the profile only has one control point, its segment count was
determined incorrectly. There needs to be a special case for a single
control point, where there are no segments even if the curve is cyclic.
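A sketch of the corrected logic (an illustrative function, not the exact code):

```c
#include <stdbool.h>

static int profile_segments_num(const int points_num, const bool cyclic)
{
  if (points_num == 1) {
    /* A single control point has no segments, even when cyclic. */
    return 0;
  }
  return cyclic ? points_num : points_num - 1;
}
```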
These kinds of depsgraph evaluations should not be marked as low priority, as
this could negatively affect playback performance. Low priority should mainly
be used for background tasks.
Regression from rB1a7757b0bc69/rBa0acb9bd0cc0. The special handling
(averaging) of weights on merged center vertices also needs to be
'reversed' now that the new, correct merge order is used, compared to the
previous behavior.
This commit implements copying of materials and material indices from
all of the boolean node's input meshes. The materials are added to the
final mesh in the order in which they appear when looking through the
materials of the input meshes, following the order of the multi-socket
input node.
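An illustrative sketch of that ordering (the helper and buffers are hypothetical; `Mesh.mat`/`Mesh.totcol` are the real material fields):

```c
/* Return the index of `ma` in `materials`, or -1 if not present yet. */
static int find_material(Material **materials, int materials_num, Material *ma)
{
  for (int i = 0; i < materials_num; i++) {
    if (materials[i] == ma) {
      return i;
    }
  }
  return -1;
}

/* Walk meshes in multi-socket input order, appending unseen materials to
 * `merged` and recording a per-mesh slot remap. The caller allocates buffers. */
static void merge_materials(Mesh **meshes, int meshes_num,
                            Material **merged, int *merged_num,
                            short **remaps)
{
  for (int mesh_i = 0; mesh_i < meshes_num; mesh_i++) {
    Mesh *mesh = meshes[mesh_i];
    for (int slot = 0; slot < mesh->totcol; slot++) {
      int index = find_material(merged, *merged_num, mesh->mat[slot]);
      if (index == -1) {
        /* First appearance: append in input order. */
        index = (*merged_num)++;
        merged[index] = mesh->mat[slot];
      }
      remaps[mesh_i][slot] = (short)index;
    }
  }
}
```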
All material remapping is done with mesh-level materials. Object-level
materials are not considered, since the meshes don't come from objects.
Merging all materials rather than just the materials on the first mesh
requires a change to the boolean-mesh conversion. This subtly changes
the behavior for object-linked materials, but in a good way I think;
now the material remap arrays are respected no matter the number
of materials on the first mesh input.
Differential Revision: https://developer.blender.org/D14788
Relinking code would weirdly enough allow clearing of the extra/fake user
status on IDs used by the affected ID, which would be utterly wrong.
Fairly unclear why this was working OK in reported case before rBa71a513def20,
could not spot any obvious reason just from reading code...
Also, in `libblock_remap_data_update_tags`, only transfer the fake user
status if `new_id` is not NULL (otherwise that would have removed that
flag from `old_id` without actually transferring it to anything).
This patch allows the user to select the initial fill color when
adding a new color attribute layer.
---
{F13035372}
Reviewed By: JulienKaspar, joeedh
Differential Revision: https://developer.blender.org/D14743
* BLENDER_VERSION_CYCLE set to beta
* Update pipeline_config.yaml to point to 3.2 branches and svn tags
* Update and uncomment BLENDER_VERSION in download.cmake
- Fix `default_value` initialization of custom node tree interface:
This was crashing when adding a custom interface socket to a tree.
The `node_socket_set_typeinfo` function was called too early, creating a
default float socket, which then doesn't match the socket type after
changing to the custom type.
`node_socket_set_typeinfo` only allocates and initializes
`default_value` when it isn't already set. That is because the function is
used either when creating new sockets or to initialize typeinfo after
loading files. So `default_value` has to either be null or already match
the current type (see the sketch after this list).
- Fix RNA flag for the string return value of the `valid_socket_type` callback:
String return values of registerable RNA functions need a
`PROP_THICK_WRAP` flag since they don't have a fixed buffer to write into.
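A minimal sketch of the invariant behind the first fix (names simplified; the allocation helper is illustrative): only allocate when unset, so the value is either created for the final type or was loaded from a file and already matches.

```c
static void socket_set_typeinfo(bNodeSocket *sock, bNodeSocketType *typeinfo)
{
  sock->typeinfo = typeinfo;
  if (sock->default_value == NULL) {
    /* Fresh socket: allocate a value matching the (final) type.
     * Hypothetical allocation helper, for illustration only. */
    node_socket_init_default_value(sock);
  }
  /* Otherwise: the existing value must already match `typeinfo`. */
}
```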
While code deleting old (relocated to new ones) IDs would work fine in
typical cases, it would fail badly in others, when e.g. drivers would
create a 'reversed' dependency from the obdata ID to the object ID.
This commit uses a less efficient, but much safer method. It also
ensures no relocated old IDs are left over in the file (the previous
version could easily leave some old IDs from the old library around until
a full save/reload cycle happened).
While this only had minor potential effects, both pieces of code
incrementing the user count of newly remapped IDs were wrong.
The original one would bypass any 'ensured user' handling; the newer one
would systematically make the ID directly linked...
`id_us_plus_no_lib` is to be used here.
The fact that those options are only used in a specific case, and that the
same behavior is ensured in a different part of the code in other cases,
is fairly confusing and unfortunate... At least it is documented now.
When pushing down an Action onto an NLA track, set the new Strip's
influence to the Action's influence. This is done by setting a key due
to the way the NLA Strip influence works (it's either animated, or
ignored).
Reviewed By: sybren, RiggingDojo
Differential Revision: https://developer.blender.org/D14719
For a single day in 2015 between rBff3d535bc2a6309 and rB945f32e66d6ada,
custom data structs could be written with an incorrect maxlayer field.
This means that custom data structs read from those files would think
they have more space to add new layers than they actually did, causing
a crash if more layers were added. This was found while investigating
a crash from D14365 which adds new face corner layers in versioning.
The fix is to reset all `maxlayer` integers to `totlayer`, which is
done when writing files in current Blender anyway.
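A sketch of the repair when reading affected files (`CustomData.maxlayer` is the allocated-slot count, `totlayer` the used count):

```c
static void customdata_reset_maxlayer(CustomData *data)
{
  /* Never claim more allocated layer slots than were actually read. */
  data->maxlayer = data->totlayer;
}
```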
The file tests/render/motion_blur/camera_zoom_blur_perspective.blend
has this problem as it was added on 2015-07-21, right between the two
commits. Adding three custom data layers in versioning code would crash.
The problem was originally found and investigated by Martijn Versteegh
(@Baardaap), thanks!
Differential Revision: https://developer.blender.org/D14786
When the object's position depends on the geometry and the geometry
depends on the object's position, we can't count on the object's
evaluated geometry to be available. Lattices and mesh objects have
equivalent checks in this vertex parenting function.
Differential Revision: https://developer.blender.org/D14781
When height is limited, it is defined by the space occupied by strips,
but at least channels 1 to 7 will always be visible. This makes it
easy to overview the timeline content by zooming out to the maximum
extent on the Y axis and panning on the X axis.
More channels can be "created" on demand by moving a strip to a higher
channel. When a strip is removed and the highest channel becomes empty,
the view will stay as-is until it is moved down. Then the new highest
point is remembered, and it is not possible to pan upwards until a strip
is moved to a higher channel.
Limiting takes into account the height of the scrubbing and markers
areas, as well as the scrollers. This means that when zoomed out to the
maximum extent, no strips are obstructed by fixed UI elements.
Fixes T57976
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D14263