Since the code was added, it used the active indices of the input and
output custom data layers incorrectly. 82b88f130a exposed that by
actually modifying the active indices of the output custom data
correctly, but it didn't update a couple of other places to take that
into account.
Large int64_t values were calculated and assigned, however the
calculation itself was performed on int types, which would truncate the
result before it was cast to int64_t.
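For illustration, a minimal standalone example of the pattern (not the
actual Blender code): the arithmetic is evaluated entirely in int, so the
overflow happens before the widening assignment.

```cpp
#include <cstdint>
#include <iostream>

static int64_t area_wrong(int width, int height)
{
  /* Bug pattern: the multiplication is evaluated on int, so it overflows
   * before the result is widened to int64_t. */
  return width * height;
}

static int64_t area_right(int width, int height)
{
  /* Fix: widen one operand first so the whole expression is evaluated
   * as int64_t. */
  return int64_t(width) * height;
}

int main()
{
  std::cout << area_wrong(100000, 100000) << " vs "
            << area_right(100000, 100000) << "\n";
}
```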
Generally sculpt uses a combination of data from the original,
deformed, and final evaluated meshes. Keeping track of all that
is confusing and using a more specific variable name helps a bit.
The issue was introduced by b35831ad6c.
Since that commit `tree->boundary` is always non-null, even when the
target mesh has no boundaries. Some code was still relying on the
assumption that `tree->boundary != nullptr` means the mesh has
boundaries.
Update the shrinkwrap code accordingly, avoiding access past array
boundaries.
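A minimal sketch of the kind of adjustment this implies (hypothetical
names, not the actual shrinkwrap data structures): once the pointer is
always allocated, emptiness has to be checked explicitly instead of
null-ness.

```cpp
#include <memory>
#include <vector>

/* Hypothetical stand-ins for the BVH tree and its boundary data. */
struct BoundaryData {
  std::vector<int> boundary_verts;
  bool is_empty() const { return boundary_verts.empty(); }
};

struct Tree {
  /* Since b35831ad6c this is always allocated, even for meshes without
   * boundaries. */
  std::unique_ptr<BoundaryData> boundary;
};

static bool mesh_has_boundary(const Tree &tree)
{
  /* Old check: `tree.boundary != nullptr`. With the pointer always
   * non-null, the emptiness of the data itself must be tested to avoid
   * indexing into empty arrays further down. */
  return tree.boundary && !tree.boundary->is_empty();
}
```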
Pull Request: https://projects.blender.org/blender/blender/pulls/120054
Also access the evaluated deform mesh with a function rather than
directly from object runtime data. The goal is to make it easier to use
implicit sharing for these meshes and to improve overall const
correctness.
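A rough sketch of the direction, with hypothetical simplified types (the
real accessor and runtime fields differ): routing access through a
function keeps const-ness at call sites and gives a single place to hook
implicit sharing later.

```cpp
/* Hypothetical stand-ins; the real types are Object and Mesh in BKE. */
struct Mesh {};

struct ObjectRuntime {
  Mesh *mesh_deform_eval = nullptr;
};

struct Object {
  ObjectRuntime runtime;
};

/* Instead of reaching into `object.runtime` directly, callers go through
 * an accessor returning a const pointer. The implementation can later be
 * changed (e.g. to use implicit sharing) without touching callers. */
static const Mesh *object_get_evaluated_deform_mesh(const Object &object)
{
  return object.runtime.mesh_deform_eval;
}
```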
`SubdivCCG` is null for the evaluated mesh in the render depsgraph
because of the `!for_render` check in `MOD_multires.cc`. But the PBVH
type is still `PBVH_GRIDS`. That's a weird inconsistency that ideally
wouldn't happen, but probably isn't simple to change. The simplest and
most obviously harmless fix is to just check whether the list of PBVH
nodes to update is empty.
The issue was that the `PointerRNA` passed to `BKE_animsys_get_nla_keyframing_context`
needs to point to an `ID` which wasn't the case when keying bones.
That is because internally the `FCurve` path is used to resolve the property.
This can only work from the `ID` because the `FCurve` path is always stored relative to that.
While the function doesn't fail when the path can't be resolved, it
won't actually do the remapping when the context is passed to
`BKE_animsys_nla_remap_keyframe_values` later.
Pull Request: https://projects.blender.org/blender/blender/pulls/120008
Handling animation of GPv3 in itself is relatively straightforward;
it's mainly a matter of duplicating animdata into the new GreasePencil
ID. In case some properties need to be remapped, this is done in a
similar way as e.g. the GP object's modifier animation for Object-level
animation.
The complex and ugly part of this PR is the need to move animation
from GPdata to Object level for some properties. This PR tackles the
'layer adjustments to modifiers' aspect (i.e. adjustments on tint and
thickness).
Known limitations of this GPData-to-Object animation conversion:
* NLA is not supported (i.e. if an NLA in legacy GP data controls these
adjustment animations, it won't be converted to Object-level NLA
controlling the matching modifier settings).
* Driver targets are not handled either, i.e. in case a driver uses
data from the legacy GPdata as input, it will be left as-is (this is
true for all anim handling currently).
* There is no adjustment of animated values (e.g. the thickness
adjustment values would need to be divided by 2000).
Most of these limitations can be addressed at some point, depending on
how critical they are to support. This would have a cost (in time and
code complexity) though.
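As a very rough, hypothetical sketch of what moving such animation
implies (none of these names are the actual conversion code): the
F-Curves that animated the GPdata-level adjustment properties have to be
re-pathed so they target the corresponding modifier property on the
Object, and moved into the Object-level animation.

```cpp
#include <string>
#include <vector>

/* Hypothetical minimal stand-in for an F-Curve: just the RNA path. */
struct FCurveSketch {
  std::string rna_path;
};

/* Move curves whose path starts with `old_prefix` from the GPdata-level
 * list to the Object-level list, rewriting the path so it points at the
 * matching modifier property (e.g. a Tint or Thickness modifier). */
static void move_adjustment_curves(std::vector<FCurveSketch> &gpdata_curves,
                                   std::vector<FCurveSketch> &object_curves,
                                   const std::string &old_prefix,
                                   const std::string &new_prefix)
{
  for (auto it = gpdata_curves.begin(); it != gpdata_curves.end();) {
    if (it->rna_path.rfind(old_prefix, 0) == 0) {
      FCurveSketch moved = *it;
      moved.rna_path = new_prefix + moved.rna_path.substr(old_prefix.size());
      object_curves.push_back(moved);
      it = gpdata_curves.erase(it);
    }
    else {
      ++it;
    }
  }
}

/* Hypothetical usage: a layer tint adjustment animated on GPdata becomes
 * animation of a "Tint" modifier property on the Object:
 *   move_adjustment_curves(gp_curves, ob_curves,
 *                          "layers[\"Lines\"].tint_factor",
 *                          "modifiers[\"Tint\"].factor");
 */
```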
Pull Request: https://projects.blender.org/blender/blender/pulls/119214
Caused by 6a79a6a24a
`Mask` `AnimData` is read by `BKE_animdata_blend_read_data` since the
above commit, so `BLO_read_data_address` on `AnimData` is already done
there. However, `mask_blend_read_data` does it again.
Similar calls to `BLO_read_data_address` in `blend_read_data` callbacks
were usually removed in 6a79a6a24a, Masks being an exception.
According to @mont29, the reason why a double remapping fails is that
there can only be one remapping from old addresses to new ones. Once
the new address has been read, this new address cannot be used as a
'key' again (it will likely map to nothing, or worse, remap to
something completely unrelated!).
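A tiny self-contained illustration of why reading the same address twice
fails (a generic sketch, not the actual BLO code): the old-to-new map is
keyed by the old file addresses, so after the first lookup the returned
(new) pointer is simply not a key anymore.

```cpp
#include <cstdio>
#include <unordered_map>

int main()
{
  int new_animdata = 42;

  /* Map from old addresses stored in the .blend file to the newly
   * allocated in-memory data, as the read code conceptually builds it. */
  std::unordered_map<const void *, void *> old_to_new;
  const void *old_address = reinterpret_cast<const void *>(0x1234);
  old_to_new[old_address] = &new_animdata;

  /* First remap (done by BKE_animdata_blend_read_data): works. */
  void *first = old_to_new.count(old_address) ? old_to_new[old_address] : nullptr;

  /* Second remap (the duplicated call in mask_blend_read_data): the key is
   * now the *new* address, which is not in the map, so the lookup fails
   * (or, with unlucky address reuse, maps to something unrelated). */
  void *second = old_to_new.count(first) ? old_to_new[first] : nullptr;

  std::printf("first=%p second=%p\n", first, second);
}
```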
Pull Request: https://projects.blender.org/blender/blender/pulls/119961
Remove all BLF "_ex" versions of functions by using default arguments.
These functions only differ by having an optional argument that can
return extra details about the result of the operation. This PR just
makes these part of the main functions as optional arguments with
default values - all nullptr.
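Schematically (hypothetical names and signatures, not the exact BLF API),
the change collapses pairs like this:

```cpp
/* Hypothetical simplified result struct. */
struct ResultBLF {
  int lines = 0;
  float width = 0.0f;
};

/* Before: two variants, with the "_ex" one returning extra details:
 *   float blf_width(int fontid, const char *str);
 *   float blf_width_ex(int fontid, const char *str, ResultBLF *r_info);
 */

/* After: a single function with an optional output argument defaulting to
 * nullptr, so callers that don't need the details stay unchanged. */
float blf_width(int fontid, const char *str, ResultBLF *r_info = nullptr);
```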
Pull Request: https://projects.blender.org/blender/blender/pulls/119994
This is mainly to make the computation threadsafe, to allow computing
the cache on a const mesh, and also to decrease the cost of copying
meshes. Computing caches on const meshes generally makes it easier
to avoid copying meshes unnecessarily in other ways, which would be
useful for some pending fixes and cleanups to modifier evaluation.
- Use C++ Array type
- Move to blender::bke::shrinkwrap namespace
- Use edge_is_boundary instead of edge_mode in a few places
- Avoid writing to edge_mode unnecessarily
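A generic sketch of the thread-safe lazy-cache pattern this enables
(plain standard C++, not Blender's actual cache types; the names and the
`edge_is_boundary` data are hypothetical simplifications): the cache is
mutable and its computation is guarded, so concurrent callers working on
a const mesh compute it exactly once.

```cpp
#include <mutex>
#include <optional>
#include <utility>
#include <vector>

/* Hypothetical cached data, e.g. boundary flags per edge. */
struct BoundaryCache {
  std::vector<bool> edge_is_boundary;
};

struct MeshSketch {
  int edges_num = 0;

  /* Mutable so the cache can be filled in from a const method. */
  mutable std::mutex cache_mutex;
  mutable std::optional<BoundaryCache> boundary_cache;

  const BoundaryCache &ensure_boundary_cache() const
  {
    std::lock_guard<std::mutex> lock(cache_mutex);
    if (!boundary_cache) {
      BoundaryCache cache;
      cache.edge_is_boundary.resize(edges_num, false);
      /* ...actual computation would go here... */
      boundary_cache = std::move(cache);
    }
    return *boundary_cache;
  }
};
```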
Move the public functions from the editors/object (`ED_object.hh`)
header to the `blender::ed::object` namespace, and move all of the
implementation files to the namespace too. This provides better code
completion, makes it easier to use other C++ code, removes unnecessary
redundancy and verbosity from local uses of public functions, and more
cleanly separates different modules.
See the diff in `ED_object.hh` for the main renaming changes.
Pull Request: https://projects.blender.org/blender/blender/pulls/119947
Store the RNG in per-thread data, instead of on the effector itself,
which may be used by multiple objects evaluated in different threads.
This has been causing the blendfile_versioning test to fail randomly.
Thanks Ray and Aras for helping track this down.
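A minimal sketch of the idea in standard C++ (the actual code uses
Blender's RNG and effector structures, not these hypothetical ones): the
generator lives in data owned by each evaluation thread rather than in
the shared effector.

```cpp
#include <cstdint>
#include <random>
#include <thread>
#include <vector>

/* Problematic layout: one RNG shared by every thread that evaluates an
 * object using this effector; concurrent use is a data race. */
struct EffectorShared {
  std::mt19937 rng;
};

/* Fixed layout: each thread gets its own per-thread data with its own RNG. */
struct EffectorThreadData {
  std::mt19937 rng;
};

int main()
{
  const int threads_num = 4;
  std::vector<EffectorThreadData> per_thread(threads_num);
  for (int i = 0; i < threads_num; i++) {
    per_thread[i].rng.seed(uint32_t(i) + 1);
  }

  std::vector<std::thread> threads;
  for (int i = 0; i < threads_num; i++) {
    threads.emplace_back([&per_thread, i]() {
      /* Each thread only touches its own generator: no race, and the
       * result does not depend on scheduling. */
      uint32_t value = uint32_t(per_thread[i].rng());
      (void)value;
    });
  }
  for (std::thread &t : threads) {
    t.join();
  }
}
```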
Pull Request: https://projects.blender.org/blender/blender/pulls/119967
Previously retrieving a collection from the context like "selected_ids"
would give a linked list of allocated items. Now it returns a vector of
RNA pointers. Though the number of items is typically fairly small,
using contiguous memory and avoiding many small allocations are
typical performance improvements that could still be beneficial
when there are many items. Iteration also becomes much simpler.
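Schematically, with hypothetical stand-in types rather than the real
PointerRNA and context API, iteration goes from walking an allocated
linked list to a plain range-for over contiguous storage:

```cpp
#include <vector>

/* Hypothetical stand-in; the real type is PointerRNA. */
struct PointerRNASketch {
  void *data = nullptr;
};

/* Old shape: a heap-allocated linked list of per-item links that the
 * caller had to walk and then free. */
struct CollectionPointerLinkSketch {
  CollectionPointerLinkSketch *next = nullptr;
  PointerRNASketch ptr;
};

static int count_items_old(const CollectionPointerLinkSketch *first)
{
  int count = 0;
  for (const CollectionPointerLinkSketch *link = first; link; link = link->next) {
    count++;
  }
  /* ...followed by freeing every link. */
  return count;
}

/* New shape: one contiguous vector, a plain range-for, no manual cleanup. */
static int count_items_new(const std::vector<PointerRNASketch> &items)
{
  int count = 0;
  for (const PointerRNASketch &ptr : items) {
    (void)ptr;
    count++;
  }
  return count;
}
```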
Pull Request: https://projects.blender.org/blender/blender/pulls/119939
There are still a few more complicated places where the replacement
for `IDP_New` isn't obvious, but this commit replaces most of the ugly
`IDPropertyTemplate` usage.
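For context, the verbose pattern being replaced looks roughly like this
(a schematic only, not compilable outside the Blender tree; the exact
replacement call depends on the property type, which is what makes some
remaining cases non-obvious):

```cpp
/* Old, template-based pattern:
 *   IDPropertyTemplate val = {0};
 *   val.i = 42;
 *   IDProperty *prop = IDP_New(IDP_INT, &val, "resolution");
 *
 * Direction of the replacement: create the property directly from the
 * name and value, e.g. via the C++ helpers in BKE_idprop.hh (assumed
 * here), so the intermediate template struct disappears:
 *   auto prop = blender::bke::idprop::create("resolution", 42);
 */
```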
This patch refactors the backdrop offset to be stored as a float instead
of an int and to be stored in the image runtime structure instead of the
image itself.
Pull Request: https://projects.blender.org/blender/blender/pulls/119877