Fix a crash when inserting a key with tweak mode enabled, but where
`AnimData::actstrip` was NULL.
The root cause of this is that two pointers in the `AnimData` struct
(`act_track` and `actstrip`) are expected to be set when NLA tweak mode
is enabled, BUT these are not exposed to RNA and thus invisible to the
library overrides system. As such, they are NULL when loading from disk,
while the `ADT_NLA_EDIT_ON` flag still indicates they are to be used.
Rather than adding a NULL pointer check (and having to add that in many
more places), I used this two-pronged approach:
- Extend the 'NLA tweakmode' override apply code, to set the `act_track`
and `actstrip` pointers when they are incorrectly NULL. This is done
by looking up the track and strip by name.
- Add versioning code to exit out of tweak mode whenever the
`ADT_NLA_EDIT_ON` flag is set, but those two pointers are still NULL.
The last step was necessary with the example file attached to the bug
report, as that was saved with a buggy Blender version. New saves work
just fine.
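For illustration, a minimal sketch of the shape of that versioning check (the `AnimData` field and flag names are real; the surrounding logic is simplified and the real code likely does more, e.g. restoring the active action):

```cpp
/* Simplified sketch; the real versioning likely does more when leaving
 * tweak mode (e.g. restoring the active action). */
if ((adt->flag & ADT_NLA_EDIT_ON) &&
    (adt->act_track == nullptr || adt->actstrip == nullptr))
{
  /* The flag says tweak mode is on, but the pointers it relies on were
   * lost (e.g. dropped by library overrides), so force-exit tweak mode. */
  adt->flag &= ~ADT_NLA_EDIT_ON;
}
```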
Pull Request: https://projects.blender.org/blender/blender/pulls/119632
Mistake in a96f1208cc.
When markers are present, the marker margin should be added to the height.
Earlier it was subtracted (`v2d->tot.ymin -= MarkerMargin`), which worked
because `ymin` is assigned as `v2d->tot.ymin = -height`.
Now that the marker margin is added to the height, it needs to be additive.
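Schematically (a sketch using the names from this message, not the literal code):

```cpp
/* Before a96f1208cc: the margin was subtracted from ymin. */
v2d->tot.ymin = -height;
v2d->tot.ymin -= MarkerMargin; /* net result: -(height + MarkerMargin) */

/* Now: the margin is folded into the height, so it must be added. */
height += MarkerMargin;
v2d->tot.ymin = -height; /* same net result */
```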
Pull Request: https://projects.blender.org/blender/blender/pulls/119647
Bring back the `INSERTKEY_XYZ_TO_RGB` enum item for the
`keyframe_insert()` function (it was removed in 30b0c5b225). This way
any Python code that targets Blender 4.x can safely pass this flag,
without having to check specific Blender versions.
Note that the flag is implemented as a no-op, as the behaviour change
introduced in 30b0c5b225 (just looking at the user preference) is still
retained. The purpose of this commit is simply to avoid the `ValueError`
exception that would otherwise be raised.
This should also fix Rigify report blender/blender-addons#105241.
Pull Request: https://projects.blender.org/blender/blender/pulls/119625
If the bevel "Harden Normals" option is on, custom normals will be
generated. In that case, the automatic sharp edge tagging based on the
angle shouldn't run. This PR extends the earlier fix for #116395 to
handle this case, and also extends the check beyond just the last
modifier; checking only the last modifier doesn't work in this test file,
which has a collision modifier at the end. That makes sense anyway, since
what we really care about is whether the evaluated mesh has custom
normals or not.
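A sketch of what that check could look like (treat the exact field and function names as my assumption of the current API):

```cpp
/* If the evaluated mesh already carries custom normals (e.g. generated by
 * bevel's "Harden Normals"), skip the sharp-edge tagging by angle. */
if (CustomData_has_layer(&mesh_eval->corner_data, CD_CUSTOMLOOPNORMAL)) {
  return;
}
```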
Pull Request: https://projects.blender.org/blender/blender/pulls/119638
This PR fixes several issues with the versioning that replaces the old
auto smooth flag with a modifier.
One issue is that the flag wasn't cleared in the initial versioning
code. That means some objects have the replacement modifier but their
meshes still have the flag set. The fix for that is to make the
versioning idempotent by trying to find an existing node group before
adding a new one. The versioning is now re-run on all objects to clear
the flags. Flags on all meshes are cleared too, even unused meshes.
That could cause loss of the auto-smooth when the mesh is linked from
a different blend-file, but that situation should be very rare.
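A hedged sketch of the idempotency fix (all helper names below are hypothetical):

```cpp
/* Reuse an existing auto smooth node group instead of always adding one. */
bNodeTree *group = versioning_find_auto_smooth_group(bmain); /* hypothetical */
if (group == nullptr) {
  group = versioning_create_auto_smooth_group(bmain); /* hypothetical */
}
versioning_add_auto_smooth_modifier(object, group); /* hypothetical */
/* Clear the legacy flag so re-running this versioning is a no-op. */
mesh->flag &= ~ME_AUTOSMOOTH_LEGACY; /* flag name is an assumption */
```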
Another issue was that the versioning wasn't run when linking objects.
That was simple to solve by adding the versioning where the proxy
versioning already existed for that case.
Finally, arguably the largest issue was that the newly added node
groups were always added as local data-blocks. When linking, having
library data-blocks point to local data-blocks not in that library is
quite bad and breaks assumptions around Blender. This is solved by
having an auto smooth node group per library.
Resolves #119516, #119455, #119447
Pull Request: https://projects.blender.org/blender/blender/pulls/119539
Since 660867fa00, having tree-view items uncollapsed by default using
the `set_collapsed()` functions wouldn't work anymore. An attempt to do
this would even assert, so eb71d9f7bc disabled the assert.
I think a function designed to handle exactly this is the best solution;
it makes the intent & behavior clearer than before, and highlights
that this is a special case.
Mitigates #117957, in that it solves the regression, but tree-views still
don't remember their state on screen-layout changes. This is a known
limitation and not supported.
Pull Request: https://projects.blender.org/blender/blender/pulls/119166
Resolves custom attribute types for ints and booleans by ensuring the
conversion mode is correct. Previously, the attribute declarations
were assumed to be linear. This patch ensures the correct
attribute index is now fetched, so that the conversion mode
is correctly specified for non-linear attribute IDs.
Authored by Apple: Michael Parkin-White
Pull Request: https://projects.blender.org/blender/blender/pulls/119569
If a bone was specified but is now missing, the driver would fall back
to using the object as a target, which can lead to unintended behavior.
So now check whether a bone is specified: if it is missing, mark the
driver invalid. If no bone is specified at all, then use the armature
object as the target.
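In sketch form (the DNA names are real, the control flow is simplified):

```cpp
if (dtar->pchan_name[0] != '\0') {
  const bPoseChannel *pchan = BKE_pose_channel_find_name(ob->pose,
                                                         dtar->pchan_name);
  if (pchan == nullptr) {
    /* A bone was requested but is missing: mark the driver invalid
     * instead of silently falling back to the object. */
    dtar->flag |= DTAR_FLAG_INVALID;
    return;
  }
  /* ...use the bone as the target... */
}
else {
  /* No bone specified at all: use the armature object itself. */
}
```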
NOTE: `DTAR_FLAG_INVALID` is not granular enough to distinguish the
object and bone targets, so both will be marked in red in the UI (there
are already comments about this in the code). If necessary, we could
introduce an additional `DTAR_FLAG_BONE_INVALID` and use that in a couple
of places.
Pull Request: https://projects.blender.org/blender/blender/pulls/119533
This makes it so GPv3 objects are rendered using the current grease pencil render engine.
A new `gpencil_next` engine was added at the beginning of the project, but it couldn't
be finished in time. This commit removes the `gpencil_next` engine as it is no longer
used. The current status of the new engine has been pushed to the
`gpencil-next` branch on the `blender` repository.
Note: Onion skinning is not supported yet. This work will be done in a separate PR.
Fixes #115467 and #116347.
Pull Request: https://projects.blender.org/blender/blender/pulls/118664
The `IndexMask` data structure was designed to allow us to implement set
operations like `union`, `intersection` and `difference` efficiently
(2cfcb8b0b8). This patch adds an evaluator for
arbitrary expressions involving the mentioned operations. The evaluator makes
use of the design of the `IndexMask` data structure to be quite efficient.
In some common cases, the evaluator runs in constant time, so it's very fast
even if the mask contains many millions of indices. If possible, the evaluator
works on entire segments at once instead of looking at the individual indices.
This results in a very low constant factor even if the evaluation time is
linear. If the evaluator has to look at the individual indices to be able to
perform the operation, it can make use of multi-threading.
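As a rough usage sketch (the `ExprBuilder`/`evaluate_expression` names below are my assumption of the interface, not confirmed by this message):

```cpp
/* Sketch: evaluate (selected ∩ visible) \ locked in one expression. */
using namespace blender::index_mask;

IndexMask visible_unlocked_selection(const IndexMask &selected,
                                     const IndexMask &visible,
                                     const IndexMask &locked,
                                     IndexMaskMemory &memory)
{
  /* Full-range segments can be combined in constant time; only mixed
   * segments need per-index work. */
  ExprBuilder builder;
  const Expr &expr = builder.subtract(&builder.intersect({&selected, &visible}),
                                      {&locked});
  return evaluate_expression(expr, memory);
}
```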
The evaluation consists of the following steps:
1. A coarse evaluation that looks at entire segments at once.
2. All segments that couldn't be fully evaluated by the coarse evaluation are
evaluated exactly by looking at the actual indices. There are two evaluators
for this case: one is based on `std::set_union` etc.; the other first
converts the index masks to bit spans, does bit operations to evaluate
the expression, and then converts the bits back into indices (a generic
illustration follows after this list). Depending on the expression, one or
the other can be more efficient.
3. Construct an index mask from the evaluated segments.
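Below is the generic illustration referenced in step 2 (plain standalone C++, not the Blender implementation):

```cpp
#include <cstdint>
#include <vector>

/* Indices -> bitmap, bitwise OR, bitmap -> indices (union of two masks). */
std::vector<int> union_via_bits(const std::vector<int> &a,
                                const std::vector<int> &b,
                                const int universe_size)
{
  std::vector<uint64_t> bits((universe_size + 63) / 64, 0);
  for (const int i : a) {
    bits[i / 64] |= uint64_t(1) << (i % 64);
  }
  for (const int i : b) {
    bits[i / 64] |= uint64_t(1) << (i % 64);
  }
  std::vector<int> result;
  for (int i = 0; i < universe_size; i++) {
    if ((bits[i / 64] >> (i % 64)) & 1) {
      result.push_back(i);
    }
  }
  return result;
}
```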
Showing the performance of the evaluator is difficult because it highly
depends on the input data. Comparing the performance to something that does not
short-circuit when there are full ranges is meaningless, because one can
construct an example where the new evaluator is arbitrarily faster. I'm still
working on a case where performance can be compared to e.g. using
`std::set_union`. This comparison is only fair when the input data is
constructed such that the new evaluator can't short-circuit.
One of the main remaining bottlenecks is the calls to `slice_content` on large
index masks. I think the impact of those can still be reduced.
We are not using this evaluator much yet, except through `IndexMask::complement`
calls. I intend to use it when I get to refactoring the field evaluator for
geometry nodes to optimize the evaluation of selections.
Pull Request: https://projects.blender.org/blender/blender/pulls/117805
Resolves render pass export for EEVEE Next on Metal.
Reads from texture views were previously utilising the
root texture rather than the view variant, resulting
in views into texture arrays being incorrectly sampled.
Authored by Apple: Michael Parkin-White
Pull Request: https://projects.blender.org/blender/blender/pulls/119563
The buttons to sync and update individual repositories only work for
remote repositories. This PR just updates their poll functions to
disable them for local repos.
Pull Request: https://projects.blender.org/blender/blender/pulls/119568
Experimental flag for "Extensions Development Utilities" that can be
enabled separately from "Extensions". Note this PR does not enable the
use of this flag (coming later), as its uses are in addons-contrib. This
flag was requested in #119521.
Pull Request: https://projects.blender.org/blender/blender/pulls/119562
This affects all local lights (non-sun lights).
We hide the artifact caused by different tracing results from
two adjacent projections. This is visible as the shading point
switches projections.
We fix this by randomizing which shadow map projection (face)
to trace. We do that by using the point at half the ray instead
of the shading point to choose the projection. This gives
a soft enough look proportional to the light shape.
This also has the benefit of being stupidly simple.
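A generic sketch of the idea (plain C++, not the actual EEVEE shader code; with the light at the origin of light space):

```cpp
struct Vec3 {
  float x, y, z;
};

/* Pick the cubemap face whose axis dominates the given direction. */
static int dominant_face(const Vec3 &v)
{
  const float ax = v.x < 0.0f ? -v.x : v.x;
  const float ay = v.y < 0.0f ? -v.y : v.y;
  const float az = v.z < 0.0f ? -v.z : v.z;
  if (ax >= ay && ax >= az) {
    return v.x > 0.0f ? 0 : 1; /* +X / -X */
  }
  if (ay >= az) {
    return v.y > 0.0f ? 2 : 3; /* +Y / -Y */
  }
  return v.z > 0.0f ? 4 : 5; /* +Z / -Z */
}

/* Choose the projection from the midpoint of the ray to the sampled light
 * position. Since the sample varies per ray, points near a face boundary
 * blend between faces, proportionally to the light shape's size. */
int choose_projection(const Vec3 &shading_point, const Vec3 &light_sample)
{
  const Vec3 mid = {(shading_point.x + light_sample.x) * 0.5f,
                    (shading_point.y + light_sample.y) * 0.5f,
                    (shading_point.z + light_sample.z) * 0.5f};
  return dominant_face(mid);
}
```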
Pull Request: https://projects.blender.org/blender/blender/pulls/119555
This reverts commit c6497dd9f7.
This wasn't working the way I expected; it seems `LIB_TAG_NEW` isn't
set here. Better to address the fundamental issues anyway. Sorry for
the noise.
For small confirmations that appear at mouse position, change the
position slightly so that the mouse pointer is less likely to obscure
the confirm button text.
Pull Request: https://projects.blender.org/blender/blender/pulls/119536
When appending objects from an older file, versioning needs to run to add
an auto smooth modifier if necessary. However, this was running for all
objects rather than just the newly appended objects. It's quite wrong to
modify existing objects here, so add an explicit check for that. This could
improve performance as well, by skipping the checks for objects where
the work is unnecessary.
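A sketch of that explicit check (assuming newly appended objects carry `LIB_TAG_NEW` at this point; the versioning helper is hypothetical):

```cpp
LISTBASE_FOREACH (Object *, ob, &bmain->objects) {
  if ((ob->id.tag & LIB_TAG_NEW) == 0) {
    continue; /* Pre-existing object: versioning must not modify it. */
  }
  version_object_auto_smooth_modifier(ob); /* hypothetical helper */
}
```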
The crash will hopefully be resolved by other improvements to the auto
smooth versioning (making it properly idempotent and running it again
should fix the problem with the hidden legacy flags still set). But I still think
this PR is worth committing, to be very explicit about only modifying new
objects in versioning code.
Pull Request: https://projects.blender.org/blender/blender/pulls/119467
Previously I misunderstood the subsurf modifier's handling of custom
normals. The "use custom normals" check in 4.0 tested whether there were
custom normals and whether the auto smooth flag was enabled. I wrongly
changed that to check the mesh normals domain instead of whether there
were custom normals. In 4.1, auto smooth isn't required to use custom
normals, but that should be the only change here.
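To make the three variants of the check concrete (hypothetical booleans, not the real code):

```cpp
bool subsurf_use_custom_normals(const bool has_custom_normals,
                                const bool use_auto_smooth,
                                const bool corner_normals_domain)
{
  /* 4.0 behavior: custom normals were only used with auto smooth enabled. */
  // return has_custom_normals && use_auto_smooth;
  /* Mistaken 4.1 change: tested the normals domain instead. */
  // return corner_normals_domain;
  /* Fixed 4.1 behavior: presence of custom normals alone decides. */
  return has_custom_normals;
}
```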
In this PR, that change is done for CPU and GPU subdivision, and for
the versioning which adds a modifier. The versioning now only puts the
new modifier before the subsurf modifier if it would have used the
custom normals interpolation in 4.0.
The last change causes two test failures which I also misunderstood
before. The previous results were arguably incorrect, because the
Cycles experimental adaptive subdivision ignored the auto smooth
angle, which was 5 degrees. It should have been 180 degrees.
I will modify those test files to remove auto smooth from the meshes.
Pull Request: https://projects.blender.org/blender/blender/pulls/119485
When using mixed resolution rendering with an orthographic projection,
the display was not correct. The reason was that the view boundaries
were decomposed from the winmat correctly, but when re-composing
the matrix, it was assumed to be perspective.
This is fixed by composing an orthographic winmat when it was sourced
from an orthographic winmat.
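A sketch of the fix (the surrounding variables are assumed; `math::projection::*` is Blender's matrix API as I recall it):

```cpp
/* An orthographic projection matrix has winmat[3][3] == 1, which can be
 * used to detect how the matrix was sourced. */
const bool is_ortho = (winmat[3][3] == 1.0f);
const float4x4 new_winmat =
    is_ortho ?
        math::projection::orthographic(left, right, bottom, top, near_clip, far_clip) :
        math::projection::perspective(left, right, bottom, top, near_clip, far_clip);
```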
Fixes #119514
Pull Request: https://projects.blender.org/blender/blender/pulls/119517
On some GPUs/drivers (seemingly nvidia) and screen sizes, VSE
vectorscope and sometimes waveform have "garbage" artifacts around
them.
Root cause unknown (driver bug?); for now, apply a similar fix as in
#112665 a while ago: explicitly draw an opaque background (with alpha=1),
and then use alpha blending for the scopes texture display on top of that.
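In sketch form (`GPU_blend` is the real API; the draw calls are placeholders):

```cpp
GPU_blend(GPU_BLEND_NONE);
/* ...draw an opaque background rectangle with alpha = 1... */
GPU_blend(GPU_BLEND_ALPHA);
/* ...draw the scope texture blended on top... */
GPU_blend(GPU_BLEND_NONE);
```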
Pull Request: https://projects.blender.org/blender/blender/pulls/119512
An alternative fix would be calling `update_on_change_` in the
attribute `try_create` function, but sticking with this more
conservative fix seems better for 4.1.
Pull Request: https://projects.blender.org/blender/blender/pulls/119515
This is a regression since d579ac2b3f.
Ensure that the image buffers used by render passes have metadata at the
end of render, similarly to how stamping happens. This solves the reported
issue, and makes the metadata behave consistently, independent of how the
image buffer is accessed.
Thanks Philipp Oeser for the investigation and pinpointing the bad commit!
Pull Request: https://projects.blender.org/blender/blender/pulls/119503
When a drawing is visible in the 3D Viewport and the playhead is not over
a keyframe, the erase tool erases everything (or rather, it adds an empty
keyframe). Erasing works correctly if additive drawing is enabled, but it
is also expected to work when additive drawing is disabled.
Pull Request: https://projects.blender.org/blender/blender/pulls/119051
With the recent changes introduced by mixed resolution rendering,
overscan and border rendering broke. Overscan is also used by the test
cases, which currently fail.
This is a quick fix to get the overscan working.
NOTE: overscan + mixed resolution rendering will look blurry.
NOTE: this doesn't fix border rendering.
As part of PR #118396 the "generator" effect strips (color, text,
adjustment) also started to put their "raw" images into the cache.
However, `seq_cache_timeline_frame_to_frame_index` has an optimization
that tries to use only one cache entry based on how many frames are
present in the strip, and it was never expecting to see effect strips.
As a result, something like an adjustment layer would effectively just
cache a single frame, since it has no "source frames" to speak of.
Fixed that by only doing this optimization for non-effect strips.
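A sketch of the guard (the effect-strip test is written as a hypothetical helper):

```cpp
if (!is_effect_strip(seq)) { /* hypothetical helper */
  /* Non-effect strip: safe to collapse cache entries based on the number
   * of source frames, as before. */
}
else {
  /* Effect strips have no source frames: skip the optimization so each
   * timeline frame gets its own cache entry. */
}
```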
Similar to the recently introduced `Solver` enum. This is just friendlier
and doesn't require including `DNA_node_types.h` in the geometry
module header. There's no strong benefit to declaring these enums in
DNA in practice.