If a non-instanced collection is linked, any collection exporters on the
linked collection would be active and invokable. This is probably not
desired as it could inadvertently overwrite files from the original.
Disable the operators in this case.
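A minimal sketch of how such a check could look in the operators' poll function (the context access and poll message here are illustrative, not the exact code from the PR):

```cpp
/* Illustrative sketch: refuse to run collection export operators when the
 * collection comes from a linked library, so files configured in the original
 * blend-file are never overwritten. */
static bool collection_export_poll(bContext *C)
{
  const Collection *collection = CTX_data_collection(C);
  if (collection == nullptr) {
    return false;
  }
  /* Linked (non-local) data-blocks must not trigger their exporters. */
  if (ID_IS_LINKED(&collection->id)) {
    CTX_wm_operator_poll_msg_set(C, "Exporters are disabled for linked collections");
    return false;
  }
  return true;
}
```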
See the PR for how each of the various append/link scenarios behaves.
Pull Request: https://projects.blender.org/blender/blender/pulls/123149
This is the root cause of broken updates
on local lights.
The same local frustum was used for all of the
tilemaps (up to 6) of a light. This made the
`intersect(frustum, box)` call buggy: it would
return true only if the object intersected
the first tilemap of the light.
This led to improper updates when an object
hit this path.
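A rough sketch of the corrected intersection logic (the types and helper names are hypothetical, not the actual EEVEE shadow module code):

```cpp
/* Hypothetical sketch: test the box against the frustum of every tilemap of
 * the light instead of reusing one frustum for all of them. */
static bool local_light_intersects_box(const Light &light, const Box &box)
{
  /* A local light has up to 6 tilemaps. */
  for (int i = 0; i < light.tilemap_count; i++) {
    const Frustum frustum = light_tilemap_frustum_get(light, i);
    if (intersect(frustum, box)) {
      return true;
    }
  }
  return false;
}
```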
Fix #122533
This is an alternative fix to #123524.
This is necessary because `sculpt_update_object` is run after
the mesh is evaluated, but before the geometry depsgraph operation
is done. Only after this depsgraph node is done does
`DEG_object_geometry_is_evaluated` return true.
The `unchecked` methods approach has been preferred for now
over moving the `BKE_sculpt_update_object_after_eval` call
to a separate depsgraph node or to after depsgraph evaluation.
Basically this tries to make the API to stop and kill jobs more explicit &
consistent, so intent is expressed clearly & behavior is as expected.
- Remove use of the job start callback address as identifier for the job.
6887dea786 already removed this pattern from the jobs system internals; this
commit also removes it from the API.
- Make the stop & kill API and implementation consistent. E.g. don't stop/kill
jobs by either owner **or** type/callback in one function, and by owner (if
provided) **and** type/callback in another. This causes some small behavior
changes, documented inline.
- Use the same job type and API for all preview render jobs (change by Brecht).
There doesn't seem to be a need for the separate types; in fact, the
separation might have caused some issues earlier (and added code complexity).
- Add/improve function documentation.
This does have some known subtle behavior changes (see the PR), but they were
investigated carefully and seem to implement the wanted behavior.
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/123086
The callback-based identification was introduced before job types were added in
7b60529517. The job type should be a more predictable/sane way to identify jobs
that should be exclusive. Using anything else is confusing and non-obvious from
the API usage side. In fact it really confused me when working on #123027.
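For reference, a simplified usage sketch of type-based identification (the custom-data, timer and callback setup is omitted; the owner and job type here are just examples):

```cpp
/* Simplified sketch: the owner plus the WM_JOB_TYPE_* value identify a job;
 * the start callback address is no longer used for identification. */
wmJob *wm_job = WM_jobs_get(
    wm, win, owner, "Preview Render", WM_JOB_EXCL_RENDER, WM_JOB_TYPE_RENDER_PREVIEW);
/* ... set custom-data, timer and callbacks as usual ... */
WM_jobs_start(wm, wm_job);

/* Later: stop or kill every job of this type belonging to this owner. */
WM_jobs_kill_type(wm, owner, WM_JOB_TYPE_RENDER_PREVIEW);
```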
Checked all existing jobs to make sure behavior is unchanged. Found
two issues:
- `WM_JOB_TYPE_OBJECT_SIM_FLUID` is used for both
`fluid_bake_startjob()` and `fluid_free_startjob()`. It makes sense to
me that they would be exclusive though, so leaving it this way
(meaning they are exclusive now).
- Alembic and USD job types were reused; split them up now so behavior
does not change.
Pull Request: https://projects.blender.org/blender/blender/pulls/123033
When drawing a lot of text with BLF, the glyph texture could be re-written.
In this case the new upload should be done in a separate render
graph node group. This wasn't the case and resulted in
validation warnings about the glyph texture being in a layout
that wasn't expected.
This PR simplifies the group extraction a bit by looking ahead
to where the group ends.
Pull Request: https://projects.blender.org/blender/blender/pulls/123547
Previously, the node checked for all possible missing evaluations first.
However, some of the outputs may still work even if using another one
could cause a dependency cycle.
The issue is a combination of the following aspects:
- Missing null-pointer check in the image operation, which is probably
why the result was buggy. It is addressed by #123493.
- In certain conditions, loading an image would wrongly fail.
The reason for failing to read the image was items with a null-pointer
image buffer left by the cache limit enforcer, which was considered
an indication of a failed load from disk. The reason why the cache
limiter leaves items with a null pointer as the image buffer is a
legacy limitation which was never resolved. Long story short: the
system expects put() to be called on the cache to clear its empty
items.
To solve the original issue of files being considered unreadable,
only mark a cache item as empty if its image buffer was added empty.
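A hypothetical sketch of the distinction (the names and structure are illustrative, not the actual cache code):

```cpp
/* Illustrative sketch: distinguish "the load from disk failed" from "the
 * image buffer was freed by the cache limit enforcer", so only genuinely
 * failed loads are reported as unreadable. */
struct CacheItem {
  ImBuf *ibuf = nullptr;
  /* Set only when put() stored a null buffer, i.e. the load itself failed. */
  bool added_empty = false;
};

static void cache_put(CacheItem &item, ImBuf *ibuf)
{
  item.ibuf = ibuf;
  item.added_empty = (ibuf == nullptr);
}

static bool cache_item_is_failed_load(const CacheItem &item)
{
  /* A null buffer left behind by the limiter does not count as a failure. */
  return item.ibuf == nullptr && item.added_empty;
}
```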
Pull Request: https://projects.blender.org/blender/blender/pulls/123496
The first input of the compositor Mix node determines the resolution,
leading to a situation where the second input will always be attempted
to be evaluated. If the first input is a longer image sequence than
the second input, this leads to a crash.
Do a null-pointer check and return a transparent image in this case,
similar to what the Movie Clip operation does.
Pull Request: https://projects.blender.org/blender/blender/pulls/123493
Add back the "Add-ons" preferences, removing add-on logic from
extensions.
- Add support for filtering add-ons by tags
(separate from extension tags).
- Tags now respect the "Only Enabled" option.
- Remove the ability to enable/disable add-ons from extensions.
- Remove add-on preferences from extensions.
- Remove "Legacy" & "Core" prefix from add-on names.
- Remove "Show Legacy Add-ons" filtering option.
Implements design task #122735.
Details:
- Add-on names and descriptions are no longer translated,
since it's impractical to translate text which is mostly
maintained outside of Blender.
- Extension names have a `[disabled]` suffix when disabled so it's
possible to identify installed but disabled extensions.
- The add-on "type" is shown in the details,
so it's possible to tell the difference between an extension,
a core add-on & a legacy user add-on.
- Icons are also used to differentiate the add-on type.
- User add-ons must be uninstalled from the add-ons section
(matching 4.1 behavior).
- Simplify logic for filtering tags, move into a function.
Primarily, the `evaluation_mode` enum property was incorrectly grouped with
the preceding `xform_op_mode` enum, causing them to be combined in the UI.
Additionally, group the `allow_unicode` option under the Blender Data
sub-layout, as was intended but was lost in a merge.
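A simplified sketch of the layout fix (the surrounding panel code is omitted; flags and exact calls may differ from the actual exporter UI code):

```cpp
/* Simplified sketch: give `evaluation_mode` its own column so it is no longer
 * grouped with the preceding `xform_op_mode` enum in the UI. */
uiLayout *col = uiLayoutColumn(layout, false);
uiItemR(col, ptr, "xform_op_mode", UI_ITEM_NONE, nullptr, ICON_NONE);

col = uiLayoutColumn(layout, false);
uiItemR(col, ptr, "evaluation_mode", UI_ITEM_NONE, nullptr, ICON_NONE);
```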
Pull Request: https://projects.blender.org/blender/blender/pulls/123513
Make half-size waveforms default in new files and Video Editing template.
They are more space efficient and display more detail at small sizes.
This does not change existing files.
Pull Request: https://projects.blender.org/blender/blender/pulls/123511
Some of the existing colors were hard to read with the new
strips design.
Tried to follow the concept from the 2.83 redesign rationale:
* Same saturation for regular strips.
* Lower saturation for effect strips.
* Tried to reduce the hue shift between certain similar effects.
Other changes:
* Match saturation of all regular strips.
* Reduce value and saturation (mostly value) of color tags so
they are readable with both light and dark text.
* Image: Follow node editor Image node socket color.
* Color: Use the same hue as the color node socket.
* Text: Change it so it doesn't use the same color as Image.
* Sound: Use a greener color, less movie-like blue.
* Scene: Light gray, in a similar fashion to Collections.
* Other strips had minor adjustments.
Images and details in the pull request.
Pull Request: https://projects.blender.org/blender/blender/pulls/123446
This was caused by the tangent basis computation, which
had a threshold whose effect was too noticeable.
Increasing the threshold makes the artifact
unnoticeable.
Fix #122949
The storage of the entire mesh for topology-changing operations doesn't
need to be kept in every single undo node. Move it to the data for the
undo step instead. This decreases the size of undo nodes from 3880 to
1816 bytes. That saves about 4MB of memory on a single stroke affecting
most of a 6 million vertex mesh.
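Schematically, the data layout becomes something like this (the member names are placeholders, not the exact fields):

```cpp
/* Illustrative sketch: the entire-mesh storage for topology-changing
 * operations lives once on the undo step instead of in every undo node. */
#include <memory>
#include <vector>

struct GeometryStorage { /* Placeholder for the stored entire-mesh data. */ };

struct Node {
  /* Only per-PBVH-node undo data remains here, shrinking the node
   * from 3880 to 1816 bytes. */
};

struct StepData {
  /* Stored once per undo step instead of once per node. */
  std::unique_ptr<GeometryStorage> geometry;
  std::vector<std::unique_ptr<Node>> nodes;
};
```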
I didn't change anything BMesh related here because it's trickier to get
right and not quite as encapsulated. Moving all BMesh undo data out
of `undo::Node` would be a good step though, because only one undo
node is used anyway.
Currently all nodes pushed in a single undo step are expected to have
the same type. Storing the undo type in the undo step itself rather than
duplicating it in every node makes it easier to move some "entire mesh"
data out of the undo nodes and into the undo step, with the goal of
making undo nodes smaller and simpler.
Pushing multiple nodes at the same time helps to reduce the amount
of time spent waiting for threads to unlock while they manipulate the
nodes map, and equalizes the amount of work per thread, since we
can iterate over just the nodes that need data stored. I observed a
2.6% speedup in the benchmark file from #118145 (0.59s to 0.57s).
Instead of counting the size of undo steps as they're being built,
add up the size of all undo nodes as a final step when the undo step
is finished. This is faster because it avoids incrementing the same
size variable from many threads (which also wasn't threadsafe).
I measured a 4% performance improvement in the brush benchmark
file from #118145 (from 0.61s to 0.59s).
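The pattern is the usual one of replacing a shared counter with a final reduction; a generic, self-contained illustration (not the actual undo code):

```cpp
// Generic illustration: instead of many threads incrementing one shared size
// variable while nodes are built, compute each node's size independently and
// sum them once at the end.
#include <cstddef>
#include <numeric>
#include <vector>

struct Node {
  std::vector<float> positions;
  size_t size_in_bytes() const
  {
    return positions.size() * sizeof(float);
  }
};

size_t total_undo_size(const std::vector<Node> &nodes)
{
  return std::accumulate(nodes.begin(), nodes.end(), size_t(0),
                         [](size_t sum, const Node &node) { return sum + node.size_in_bytes(); });
}
```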
Instead of locking for the whole time the undo data is being stored,
only lock while the step's per-node undo node map is being accessed.
This is fine because each PBVH node is only processed by a single thread.
Changing the node vector to not store anything until the undo step is
finalized makes this process a bit simpler because we don't have to build
both the map and the vector at the same time.
Overall this improved the performance of the sculpt brush benchmark
from #118145 by 12%, from 0.68s to 0.61s.
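A generic sketch of the narrowed locking (illustrative only, not the actual sculpt undo code):

```cpp
// Generic illustration: hold the mutex only while touching the shared
// per-node map; do the heavy data copying outside of it.
#include <memory>
#include <mutex>
#include <unordered_map>

struct NodeData { /* Undo data for one PBVH node. */ };

struct StepData {
  std::mutex mutex;
  std::unordered_map<int, std::unique_ptr<NodeData>> nodes_by_index;
};

static NodeData &ensure_node(StepData &step, const int node_index)
{
  std::lock_guard lock(step.mutex); /* Locked only for the map access. */
  std::unique_ptr<NodeData> &node = step.nodes_by_index[node_index];
  if (!node) {
    node = std::make_unique<NodeData>();
  }
  return *node;
}

static void store_node_data(StepData &step, const int node_index)
{
  NodeData &node = ensure_node(step, node_index);
  /* Each PBVH node is processed by exactly one thread, so filling `node`
   * here needs no lock. */
  (void)node;
}
```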
By itself this change doesn't look great since it's mixing up the
responsibilities of different functions. However, it's preferred
since we want to separate geometry undo state and per-node
undo state further in the future. Eventually geometry undo state
should be stored in `StepData` instead.