link->multi_input_socket_index, which is used to calculate the link's
position on the multi-input socket, was not set.
Now it is set to the socket's current link count.
Review: Jacques Lucke (JacquesLucke)
Differential Revision: https://developer.blender.org/D11082
With Constrain to Image Bounds selected, UVs will be constrained to the
correct/closest UDIM if the image is tiled.
UVs will be constrained to the 0-1 UV space if the image is not tiled.
This will override the present behavior of always constraining selected
UVs to the 0-1 UV space (UDIM 1001).
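As a rough sketch of the clamping idea (not the operator's actual code), constraining a UV to a given tile looks like this, where tile (0, 0) corresponds to UDIM 1001:

```
def constrain_uv_to_tile(u, v, tile_u=0, tile_v=0):
    """Clamp a UV coordinate into the given UDIM tile.

    (tile_u, tile_v) = (0, 0) is UDIM 1001, i.e. the classic 0-1 space;
    for a tiled image the caller would pass the tile closest to the UV.
    """
    u = min(max(u, float(tile_u)), float(tile_u) + 1.0)
    v = min(max(v, float(tile_v)), float(tile_v) + 1.0)
    return u, v

print(constrain_uv_to_tile(1.3, -0.2, tile_u=1, tile_v=0))  # (1.3, 0.0)
```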
Reviewed By: campbellbarton
Ref D11202
Mistake in {rBe48c4d73d378}.
Was using the vertex index as a lookup for the loop color (instead of
the loop index).
(Issue was not present in original D1429 btw).
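For reference, a minimal Python sketch of the corrected indexing (assuming the active object has an active vertex color layer):

```
import bpy

mesh = bpy.context.object.data
colors = mesh.vertex_colors.active.data  # one entry per loop, not per vertex

for poly in mesh.polygons:
    for loop_index in poly.loop_indices:
        loop = mesh.loops[loop_index]
        # Correct: loop colors are indexed by the loop index.
        col = colors[loop_index].color
        # The bug indexed by loop.vertex_index instead, which only works
        # by accident when loop and vertex indices happen to coincide.
        print(loop.vertex_index, list(col))
```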
Maniphest Tasks: T88145
Differential Revision: https://developer.blender.org/D11212
We were not assigning the number of sound channels to the output frames.
Newer ffmpeg releases have sanity checks in place and don't fall back
to two channels anymore.
This is a follow-up to rB2bd85d9cc623: we cannot forcefully delete
obsolete overrides of object data (meshes etc.), as this also implies
deleting their user object, which might still be a perfectly valid
override, albeit in conflict regarding its obdata ID pointer...
The code detecting overrides whose referenced linked data is missing was
actually missing many cases, leaving too much garbage data around after
the resync process.
The check to include particle edit mode in the object-mode drop-down
didn't match the poll function used to enter particle edit mode.
Share the check between both functions.
Fix trying to use cross product on parallel vectors.
Fix intersection checks failing because we run into floating point
issues with very small numbers.
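A minimal sketch of the kind of guard involved, using `mathutils` and an assumed epsilon (the actual fix lives in C):

```
from mathutils import Vector

def safe_plane_normal(a, b, eps=1e-9):
    """Return a normal from the cross product of two directions, or None
    when they are (nearly) parallel and the cross product is degenerate."""
    n = a.cross(b)
    if n.length_squared < eps:
        return None  # parallel, or one of the inputs is near zero length
    return n.normalized()

print(safe_plane_normal(Vector((1, 0, 0)), Vector((2, 0, 0))))  # None
print(safe_plane_normal(Vector((1, 0, 0)), Vector((0, 1, 0))))  # (0, 0, 1)
```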
One of the current annoying limitations of Blender regarding
Collections/Objects is that objects are not allowed to be left
uninstantiated: every object must be instantiated in at least one
collection.
The code ensuring that, as a post-processing step of override
creation/resync operations, would be a bit too eager to add those
objects to an external 'ad-hoc' collection, which poses several issues
(both in terms of keeping the scene well organized and in terms of
override hierarchy handling).
So now be very conservative and only generate and use the external
'storage' collection for those objects when it is absolutely mandatory.
In practice, this means it should never happen anymore on any decently
organized data source.
Caused by {rB0d9f79b163ee}.
IDP_SyncGroupTypes was now syncing from src to src (leading to
unexpected operator properties).
I assume this is rather critical; I don't know this part of the code
well, but the above commit clearly shows a change from
'dest->data.group' to 'src->data.group' which shouldn't be there.
Maniphest Tasks: T88030
Differential Revision: https://developer.blender.org/D11171
Not all python-defined ID properties are overridable (yet); this needs
to be detected by libquery 'foreach id' code, such that those ID
pointers can be ignored by override code when working on override
hierarchies.
Fixes part of the issues found while investigating studio files (namely,
some py-defined ID pointer properties from rigify that are not currently
overridable would cause issues and false detections during resync).
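For illustration, a Python-defined ID pointer is marked overridable via the `override` flag when it is declared; the class and property names below are hypothetical:

```
import bpy

class MyRigSettings(bpy.types.PropertyGroup):
    # Without 'LIBRARY_OVERRIDABLE', this ID pointer is one of those the
    # override code now has to detect and ignore.
    target: bpy.props.PointerProperty(
        type=bpy.types.Object,
        override={'LIBRARY_OVERRIDABLE'},
    )

bpy.utils.register_class(MyRigSettings)
bpy.types.Object.my_rig_settings = bpy.props.PointerProperty(
    type=MyRigSettings,
    override={'LIBRARY_OVERRIDABLE'},
)
```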
Move the detection/decision of whether an ID pointer should be taken
into account in library override hierarchy processing to the LibQuery
area of code, by introducing a new callback flag.
This makes it possible to factorize the test logic, to be explicit in
liboverride code about ID relationships that can be ignored when
exploring the override hierarchy, and to do more checks on pointers to
be tagged as non-overridable in the future.
Note that all but the 'special' ID pointers (loop-back, embedded, etc.)
should be overridable. If one is not, the relevant IDType 'foreach_id'
callback code is responsible for tagging it properly.
Python-defined IDProperties however are not systematically overridable
(yet), so this should allow us to detect that case and act accordingly
in an incoming commit.
No behavioral change expected in this commit.
Text data-blocks were not considered special in the recursive purge
function, so they would get deleted if they had no actual users.
To fix this, we instead make text data-blocks use the "fake user" flag
so that add-on authors can specify script files that should be removed
when nothing is using them anymore.
By default, new text data-blocks have "fake user" set. So functionality
wise, the user has to explicitly specify that they want a text
data-block to be purge-able.
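A small illustration of the resulting behavior (the script name is hypothetical):

```
import bpy

# New text data-blocks get a fake user by default, so a recursive purge
# will not remove them.
text = bpy.data.texts.new("startup_script.py")
print(text.use_fake_user)  # True

# An add-on author can opt a script back into being purge-able:
text.use_fake_user = False
```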
Reviewed By: Bastien
Differential Revision: http://developer.blender.org/D10983
When saving a file in Edit mode with Multiframe enabled, the render did not include the modifiers.
Now multiframe is not enabled while rendering.
Multi-overrides of a same linked ID in a same override hierarchy are
currently not supported and can cause all kinds of issues.
In some cases they could lead to infinite loop trying to resync the same
ID over and over, this is now prevented.
Found in some Blender studio production files.
The algorithm that calculated the direction (inside/outside) of the
polyline was not always returning the correct result. This meant that
the polyline was filled "inside-out".
The fix uses the winding number to calculate the inside and outside.
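A minimal sketch of the winding-number idea (not the grease pencil fill code itself):

```
def winding_number(point, polyline):
    """Winding number of `point` with respect to a closed 2D polyline.
    A non-zero result means the point lies inside."""
    px, py = point
    wn = 0
    n = len(polyline)
    for i in range(n):
        x0, y0 = polyline[i]
        x1, y1 = polyline[(i + 1) % n]
        # > 0 when the point is to the left of the directed edge.
        side = (x1 - x0) * (py - y0) - (px - x0) * (y1 - y0)
        if y0 <= py < y1 and side > 0:
            wn += 1  # upward crossing, point left of edge
        elif y1 <= py < y0 and side < 0:
            wn -= 1  # downward crossing, point right of edge
    return wn

print(winding_number((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1
print(winding_number((2.0, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 0
```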
Reviewed By: antoniov, pepeland
Maniphest Tasks: T87718
Differential Revision: https://developer.blender.org/D11054
Generally, it would be good to prevent this from happening in the
first place but that is quite tricky because an object does not know
which other object instances it. Similar checks might be necessary
in other places, but this fixes the bug already.
Differential Revision: https://developer.blender.org/D11086
This fixes T87666 and T83252.
The boolean modifier and geometry nodes can depend on the geometry
of an entire collection. Before, the modifiers had to manually create relations
to all the objects in the collection. This worked for the most part, but was
cumbersome and did not solve all issues. For example, the modifiers were not
properly updated when objects were added/removed from the referenced collection.
This commit introduces the concept of "collection geometry" in the depsgraph.
The geometry of a collection depends on the transforms and geometry of all
the objects in it. The boolean modifier and geometry nodes can now just depend
on the collection geometry instead of creating all the dependencies themselves.
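For illustration, this is the kind of setup that benefits; the collection name is hypothetical, and `operand_type`/`collection` are the existing boolean modifier properties:

```
import bpy

obj = bpy.context.object
cutters = bpy.data.collections["Cutters"]  # hypothetical collection

mod = obj.modifiers.new(name="Boolean", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.operand_type = 'COLLECTION'
mod.collection = cutters
# With the new "collection geometry" depsgraph component, adding or
# removing objects in "Cutters" now triggers re-evaluation of `obj`
# without the modifier building per-object relations itself.
```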
Differential Revision: https://developer.blender.org/D11053
Though to my knowledge we haven't had a report about this yet, this
looks like a clear oversight: the IDs are just more data stored by the
instances component, and should be cleared and copied like other data.
This might have resulted in incorrect random IDs for instances in
renderers in some cases where the component had to be copied.
If an object named for example `Suzanne` is converted to Gpencil, a material called `Suzanne_Fill` will be created for the gpencil fill.
When this material already exists, the new material will be called `Suzanne_Fill.001` and the operator will not see that the material is already present on the next iteration. This leads to a new material being created for every polygon.
This commit changes the code to search for a material whose name starts with `ObjectName_Fill` instead of one exactly equal to it.
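A minimal Python sketch of the changed lookup (the actual change is in the C operator):

```
import bpy

def find_fill_material(obj):
    """Return an existing '<ObjectName>_Fill' material, including numbered
    duplicates such as 'Suzanne_Fill.001', instead of requiring an exact
    name match."""
    prefix = obj.name + "_Fill"
    for mat in bpy.data.materials:
        if mat.name.startswith(prefix):
            return mat
    return None
```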
Reviewed By: filedescriptor, antoniov
Differential Revision: https://developer.blender.org/D11067
This is a complex situation. Tagged ID deletion (used to delete several
data-blocks at once) removes IDs to be deleted from Main.
But when we remove deleted IDs' usages of other IDs (using
`BKE_libblock_relink_ex`), some specific post-process is required on
some types, like Collections. Those post-processes would in some cases
rely on data actually being in Main.
With that condition failing, the existing code would skip processing
the very ID (collection) we were working on, so some child collection
pointers would not be removed, leading to the crash (later on, in the
LayerCollection resync process).
For now we go with an optimization & fix that avoids processing all
collections in Main when we actually know which one we are working on
(case of `BKE_libblock_relink_ex`, but not of
`BKE_libblock_remap_locked`).
This is however yet another demonstration of the need to rework that
whole collection/layer resync process, since it is not only extremely
inefficient currently, but it also requires valid Main/ID state way too
deep into the remapping code.
NOTE: This fix may very well not catch/address all possible fail cases
here, dealing with the double parent/child relationships of collections
is challenging...
Issue reported by @eyecandy from the studio, thanks.
The `bisect_distance` in the mirror modifier was hard-coded to `0.001`.
This would result in some unexpected behavior like vertices close
to the mirror plane being deleted or merged.
The fix now adds a parameter to the mirror modifier to expose the
bisect distance to the user. The default is set to the previous
hard-coded value to not "change" previous files.
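For illustration, assuming the new option is exposed to Python as `bisect_threshold`:

```
import bpy

obj = bpy.context.object
mod = obj.modifiers.new(name="Mirror", type='MIRROR')
mod.use_bisect_axis[0] = True   # bisect on the X axis
# 0.001 matches the old hard-coded value; a smaller distance keeps
# vertices that sit very close to the mirror plane from being removed.
mod.bisect_threshold = 0.0001
```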
Ref D10201
Some persistent data code was disabled due to a deeper design issue, which
meant some updates were not communicated to renderers.
Dependency graph updates work in two passes: first Blender scene
animation updates are done, then app handler scripts can run to make further
scene modifications, and then the depsgraph is updated again to take those
into account.
Previously the viewport would update renderers twice when such app handler
scripts were present. Now both viewport and persistent data rendering update
the renderers only once, accumulating updates from both passes.
`ob->runtime.geometry_set_eval` can contain instances as well.
This only affected instances generated by geometry nodes.
We should probably have a separate function that tells us if an object
has instances or not.
In the past, custom attributes were rarely used in practice, because the
only way to use them was from Python. Since geometry nodes, more
users started to add their own attributes. Those attributes should not
be removed automatically. It is still possible to remove them in
geometry nodes explicitly to improve performance.
This introduces a context path to the spreadsheet editor, which contains
information about what data is shown in the spreadsheet. The context
path (breadcrumbs) can reference a specific node in a node group
hierarchy. During object evaluation, the geometry nodes modifier checks
what data is currently requested by visible spreadsheets and stores
the corresponding geometry sets separately for later access.
The context path can be updated by the user explicitly, by clicking
on the new icon in the header of nodes. Under some circumstances,
the context path is updated automatically based on Blender's context.
This patch also consolidates the "Node" and "Final" object evaluation
mode to just "Evaluated". Based on the current context path, either
the final geometry set of an object will be displayed, or the data at
a specific node.
The new preview icon in geometry nodes now behaves more like
a toggle. It can be clicked again to clear the context path in an
open spreadsheet editor.
Previously, only an object could be pinned in the spreadsheet editor.
Now it is possible to pin the entire context path. That allows two
different spreadsheets to display geometry data from two different
nodes.
The breadcrumbs in the spreadsheet header can be collapsed by
clicking on the arrow icons. It's not ideal but works well for now.
This might be changed again, if we get a data set region on the left.
Differential Revision: https://developer.blender.org/D10931
This patch adds domain and data type information to each row of the
attribute search menu. The data type is displayed on the right, just
like how the list is exposed for the existing point cloud and hair
attribute panels. The domain is exposed on the left like the menu
hierarchy from menu search.
For the implementation, the attribute hint information is stored as a
set instead of a multi-value map so that every item (which we need to
point to discretely in the search process) contains the necessary data
type and domain information by itself. We also need to allocate a new
struct for every button, which requires a change to allow passing a
newly allocated argument to search buttons.
Note that the search doesn't yet handle the case where there are two
attributes with the same name but different domains or data types in
the input geometry set. That will be handled as a separate improvement.
Differential Revision: https://developer.blender.org/D10623
Previously, the bone position outside of "fit to curve length" mode was
incorrect.
It assumed that the curve was completely straight with no bends or
turns. This would lead to bones being scaled down as their final
position would be severely underestimated in some cases.
The solution is to do a sphere -> curve intersection test to see where
to put the bones while still preserving their length. As we are using
the tessellated curve data this essentially boils down to us doing a
sphere -> line intersection check.
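A minimal sketch of the sphere/segment intersection math (not the actual C implementation):

```
import math

def sphere_segment_intersect(center, radius, p0, p1):
    """Return the points where the segment p0-p1 intersects the sphere."""
    d = [b - a for a, b in zip(p0, p1)]        # segment direction
    f = [a - c for a, c in zip(p0, center)]    # from center to p0
    a = sum(x * x for x in d)
    b = 2.0 * sum(x * y for x, y in zip(f, d))
    c = sum(x * x for x in f) - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return []                              # degenerate or no hit
    sqrt_disc = math.sqrt(disc)
    hits = []
    for t in ((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a)):
        if 0.0 <= t <= 1.0:                    # keep hits on the segment
            hits.append(tuple(p + t * x for p, x in zip(p0, d)))
    return hits

print(sphere_segment_intersect((0, 0, 0), 1.0, (-2, 0, 0), (2, 0, 0)))
# [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
```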
Reviewed By: Sybren
Differential Revision: http://developer.blender.org/D10849
This is especially useful when trying to add a node group instance, e.g. via
drag & drop from the Outliner or Asset Browser.
Previously this would just silently fail, with no information why. This is a
source of confusion, e.g. earlier, it took me a moment to realize I was
dragging a node group into itself, which failed of course.
Blender should always try to help the user with useful error messages.
Adds error messages like: "Nesting a node group inside of itself is not
allowed", "Not a compositor node tree", etc.
Adds a disabled hint return argument to node and node tree polling functions.
On error the hint is reported, or could even be shown in advance (e.g. if
checked via an operator poll option).
Differential Revision: https://developer.blender.org/D10422
Reviewed by: Jacques Lucke
Before rBf674976edd88, the flag indicating whether a curve was 2D or 3D was
ignored by Surface objects.
So it can be said that Surface objects were always 3D.
We could remove updates to 2D on Surface objects, so the behavior is
identical to what it was before.
But this would also cause the return of `data.dimensions` to be misleading,
complicate the code a bit and add a micro overhead.
So the solution here is just to initialize all Surface objects as 3D.
Surface objects can now be constrained to 2D with the command:
```
data.dimensions = '2D'
```