2023-08-16 00:20:26 +10:00
|
|
|
/* SPDX-FileCopyrightText: 2023 Blender Authors
|
2023-05-31 16:19:06 +02:00
|
|
|
*
|
|
|
|
|
* SPDX-License-Identifier: GPL-2.0-or-later */
|
2021-12-21 15:18:56 +01:00
|
|
|
|
2024-05-23 14:31:16 +02:00
|
|
|
#include <fmt/format.h>
|
|
|
|
|
|
2025-02-11 16:59:42 +01:00
|
|
|
#include "BLI_listbase.h"
|
2021-12-21 15:18:56 +01:00
|
|
|
#include "BLI_map.hh"
|
|
|
|
|
#include "BLI_multi_value_map.hh"
|
|
|
|
|
#include "BLI_noise.hh"
|
Nodes: add nested node ids and use them for simulation state
The simulation state used by simulation nodes is owned by the modifier. Since a
geometry nodes setup can contain an arbitrary number of simulations, the modifier
has a mapping from `SimulationZoneID` to `SimulationZoneState`. This patch changes
what is used as `SimulationZoneID`.
Previously, the `SimulationZoneID` contained a list of `bNode::identifier` that described
the path from the root node tree to the simulation output node. This works ok in many
cases, but also has a significant problem: The `SimulationZoneID` changes when moving
the simulation zone into or out of a node group. This implies that any of these operations
loses the mapping from zone to simulation state, invalidating the cache or even baked data.
The goal of this patch is to introduce a single-integer ID that identifies a (nested) simulation
zone and is stable even when grouping and un-grouping. The ID should be stable even if the
node group containing the (nested) simulation zone is in a separate linked .blend file and
that linked file is changed.
In the future, the same kind of ID can be used to store e.g. checkpoint/baked/frozen data
in the modifier.
To achieve the described goal, node trees can now store an arbitrary number of nested node
references (an array of `bNestedNodeRef`). Each nested node reference has an ID that is
unique within the current node tree. The node tree does not store the entire path to the
nested node. Instead, it only knows which group node the nested node is in, and what the
nested node ID is within that group. Grouping and un-grouping operations
have to update the nested node references to keep the IDs stable. Importantly though,
these operations only have to care about the two node groups that are affected. IDs in
higher level node groups remain unchanged by design.
A consequence of this design is that every `bNodeTree` now has a `bNestedNodeRef`
for every (nested) simulation zone. Two instances of the same simulation zone (because
a node group is reused) are referenced by two separate `bNestedNodeRef`. This is
important to keep in mind, because it also means that this solution doesn't scale well if
we wanted to use it to keep stable references to *all* nested nodes. I can't think of a
solution that fulfills the described requirements but scales better with more nodes. For
that reason, this solution should only be used when we want to store data for each
referenced nested node at the top level (like we do for simulations).
This is not a replacement for `ViewerPath` which can store a path to data in a node tree
without changing the node tree. Also `ViewerPath` can contain information like the loop
iteration that should be viewed (#109164). `bNestedNodeRef` can't differentiate between
different iterations of a loop. This also means that simulations can't be used inside of a
loop (loops inside of a simulation work fine though).
When baking, the new stable ID is now written to disk, which means that baked data is
not invalidated by grouping/un-grouping operations. Backward compatibility for baked
data is provided, but only works as long as the simulation zone has not been moved to
a different node group yet. Forward compatibility for the baked data is not provided
(so older versions can't load the data baked with a newer version of Blender).
Pull Request: https://projects.blender.org/blender/blender/pulls/109444
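The one-level-at-a-time resolution described above can be sketched in a few lines. This is a hypothetical, simplified model (the struct and function names are illustrative, not the actual DNA/BKE API): each tree stores only a reference to a group node plus a nested ID inside that group, and resolution recurses one level per tree.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

/* Simplified stand-in for bNestedNodeRef: an ID unique within the owning
 * tree, plus one indirection step (not the full path). */
struct NestedNodeRef {
  int32_t id;            /* Unique within the current tree. */
  int32_t group_node_id; /* Group node in this tree, or the target node itself. */
  int32_t id_in_group;   /* Nested ID within that group, or -1 for a direct node. */
};

struct Tree {
  std::map<int32_t, const Tree *> groups_by_node_id; /* Group node -> group tree. */
  std::vector<NestedNodeRef> nested_refs;
};

/* Resolve a nested node ID to a node identifier by walking one group level at
 * a time; higher level trees never store the full path, so grouping inside a
 * group leaves their IDs unchanged. */
static std::optional<int32_t> resolve_nested_node(const Tree &tree, const int32_t nested_id)
{
  for (const NestedNodeRef &ref : tree.nested_refs) {
    if (ref.id != nested_id) {
      continue;
    }
    if (ref.id_in_group == -1) {
      /* The reference points directly at a node in this tree. */
      return ref.group_node_id;
    }
    const auto it = tree.groups_by_node_id.find(ref.group_node_id);
    if (it == tree.groups_by_node_id.end()) {
      return std::nullopt;
    }
    return resolve_nested_node(*it->second, ref.id_in_group);
  }
  return std::nullopt;
}
```

Because each tree only knows the next hop, un-grouping only has to rewrite references in the two affected trees, which is what keeps IDs in higher level groups stable.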
2023-07-01 11:54:32 +02:00
|
|
|
#include "BLI_rand.hh"
|
2021-12-21 15:18:56 +01:00
|
|
|
#include "BLI_set.hh"
|
|
|
|
|
#include "BLI_stack.hh"
|
2025-02-28 19:07:02 +01:00
|
|
|
#include "BLI_string.h"
|
2024-05-23 14:31:16 +02:00
|
|
|
#include "BLI_string_utf8_symbols.h"
|
2021-12-21 15:18:56 +01:00
|
|
|
#include "BLI_vector_set.hh"
|
|
|
|
|
|
|
|
|
|
#include "DNA_anim_types.h"
|
|
|
|
|
#include "DNA_modifier_types.h"
|
|
|
|
|
#include "DNA_node_types.h"
|
|
|
|
|
|
2024-02-28 11:51:03 +01:00
|
|
|
#include "BKE_anim_data.hh"
|
2024-11-12 15:21:59 +01:00
|
|
|
#include "BKE_image.hh"
|
2025-01-17 12:17:17 +01:00
|
|
|
#include "BKE_lib_id.hh"
|
2023-12-01 19:43:16 +01:00
|
|
|
#include "BKE_main.hh"
|
2023-05-15 15:14:22 +02:00
|
|
|
#include "BKE_node.hh"
|
2024-01-26 12:40:01 +01:00
|
|
|
#include "BKE_node_enum.hh"
|
2025-01-09 20:03:08 +01:00
|
|
|
#include "BKE_node_legacy_types.hh"
|
2022-05-30 12:54:07 +02:00
|
|
|
#include "BKE_node_runtime.hh"
|
2024-10-07 12:59:39 +02:00
|
|
|
#include "BKE_node_tree_reference_lifetimes.hh"
|
2023-11-16 11:41:55 +01:00
|
|
|
#include "BKE_node_tree_update.hh"
|
2021-12-21 15:18:56 +01:00
|
|
|
|
2023-07-10 13:14:15 +02:00
|
|
|
#include "MOD_nodes.hh"
|
2021-12-21 15:18:56 +01:00
|
|
|
|
2025-04-13 12:27:01 +02:00
|
|
|
#include "NOD_geo_closure.hh"
|
2024-12-05 18:02:14 +01:00
|
|
|
#include "NOD_geometry_nodes_dependencies.hh"
|
Geometry Nodes: support attaching gizmos to input values
This adds support for attaching gizmos for input values. The goal is to make it
easier for users to set input values intuitively in the 3D viewport.
We went through multiple different possible designs until we settled on the one
implemented here. We picked it for its flexibility and ease of use when using
geometry node assets. The core principle in the design is that **gizmos are
attached to existing input values instead of being the input value themselves**.
This actually fits the existing concept of gizmos in Blender well, but may be a
bit unintuitive in a node setup at first. The attachment is done using links in
the node editor.
The most basic usage of the node is to link a Value node to the new Linear Gizmo
node. This attaches the gizmo to the input value and allows you to change it
from the 3D view. The attachment is indicated by the gizmo icon in the sockets
that are controlled by a gizmo, as well as by the back-link (notice the double
link) when the gizmo is active.
The core principle makes it straightforward to control the same node setup from
the 3D view with gizmos, or by manually changing input values, or by driving the
input values procedurally.
If the input value is controlled indirectly by other inputs, it's often possible
to **automatically propagate** the gizmo to the actual input.
Backpropagation does not work for all nodes, although more nodes can be
supported over time.
This patch adds the first three gizmo nodes which cover common use cases:
* **Linear Gizmo**: Creates a gizmo that controls a float or integer value using
a linear movement of e.g. an arrow in the 3D viewport.
* **Dial Gizmo**: Creates a circular gizmo in the 3D viewport that can be
rotated to change the attached angle input.
* **Transform Gizmo**: Creates a simple gizmo for location, rotation and scale.
In the future, more built-in gizmos and potentially the ability for custom
gizmos could be added.
All gizmo nodes have a **Transform** geometry output. Using it is optional but
it is recommended when the gizmo is used to control inputs that affect a
geometry. When it is used, Blender will automatically transform the gizmos
together with the geometry that they control. To achieve this, the output should
be merged with the generated geometry using the *Join Geometry* node. The data
contained in the *Transform* output is not visible geometry, but just internal
information that helps Blender give a better user experience when using
gizmos.
The gizmo nodes have a multi-input socket. This allows **controlling multiple
values** with the same gizmo.
Only a small set of **gizmo shapes** is supported initially. It might be
extended in the future but one goal is to give the gizmos used by different node
group assets a familiar look and feel. A similar constraint exists for
**colors**. Currently, one can choose from a fixed set of colors which can be
modified in the theme settings.
The set of **visible gizmos** is determined by multiple factors, because it's
not really feasible to show all possible gizmos at all times. To see any of the
geometry nodes gizmos, the "Active Modifier" option has to be enabled in the
"Viewport Gizmos" popover. Then all gizmos are drawn for which at least one of
the following is true:
* The gizmo controls an input of the active modifier of the active object.
* The gizmo controls a value in a selected node in an open node editor.
* The gizmo controls a pinned value in an open node editor. Pinning works by
clicking the gizmo icon next to the value.
Pull Request: https://projects.blender.org/blender/blender/pulls/112677
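The automatic propagation of a gizmo back to the actual input can be pictured as inverting each node on the path between the input and the gizmo. The sketch below is purely illustrative (the names `InvertibleNode` and `back_propagate` are assumptions, not Blender API); it only shows why back-propagation is limited to nodes whose operation can be inverted.

```cpp
#include <functional>
#include <vector>

/* Hypothetical model: each node on the path from the real input to the gizmo
 * is represented by a forward function and its inverse. */
struct InvertibleNode {
  std::function<float(float)> forward;
  std::function<float(float)> inverse;
};

/* Map a value chosen at the gizmo back to the upstream input value by walking
 * the chain in reverse and inverting each node. Nodes without a usable
 * inverse cannot participate, which is why not all nodes support this. */
static float back_propagate(const std::vector<InvertibleNode> &chain, float target)
{
  for (auto it = chain.rbegin(); it != chain.rend(); ++it) {
    target = it->inverse(target);
  }
  return target;
}
```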
2024-07-10 16:18:47 +02:00
|
|
|
#include "NOD_geometry_nodes_gizmos.hh"
|
2023-08-03 18:04:36 +02:00
|
|
|
#include "NOD_geometry_nodes_lazy_function.hh"
|
2021-12-21 15:18:56 +01:00
|
|
|
#include "NOD_node_declaration.hh"
|
2023-07-02 21:01:57 +02:00
|
|
|
#include "NOD_socket.hh"
|
2022-01-24 16:18:30 -06:00
|
|
|
#include "NOD_texture.h"
|
2021-12-21 15:18:56 +01:00
|
|
|
|
2024-12-05 18:02:14 +01:00
|
|
|
#include "DEG_depsgraph_build.hh"
|
|
|
|
|
|
2024-05-23 14:31:16 +02:00
|
|
|
#include "BLT_translation.hh"
|
|
|
|
|
|
2021-12-21 15:18:56 +01:00
|
|
|
using namespace blender::nodes;
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
|
* These flags are used by the `changed_flag` field in #bNodeTree, #bNode and #bNodeSocket.
|
2025-05-17 09:18:03 +10:00
|
|
|
* This enum is not part of the public API. It should be used through the `BKE_ntree_update_tag_*`
|
|
|
|
|
* API.
|
2021-12-21 15:18:56 +01:00
|
|
|
*/
|
|
|
|
|
enum eNodeTreeChangedFlag {
|
|
|
|
|
NTREE_CHANGED_NOTHING = 0,
|
|
|
|
|
NTREE_CHANGED_ANY = (1 << 1),
|
|
|
|
|
NTREE_CHANGED_NODE_PROPERTY = (1 << 2),
|
|
|
|
|
NTREE_CHANGED_NODE_OUTPUT = (1 << 3),
|
2023-09-14 14:13:07 +02:00
|
|
|
NTREE_CHANGED_LINK = (1 << 4),
|
|
|
|
|
NTREE_CHANGED_REMOVED_NODE = (1 << 5),
|
|
|
|
|
NTREE_CHANGED_REMOVED_SOCKET = (1 << 6),
|
|
|
|
|
NTREE_CHANGED_SOCKET_PROPERTY = (1 << 7),
|
|
|
|
|
NTREE_CHANGED_INTERNAL_LINK = (1 << 8),
|
|
|
|
|
NTREE_CHANGED_PARENT = (1 << 9),
|
2021-12-21 15:18:56 +01:00
|
|
|
NTREE_CHANGED_ALL = -1,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
static void add_tree_tag(bNodeTree *ntree, const eNodeTreeChangedFlag flag)
|
|
|
|
|
{
|
2022-05-30 12:54:07 +02:00
|
|
|
ntree->runtime->changed_flag |= flag;
|
2022-12-20 13:05:02 +01:00
|
|
|
ntree->runtime->topology_cache_mutex.tag_dirty();
|
Geometry Nodes: add simulation support
This adds support for building simulations with geometry nodes. A new
`Simulation Input` and `Simulation Output` node allow maintaining a
simulation state across multiple frames. Together these two nodes form
a `simulation zone` which contains all the nodes that update the simulation
state from one frame to the next.
A new simulation zone can be added via the menu
(`Simulation > Simulation Zone`) or with the node add search.
The simulation state contains a geometry by default. However, it is possible
to add multiple geometry sockets as well as other socket types. Currently,
field inputs are evaluated and stored for the preceding geometry socket in
the order that the sockets are shown. Simulation state items can be added
by linking one of the empty sockets to something else. In the sidebar, there
is a new panel that allows adding, removing and reordering these sockets.
The simulation nodes behave as follows:
* On the first frame, the inputs of the `Simulation Input` node are evaluated
to initialize the simulation state. In later frames these sockets are not
evaluated anymore. The `Delta Time` at the first frame is zero, but the
simulation zone is still evaluated.
* On every next frame, the `Simulation Input` node outputs the simulation
state of the previous frame. Nodes in the simulation zone can edit that
data in arbitrary ways, also taking into account the `Delta Time`. The new
simulation state has to be passed to the `Simulation Output` node where it
is cached and forwarded.
* On a frame that is already cached or baked, the nodes in the simulation
zone are not evaluated, because the `Simulation Output` node can return
the previously cached data directly.
It is not allowed to connect sockets from inside the simulation zone to the
outside without going through the `Simulation Output` node. This is a necessary
restriction to make caching and sub-frame interpolation work. Links can go into
the simulation zone without problems though.
Anonymous attributes are not propagated by the simulation nodes unless they
are explicitly stored in the simulation state. This is unfortunate, but
currently there is no practical and reliable alternative. The core problem
is detecting which anonymous attributes will be required for the simulation
and afterwards. While we can detect this for the current evaluation, we can't
look ahead in time to see what data will be necessary. We intend to
make it easier to explicitly pass data through a simulation in the future,
even if the simulation is in a nested node group.
There is a new `Simulation Nodes` panel in the physics tab in the properties
editor. It allows baking all simulation zones on the selected objects. The
baking options are intentionally kept to a minimum for this MVP. More features
for simulation baking as well as baking in general can be expected to be added
separately.
All baked data is stored on disk in a folder next to the .blend file. #106937
describes how baking is implemented in more detail. Volumes cannot be baked
yet and materials are lost during baking for now. Packing the baked data into
the .blend file is not yet supported.
The timeline indicates which frames are currently cached, baked or cached but
invalidated by user-changes.
Simulation input and output nodes are internally linked together by their
`bNode.identifier` which stays the same even if the node name changes. They
are generally added and removed together. However, there are still cases where
"dangling" simulation nodes can be created currently. Those generally don't
cause harm, but it would be nice to avoid this in more cases in the future.
Co-authored-by: Hans Goudey <h.goudey@me.com>
Co-authored-by: Lukas Tönne <lukas@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/104924
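The per-frame behavior described in the bullet points above can be summarized with a small model. This is a minimal sketch under assumed names (`SimulationZone`, `evaluate` are illustrative, not the real evaluator): frame 0 initializes the state, cached frames return directly without evaluating the zone body, and every other frame steps from the previous frame's state.

```cpp
#include <map>

/* Toy simulation state: a single float instead of geometry + extra sockets. */
struct SimulationState {
  float value = 0.0f;
};

struct SimulationZone {
  std::map<int, SimulationState> cache; /* frame -> cached state */

  SimulationState evaluate(const int frame, const float initial, const float delta)
  {
    if (const auto it = cache.find(frame); it != cache.end()) {
      /* Cached/baked frame: the nodes in the zone are not evaluated. */
      return it->second;
    }
    SimulationState state;
    if (frame == 0) {
      /* First frame: the Simulation Input node evaluates its inputs to
       * initialize the state; delta time is zero here. */
      state.value = initial;
    }
    else {
      /* Later frames: start from the previous frame's state and update it,
       * taking the delta time into account. */
      state = evaluate(frame - 1, initial, delta);
      state.value += delta;
    }
    cache[frame] = state; /* The Simulation Output node caches and forwards. */
    return state;
  }
};
```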
2023-05-03 13:18:51 +02:00
|
|
|
ntree->runtime->tree_zones_cache_mutex.tag_dirty();
|
2025-01-21 12:53:24 +01:00
|
|
|
ntree->runtime->inferenced_input_socket_usage_mutex.tag_dirty();
|
2021-12-21 15:18:56 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void add_node_tag(bNodeTree *ntree, bNode *node, const eNodeTreeChangedFlag flag)
|
|
|
|
|
{
|
|
|
|
|
add_tree_tag(ntree, flag);
|
2022-05-30 15:31:13 +02:00
|
|
|
node->runtime->changed_flag |= flag;
|
2021-12-21 15:18:56 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void add_socket_tag(bNodeTree *ntree, bNodeSocket *socket, const eNodeTreeChangedFlag flag)
|
|
|
|
|
{
|
|
|
|
|
add_tree_tag(ntree, flag);
|
2022-05-30 15:31:13 +02:00
|
|
|
socket->runtime->changed_flag |= flag;
|
2021-12-21 15:18:56 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
namespace blender::bke {
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
|
* Common datatype priorities, works for compositor, shader and texture nodes alike
|
|
|
|
|
* defines priority of datatype connection based on output type (to):
|
|
|
|
|
* `< 0`: never connect these types.
|
|
|
|
|
* `>= 0`: priority of connection (higher values chosen first).
|
|
|
|
|
*/
|
|
|
|
|
static int get_internal_link_type_priority(const bNodeSocketType *from, const bNodeSocketType *to)
|
|
|
|
|
{
|
|
|
|
|
switch (to->type) {
|
|
|
|
|
case SOCK_RGBA:
|
|
|
|
|
switch (from->type) {
|
|
|
|
|
case SOCK_RGBA:
|
|
|
|
|
return 4;
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
return 3;
|
|
|
|
|
case SOCK_INT:
|
|
|
|
|
return 2;
|
|
|
|
|
case SOCK_BOOLEAN:
|
|
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
return -1;
|
|
|
|
|
case SOCK_VECTOR:
|
|
|
|
|
switch (from->type) {
|
|
|
|
|
case SOCK_VECTOR:
|
|
|
|
|
return 4;
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
return 3;
|
|
|
|
|
case SOCK_INT:
|
|
|
|
|
return 2;
|
|
|
|
|
case SOCK_BOOLEAN:
|
|
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
return -1;
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
switch (from->type) {
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
return 5;
|
|
|
|
|
case SOCK_INT:
|
|
|
|
|
return 4;
|
|
|
|
|
case SOCK_BOOLEAN:
|
|
|
|
|
return 3;
|
|
|
|
|
case SOCK_RGBA:
|
|
|
|
|
return 2;
|
|
|
|
|
case SOCK_VECTOR:
|
|
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
return -1;
|
|
|
|
|
case SOCK_INT:
|
|
|
|
|
switch (from->type) {
|
|
|
|
|
case SOCK_INT:
|
|
|
|
|
return 5;
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
return 4;
|
|
|
|
|
case SOCK_BOOLEAN:
|
|
|
|
|
return 3;
|
|
|
|
|
case SOCK_RGBA:
|
|
|
|
|
return 2;
|
|
|
|
|
case SOCK_VECTOR:
|
|
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
return -1;
|
|
|
|
|
case SOCK_BOOLEAN:
|
|
|
|
|
switch (from->type) {
|
|
|
|
|
case SOCK_BOOLEAN:
|
|
|
|
|
return 5;
|
|
|
|
|
case SOCK_INT:
|
|
|
|
|
return 4;
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
return 3;
|
|
|
|
|
case SOCK_RGBA:
|
|
|
|
|
return 2;
|
|
|
|
|
case SOCK_VECTOR:
|
|
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
return -1;
|
2024-02-27 13:10:26 +01:00
|
|
|
case SOCK_ROTATION:
|
|
|
|
|
switch (from->type) {
|
|
|
|
|
case SOCK_ROTATION:
|
|
|
|
|
return 3;
|
|
|
|
|
case SOCK_VECTOR:
|
|
|
|
|
return 2;
|
|
|
|
|
case SOCK_FLOAT:
|
|
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
return -1;
|
2021-12-21 15:18:56 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/* The rest of the socket types only allow an internal link if both the input and output socket
|
|
|
|
|
* have the same type. If the sockets are custom, we check the idname instead. */
|
2025-01-08 16:34:41 +01:00
|
|
|
if (to->type == from->type && (to->type != SOCK_CUSTOM || to->idname == from->idname)) {
|
2021-12-21 15:18:56 +01:00
|
|
|
return 1;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return -1;
|
|
|
|
|
}
|
|
|
|
|
|
2023-09-14 14:13:07 +02:00
|
|
|
/* Check both the tree's own tags and the interface tags. */
|
|
|
|
|
static bool is_tree_changed(const bNodeTree &tree)
|
|
|
|
|
{
|
|
|
|
|
return tree.runtime->changed_flag != NTREE_CHANGED_NOTHING || tree.tree_interface.is_changed();
|
|
|
|
|
}
|
|
|
|
|
|
2021-12-21 15:18:56 +01:00
|
|
|
using TreeNodePair = std::pair<bNodeTree *, bNode *>;
|
|
|
|
|
using ObjectModifierPair = std::pair<Object *, ModifierData *>;
|
|
|
|
|
using NodeSocketPair = std::pair<bNode *, bNodeSocket *>;
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
|
* Cache common data about node trees from the #Main database that is expensive to retrieve on
|
|
|
|
|
* demand every time.
|
|
|
|
|
*/
|
|
|
|
|
struct NodeTreeRelations {
|
|
|
|
|
private:
|
|
|
|
|
Main *bmain_;
|
|
|
|
|
std::optional<Vector<bNodeTree *>> all_trees_;
|
|
|
|
|
std::optional<MultiValueMap<bNodeTree *, TreeNodePair>> group_node_users_;
|
|
|
|
|
std::optional<MultiValueMap<bNodeTree *, ObjectModifierPair>> modifiers_users_;
|
|
|
|
|
|
|
|
|
|
public:
|
2023-03-29 16:50:54 +02:00
|
|
|
NodeTreeRelations(Main *bmain) : bmain_(bmain) {}
|
2021-12-21 15:18:56 +01:00
|
|
|
|
|
|
|
|
void ensure_all_trees()
|
|
|
|
|
{
|
|
|
|
|
if (all_trees_.has_value()) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
all_trees_.emplace();
|
|
|
|
|
if (bmain_ == nullptr) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
FOREACH_NODETREE_BEGIN (bmain_, ntree, id) {
|
|
|
|
|
all_trees_->append(ntree);
|
|
|
|
|
}
|
|
|
|
|
FOREACH_NODETREE_END;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
void ensure_group_node_users()
|
|
|
|
|
{
|
|
|
|
|
if (group_node_users_.has_value()) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
group_node_users_.emplace();
|
|
|
|
|
if (bmain_ == nullptr) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
this->ensure_all_trees();
|
|
|
|
|
|
|
|
|
|
for (bNodeTree *ntree : *all_trees_) {
|
2022-12-02 11:12:51 -06:00
|
|
|
for (bNode *node : ntree->all_nodes()) {
|
2021-12-21 15:18:56 +01:00
|
|
|
if (node->id == nullptr) {
|
|
|
|
|
continue;
|
|
|
|
|
}
|
|
|
|
|
ID *id = node->id;
|
|
|
|
|
if (GS(id->name) == ID_NT) {
|
|
|
|
|
bNodeTree *group = (bNodeTree *)id;
|
|
|
|
|
group_node_users_->add(group, {ntree, node});
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
void ensure_modifier_users()
|
|
|
|
|
{
|
|
|
|
|
if (modifiers_users_.has_value()) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
modifiers_users_.emplace();
|
|
|
|
|
if (bmain_ == nullptr) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
LISTBASE_FOREACH (Object *, object, &bmain_->objects) {
|
|
|
|
|
LISTBASE_FOREACH (ModifierData *, md, &object->modifiers) {
|
|
|
|
|
if (md->type == eModifierType_Nodes) {
|
|
|
|
|
NodesModifierData *nmd = (NodesModifierData *)md;
|
|
|
|
|
if (nmd->node_group != nullptr) {
|
|
|
|
|
modifiers_users_->add(nmd->node_group, {object, md});
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
Span<ObjectModifierPair> get_modifier_users(bNodeTree *ntree)
|
|
|
|
|
{
|
|
|
|
|
BLI_assert(modifiers_users_.has_value());
|
|
|
|
|
return modifiers_users_->lookup(ntree);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
Span<TreeNodePair> get_group_node_users(bNodeTree *ntree)
|
|
|
|
|
{
|
|
|
|
|
BLI_assert(group_node_users_.has_value());
|
|
|
|
|
return group_node_users_->lookup(ntree);
|
|
|
|
|
}
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
struct TreeUpdateResult {
|
|
|
|
|
bool interface_changed = false;
|
|
|
|
|
bool output_changed = false;
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
class NodeTreeMainUpdater {
|
|
|
|
|
private:
|
|
|
|
|
Main *bmain_;
|
2025-01-09 17:00:05 +01:00
|
|
|
const NodeTreeUpdateExtraParams ¶ms_;
|
2021-12-21 15:18:56 +01:00
|
|
|
Map<bNodeTree *, TreeUpdateResult> update_result_by_tree_;
|
|
|
|
|
NodeTreeRelations relations_;
|
2024-12-05 18:02:14 +01:00
|
|
|
bool needs_relations_update_ = false;
|
2021-12-21 15:18:56 +01:00
|
|
|
|
|
|
|
|
public:
|
2025-01-09 17:00:05 +01:00
|
|
|
NodeTreeMainUpdater(Main *bmain, const NodeTreeUpdateExtraParams ¶ms)
|
2021-12-21 15:18:56 +01:00
|
|
|
: bmain_(bmain), params_(params), relations_(bmain)
|
|
|
|
|
{
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
void update()
|
|
|
|
|
{
|
|
|
|
|
Vector<bNodeTree *> changed_ntrees;
|
|
|
|
|
FOREACH_NODETREE_BEGIN (bmain_, ntree, id) {
|
2023-09-14 14:13:07 +02:00
|
|
|
if (is_tree_changed(*ntree)) {
|
2021-12-21 15:18:56 +01:00
|
|
|
changed_ntrees.append(ntree);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
FOREACH_NODETREE_END;
|
|
|
|
|
this->update_rooted(changed_ntrees);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
void update_rooted(Span<bNodeTree *> root_ntrees)
|
|
|
|
|
{
|
|
|
|
|
if (root_ntrees.is_empty()) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
bool is_single_tree_update = false;
|
|
|
|
|
|
|
|
|
|
if (root_ntrees.size() == 1) {
|
|
|
|
|
bNodeTree *ntree = root_ntrees[0];
|
2023-09-14 14:13:07 +02:00
|
|
|
if (!is_tree_changed(*ntree)) {
|
2021-12-21 15:18:56 +01:00
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
const TreeUpdateResult result = this->update_tree(*ntree);
|
|
|
|
|
update_result_by_tree_.add_new(ntree, result);
|
|
|
|
|
if (!result.interface_changed && !result.output_changed) {
|
|
|
|
|
is_single_tree_update = true;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (!is_single_tree_update) {
|
|
|
|
|
Vector<bNodeTree *> ntrees_in_order = this->get_tree_update_order(root_ntrees);
|
|
|
|
|
for (bNodeTree *ntree : ntrees_in_order) {
|
2023-09-14 14:13:07 +02:00
|
|
|
if (!is_tree_changed(*ntree)) {
|
2021-12-21 15:18:56 +01:00
|
|
|
continue;
|
|
|
|
|
}
|
|
|
|
|
if (!update_result_by_tree_.contains(ntree)) {
|
|
|
|
|
const TreeUpdateResult result = this->update_tree(*ntree);
|
|
|
|
|
update_result_by_tree_.add_new(ntree, result);
|
|
|
|
|
}
|
|
|
|
|
const TreeUpdateResult result = update_result_by_tree_.lookup(ntree);
|
|
|
|
|
Span<TreeNodePair> dependent_trees = relations_.get_group_node_users(ntree);
|
|
|
|
|
if (result.output_changed) {
|
|
|
|
|
for (const TreeNodePair &pair : dependent_trees) {
|
|
|
|
|
add_node_tag(pair.first, pair.second, NTREE_CHANGED_NODE_OUTPUT);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
if (result.interface_changed) {
|
|
|
|
|
for (const TreeNodePair &pair : dependent_trees) {
|
|
|
|
|
add_node_tag(pair.first, pair.second, NTREE_CHANGED_NODE_PROPERTY);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
for (const auto item : update_result_by_tree_.items()) {
|
|
|
|
|
bNodeTree *ntree = item.key;
|
|
|
|
|
const TreeUpdateResult &result = item.value;
|
|
|
|
|
|
|
|
|
|
this->reset_changed_flags(*ntree);
|
|
|
|
|
|
|
|
|
|
if (result.interface_changed) {
|
|
|
|
|
if (ntree->type == NTREE_GEOMETRY) {
|
|
|
|
|
relations_.ensure_modifier_users();
|
|
|
|
|
for (const ObjectModifierPair &pair : relations_.get_modifier_users(ntree)) {
|
|
|
|
|
Object *object = pair.first;
|
|
|
|
|
ModifierData *md = pair.second;
|
|
|
|
|
|
|
|
|
|
if (md->type == eModifierType_Nodes) {
|
|
|
|
|
MOD_nodes_update_interface(object, (NodesModifierData *)md);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2023-08-03 18:04:36 +02:00
|
|
|
if (result.output_changed) {
|
|
|
|
|
ntree->runtime->geometry_nodes_lazy_function_graph_info.reset();
|
|
|
|
|
}
|
|
|
|
|
|
2025-01-17 12:17:17 +01:00
|
|
|
ID *owner_id = BKE_id_owner_get(&ntree->id);
|
|
|
|
|
ID &owner_or_self_id = owner_id ? *owner_id : ntree->id;
|
2025-01-09 17:00:05 +01:00
|
|
|
if (params_.tree_changed_fn) {
|
2025-01-17 12:17:17 +01:00
|
|
|
params_.tree_changed_fn(*ntree, owner_or_self_id);
|
2025-01-09 17:00:05 +01:00
|
|
|
}
|
|
|
|
|
if (params_.tree_output_changed_fn && result.output_changed) {
|
2025-01-17 12:17:17 +01:00
|
|
|
params_.tree_output_changed_fn(*ntree, owner_or_self_id);
|
2021-12-21 15:18:56 +01:00
|
|
|
}
|
|
|
|
|
}
|
2024-12-05 18:02:14 +01:00
|
|
|
|
|
|
|
|
if (needs_relations_update_) {
|
|
|
|
|
if (bmain_) {
|
|
|
|
|
DEG_relations_tag_update(bmain_);
|
|
|
|
|
}
|
|
|
|
|
}
|
2021-12-21 15:18:56 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
private:
|
|
|
|
|
enum class ToposortMark {
|
|
|
|
|
None,
|
|
|
|
|
Temporary,
|
|
|
|
|
Permanent,
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
using ToposortMarkMap = Map<bNodeTree *, ToposortMark>;
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
|
* Finds all trees that depend on the given trees (through node groups). Then those trees are
|
|
|
|
|
* ordered such that all trees used by one tree come before it.
|
|
|
|
|
*/
|
|
|
|
|
Vector<bNodeTree *> get_tree_update_order(Span<bNodeTree *> root_ntrees)
|
|
|
|
|
{
|
|
|
|
|
relations_.ensure_group_node_users();
|
|
|
|
|
|
|
|
|
|
Set<bNodeTree *> trees_to_update = get_trees_to_update(root_ntrees);
|
|
|
|
|
|
|
|
|
|
Vector<bNodeTree *> sorted_ntrees;
|
|
|
|
|
|
|
|
|
|
ToposortMarkMap marks;
|
|
|
|
|
for (bNodeTree *ntree : trees_to_update) {
|
|
|
|
|
marks.add_new(ntree, ToposortMark::None);
|
|
|
|
|
}
|
|
|
|
|
for (bNodeTree *ntree : trees_to_update) {
|
|
|
|
|
if (marks.lookup(ntree) == ToposortMark::None) {
|
|
|
|
|
const bool cycle_detected = !this->get_tree_update_order__visit_recursive(
|
|
|
|
|
ntree, marks, sorted_ntrees);
|
|
|
|
|
/* This should be prevented by higher level operators. */
|
|
|
|
|
BLI_assert(!cycle_detected);
|
|
|
|
|
UNUSED_VARS_NDEBUG(cycle_detected);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
std::reverse(sorted_ntrees.begin(), sorted_ntrees.end());
|
|
|
|
|
|
|
|
|
|
return sorted_ntrees;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
bool get_tree_update_order__visit_recursive(bNodeTree *ntree,
|
|
|
|
|
ToposortMarkMap &marks,
|
|
|
|
|
Vector<bNodeTree *> &sorted_ntrees)
|
|
|
|
|
{
|
|
|
|
|
ToposortMark &mark = marks.lookup(ntree);
|
|
|
|
|
if (mark == ToposortMark::Permanent) {
|
|
|
|
|
return true;
|
|
|
|
|
}
|
|
|
|
|
if (mark == ToposortMark::Temporary) {
|
|
|
|
|
/* There is a dependency cycle. */
|
|
|
|
|
return false;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
mark = ToposortMark::Temporary;
|
|
|
|
|
|
|
|
|
|
for (const TreeNodePair &pair : relations_.get_group_node_users(ntree)) {
|
|
|
|
|
this->get_tree_update_order__visit_recursive(pair.first, marks, sorted_ntrees);
|
|
|
|
|
}
|
|
|
|
|
sorted_ntrees.append(ntree);
|
|
|
|
|
|
|
|
|
|
mark = ToposortMark::Permanent;
|
|
|
|
|
return true;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
Set<bNodeTree *> get_trees_to_update(Span<bNodeTree *> root_ntrees)
|
|
|
|
|
{
|
|
|
|
|
relations_.ensure_group_node_users();
|
|
|
|
|
|
|
|
|
|
Set<bNodeTree *> reachable_trees;
|
|
|
|
|
VectorSet<bNodeTree *> trees_to_check = root_ntrees;
|
|
|
|
|
|
|
|
|
|
while (!trees_to_check.is_empty()) {
|
|
|
|
|
bNodeTree *ntree = trees_to_check.pop();
|
|
|
|
|
if (reachable_trees.add(ntree)) {
|
|
|
|
|
for (const TreeNodePair &pair : relations_.get_group_node_users(ntree)) {
|
|
|
|
|
trees_to_check.add(pair.first);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return reachable_trees;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
TreeUpdateResult update_tree(bNodeTree &ntree)
|
|
|
|
|
{
|
|
|
|
|
TreeUpdateResult result;
|
|
|
|
|
|
2025-05-09 04:06:00 +02:00
|
|
|
ntree.runtime->link_errors.clear();
|
Nodes: improve drawing with invalid zone links
Previously, whenever the zone detection algorithm could not find a result, zones
were just not drawn at all. This can be very confusing because it's not
necessarily obvious that something is wrong in this case.
Now, invalid zones and the links that made them invalid are marked with an error.
Note, we can't generally detect the "valid part" of zones when there are invalid
links, because it's ambiguous which links are valid. However, the solution here
is to remember the last valid zones, and to look at which links would invalidate
those. Since the zone-detection result is currently runtime-only data, the
error won't show when reopening the file for now.
Implementation wise, this works by keeping a potentially outdated version of the
last valid zones around, even when the zone detection failed. For that to work,
I had to change some node pointers to node identifiers in the zone structs, so
that it is safe to access them even if the nodes have been removed.
Pull Request: https://projects.blender.org/blender/blender/pulls/139044
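The check against the last valid zones can be sketched as follows. This is an assumed, simplified model (the `Zone`/`Link` structs and `find_invalidating_links` are illustrative): because the zone structs store node *identifiers* rather than pointers, the stale zones stay safe to inspect even after nodes were removed, and a link is flagged when it leaves a zone without going through that zone's output node.

```cpp
#include <cstdint>
#include <set>
#include <vector>

struct Link {
  int32_t from_node;
  int32_t to_node;
};

struct Zone {
  std::set<int32_t> node_ids; /* Members of the zone, by identifier. */
  int32_t output_node_id = 0; /* Links may only leave through this node. */
};

/* A link invalidates the (last valid) zone if it goes from inside the zone to
 * the outside without originating at the zone's output node. */
static std::vector<Link> find_invalidating_links(const Zone &zone,
                                                 const std::vector<Link> &links)
{
  std::vector<Link> invalid;
  for (const Link &link : links) {
    const bool from_inside = zone.node_ids.count(link.from_node) != 0;
    const bool to_inside = zone.node_ids.count(link.to_node) != 0;
    if (from_inside && !to_inside && link.from_node != zone.output_node_id) {
      invalid.push_back(link);
    }
  }
  return invalid;
}
```

Links going *into* the zone are never flagged, matching the rule that links may enter a simulation zone freely but must leave through the output node.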
2025-05-19 17:25:36 +02:00
|
|
|
ntree.runtime->invalid_zone_output_node_ids.clear();
|
2024-05-23 14:31:16 +02:00
|
|
|
|
2025-02-28 19:07:02 +01:00
|
|
|
if (this->update_panel_toggle_names(ntree)) {
|
|
|
|
|
result.interface_changed = true;
|
|
|
|
|
}
|
|
|
|
|
|
2022-08-31 12:15:57 +02:00
|
|
|
this->update_socket_link_and_use(ntree);
|
|
|
|
|
this->update_individual_nodes(ntree);
|
|
|
|
|
this->update_internal_links(ntree);
|
|
|
|
|
this->update_generic_callback(ntree);
|
2021-12-21 15:18:56 +01:00
|
|
|
this->remove_unused_previews_when_necessary(ntree);
|
2023-08-08 17:36:06 +02:00
|
|
|
this->make_node_previews_dirty(ntree);
|
2021-12-21 15:18:56 +01:00
|
|
|
|
2022-08-31 12:15:57 +02:00
|
|
|
this->propagate_runtime_flags(ntree);
|
2021-12-21 15:18:56 +01:00
|
|
|
if (ntree.type == NTREE_GEOMETRY) {
|
2024-01-26 12:40:01 +01:00
|
|
|
if (this->propagate_enum_definitions(ntree)) {
|
|
|
|
|
result.interface_changed = true;
|
|
|
|
|
}
|
2022-08-31 12:15:57 +02:00
|
|
|
if (node_field_inferencing::update_field_inferencing(ntree)) {
|
2021-12-21 15:18:56 +01:00
|
|
|
result.interface_changed = true;
|
|
|
|
|
}
|
2023-12-18 13:01:06 +01:00
|
|
|
this->update_from_field_inference(ntree);
|
2024-10-07 12:59:39 +02:00
|
|
|
if (node_tree_reference_lifetimes::analyse_reference_lifetimes(ntree)) {
|
2023-01-05 14:05:30 +01:00
|
|
|
result.interface_changed = true;
|
|
|
|
|
}
|
      if (nodes::gizmos::update_tree_gizmo_propagation(ntree)) {
        result.interface_changed = true;
      }
      this->update_socket_shapes(ntree);
      this->update_eval_dependencies(ntree);
    }

    result.output_changed = this->check_if_output_changed(ntree);
    this->update_socket_link_and_use(ntree);
    this->update_link_validation(ntree);
    if (this->update_nested_node_refs(ntree)) {
      result.interface_changed = true;
    }

    if (ntree.type == NTREE_TEXTURE) {
      ntreeTexCheckCyclics(&ntree);
    }

    if (ntree.tree_interface.is_changed()) {
      result.interface_changed = true;
    }

#ifndef NDEBUG
    /* Check the uniqueness of node identifiers. */
    Set<int32_t> node_identifiers;
    const Span<const bNode *> nodes = ntree.all_nodes();
    for (const int i : nodes.index_range()) {
      const bNode &node = *nodes[i];
      BLI_assert(node.identifier > 0);
      node_identifiers.add_new(node.identifier);
      BLI_assert(node.runtime->index_in_tree == i);
    }
#endif

    return result;
  }

  void update_socket_link_and_use(bNodeTree &tree)
  {
    tree.ensure_topology_cache();
    for (bNodeSocket *socket : tree.all_input_sockets()) {
      if (socket->directly_linked_links().is_empty()) {
        socket->link = nullptr;
      }
      else {
        socket->link = socket->directly_linked_links()[0];
      }
    }

    this->update_socket_used_tags(tree);
  }

  void update_socket_used_tags(bNodeTree &tree)
  {
    tree.ensure_topology_cache();
    for (bNodeSocket *socket : tree.all_sockets()) {
      const bool socket_is_linked = !socket->directly_linked_links().is_empty();
      SET_FLAG_FROM_TEST(socket->flag, socket_is_linked, SOCK_IS_LINKED);
    }
  }

  void update_individual_nodes(bNodeTree &ntree)
  {
    for (bNode *node : ntree.all_nodes()) {
      bke::node_declaration_ensure(ntree, *node);
      if (this->should_update_individual_node(ntree, *node)) {
        bke::bNodeType &ntype = *node->typeinfo;
        if (ntype.declare) {
          /* Should have been created when the node was registered. */
          BLI_assert(ntype.static_declaration != nullptr);
          if (ntype.static_declaration->is_context_dependent) {
            nodes::update_node_declaration_and_sockets(ntree, *node);
          }
        }
        else if (node->is_undefined()) {
          /* If a node has become undefined (it generally was unregistered from Python), it does
           * not have a declaration anymore. */
          delete node->runtime->declaration;
          node->runtime->declaration = nullptr;
          LISTBASE_FOREACH (bNodeSocket *, socket, &node->inputs) {
            socket->runtime->declaration = nullptr;
          }
          LISTBASE_FOREACH (bNodeSocket *, socket, &node->outputs) {
            socket->runtime->declaration = nullptr;
          }
        }
        if (ntype.updatefunc) {
          ntype.updatefunc(&ntree, node);
        }
      }
    }
  }

  bool should_update_individual_node(const bNodeTree &ntree, const bNode &node)
  {
    if (ntree.runtime->changed_flag & NTREE_CHANGED_ANY) {
      return true;
    }
    if (node.runtime->changed_flag & NTREE_CHANGED_NODE_PROPERTY) {
      return true;
    }
    if (ntree.runtime->changed_flag & NTREE_CHANGED_LINK) {
      /* Currently we have no way to tell if a node needs to be updated when a link changed. */
      return true;
    }
    if (ntree.tree_interface.is_changed()) {
      if (node.is_group_input() || node.is_group_output()) {
        return true;
      }
    }

    /* Check paired simulation zone nodes. */
    if (all_zone_input_node_types().contains(node.type_legacy)) {
      const bNodeZoneType &zone_type = *zone_type_by_node_type(node.type_legacy);
      if (const bNode *output_node = zone_type.get_corresponding_output(ntree, node)) {
        if (output_node->runtime->changed_flag & NTREE_CHANGED_NODE_PROPERTY) {
          return true;
        }
      }
    }
    return false;
  }

  struct InternalLink {
    bNodeSocket *from;
    bNodeSocket *to;
    int multi_input_sort_id = 0;

    BLI_STRUCT_EQUALITY_OPERATORS_3(InternalLink, from, to, multi_input_sort_id);
  };

  const bNodeLink *first_non_dangling_link(const bNodeTree & /*ntree*/,
                                           const Span<const bNodeLink *> links) const
  {
    for (const bNodeLink *link : links) {
      if (!link->fromnode->is_dangling_reroute()) {
        return link;
      }
    }
    return nullptr;
  }

  void update_internal_links(bNodeTree &ntree)
  {
    bke::node_tree_runtime::AllowUsingOutdatedInfo allow_outdated_info{ntree};
    ntree.ensure_topology_cache();
    for (bNode *node : ntree.all_nodes()) {
      if (!this->should_update_individual_node(ntree, *node)) {
        continue;
      }
      /* Find all expected internal links. */
      Vector<InternalLink> expected_internal_links;
      for (const bNodeSocket *output_socket : node->output_sockets()) {
        if (!output_socket->is_available()) {
          continue;
        }
        if (output_socket->flag & SOCK_NO_INTERNAL_LINK) {
          continue;
        }
        const bNodeSocket *input_socket = this->find_internally_linked_input(ntree, output_socket);
        if (input_socket == nullptr) {
          continue;
        }

        const Span<const bNodeLink *> connected_links = input_socket->directly_linked_links();
        const bNodeLink *connected_link = first_non_dangling_link(ntree, connected_links);

        const int index = connected_link ? connected_link->multi_input_sort_id :
                                           std::max<int>(0, connected_links.size() - 1);
        expected_internal_links.append(InternalLink{const_cast<bNodeSocket *>(input_socket),
                                                    const_cast<bNodeSocket *>(output_socket),
                                                    index});
      }

      /* Rebuild internal links if they have changed. */
      if (node->runtime->internal_links.size() != expected_internal_links.size()) {
        this->update_internal_links_in_node(ntree, *node, expected_internal_links);
        continue;
      }

      const bool all_expected_internal_links_exist = std::all_of(
          node->runtime->internal_links.begin(),
          node->runtime->internal_links.end(),
          [&](const bNodeLink &link) {
            const InternalLink internal_link{link.fromsock, link.tosock, link.multi_input_sort_id};
            return expected_internal_links.as_span().contains(internal_link);
          });

      if (all_expected_internal_links_exist) {
        continue;
      }

      this->update_internal_links_in_node(ntree, *node, expected_internal_links);
    }
  }

  const bNodeSocket *find_internally_linked_input(const bNodeTree &ntree,
                                                  const bNodeSocket *output_socket)
  {
    const bNode &node = output_socket->owner_node();
    if (node.typeinfo->internally_linked_input) {
      return node.typeinfo->internally_linked_input(ntree, node, *output_socket);
    }

    const bNodeSocket *selected_socket = nullptr;
    int selected_priority = -1;
    bool selected_is_linked = false;
    for (const bNodeSocket *input_socket : node.input_sockets()) {
      if (!input_socket->is_available()) {
        continue;
      }
      if (input_socket->flag & SOCK_NO_INTERNAL_LINK) {
        continue;
      }
      const int priority = get_internal_link_type_priority(input_socket->typeinfo,
                                                           output_socket->typeinfo);
      if (priority < 0) {
        continue;
      }
      const bool is_linked = input_socket->is_directly_linked();
      const bool is_preferred = priority > selected_priority || (is_linked && !selected_is_linked);
      if (!is_preferred) {
        continue;
      }
      selected_socket = input_socket;
      selected_priority = priority;
      selected_is_linked = is_linked;
    }
    return selected_socket;
  }

  void update_internal_links_in_node(bNodeTree &ntree,
                                     bNode &node,
                                     Span<InternalLink> internal_links)
  {
    node.runtime->internal_links.clear();
    node.runtime->internal_links.reserve(internal_links.size());
    for (const InternalLink &internal_link : internal_links) {
      bNodeLink link{};
      link.fromnode = &node;
      link.fromsock = internal_link.from;
      link.tonode = &node;
      link.tosock = internal_link.to;
      link.multi_input_sort_id = internal_link.multi_input_sort_id;
      link.flag |= NODE_LINK_VALID;
      node.runtime->internal_links.append(link);
    }
    BKE_ntree_update_tag_node_internal_link(&ntree, &node);
  }

  void update_generic_callback(bNodeTree &ntree)
  {
    if (ntree.typeinfo->update == nullptr) {
      return;
    }
    ntree.typeinfo->update(&ntree);
  }

  void remove_unused_previews_when_necessary(bNodeTree &ntree)
  {
    /* Don't trigger preview removal when only those flags are set. */
    const uint32_t allowed_flags = NTREE_CHANGED_LINK | NTREE_CHANGED_SOCKET_PROPERTY |
                                   NTREE_CHANGED_NODE_PROPERTY | NTREE_CHANGED_NODE_OUTPUT;
    if ((ntree.runtime->changed_flag & allowed_flags) == ntree.runtime->changed_flag) {
      return;
    }
    blender::bke::node_preview_remove_unused(&ntree);
  }

  void make_node_previews_dirty(bNodeTree &ntree)
  {
    ntree.runtime->previews_refresh_state++;
    for (bNode *node : ntree.all_nodes()) {
      if (!node->is_group()) {
        continue;
      }
      if (bNodeTree *nested_tree = reinterpret_cast<bNodeTree *>(node->id)) {
        this->make_node_previews_dirty(*nested_tree);
      }
    }
  }

  void propagate_runtime_flags(const bNodeTree &ntree)
  {
    ntree.ensure_topology_cache();

    ntree.runtime->runtime_flag = 0;

    for (const bNode *group_node : ntree.group_nodes()) {
      const bNodeTree *group = reinterpret_cast<bNodeTree *>(group_node->id);
      if (group != nullptr) {
        ntree.runtime->runtime_flag |= group->runtime->runtime_flag;
      }
    }

    if (ntree.type == NTREE_SHADER) {
      /* Check if the tree itself has an animated image. */
      for (const StringRefNull idname : {"ShaderNodeTexImage", "ShaderNodeTexEnvironment"}) {
        for (const bNode *node : ntree.nodes_by_type(idname)) {
          Image *image = reinterpret_cast<Image *>(node->id);
          if (image != nullptr && BKE_image_is_animated(image)) {
            ntree.runtime->runtime_flag |= NTREE_RUNTIME_FLAG_HAS_IMAGE_ANIMATION;
            break;
          }
        }
      }
      /* Check if the tree has a material output. */
      for (const StringRefNull idname : {"ShaderNodeOutputMaterial",
                                         "ShaderNodeOutputLight",
                                         "ShaderNodeOutputWorld",
                                         "ShaderNodeOutputAOV"})
      {
        const Span<const bNode *> nodes = ntree.nodes_by_type(idname);
        if (!nodes.is_empty()) {
          ntree.runtime->runtime_flag |= NTREE_RUNTIME_FLAG_HAS_MATERIAL_OUTPUT;
          break;
        }
      }
    }
    if (ntree.type == NTREE_GEOMETRY) {
      /* Check if there is a simulation zone. */
      if (!ntree.nodes_by_type("GeometryNodeSimulationOutput").is_empty()) {
        ntree.runtime->runtime_flag |= NTREE_RUNTIME_FLAG_HAS_SIMULATION_ZONE;
      }
    }
  }

  void update_from_field_inference(bNodeTree &ntree)
  {
    /* Automatically tag a bake item as attribute when the input is a field. The flag should not be
     * removed automatically even when the field input is disconnected because the baked data may
     * still contain attribute data instead of a single value. */
    const Span<bke::FieldSocketState> field_states = ntree.runtime->field_states;
    for (bNode *node : ntree.nodes_by_type("GeometryNodeBake")) {
      NodeGeometryBake &storage = *static_cast<NodeGeometryBake *>(node->storage);
      for (const int i : IndexRange(storage.items_num)) {
        const bNodeSocket &socket = node->input_socket(i);
        NodeGeometryBakeItem &item = storage.items[i];
        if (field_states[socket.index_in_tree()] == FieldSocketState::IsField) {
          item.flag |= GEO_NODE_BAKE_ITEM_IS_ATTRIBUTE;
        }
      }
    }
  }

  void update_socket_shapes(bNodeTree &ntree)
  {
    ntree.ensure_topology_cache();
    const Span<bke::FieldSocketState> field_states = ntree.runtime->field_states;
    for (bNodeSocket *socket : ntree.all_sockets()) {
      switch (field_states[socket->index_in_tree()]) {
        case bke::FieldSocketState::RequiresSingle:
          socket->display_shape = SOCK_DISPLAY_SHAPE_CIRCLE;
          break;
        case bke::FieldSocketState::CanBeField:
          socket->display_shape = SOCK_DISPLAY_SHAPE_DIAMOND_DOT;
          break;
        case bke::FieldSocketState::IsField:
          socket->display_shape = SOCK_DISPLAY_SHAPE_DIAMOND;
          break;
      }
    }
  }

  void update_eval_dependencies(bNodeTree &ntree)
  {
    ntree.ensure_topology_cache();
    nodes::GeometryNodesEvalDependencies new_deps =
        nodes::gather_geometry_nodes_eval_dependencies_with_cache(ntree);

    /* Check if the dependencies have changed. */
    if (!ntree.runtime->geometry_nodes_eval_dependencies ||
        new_deps != *ntree.runtime->geometry_nodes_eval_dependencies)
    {
      needs_relations_update_ = true;
      ntree.runtime->geometry_nodes_eval_dependencies =
          std::make_unique<nodes::GeometryNodesEvalDependencies>(std::move(new_deps));
    }
  }

  bool propagate_enum_definitions(bNodeTree &ntree)
  {
    ntree.ensure_interface_cache();

    /* Propagation from right to left to determine which enum
     * definition to use for menu sockets. */
    for (bNode *node : ntree.toposort_right_to_left()) {
      const bool node_updated = this->should_update_individual_node(ntree, *node);

      Vector<bNodeSocket *> locally_defined_enums;
      if (node->is_type("GeometryNodeMenuSwitch")) {
        bNodeSocket &enum_input = node->input_socket(0);
        BLI_assert(enum_input.is_available() && enum_input.type == SOCK_MENU);
        /* Generate new enum items when the node has changed, otherwise keep existing items. */
        if (node_updated) {
          const NodeMenuSwitch &storage = *static_cast<NodeMenuSwitch *>(node->storage);
          const RuntimeNodeEnumItems *enum_items = this->create_runtime_enum_items(
              storage.enum_definition);

          this->set_enum_ptr(*enum_input.default_value_typed<bNodeSocketValueMenu>(), enum_items);
          /* Remove initial user. */
          enum_items->remove_user_and_delete_if_last();
        }
        locally_defined_enums.append(&enum_input);
      }

      /* Clear current enum references. */
      for (bNodeSocket *socket : node->input_sockets()) {
        if (socket->is_available() && socket->type == SOCK_MENU &&
            !locally_defined_enums.contains(socket))
        {
          clear_enum_reference(*socket);
        }
      }
      for (bNodeSocket *socket : node->output_sockets()) {
        if (socket->is_available() && socket->type == SOCK_MENU) {
          clear_enum_reference(*socket);
        }
      }

      /* Propagate enum references from output links. */
      for (bNodeSocket *output : node->output_sockets()) {
        if (!output->is_available() || output->type != SOCK_MENU) {
          continue;
        }
        for (const bNodeSocket *input : output->directly_linked_sockets()) {
          if (!input->is_available() || input->type != SOCK_MENU) {
            continue;
          }
          this->update_socket_enum_definition(*output->default_value_typed<bNodeSocketValueMenu>(),
                                              *input->default_value_typed<bNodeSocketValueMenu>());
        }
      }

      if (node->is_group()) {
        /* Node groups expose internal enum definitions. */
        if (node->id == nullptr) {
          continue;
        }
        const bNodeTree *group_tree = reinterpret_cast<bNodeTree *>(node->id);
        group_tree->ensure_interface_cache();

        for (const int socket_i : group_tree->interface_inputs().index_range()) {
          bNodeSocket &input = *node->input_sockets()[socket_i];
          const bNodeTreeInterfaceSocket &iosocket = *group_tree->interface_inputs()[socket_i];
          BLI_assert(STREQ(input.identifier, iosocket.identifier));
          if (input.is_available() && input.type == SOCK_MENU) {
            BLI_assert(STREQ(iosocket.socket_type, "NodeSocketMenu"));
            this->update_socket_enum_definition(
                *input.default_value_typed<bNodeSocketValueMenu>(),
                *static_cast<bNodeSocketValueMenu *>(iosocket.socket_data));
          }
        }
      }
      else if (node->is_type("GeometryNodeMenuSwitch")) {
        /* First input is always the node's own menu, propagate only to the enum case inputs. */
        const bNodeSocket *output = node->output_sockets().first();
        for (bNodeSocket *input : node->input_sockets().drop_front(1)) {
          if (input->is_available() && input->type == SOCK_MENU) {
            this->update_socket_enum_definition(
                *input->default_value_typed<bNodeSocketValueMenu>(),
                *output->default_value_typed<bNodeSocketValueMenu>());
          }
        }
      }
      else if (node->is_type("GeometryNodeForeachGeometryElementInput")) {
        /* Propagate menu from element inputs to field inputs. */
        BLI_assert(node->input_sockets().size() == node->output_sockets().size());
        /* Inputs Geometry, Selection and outputs Index, Element are ignored. */
        const IndexRange sockets = node->input_sockets().index_range().drop_front(2);
        for (const int socket_i : sockets) {
          bNodeSocket *input = node->input_sockets()[socket_i];
          bNodeSocket *output = node->output_sockets()[socket_i];
          if (input->is_available() && input->type == SOCK_MENU && output->is_available() &&
              output->type == SOCK_MENU)
          {
            this->update_socket_enum_definition(
                *input->default_value_typed<bNodeSocketValueMenu>(),
                *output->default_value_typed<bNodeSocketValueMenu>());
          }
        }
      }
      else {
        /* Propagate over internal relations. */
        /* XXX Placeholder implementation just propagates all outputs
         * to all inputs for built-in nodes. This could perhaps use
         * input/output relations to handle propagation generically? */
        for (bNodeSocket *input : node->input_sockets()) {
          if (input->is_available() && input->type == SOCK_MENU) {
            for (const bNodeSocket *output : node->output_sockets()) {
              if (output->is_available() && output->type == SOCK_MENU) {
                this->update_socket_enum_definition(
                    *input->default_value_typed<bNodeSocketValueMenu>(),
                    *output->default_value_typed<bNodeSocketValueMenu>());
              }
            }
          }
        }
      }
    }

    /* Find conflicts between corresponding menu sockets on different group input nodes. */
    const Span<bNode *> group_input_nodes = ntree.group_input_nodes();
    for (const int interface_input_i : ntree.interface_inputs().index_range()) {
      const bNodeTreeInterfaceSocket &interface_socket =
          *ntree.interface_inputs()[interface_input_i];
      if (interface_socket.socket_type != StringRef("NodeSocketMenu")) {
        continue;
      }
      const RuntimeNodeEnumItems *found_enum_items = nullptr;
      bool found_conflict = false;
      for (bNode *input_node : group_input_nodes) {
        const bNodeSocket &socket = input_node->output_socket(interface_input_i);
        const auto &socket_value = *socket.default_value_typed<bNodeSocketValueMenu>();
        if (socket_value.has_conflict()) {
          found_conflict = true;
          break;
        }
        if (found_enum_items == nullptr) {
          found_enum_items = socket_value.enum_items;
        }
        else if (socket_value.enum_items != nullptr) {
          if (found_enum_items != socket_value.enum_items) {
            found_conflict = true;
            break;
          }
        }
      }
      if (found_conflict) {
        /* Make sure that all group input sockets know that there is a conflict. */
        for (bNode *input_node : group_input_nodes) {
          bNodeSocket &socket = input_node->output_socket(interface_input_i);
          auto &socket_value = *socket.default_value_typed<bNodeSocketValueMenu>();
          if (socket_value.enum_items) {
            socket_value.enum_items->remove_user_and_delete_if_last();
            socket_value.enum_items = nullptr;
          }
          socket_value.runtime_flag |= NodeSocketValueMenuRuntimeFlag::NODE_MENU_ITEMS_CONFLICT;
        }
      }
      else if (found_enum_items != nullptr) {
        /* Make sure all corresponding menu sockets have the same menu reference. */
        for (bNode *input_node : group_input_nodes) {
          bNodeSocket &socket = input_node->output_socket(interface_input_i);
          auto &socket_value = *socket.default_value_typed<bNodeSocketValueMenu>();
          if (socket_value.enum_items == nullptr) {
            found_enum_items->add_user();
            socket_value.enum_items = found_enum_items;
          }
        }
      }
    }

    /* Build list of new enum items for the node tree interface. */
    Vector<bNodeSocketValueMenu> interface_enum_items(ntree.interface_inputs().size(), {0});
    for (const bNode *group_input_node : ntree.group_input_nodes()) {
      for (const int socket_i : ntree.interface_inputs().index_range()) {
        const bNodeSocket &output = *group_input_node->output_sockets()[socket_i];

        if (output.is_available() && output.type == SOCK_MENU) {
          this->update_socket_enum_definition(interface_enum_items[socket_i],
                                              *output.default_value_typed<bNodeSocketValueMenu>());
        }
      }
    }

    /* Move enum items to the interface and detect if anything changed. */
    bool changed = false;
    for (const int socket_i : ntree.interface_inputs().index_range()) {
      bNodeTreeInterfaceSocket &iosocket = *ntree.interface_inputs()[socket_i];
      if (STREQ(iosocket.socket_type, "NodeSocketMenu")) {
        bNodeSocketValueMenu &dst = *static_cast<bNodeSocketValueMenu *>(iosocket.socket_data);
        const bNodeSocketValueMenu &src = interface_enum_items[socket_i];
        if (dst.enum_items != src.enum_items || dst.has_conflict() != src.has_conflict()) {
          changed = true;
          if (dst.enum_items) {
            dst.enum_items->remove_user_and_delete_if_last();
          }
          /* Items are moved, no need to change user count. */
          dst.enum_items = src.enum_items;
          SET_FLAG_FROM_TEST(dst.runtime_flag, src.has_conflict(), NODE_MENU_ITEMS_CONFLICT);
        }
        else {
          /* If the items aren't moved, make sure they get released again. */
          if (src.enum_items) {
            src.enum_items->remove_user_and_delete_if_last();
          }
        }
      }
    }

    return changed;
  }

  /**
   * Make a runtime copy of the DNA enum items.
   * The runtime items list is shared by sockets.
   */
  const RuntimeNodeEnumItems *create_runtime_enum_items(const NodeEnumDefinition &enum_def)
  {
    RuntimeNodeEnumItems *enum_items = new RuntimeNodeEnumItems();
    enum_items->items.reinitialize(enum_def.items_num);
    for (const int i : enum_def.items().index_range()) {
      const NodeEnumItem &src = enum_def.items()[i];
      RuntimeNodeEnumItem &dst = enum_items->items[i];

      dst.identifier = src.identifier;
      dst.name = src.name ? src.name : "";
      dst.description = src.description ? src.description : "";
    }
    return enum_items;
  }

  void clear_enum_reference(bNodeSocket &socket)
  {
    BLI_assert(socket.is_available() && socket.type == SOCK_MENU);
    bNodeSocketValueMenu &default_value = *socket.default_value_typed<bNodeSocketValueMenu>();
    this->reset_enum_ptr(default_value);
    default_value.runtime_flag &= ~NODE_MENU_ITEMS_CONFLICT;
  }

  void update_socket_enum_definition(bNodeSocketValueMenu &dst, const bNodeSocketValueMenu &src)
  {
    if (dst.has_conflict()) {
      /* Target enum already has a conflict. */
      BLI_assert(dst.enum_items == nullptr);
      return;
    }

    if (src.has_conflict()) {
      /* The target conflicts if any source enum has a conflict. */
      this->reset_enum_ptr(dst);
      dst.runtime_flag |= NODE_MENU_ITEMS_CONFLICT;
    }
    else if (!dst.enum_items) {
      /* First connection, set the reference. */
      this->set_enum_ptr(dst, src.enum_items);
    }
    else if (src.enum_items && dst.enum_items != src.enum_items) {
      /* Error if enum ref does not match other connections. */
      this->reset_enum_ptr(dst);
      dst.runtime_flag |= NODE_MENU_ITEMS_CONFLICT;
    }
  }

  void reset_enum_ptr(bNodeSocketValueMenu &dst)
  {
    if (dst.enum_items) {
      dst.enum_items->remove_user_and_delete_if_last();
      dst.enum_items = nullptr;
    }
  }

  void set_enum_ptr(bNodeSocketValueMenu &dst, const RuntimeNodeEnumItems *enum_items)
  {
    if (dst.enum_items) {
      dst.enum_items->remove_user_and_delete_if_last();
      dst.enum_items = nullptr;
    }
    if (enum_items) {
      enum_items->add_user();
      dst.enum_items = enum_items;
    }
  }

  void update_link_validation(bNodeTree &ntree)
  {
    /* Tests if enum references are undefined. */
    const auto is_invalid_enum_ref = [](const bNodeSocket &socket) -> bool {
      if (socket.type == SOCK_MENU) {
        return socket.default_value_typed<bNodeSocketValueMenu>()->enum_items == nullptr;
      }
      return false;
    };

    const bNodeTreeZones *fallback_zones = nullptr;
    if (ntree.type == NTREE_GEOMETRY && !ntree.zones() && ntree.runtime->last_valid_zones) {
      fallback_zones = ntree.runtime->last_valid_zones.get();
    }

    LISTBASE_FOREACH (bNodeLink *, link, &ntree.links) {
      link->flag |= NODE_LINK_VALID;
      if (!link->fromsock->is_available() || !link->tosock->is_available()) {
        link->flag &= ~NODE_LINK_VALID;
        continue;
      }
      if (is_invalid_enum_ref(*link->fromsock) || is_invalid_enum_ref(*link->tosock)) {
        link->flag &= ~NODE_LINK_VALID;
        ntree.runtime->link_errors.add(
            NodeLinkKey{*link},
            NodeLinkError{TIP_("Use node groups to reuse the same menu multiple times")});
        continue;
      }
      if (ntree.type == NTREE_GEOMETRY) {
        const Span<FieldSocketState> field_states = ntree.runtime->field_states;
        if (field_states[link->fromsock->index_in_tree()] == FieldSocketState::IsField &&
            field_states[link->tosock->index_in_tree()] != FieldSocketState::IsField)
        {
          link->flag &= ~NODE_LINK_VALID;
          ntree.runtime->link_errors.add(
              NodeLinkKey{*link}, NodeLinkError{TIP_("The node input does not support fields")});
          continue;
        }
      }
      const bNode &from_node = *link->fromnode;
      const bNode &to_node = *link->tonode;
      if (from_node.runtime->toposort_left_to_right_index >
          to_node.runtime->toposort_left_to_right_index)
      {
        link->flag &= ~NODE_LINK_VALID;
        ntree.runtime->link_errors.add(
            NodeLinkKey{*link},
            NodeLinkError{TIP_("The links form a cycle which is not supported")});
        continue;
      }
      if (ntree.typeinfo->validate_link) {
        const eNodeSocketDatatype from_type = eNodeSocketDatatype(link->fromsock->type);
        const eNodeSocketDatatype to_type = eNodeSocketDatatype(link->tosock->type);
        if (!ntree.typeinfo->validate_link(from_type, to_type)) {
          link->flag &= ~NODE_LINK_VALID;
          ntree.runtime->link_errors.add(
              NodeLinkKey{*link},
              NodeLinkError{fmt::format("{}: {} " BLI_STR_UTF8_BLACK_RIGHT_POINTING_SMALL_TRIANGLE
                                        " {}",
                                        TIP_("Conversion is not supported"),
                                        TIP_(link->fromsock->typeinfo->label),
                                        TIP_(link->tosock->typeinfo->label))});
          continue;
        }
      }
      if (fallback_zones) {
        if (!fallback_zones->link_between_sockets_is_allowed(*link->fromsock, *link->tosock)) {
          if (const bNodeTreeZone *from_zone = fallback_zones->get_zone_by_socket(*link->fromsock))
          {
            ntree.runtime->invalid_zone_output_node_ids.add(*from_zone->output_node_id);
          }

          link->flag &= ~NODE_LINK_VALID;
          ntree.runtime->link_errors.add(
              NodeLinkKey{*link},
              NodeLinkError{TIP_("Links can only go into a zone but not out")});
          continue;
        }
      }
    }
  }

  bool check_if_output_changed(const bNodeTree &tree)
  {
    tree.ensure_topology_cache();

    /* Compute a hash that represents the node topology connected to the output. This always has
     * to be updated even if it is not used to detect changes right now. Otherwise
     * #btree.runtime.output_topology_hash will go out of date. */
    const Vector<const bNodeSocket *> tree_output_sockets = this->find_output_sockets(tree);
    const uint32_t old_topology_hash = tree.runtime->output_topology_hash;
    const uint32_t new_topology_hash = this->get_combined_socket_topology_hash(
        tree, tree_output_sockets);
    tree.runtime->output_topology_hash = new_topology_hash;

    if (const AnimData *adt = BKE_animdata_from_id(&tree.id)) {
      /* Drivers may copy values in the node tree around arbitrarily and may cause the output to
       * change even if it wouldn't without drivers. Only some special drivers like `frame/5` can
       * be used without causing updates all the time currently. In the future we could try to
       * handle other drivers better as well.
       * Note that this optimization only works in practice when the depsgraph didn't also get a
       * copy-on-evaluation tag for the node tree (which happens when changing node properties). It
       * does work in a few situations like adding reroutes and duplicating nodes though. */
      LISTBASE_FOREACH (const FCurve *, fcurve, &adt->drivers) {
        const ChannelDriver *driver = fcurve->driver;
        const StringRef expression = driver->expression;
        if (expression.startswith("frame")) {
          const StringRef remaining_expression = expression.drop_known_prefix("frame");
          if (remaining_expression.find_first_not_of(" */+-0123456789.") == StringRef::not_found) {
            continue;
          }
        }
        /* Unrecognized driver, assume that the output always changes. */
        return true;
      }
    }

    if (tree.runtime->changed_flag & NTREE_CHANGED_ANY) {
      return true;
    }

    if (old_topology_hash != new_topology_hash) {
      return true;
    }

    /* The topology hash can only be used when only topology-changing operations have been done. */
    if (tree.runtime->changed_flag ==
        (tree.runtime->changed_flag & (NTREE_CHANGED_LINK | NTREE_CHANGED_REMOVED_NODE)))
    {
      if (old_topology_hash == new_topology_hash) {
        return false;
      }
    }

    if (!this->check_if_socket_outputs_changed_based_on_flags(tree, tree_output_sockets)) {
      return false;
    }

    return true;
  }

  Vector<const bNodeSocket *> find_output_sockets(const bNodeTree &tree)
  {
    Vector<const bNodeSocket *> sockets;
    for (const bNode *node : tree.all_nodes()) {
      if (!this->is_output_node(*node)) {
        continue;
      }
      for (const bNodeSocket *socket : node->input_sockets()) {
        if (!STREQ(socket->idname, "NodeSocketVirtual")) {
          sockets.append(socket);
        }
      }
    }
    return sockets;
  }

  bool is_output_node(const bNode &node) const
  {
    if (node.typeinfo->nclass == NODE_CLASS_OUTPUT) {
      return true;
    }
    if (node.is_group_output()) {
      return true;
    }
    if (node.is_type("GeometryNodeWarning")) {
      return true;
    }
    if (nodes::gizmos::is_builtin_gizmo_node(node)) {
      return true;
    }
    /* Assume node groups without output sockets are outputs. */
    if (node.is_group()) {
      const bNodeTree *node_group = reinterpret_cast<const bNodeTree *>(node.id);
      if (node_group != nullptr &&
          node_group->runtime->runtime_flag & NTREE_RUNTIME_FLAG_HAS_MATERIAL_OUTPUT)
      {
        return true;
      }
    }
    return false;
  }

/**
|
2024-01-26 12:40:01 +01:00
|
|
|
* Computes a hash that changes when the node tree topology connected to an output node
|
|
|
|
|
* changes. Adding reroutes does not have an effect on the hash.
|
2021-12-21 15:18:56 +01:00
|
|
|
*/
|
2022-08-31 12:15:57 +02:00
|
|
|
uint32_t get_combined_socket_topology_hash(const bNodeTree &tree,
|
|
|
|
|
Span<const bNodeSocket *> sockets)
|
2021-12-21 15:18:56 +01:00
|
|
|
{
|
2022-09-20 13:21:03 +02:00
|
|
|
if (tree.has_available_link_cycle()) {
|
2024-01-26 12:40:01 +01:00
|
|
|
/* Return dummy value when the link has any cycles. The algorithm below could be improved
|
|
|
|
|
* to handle cycles more gracefully. */
|
2021-12-31 11:33:47 +01:00
|
|
|
return 0;
|
|
|
|
|
}
|
2021-12-21 15:18:56 +01:00
|
|
|
Array<uint32_t> hashes = this->get_socket_topology_hashes(tree, sockets);
|
|
|
|
|
uint32_t combined_hash = 0;
|
|
|
|
|
for (uint32_t hash : hashes) {
|
|
|
|
|
combined_hash = noise::hash(combined_hash, hash);
|
|
|
|
|
}
|
|
|
|
|
return combined_hash;
|
|
|
|
|
}

  Array<uint32_t> get_socket_topology_hashes(const bNodeTree &tree,
                                             const Span<const bNodeSocket *> sockets)
  {
    BLI_assert(!tree.has_available_link_cycle());
    Array<std::optional<uint32_t>> hash_by_socket_id(tree.all_sockets().size());
    Stack<const bNodeSocket *> sockets_to_check = sockets;

    auto get_socket_ptr_hash = [&](const bNodeSocket &socket) {
      const uint64_t socket_ptr = uintptr_t(&socket);
      return noise::hash(socket_ptr, socket_ptr >> 32);
    };

    while (!sockets_to_check.is_empty()) {
      const bNodeSocket &socket = *sockets_to_check.peek();
      const bNode &node = socket.owner_node();

      if (hash_by_socket_id[socket.index_in_tree()].has_value()) {
        /* Socket is handled already. */
        sockets_to_check.pop();
        continue;
      }

      uint32_t socket_hash = 0;
      if (socket.is_input()) {
        /* For input sockets, first compute the hashes of all linked sockets. */
        bool all_origins_computed = true;
        bool get_value_from_origin = false;
        for (const bNodeLink *link : socket.directly_linked_links()) {
          if (link->is_muted()) {
            continue;
          }
          if (!link->is_available()) {
            continue;
          }
          const bNodeSocket &origin_socket = *link->fromsock;
          const std::optional<uint32_t> origin_hash =
              hash_by_socket_id[origin_socket.index_in_tree()];
          if (origin_hash.has_value()) {
            if (get_value_from_origin || socket.type != origin_socket.type) {
              socket_hash = noise::hash(socket_hash, *origin_hash);
            }
            else {
              /* Copy the socket hash because the link did not change it. */
              socket_hash = *origin_hash;
            }
            get_value_from_origin = true;
          }
          else {
            sockets_to_check.push(&origin_socket);
            all_origins_computed = false;
          }
        }
        if (!all_origins_computed) {
          continue;
        }
        if (!get_value_from_origin) {
          socket_hash = get_socket_ptr_hash(socket);
        }
      }
      else {
        bool all_available_inputs_computed = true;
        for (const bNodeSocket *input_socket : node.input_sockets()) {
          if (input_socket->is_available()) {
            if (!hash_by_socket_id[input_socket->index_in_tree()].has_value()) {
              sockets_to_check.push(input_socket);
              all_available_inputs_computed = false;
            }
          }
        }
        if (!all_available_inputs_computed) {
          continue;
        }
        if (node.is_reroute()) {
          socket_hash = *hash_by_socket_id[node.input_socket(0).index_in_tree()];
        }
        else if (node.is_muted()) {
          const bNodeSocket *internal_input = socket.internal_link_input();
          if (internal_input == nullptr) {
            socket_hash = get_socket_ptr_hash(socket);
          }
          else {
            if (internal_input->type == socket.type) {
              socket_hash = *hash_by_socket_id[internal_input->index_in_tree()];
            }
            else {
              socket_hash = get_socket_ptr_hash(socket);
            }
          }
        }
        else {
          socket_hash = get_socket_ptr_hash(socket);
          for (const bNodeSocket *input_socket : node.input_sockets()) {
            if (input_socket->is_available()) {
              const uint32_t input_socket_hash =
                  *hash_by_socket_id[input_socket->index_in_tree()];
              socket_hash = noise::hash(socket_hash, input_socket_hash);
            }
          }

          /* The Image Texture node has a special case. The behavior of the color output changes
           * depending on whether the Alpha output is linked. */
          if (node.is_type("ShaderNodeTexImage") && socket.index() == 0) {
            BLI_assert(STREQ(socket.name, "Color"));
            const bNodeSocket &alpha_socket = node.output_socket(1);
            BLI_assert(STREQ(alpha_socket.name, "Alpha"));
            if (alpha_socket.is_directly_linked()) {
              socket_hash = noise::hash(socket_hash);
            }
          }
        }
      }
      hash_by_socket_id[socket.index_in_tree()] = socket_hash;
      /* Check that nothing has been pushed in the meantime. */
      BLI_assert(sockets_to_check.peek() == &socket);
      sockets_to_check.pop();
    }

    /* Create output array. */
    Array<uint32_t> hashes(sockets.size());
    for (const int i : sockets.index_range()) {
      hashes[i] = *hash_by_socket_id[sockets[i]->index_in_tree()];
    }
    return hashes;
  }

  /**
   * Returns true when any of the provided sockets changed its value. A change is detected by
   * checking the #changed_flag on connected sockets and nodes.
   */
  bool check_if_socket_outputs_changed_based_on_flags(const bNodeTree &tree,
                                                      Span<const bNodeSocket *> sockets)
  {
    /* Avoid visiting the same socket twice when multiple links point to the same socket. */
    Array<bool> pushed_by_socket_id(tree.all_sockets().size(), false);
    Stack<const bNodeSocket *> sockets_to_check = sockets;

    for (const bNodeSocket *socket : sockets) {
      pushed_by_socket_id[socket->index_in_tree()] = true;
    }

    while (!sockets_to_check.is_empty()) {
      const bNodeSocket &socket = *sockets_to_check.pop();
      const bNode &node = socket.owner_node();
      if (socket.runtime->changed_flag != NTREE_CHANGED_NOTHING) {
        return true;
      }
      if (node.runtime->changed_flag != NTREE_CHANGED_NOTHING) {
        const bool only_unused_internal_link_changed = !node.is_muted() &&
                                                       node.runtime->changed_flag ==
                                                           NTREE_CHANGED_INTERNAL_LINK;
        const bool only_parent_changed = node.runtime->changed_flag == NTREE_CHANGED_PARENT;
        const bool change_affects_output = !(only_unused_internal_link_changed ||
                                             only_parent_changed);
        if (change_affects_output) {
          return true;
        }
      }
      if (socket.is_input()) {
        for (const bNodeSocket *origin_socket : socket.directly_linked_sockets()) {
          bool &pushed = pushed_by_socket_id[origin_socket->index_in_tree()];
          if (!pushed) {
            sockets_to_check.push(origin_socket);
            pushed = true;
          }
        }
      }
      else {
        for (const bNodeSocket *input_socket : node.input_sockets()) {
          if (input_socket->is_available()) {
            bool &pushed = pushed_by_socket_id[input_socket->index_in_tree()];
            if (!pushed) {
              sockets_to_check.push(input_socket);
              pushed = true;
            }
          }
        }
        /* Zones may propagate changes from the input node to the output node even though there
         * is no explicit link. */
        switch (node.type_legacy) {
          case GEO_NODE_REPEAT_OUTPUT:
          case GEO_NODE_SIMULATION_OUTPUT:
          case GEO_NODE_FOREACH_GEOMETRY_ELEMENT_OUTPUT: {
            const bNodeTreeZones *zones = tree.zones();
            if (!zones) {
              break;
            }
            const bNodeTreeZone *zone = zones->get_zone_by_node(node.identifier);
            if (!zone->input_node()) {
              break;
            }
            for (const bNodeSocket *input_socket : zone->input_node()->input_sockets()) {
              if (input_socket->is_available()) {
                bool &pushed = pushed_by_socket_id[input_socket->index_in_tree()];
                if (!pushed) {
                  sockets_to_check.push(input_socket);
                  pushed = true;
                }
              }
            }
            break;
          }
        }
        /* The Normal node has a special case, because the value stored in the first output
         * socket is used as input in the node. */
        if ((node.is_type("ShaderNodeNormal") || node.is_type("CompositorNodeNormal")) &&
            socket.index() == 1)
        {
          BLI_assert(STREQ(socket.name, "Dot"));
          const bNodeSocket &normal_output = node.output_socket(0);
          BLI_assert(STREQ(normal_output.name, "Normal"));
          bool &pushed = pushed_by_socket_id[normal_output.index_in_tree()];
          if (!pushed) {
            sockets_to_check.push(&normal_output);
            pushed = true;
          }
        }
      }
    }
    return false;
  }

  /**
   * Make sure that the #bNodeTree::nested_node_refs is up to date. It's supposed to contain a
   * reference to all (nested) simulation zones and bake nodes.
   */
  bool update_nested_node_refs(bNodeTree &ntree)
  {
    ntree.ensure_topology_cache();

    /* Simplify lookup of old ids. */
    Map<bNestedNodePath, int32_t> old_id_by_path;
    Set<int32_t> old_ids;
    for (const bNestedNodeRef &ref : ntree.nested_node_refs_span()) {
      old_id_by_path.add(ref.path, ref.id);
      old_ids.add(ref.id);
    }

    Vector<bNestedNodePath> nested_node_paths;

    /* Don't forget nested node refs just because the linked file is not available right now. */
    for (const bNestedNodePath &path : old_id_by_path.keys()) {
      const bNode *node = ntree.node_by_id(path.node_id);
      if (node && node->is_group() && node->id) {
        if (node->id->tag & ID_TAG_MISSING) {
          nested_node_paths.append(path);
        }
      }
    }
    if (ntree.type == NTREE_GEOMETRY) {
      /* Create references for simulations and bake nodes in geometry nodes.
       * Those are the nodes that we want to store settings for at a higher level. */
      for (StringRefNull idname : {"GeometryNodeSimulationOutput", "GeometryNodeBake"}) {
        for (const bNode *node : ntree.nodes_by_type(idname)) {
          nested_node_paths.append({node->identifier, -1});
        }
      }
    }
    /* Propagate references to nested nodes in group nodes. */
    for (const bNode *node : ntree.group_nodes()) {
      const bNodeTree *group = reinterpret_cast<const bNodeTree *>(node->id);
      if (group == nullptr) {
        continue;
      }
      for (const int i : group->nested_node_refs_span().index_range()) {
        const bNestedNodeRef &child_ref = group->nested_node_refs[i];
        nested_node_paths.append({node->identifier, child_ref.id});
      }
    }

    /* Used to generate new unique IDs if necessary. */
    RandomNumberGenerator rng = RandomNumberGenerator::from_random_seed();

    Map<int32_t, bNestedNodePath> new_path_by_id;
    for (const bNestedNodePath &path : nested_node_paths) {
      const int32_t old_id = old_id_by_path.lookup_default(path, -1);
      if (old_id != -1) {
        /* The same path existed before, it should keep the same ID as before. */
        new_path_by_id.add(old_id, path);
        continue;
      }
      /* The path is new, it should get a new ID that does not collide with any existing IDs. */
      int32_t new_id;
      while (true) {
        new_id = rng.get_int32(INT32_MAX);
        if (!old_ids.contains(new_id) && !new_path_by_id.contains(new_id)) {
          break;
        }
      }
      new_path_by_id.add(new_id, path);
    }

    /* Check if the old and new references are identical. */
    if (!this->nested_node_refs_changed(ntree, new_path_by_id)) {
      return false;
    }

    MEM_SAFE_FREE(ntree.nested_node_refs);
    if (new_path_by_id.is_empty()) {
      ntree.nested_node_refs_num = 0;
      return true;
    }

    /* Allocate a new array for the nested node references contained in the node tree. */
    bNestedNodeRef *new_refs = MEM_malloc_arrayN<bNestedNodeRef>(size_t(new_path_by_id.size()),
                                                                 __func__);
    int index = 0;
    for (const auto item : new_path_by_id.items()) {
      bNestedNodeRef &ref = new_refs[index];
      ref.id = item.key;
      ref.path = item.value;
      index++;
    }

    ntree.nested_node_refs = new_refs;
    ntree.nested_node_refs_num = new_path_by_id.size();

    return true;
  }

  /* Return true when the set of nested node reference IDs stored in the tree differs from the
   * keys of the given map. */
  bool nested_node_refs_changed(const bNodeTree &ntree,
                                const Map<int32_t, bNestedNodePath> &new_path_by_id)
  {
    if (ntree.nested_node_refs_num != new_path_by_id.size()) {
      return true;
    }
    for (const bNestedNodeRef &ref : ntree.nested_node_refs_span()) {
      if (!new_path_by_id.contains(ref.id)) {
        return true;
      }
    }
    return false;
  }

  void reset_changed_flags(bNodeTree &ntree)
  {
    ntree.runtime->changed_flag = NTREE_CHANGED_NOTHING;
    for (bNode *node : ntree.all_nodes()) {
      node->runtime->changed_flag = NTREE_CHANGED_NOTHING;
      node->runtime->update = 0;
      LISTBASE_FOREACH (bNodeSocket *, socket, &node->inputs) {
        socket->runtime->changed_flag = NTREE_CHANGED_NOTHING;
      }
      LISTBASE_FOREACH (bNodeSocket *, socket, &node->outputs) {
        socket->runtime->changed_flag = NTREE_CHANGED_NOTHING;
      }
    }
    ntree.tree_interface.reset_changed_flags();
  }

  /**
   * Update the panel toggle sockets to use the same name as the panel.
   */
  bool update_panel_toggle_names(bNodeTree &ntree)
  {
    bool changed = false;
    ntree.ensure_interface_cache();
    for (bNodeTreeInterfaceItem *item : ntree.interface_items()) {
      if (item->item_type != NODE_INTERFACE_PANEL) {
        continue;
      }
      bNodeTreeInterfacePanel *panel = reinterpret_cast<bNodeTreeInterfacePanel *>(item);
      if (bNodeTreeInterfaceSocket *toggle_socket = panel->header_toggle_socket()) {
        if (!STREQ(panel->name, toggle_socket->name)) {
          MEM_SAFE_FREE(toggle_socket->name);
          toggle_socket->name = BLI_strdup_null(panel->name);
          changed = true;
        }
      }
    }
    return changed;
  }
};

} // namespace blender::bke

void BKE_ntree_update_tag_all(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_ANY);
}

void BKE_ntree_update_tag_node_property(bNodeTree *ntree, bNode *node)
{
  add_node_tag(ntree, node, NTREE_CHANGED_NODE_PROPERTY);
}

void BKE_ntree_update_tag_node_new(bNodeTree *ntree, bNode *node)
{
  add_node_tag(ntree, node, NTREE_CHANGED_NODE_PROPERTY);
}

void BKE_ntree_update_tag_node_type(bNodeTree *ntree, bNode *node)
{
  add_node_tag(ntree, node, NTREE_CHANGED_NODE_PROPERTY);
}

void BKE_ntree_update_tag_socket_property(bNodeTree *ntree, bNodeSocket *socket)
{
  add_socket_tag(ntree, socket, NTREE_CHANGED_SOCKET_PROPERTY);
}

void BKE_ntree_update_tag_socket_new(bNodeTree *ntree, bNodeSocket *socket)
{
  add_socket_tag(ntree, socket, NTREE_CHANGED_SOCKET_PROPERTY);
}

void BKE_ntree_update_tag_socket_removed(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_REMOVED_SOCKET);
}

void BKE_ntree_update_tag_socket_type(bNodeTree *ntree, bNodeSocket *socket)
{
  add_socket_tag(ntree, socket, NTREE_CHANGED_SOCKET_PROPERTY);
}

void BKE_ntree_update_tag_socket_availability(bNodeTree *ntree, bNodeSocket *socket)
{
  add_socket_tag(ntree, socket, NTREE_CHANGED_SOCKET_PROPERTY);
}

void BKE_ntree_update_tag_node_removed(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_REMOVED_NODE);
}

void BKE_ntree_update_tag_node_mute(bNodeTree *ntree, bNode *node)
{
  add_node_tag(ntree, node, NTREE_CHANGED_NODE_PROPERTY);
}

void BKE_ntree_update_tag_node_internal_link(bNodeTree *ntree, bNode *node)
{
  add_node_tag(ntree, node, NTREE_CHANGED_INTERNAL_LINK);
}

void BKE_ntree_update_tag_link_changed(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_LINK);
}

void BKE_ntree_update_tag_link_removed(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_LINK);
}

void BKE_ntree_update_tag_link_added(bNodeTree *ntree, bNodeLink * /*link*/)
{
  add_tree_tag(ntree, NTREE_CHANGED_LINK);
}

void BKE_ntree_update_tag_link_mute(bNodeTree *ntree, bNodeLink * /*link*/)
{
  add_tree_tag(ntree, NTREE_CHANGED_LINK);
}

void BKE_ntree_update_tag_active_output_changed(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_ANY);
}

void BKE_ntree_update_tag_missing_runtime_data(bNodeTree *ntree)
{
  add_tree_tag(ntree, NTREE_CHANGED_ALL);
}

void BKE_ntree_update_tag_parent_change(bNodeTree *ntree, bNode *node)
{
  add_node_tag(ntree, node, NTREE_CHANGED_PARENT);
}

void BKE_ntree_update_tag_id_changed(Main *bmain, ID *id)
{
  FOREACH_NODETREE_BEGIN (bmain, ntree, ntree_id) {
    for (bNode *node : ntree->all_nodes()) {
      if (node->id == id) {
        node->runtime->update |= NODE_UPDATE_ID;
        add_node_tag(ntree, node, NTREE_CHANGED_NODE_PROPERTY);
      }
    }
  }
  FOREACH_NODETREE_END;
}

void BKE_ntree_update_tag_image_user_changed(bNodeTree *ntree, ImageUser * /*iuser*/)
{
  /* Would have to search for the node that uses the image user for a more detailed tag. */
  add_tree_tag(ntree, NTREE_CHANGED_ANY);
}

uint64_t bNestedNodePath::hash() const
{
  return blender::get_default_hash(this->node_id, this->id_in_node);
}

bool operator==(const bNestedNodePath &a, const bNestedNodePath &b)
{
  return a.node_id == b.node_id && a.id_in_node == b.id_in_node;
}

/**
 * Protect from recursive calls into the updating function. Some node update functions might
 * trigger this from Python or in other cases.
 *
 * This could be added to #Main, but given that there is generally only one #Main, that's not
 * really worth it now.
 */
static bool is_updating = false;

void BKE_ntree_update(Main &bmain,
                      const std::optional<blender::Span<bNodeTree *>> modified_trees,
                      const NodeTreeUpdateExtraParams &params)
{
  if (is_updating) {
    return;
  }
  is_updating = true;
  blender::bke::NodeTreeMainUpdater updater{&bmain, params};
  if (modified_trees.has_value()) {
    updater.update_rooted(*modified_trees);
  }
  else {
    updater.update();
  }
  is_updating = false;
}

void BKE_ntree_update_after_single_tree_change(Main &bmain,
                                               bNodeTree &modified_tree,
                                               const NodeTreeUpdateExtraParams &params)
{
  BKE_ntree_update(bmain, blender::Span{&modified_tree}, params);
}

void BKE_ntree_update_without_main(bNodeTree &tree)
{
  if (is_updating) {
    return;
  }
  is_updating = true;
  NodeTreeUpdateExtraParams params;
  blender::bke::NodeTreeMainUpdater updater{nullptr, params};
  updater.update_rooted({&tree});
  is_updating = false;
}