Nodes: add nested node ids and use them for simulation state

The simulation state used by simulation nodes is owned by the modifier. Since a
geometry nodes setup can contain an arbitrary number of simulations, the modifier
has a mapping from `SimulationZoneID` to `SimulationZoneState`. This patch changes
what is used as `SimulationZoneID`.

Previously, the `SimulationZoneID` contained a list of `bNode::identifier` that described
the path from the root node tree to the simulation output node. This works well in many
cases, but has a significant problem: the `SimulationZoneID` changes when moving
the simulation zone into or out of a node group. This implies that any of these operations
loses the mapping from zone to simulation state, invalidating the cache or even baked data.

The goal of this patch is to introduce a single-integer ID that identifies a (nested) simulation
zone and is stable even when grouping and un-grouping. The ID should be stable even if the
node group containing the (nested) simulation zone is in a separate linked .blend file and
that linked file is changed.

In the future, the same kind of ID can be used to store e.g. checkpoint/baked/frozen data
in the modifier.

To achieve the described goal, node trees can now store an arbitrary number of nested node
references (an array of `bNestedNodeRef`). Each nested node reference has an ID that is
unique within the current node tree. The node tree does not store the entire path to the
nested node. Instead it only knows which group node the nested node is in, and what the
nested node ID is within that group. Grouping and un-grouping operations
have to update the nested node references to keep the IDs stable. Importantly though,
these operations only have to care about the two node groups that are affected. IDs in
higher level node groups remain unchanged by design.
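Resolving such a stable ID back to a concrete node can be sketched as follows. This is a minimal, self-contained illustration with simplified stand-in types; the real logic lives in `bNodeTree::node_id_path_from_nested_node_ref`, and the helper names here are hypothetical:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

/* Simplified stand-ins for the DNA types. The member names mirror
 * bNestedNodePath / bNestedNodeRef, everything else is illustrative. */
struct NestedNodePath {
  int32_t node_id;    /* Node inside the current tree (possibly a group node). */
  int32_t id_in_node; /* Nested node ID inside that group, unused for a leaf. */
};

struct NestedNodeRef {
  int32_t id; /* Unique within the owning tree, stable across (un)grouping. */
  NestedNodePath path;
};

struct NodeTree;

struct Node {
  int32_t identifier;
  NodeTree *group = nullptr; /* Non-null if this is a group node. */
};

struct NodeTree {
  std::vector<Node> nodes;
  std::vector<NestedNodeRef> nested_node_refs;

  const Node *node_by_id(const int32_t id) const
  {
    for (const Node &node : nodes) {
      if (node.identifier == id) {
        return &node;
      }
    }
    return nullptr;
  }

  const NestedNodeRef *find_nested_node_ref(const int32_t nested_node_id) const
  {
    for (const NestedNodeRef &ref : nested_node_refs) {
      if (ref.id == nested_node_id) {
        return &ref;
      }
    }
    return nullptr;
  }

  /* Resolve a stable nested node ID to the node identifier path, one tree
   * level at a time, following group nodes recursively. Each tree only knows
   * its own direct level; the rest is delegated to the child group. */
  bool node_id_path(const int32_t nested_node_id, std::vector<int32_t> &r_path) const
  {
    const NestedNodeRef *ref = this->find_nested_node_ref(nested_node_id);
    if (ref == nullptr) {
      return false;
    }
    const Node *node = this->node_by_id(ref->path.node_id);
    if (node == nullptr) {
      return false;
    }
    r_path.push_back(node->identifier);
    if (node->group == nullptr) {
      return true;
    }
    return node->group->node_id_path(ref->path.id_in_node, r_path);
  }
};
```

Because each tree only stores one hop of the path, grouping or un-grouping only has to rewrite references in the two affected trees, which is what keeps IDs in higher level trees stable.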

A consequence of this design is that every `bNodeTree` now has a `bNestedNodeRef`
for every (nested) simulation zone. Two instances of the same simulation zone (because
a node group is reused) are referenced by two separate `bNestedNodeRef`. This is
important to keep in mind, because it also means that this solution doesn't scale well if
we wanted to use it to keep stable references to *all* nested nodes. I can't think of a
solution that fulfills the described requirements but scales better with more nodes. For
that reason, this solution should only be used when we want to store data for each
referenced nested node at the top level (like we do for simulations).

This is not a replacement for `ViewerPath`, which can store a path to data in a node tree
without changing the node tree. `ViewerPath` can also contain information like the loop
iteration that should be viewed (#109164), while `bNestedNodeRef` can't differentiate
between different iterations of a loop. This also means that simulations can't be used
inside of a loop (loops inside of a simulation work fine, though).

When baking, the new stable ID is now written to disk, which means that baked data is
not invalidated by grouping/un-grouping operations. Backward compatibility for baked
data is provided, but only works as long as the simulation zone has not been moved to
a different node group yet. Forward compatibility for the baked data is not provided
(so older versions can't load the data baked with a newer version of Blender).
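The compatibility lookup during deserialization roughly has the following shape. All types and names below are hypothetical stand-ins; in the actual patch this corresponds to `deserialize_modifier_simulation_state` reading `"state_id"` and falling back to resolving the legacy `"zone_id"` path via `bNodeTree::nested_node_ref_from_node_id_path`:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

/* Hypothetical stand-in for one serialized zone entry. Newer bakes store the
 * stable nested node ID directly; bakes from the initial simulation nodes
 * release store the node identifier path instead. */
struct SerializedZone {
  std::optional<int32_t> state_id;          /* Format version 2+. */
  std::vector<int32_t> legacy_node_id_path; /* Format version 1. */
};

/* Stand-in resolver mapping a legacy node-id path back to the stable ID. */
using PathResolver = std::map<std::vector<int32_t>, int32_t>;

/* Returns the stable nested node ID for a zone, or nothing if the legacy
 * path can no longer be resolved (e.g. the zone was moved into a different
 * node group after baking). */
std::optional<int32_t> resolve_zone_id(const SerializedZone &zone, const PathResolver &resolver)
{
  if (zone.state_id.has_value()) {
    /* Newer format: the stable ID was written to disk directly. */
    return zone.state_id;
  }
  /* Legacy format: look the stable ID up from the node identifier path. */
  const auto it = resolver.find(zone.legacy_node_id_path);
  if (it == resolver.end()) {
    return std::nullopt;
  }
  return it->second;
}
```

The fallback explains the stated limitation: once the zone has moved to a different group, the old node-id path no longer resolves, so legacy baked data for that zone is skipped.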

Pull Request: https://projects.blender.org/blender/blender/pulls/109444
Author: Jacques Lucke
Date: 2023-07-01 11:54:32 +02:00
Parent: e9e12015ea
Commit: f33d7bb598
15 changed files with 381 additions and 36 deletions

View File

@@ -540,6 +540,16 @@ inline blender::MutableSpan<bNodePanel *> bNodeTree::panels_for_write()
return blender::MutableSpan(panels_array, panels_num);
}
inline blender::MutableSpan<bNestedNodeRef> bNodeTree::nested_node_refs_span()
{
return {this->nested_node_refs, this->nested_node_refs_num};
}
inline blender::Span<bNestedNodeRef> bNodeTree::nested_node_refs_span() const
{
return {this->nested_node_refs, this->nested_node_refs_num};
}
/** \} */
/* -------------------------------------------------------------------- */

View File

@@ -9,6 +9,8 @@
#include "BLI_map.hh"
#include "BLI_sub_frame.hh"
struct bNodeTree;
namespace blender::bke::sim {
class BDataSharing;
@@ -89,17 +91,17 @@ class SimulationZoneState {
/** Identifies a simulation zone (input and output node pair) used by a modifier. */
struct SimulationZoneID {
/** Every node identifier in the hierarchy of compute contexts. */
Vector<int> node_ids;
/** ID of the #bNestedNodeRef that references the output node of the zone. */
int32_t nested_node_id;
uint64_t hash() const
{
return get_default_hash(this->node_ids);
return this->nested_node_id;
}
friend bool operator==(const SimulationZoneID &a, const SimulationZoneID &b)
{
return a.node_ids == b.node_ids;
return a.nested_node_id == b.nested_node_id;
}
};
@@ -122,7 +124,7 @@ class ModifierSimulationState {
const SimulationZoneState *get_zone_state(const SimulationZoneID &zone_id) const;
SimulationZoneState &get_zone_state_for_write(const SimulationZoneID &zone_id);
void ensure_bake_loaded() const;
void ensure_bake_loaded(const bNodeTree &ntree) const;
};
struct ModifierSimulationStateAtFrame {

View File

@@ -163,7 +163,8 @@ void serialize_modifier_simulation_state(const ModifierSimulationState &state,
* Fill the simulation state by parsing the provided #DictionaryValue which also contains
* references to external binary data that is read using #bdata_reader.
*/
void deserialize_modifier_simulation_state(const DictionaryValue &io_root,
void deserialize_modifier_simulation_state(const bNodeTree &ntree,
const DictionaryValue &io_root,
const BDataReader &bdata_reader,
const BDataSharing &bdata_sharing,
ModifierSimulationState &r_state);

View File

@@ -253,6 +253,13 @@ static void ntree_copy_data(Main * /*bmain*/, ID *id_dst, const ID *id_src, cons
}
}
if (ntree_src->nested_node_refs) {
ntree_dst->nested_node_refs = static_cast<bNestedNodeRef *>(
MEM_malloc_arrayN(ntree_src->nested_node_refs_num, sizeof(bNestedNodeRef), __func__));
uninitialized_copy_n(
ntree_src->nested_node_refs, ntree_src->nested_node_refs_num, ntree_dst->nested_node_refs);
}
if (flag & LIB_ID_COPY_NO_PREVIEW) {
ntree_dst->preview = nullptr;
}
@@ -316,6 +323,10 @@ static void ntree_free_data(ID *id)
BKE_libblock_free_data(&ntree->id, true);
}
if (ntree->nested_node_refs) {
MEM_freeN(ntree->nested_node_refs);
}
BKE_previewimg_free(&ntree->preview);
MEM_delete(ntree->runtime);
}
@@ -684,6 +695,9 @@ void ntreeBlendWrite(BlendWriter *writer, bNodeTree *ntree)
BLO_write_string(writer, panel->name);
}
BLO_write_struct_array(
writer, bNestedNodeRef, ntree->nested_node_refs_num, ntree->nested_node_refs);
BKE_previewimg_blend_write(writer, ntree->preview);
}
@@ -911,6 +925,8 @@ void ntreeBlendReadData(BlendDataReader *reader, ID *owner_id, bNodeTree *ntree)
BLO_read_data_address(reader, &ntree->panels_array[i]->name);
}
BLO_read_data_address(reader, &ntree->nested_node_refs);
/* TODO: should be dealt by new generic cache handling of IDs... */
ntree->previews = nullptr;

View File

@@ -554,3 +554,53 @@ void bNodeTree::ensure_topology_cache() const
{
blender::bke::node_tree_runtime::ensure_topology_cache(*this);
}
const bNestedNodeRef *bNodeTree::find_nested_node_ref(const int32_t nested_node_id) const
{
for (const bNestedNodeRef &ref : this->nested_node_refs_span()) {
if (ref.id == nested_node_id) {
return &ref;
}
}
return nullptr;
}
const bNestedNodeRef *bNodeTree::nested_node_ref_from_node_id_path(
const blender::Span<int32_t> node_ids) const
{
if (node_ids.is_empty()) {
return nullptr;
}
for (const bNestedNodeRef &ref : this->nested_node_refs_span()) {
blender::Vector<int> current_node_ids;
if (this->node_id_path_from_nested_node_ref(ref.id, current_node_ids)) {
if (current_node_ids.as_span() == node_ids) {
return &ref;
}
}
}
return nullptr;
}
bool bNodeTree::node_id_path_from_nested_node_ref(const int32_t nested_node_id,
blender::Vector<int> &r_node_ids) const
{
const bNestedNodeRef *ref = this->find_nested_node_ref(nested_node_id);
if (ref == nullptr) {
return false;
}
const int32_t node_id = ref->path.node_id;
const bNode *node = this->node_by_id(node_id);
if (node == nullptr) {
return false;
}
r_node_ids.append(node_id);
if (!node->is_group()) {
return true;
}
const bNodeTree *group = reinterpret_cast<const bNodeTree *>(node->id);
if (group == nullptr) {
return false;
}
return group->node_id_path_from_nested_node_ref(ref->path.id_in_node, r_node_ids);
}

View File

@@ -5,11 +5,14 @@
#include "BLI_map.hh"
#include "BLI_multi_value_map.hh"
#include "BLI_noise.hh"
#include "BLI_rand.hh"
#include "BLI_set.hh"
#include "BLI_stack.hh"
#include "BLI_timeit.hh"
#include "BLI_vector_set.hh"
#include "PIL_time.h"
#include "DNA_anim_types.h"
#include "DNA_modifier_types.h"
#include "DNA_node_types.h"
@@ -486,6 +489,10 @@ class NodeTreeMainUpdater {
this->update_socket_link_and_use(ntree);
this->update_link_validation(ntree);
if (this->update_nested_node_refs(ntree)) {
result.interface_changed = true;
}
if (ntree.type == NTREE_TEXTURE) {
ntreeTexCheckCyclics(&ntree);
}
@@ -1086,6 +1093,115 @@ class NodeTreeMainUpdater {
return false;
}
/**
* Make sure that the #bNodeTree::nested_node_refs is up to date. It's supposed to contain a
* reference to all (nested) simulation zones.
*/
bool update_nested_node_refs(bNodeTree &ntree)
{
ntree.ensure_topology_cache();
/* Simplify lookup of old ids. */
Map<bNestedNodePath, int32_t> old_id_by_path;
Set<int32_t> old_ids;
for (const bNestedNodeRef &ref : ntree.nested_node_refs_span()) {
old_id_by_path.add(ref.path, ref.id);
old_ids.add(ref.id);
}
Vector<bNestedNodePath> nested_node_paths;
/* Don't forget nested node refs just because the linked file is not available right now. */
for (const bNestedNodePath &path : old_id_by_path.keys()) {
const bNode *node = ntree.node_by_id(path.node_id);
if (node && node->is_group() && node->id) {
if (node->id->tag & LIB_TAG_MISSING) {
nested_node_paths.append(path);
}
}
}
if (ntree.type == NTREE_GEOMETRY) {
/* Create references for simulations in geometry nodes. */
for (const bNode *node : ntree.nodes_by_type("GeometryNodeSimulationOutput")) {
nested_node_paths.append({node->identifier, -1});
}
}
/* Propagate references to nested nodes in group nodes. */
for (const bNode *node : ntree.group_nodes()) {
const bNodeTree *group = reinterpret_cast<const bNodeTree *>(node->id);
if (group == nullptr) {
continue;
}
for (const int i : group->nested_node_refs_span().index_range()) {
const bNestedNodeRef &child_ref = group->nested_node_refs[i];
nested_node_paths.append({node->identifier, child_ref.id});
}
}
/* Used to generate new unique IDs if necessary. */
RandomNumberGenerator rng(int(PIL_check_seconds_timer() * 1000000.0));
Map<int32_t, bNestedNodePath> new_path_by_id;
for (const bNestedNodePath &path : nested_node_paths) {
const int32_t old_id = old_id_by_path.lookup_default(path, -1);
if (old_id != -1) {
/* The same path existed before, it should keep the same ID as before. */
new_path_by_id.add(old_id, path);
continue;
}
int32_t new_id;
while (true) {
new_id = rng.get_int32(INT32_MAX);
if (!old_ids.contains(new_id) && !new_path_by_id.contains(new_id)) {
break;
}
}
/* The path is new, it should get a new ID that does not collide with any existing IDs. */
new_path_by_id.add(new_id, path);
}
/* Check if the old and new references are identical. */
if (!this->nested_node_refs_changed(ntree, new_path_by_id)) {
return false;
}
MEM_SAFE_FREE(ntree.nested_node_refs);
if (new_path_by_id.is_empty()) {
ntree.nested_node_refs_num = 0;
return true;
}
/* Allocate new array for the nested node references contained in the node tree. */
bNestedNodeRef *new_refs = static_cast<bNestedNodeRef *>(
MEM_malloc_arrayN(new_path_by_id.size(), sizeof(bNestedNodeRef), __func__));
int index = 0;
for (const auto item : new_path_by_id.items()) {
bNestedNodeRef &ref = new_refs[index];
ref.id = item.key;
ref.path = item.value;
index++;
}
ntree.nested_node_refs = new_refs;
ntree.nested_node_refs_num = new_path_by_id.size();
return true;
}
bool nested_node_refs_changed(const bNodeTree &ntree,
const Map<int32_t, bNestedNodePath> &new_path_by_id)
{
if (ntree.nested_node_refs_num != new_path_by_id.size()) {
return true;
}
for (const bNestedNodeRef &ref : ntree.nested_node_refs_span()) {
if (!new_path_by_id.contains(ref.id)) {
return true;
}
}
return false;
}
void reset_changed_flags(bNodeTree &ntree)
{
ntree.runtime->changed_flag = NTREE_CHANGED_NOTHING;
@@ -1227,6 +1343,16 @@ void BKE_ntree_update_tag_image_user_changed(bNodeTree *ntree, ImageUser * /*ius
add_tree_tag(ntree, NTREE_CHANGED_ANY);
}
uint64_t bNestedNodePath::hash() const
{
return blender::get_default_hash_2(this->node_id, this->id_in_node);
}
bool operator==(const bNestedNodePath &a, const bNestedNodePath &b)
{
return a.node_id == b.node_id && a.id_in_node == b.id_in_node;
}
/**
* Protect from recursive calls into the updating function. Some node update functions might
* trigger this from Python or in other cases.

View File

@@ -202,7 +202,7 @@ SimulationZoneState &ModifierSimulationState::get_zone_state_for_write(
[]() { return std::make_unique<SimulationZoneState>(); });
}
void ModifierSimulationState::ensure_bake_loaded() const
void ModifierSimulationState::ensure_bake_loaded(const bNodeTree &ntree) const
{
std::scoped_lock lock{mutex_};
if (bake_loaded_) {
@@ -223,7 +223,8 @@ void ModifierSimulationState::ensure_bake_loaded() const
}
const DiskBDataReader bdata_reader{*bdata_dir_};
deserialize_modifier_simulation_state(*io_root,
deserialize_modifier_simulation_state(ntree,
*io_root,
bdata_reader,
*owner_->bdata_sharing_,
const_cast<ModifierSimulationState &>(*this));

View File

@@ -7,6 +7,7 @@
#include "BKE_lib_id.h"
#include "BKE_main.h"
#include "BKE_mesh.hh"
#include "BKE_node_runtime.hh"
#include "BKE_pointcloud.h"
#include "BKE_simulation_state_serialize.hh"
@@ -790,12 +791,17 @@ static std::shared_ptr<io::serialize::Value> serialize_primitive_value(
return {};
}
/**
* Version written to the baked data.
*/
static constexpr int serialize_format_version = 2;
void serialize_modifier_simulation_state(const ModifierSimulationState &state,
BDataWriter &bdata_writer,
BDataSharing &bdata_sharing,
DictionaryValue &r_io_root)
{
r_io_root.append_int("version", 1);
r_io_root.append_int("version", serialize_format_version);
auto io_zones = r_io_root.append_array("zones");
for (const auto item : state.zone_states_.items()) {
@@ -804,11 +810,7 @@ void serialize_modifier_simulation_state(const ModifierSimulationState &state,
auto io_zone = io_zones->append_dict();
auto io_zone_id = io_zone->append_array("zone_id");
for (const int node_id : zone_id.node_ids) {
io_zone_id->append_int(node_id);
}
io_zone->append_int("state_id", zone_id.nested_node_id);
auto io_state_items = io_zone->append_array("state_items");
for (const MapItem<int, std::unique_ptr<SimulationStateItem>> &state_item_with_id :
@@ -980,7 +982,8 @@ template<typename T>
return false;
}
void deserialize_modifier_simulation_state(const DictionaryValue &io_root,
void deserialize_modifier_simulation_state(const bNodeTree &ntree,
const DictionaryValue &io_root,
const BDataReader &bdata_reader,
const BDataSharing &bdata_sharing,
ModifierSimulationState &r_state)
@@ -990,7 +993,7 @@ void deserialize_modifier_simulation_state(const DictionaryValue &io_root,
if (!version) {
return;
}
if (*version != 1) {
if (*version > serialize_format_version) {
return;
}
const io::serialize::ArrayValue *io_zones = io_root.lookup_array("zones");
@@ -1002,14 +1005,27 @@ void deserialize_modifier_simulation_state(const DictionaryValue &io_root,
if (!io_zone) {
continue;
}
const io::serialize::ArrayValue *io_zone_id = io_zone->lookup_array("zone_id");
bke::sim::SimulationZoneID zone_id;
for (const auto &io_zone_id_element : io_zone_id->elements()) {
const io::serialize::IntValue *io_node_id = io_zone_id_element->as_int_value();
if (!io_node_id) {
if (const std::optional<int> state_id = io_zone->lookup_int("state_id")) {
zone_id.nested_node_id = *state_id;
}
else if (const io::serialize::ArrayValue *io_zone_id = io_zone->lookup_array("zone_id")) {
/* In the initial release of simulation nodes, the entire node id path was written to the
* baked data. For backward compatibility the node ids are read here and then the nested node
* id is looked up. */
Vector<int> node_ids;
for (const auto &io_zone_id_element : io_zone_id->elements()) {
const io::serialize::IntValue *io_node_id = io_zone_id_element->as_int_value();
if (!io_node_id) {
continue;
}
node_ids.append(io_node_id->value());
}
const bNestedNodeRef *nested_node_ref = ntree.nested_node_ref_from_node_id_path(node_ids);
if (!nested_node_ref) {
continue;
}
zone_id.node_ids.append(io_node_id->value());
zone_id.nested_node_id = nested_node_ref->id;
}
const io::serialize::ArrayValue *io_state_items = io_zone->lookup_array("state_items");

View File

@@ -17,10 +17,13 @@
#include "BLI_listbase.h"
#include "BLI_map.hh"
#include "BLI_math_vector_types.hh"
#include "BLI_rand.hh"
#include "BLI_set.hh"
#include "BLI_string.h"
#include "BLI_vector.hh"
#include "PIL_time.h"
#include "BLT_translation.h"
#include "BKE_action.h"
@@ -237,6 +240,30 @@ static void animation_basepath_change_free(AnimationBasePathChange *basepath_cha
MEM_freeN(basepath_change);
}
static void update_nested_node_refs_after_ungroup(bNodeTree &ntree,
const bNodeTree &ngroup,
const bNode &gnode,
const Map<int32_t, int32_t> &node_identifier_map)
{
for (bNestedNodeRef &ref : ntree.nested_node_refs_span()) {
if (ref.path.node_id != gnode.identifier) {
continue;
}
const bNestedNodeRef *child_ref = ngroup.find_nested_node_ref(ref.path.id_in_node);
if (!child_ref) {
continue;
}
constexpr int32_t missing_id = -1;
const int32_t new_node_id = node_identifier_map.lookup_default(child_ref->path.node_id,
missing_id);
if (new_node_id == missing_id) {
continue;
}
ref.path.node_id = new_node_id;
ref.path.id_in_node = child_ref->path.id_in_node;
}
}
/**
* \return True if successful.
*/
@@ -421,6 +448,8 @@ static bool node_group_ungroup(Main *bmain, bNodeTree *ntree, bNode *gnode)
/* delete the group instance and dereference group tree */
nodeRemoveNode(bmain, ntree, gnode, true);
update_nested_node_refs_after_ungroup(*ntree, *ngroup, *gnode, node_identifier_map);
return true;
}
@@ -852,6 +881,54 @@ static bNodeSocket *add_interface_from_socket(const bNodeTree &original_tree,
socket_for_name.name);
}
static void update_nested_node_refs_after_moving_nodes_into_group(
bNodeTree &ntree,
bNodeTree &group,
bNode &gnode,
const Map<int32_t, int32_t> &node_identifier_map)
{
/* Update nested node references in the parent and child node tree. */
RandomNumberGenerator rng(int(PIL_check_seconds_timer() * 1000000.0));
Vector<bNestedNodeRef> new_nested_node_refs;
/* Keep all nested node references that were in the group before. */
for (const bNestedNodeRef &state_id : group.nested_node_refs_span()) {
new_nested_node_refs.append(state_id);
}
Set<int32_t> used_nested_node_ref_ids;
for (const bNestedNodeRef &ref : group.nested_node_refs_span()) {
used_nested_node_ref_ids.add(ref.id);
}
Map<bNestedNodePath, int32_t> new_id_by_old_path;
for (bNestedNodeRef &state_id : ntree.nested_node_refs_span()) {
const int32_t new_node_id = node_identifier_map.lookup_default(state_id.path.node_id, -1);
if (new_node_id == -1) {
/* The node was not moved between node groups. */
continue;
}
bNestedNodeRef new_state_id = state_id;
new_state_id.path.node_id = new_node_id;
/* Find new unique identifier for the nested node ref. */
while (true) {
const int32_t new_id = rng.get_int32(INT32_MAX);
if (used_nested_node_ref_ids.add(new_id)) {
new_state_id.id = new_id;
break;
}
}
new_id_by_old_path.add_new(state_id.path, new_state_id.id);
new_nested_node_refs.append(new_state_id);
/* Update the nested node ref in the parent so that it points to the same node that is now
* inside of a nested group. */
state_id.path.node_id = gnode.identifier;
state_id.path.id_in_node = new_state_id.id;
}
MEM_SAFE_FREE(group.nested_node_refs);
group.nested_node_refs = static_cast<bNestedNodeRef *>(
MEM_malloc_arrayN(new_nested_node_refs.size(), sizeof(bNestedNodeRef), __func__));
uninitialized_copy_n(
new_nested_node_refs.data(), new_nested_node_refs.size(), group.nested_node_refs);
}
static void node_group_make_insert_selected(const bContext &C,
bNodeTree &ntree,
bNode *gnode,
@@ -1103,6 +1180,8 @@ static void node_group_make_insert_selected(const bContext &C,
info.link->fromsock = node_group_find_output_socket(gnode, info.interface_socket->identifier);
}
update_nested_node_refs_after_moving_nodes_into_group(ntree, group, *gnode, node_identifier_map);
ED_node_tree_propagate_change(&C, bmain, nullptr);
}

View File

@@ -15,6 +15,8 @@
/** Workaround to forward-declare C++ type in C header. */
#ifdef __cplusplus
# include <BLI_vector.hh>
namespace blender {
template<typename T> class Span;
template<typename T> class MutableSpan;
@@ -568,6 +570,27 @@ typedef struct bNodePanel {
char *name;
} bNodePanel;
typedef struct bNestedNodePath {
/** ID of the node that is or contains the nested node. */
int32_t node_id;
/** Unused if the node is the final nested node, otherwise an id inside of the (group) node. */
int32_t id_in_node;
#ifdef __cplusplus
uint64_t hash() const;
friend bool operator==(const bNestedNodePath &a, const bNestedNodePath &b);
#endif
} bNestedNodePath;
typedef struct bNestedNodeRef {
/** Identifies a potentially nested node. This ID remains stable even if the node is moved into
* and out of node groups. */
int32_t id;
char _pad[4];
/** Where to find the nested node in the current node tree. */
bNestedNodePath path;
} bNestedNodeRef;
/**
* The basis for a Node tree, all links and nodes reside internal here.
*
@@ -632,7 +655,12 @@ typedef struct bNodeTree {
*/
bNodeInstanceKey active_viewer_key;
char _pad[4];
/**
* Used to maintain stable IDs for a subset of nested nodes. For example, every simulation zone
* that is in the node tree has a unique entry here.
*/
int nested_node_refs_num;
bNestedNodeRef *nested_node_refs;
/** Image representing what the node group does. */
struct PreviewImage *preview;
@@ -654,6 +682,15 @@ typedef struct bNodeTree {
struct bNode *node_by_id(int32_t identifier);
const struct bNode *node_by_id(int32_t identifier) const;
blender::MutableSpan<bNestedNodeRef> nested_node_refs_span();
blender::Span<bNestedNodeRef> nested_node_refs_span() const;
const bNestedNodeRef *find_nested_node_ref(int32_t nested_node_id) const;
/** Conversions between node id paths and their corresponding nested node ref. */
const bNestedNodeRef *nested_node_ref_from_node_id_path(blender::Span<int> node_ids) const;
[[nodiscard]] bool node_id_path_from_nested_node_ref(const int32_t nested_node_id,
blender::Vector<int32_t> &r_node_ids) const;
/**
* Update a run-time cache for the node tree based on it's current state. This makes many methods
* available which allow efficient lookup for topology information (like neighboring sockets).

View File

@@ -737,14 +737,14 @@ static void prepare_simulation_states_for_evaluation(const NodesModifierData &nm
const bke::sim::StatesAroundFrame sim_states = simulation_cache.get_states_around_frame(
current_frame);
if (sim_states.current) {
sim_states.current->state.ensure_bake_loaded();
sim_states.current->state.ensure_bake_loaded(*nmd.node_group);
exec_data.current_simulation_state = &sim_states.current->state;
}
if (sim_states.prev) {
sim_states.prev->state.ensure_bake_loaded();
sim_states.prev->state.ensure_bake_loaded(*nmd.node_group);
exec_data.prev_simulation_state = &sim_states.prev->state;
if (sim_states.next) {
sim_states.next->state.ensure_bake_loaded();
sim_states.next->state.ensure_bake_loaded(*nmd.node_group);
exec_data.next_simulation_state = &sim_states.next->state;
exec_data.simulation_state_mix_factor =
(float(current_frame) - float(sim_states.prev->frame)) /

View File

@@ -102,6 +102,10 @@ struct GeoNodesLFUserData : public lf::UserData {
* Log socket values in the current compute context. Child contexts might use logging again.
*/
bool log_socket_values = true;
/**
* Top-level node tree of the current evaluation.
*/
const bNodeTree *root_ntree = nullptr;
destruct_ptr<lf::LocalUserData> get_local(LinearAllocator<> &allocator) override;
};
@@ -252,7 +256,7 @@ std::unique_ptr<LazyFunction> get_simulation_input_lazy_function(
GeometryNodesLazyFunctionGraphInfo &own_lf_graph_info);
std::unique_ptr<LazyFunction> get_switch_node_lazy_function(const bNode &node);
bke::sim::SimulationZoneID get_simulation_zone_id(const ComputeContext &context,
bke::sim::SimulationZoneID get_simulation_zone_id(const GeoNodesLFUserData &user_data,
const int output_node_id);
/**

View File

@@ -70,8 +70,7 @@ class LazyFunctionForSimulationInputNode final : public LazyFunction {
params.set_output(0, fn::ValueOrField<float>(delta_time));
}
const bke::sim::SimulationZoneID zone_id = get_simulation_zone_id(*user_data.compute_context,
output_node_id_);
const bke::sim::SimulationZoneID zone_id = get_simulation_zone_id(user_data, output_node_id_);
const bke::sim::SimulationZoneState *prev_zone_state =
modifier_data.prev_simulation_state == nullptr ?

View File

@@ -693,8 +693,7 @@ class LazyFunctionForSimulationOutputNode final : public LazyFunction {
EvalData &eval_data = *static_cast<EvalData *>(context.storage);
BLI_SCOPED_DEFER([&]() { eval_data.is_first_evaluation = false; });
const bke::sim::SimulationZoneID zone_id = get_simulation_zone_id(*user_data.compute_context,
node_.identifier);
const bke::sim::SimulationZoneID zone_id = get_simulation_zone_id(user_data, node_.identifier);
const bke::sim::SimulationZoneState *current_zone_state =
modifier_data.current_simulation_state ?
@@ -823,19 +822,23 @@ std::unique_ptr<LazyFunction> get_simulation_output_lazy_function(
return std::make_unique<file_ns::LazyFunctionForSimulationOutputNode>(node, own_lf_graph_info);
}
bke::sim::SimulationZoneID get_simulation_zone_id(const ComputeContext &compute_context,
bke::sim::SimulationZoneID get_simulation_zone_id(const GeoNodesLFUserData &user_data,
const int output_node_id)
{
bke::sim::SimulationZoneID zone_id;
for (const ComputeContext *context = &compute_context; context != nullptr;
Vector<int> node_ids;
for (const ComputeContext *context = user_data.compute_context; context != nullptr;
context = context->parent())
{
if (const auto *node_context = dynamic_cast<const bke::NodeGroupComputeContext *>(context)) {
zone_id.node_ids.append(node_context->node_id());
node_ids.append(node_context->node_id());
}
}
std::reverse(zone_id.node_ids.begin(), zone_id.node_ids.end());
zone_id.node_ids.append(output_node_id);
std::reverse(node_ids.begin(), node_ids.end());
node_ids.append(output_node_id);
const bNestedNodeRef *nested_node_ref = user_data.root_ntree->nested_node_ref_from_node_id_path(
node_ids);
bke::sim::SimulationZoneID zone_id;
zone_id.nested_node_id = nested_node_ref->id;
return zone_id;
}

View File

@@ -564,6 +564,7 @@ bke::GeometrySet execute_geometry_nodes_on_geometry(
nodes::GeoNodesLFUserData user_data;
fill_user_data(user_data);
user_data.root_ntree = &btree;
user_data.compute_context = &base_compute_context;
LinearAllocator<> allocator;