/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

#include "MEM_guardedalloc.h"

#include "BKE_node.hh"
#include "BKE_node_socket_value.hh"

#include "NOD_geometry_exec.hh"
#include "NOD_register.hh"
#include "NOD_socket_declarations.hh"
#include "NOD_socket_declarations_geometry.hh"

#include "node_util.hh"

struct BVHTreeFromMesh;
struct GeometrySet;

namespace blender::nodes {

class GatherAddNodeSearchParams;
class GatherLinkSearchOpParams;

} // namespace blender::nodes

void geo_node_type_base(bNodeType *ntype, int type, const char *name, short nclass);
bool geo_node_poll_default(const bNodeType *ntype,
                           const bNodeTree *ntree,
                           const char **r_disabled_hint);

namespace blender::nodes {

bool check_tool_context_and_error(GeoNodeExecParams &params);
void search_link_ops_for_tool_node(GatherLinkSearchOpParams &params);
void search_link_ops_for_volume_grid_node(GatherLinkSearchOpParams &params);

void get_closest_in_bvhtree(BVHTreeFromMesh &tree_data,
                            const VArray<float3> &positions,
                            const IndexMask &mask,
                            MutableSpan<int> r_indices,
                            MutableSpan<float> r_distances_sq,
                            MutableSpan<float3> r_positions);

int apply_offset_in_cyclic_range(IndexRange range, int start_index, int offset);

void mix_baked_data_item(eNodeSocketDatatype socket_type,
                         void *prev,
                         const void *next,
                         const float factor);

namespace enums {

const EnumPropertyItem *attribute_type_type_with_socket_fn(bContext * /*C*/,
                                                           PointerRNA * /*ptr*/,
                                                           PropertyRNA * /*prop*/,
                                                           bool *r_free);

bool generic_attribute_type_supported(const EnumPropertyItem &item);

const EnumPropertyItem *domain_experimental_grease_pencil_version3_fn(bContext * /*C*/,
                                                                      PointerRNA * /*ptr*/,
                                                                      PropertyRNA * /*prop*/,
                                                                      bool *r_free);

const EnumPropertyItem *domain_without_corner_experimental_grease_pencil_version3_fn(
    bContext * /*C*/, PointerRNA * /*ptr*/, PropertyRNA * /*prop*/, bool *r_free);

} // namespace enums

bool custom_data_type_supports_grids(eCustomDataType data_type);
const EnumPropertyItem *grid_custom_data_type_items_filter_fn(bContext *C,
                                                              PointerRNA *ptr,
                                                              PropertyRNA *prop,
                                                              bool *r_free);
const EnumPropertyItem *grid_socket_type_items_filter_fn(bContext *C,
                                                         PointerRNA *ptr,
                                                         PropertyRNA *prop,
                                                         bool *r_free);

void node_geo_exec_with_missing_openvdb(GeoNodeExecParams &params);

void draw_data_blocks(const bContext *C, uiLayout *layout, PointerRNA &bake_rna);

} // namespace blender::nodes