Since a recent change to the edge deduplication code there have been a
number of reports about regressions that were not caught by tests. This
commit adds some of the recently discovered cases to test coverage,
straight from #147961. It is not clear how to cover the other known case
from #147797, since it uses a USD file as its source.
Pull Request: https://projects.blender.org/blender/blender/pulls/148237
Mesh invariants imply that edges and faces must be unique, so things
like reversed edges or duplicate faces with equal vertices are invalid.
For this reason, every time we generate new elements we have to ensure
that they are unique both among each other and against the already
existing elements. The recent refactor (ea875f6f32) introduced a new
algorithm to generate new mesh elements, and deduplication of the new
elements was part of it. The problem is that the deduplication only
guaranteed that the original elements and the new elements don't
overlap; deduplication among the new elements themselves was incomplete.
To solve the problem, both new triangles and new edges have to be
deduplicated. Even when there are no duplicates, hash sets have to be
built just to establish that.
Triangle deduplication is a special part of the triangulation code, but
edges are already handled elsewhere in the code base.
This refactor fixes this by replacing the original approach with one
which guarantees distinct faces and edges in the result.
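As a rough illustration of the hash-set approach (a minimal sketch, not
the actual Blender code; all names here are hypothetical), an edge and
its reverse have to map to the same key, and new edges have to be
checked both against the existing edges and against each other:

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_set>
#include <vector>

/* Hypothetical sketch: an edge and its reversed copy map to the same key. */
struct Edge {
  int v1, v2;
};

struct EdgeHash {
  size_t operator()(const Edge &e) const
  {
    const uint64_t lo = uint64_t(std::min(e.v1, e.v2));
    const uint64_t hi = uint64_t(std::max(e.v1, e.v2));
    return std::hash<uint64_t>()((hi << 32) | lo);
  }
};

struct EdgeEqual {
  bool operator()(const Edge &a, const Edge &b) const
  {
    return std::min(a.v1, a.v2) == std::min(b.v1, b.v2) &&
           std::max(a.v1, a.v2) == std::max(b.v1, b.v2);
  }
};

/* Keep a new edge only if neither it nor its reverse was seen before,
 * among the existing edges and the other new edges alike. */
static std::vector<Edge> deduplicate_edges(const std::vector<Edge> &existing,
                                           const std::vector<Edge> &added)
{
  std::unordered_set<Edge, EdgeHash, EdgeEqual> seen(existing.begin(), existing.end());
  std::vector<Edge> result;
  for (const Edge &edge : added) {
    if (seen.insert(edge).second) {
      result.push_back(edge);
    }
  }
  return result;
}
```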
Unfortunately, this fix increases the runtime of the node by about 10x
for a simple cube with 500-vertex sides. It should be possible to
improve the performance again, but that requires more work.
Other work had to be done to enable this, so this depends on:
- [x] 157e7e0351
- [x] fa8574b80b
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/147634
The fix in this case is to properly use the stored builtin attribute defaults
when capturing the field on the mesh. I extracted that to a function so
the code would read better with early returns.
Pull Request: https://projects.blender.org/blender/blender/pulls/147646
Split the result mesh generation work into two loops. The first loop
builds topology maps used for propagating attribute values, and the
second loop mixes attribute values with those maps. This has a few
benefits:
The propagation can be done in parallel, and the topology mapping code
can get simpler. Also, we reduce the overhead that comes with iterating
over all attributes for every single element. Overall I didn't measure
a large performance difference though, because the new code has the
downside of needing to allocate two more arrays.
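A minimal sketch of that two-pass structure (hypothetical names and
types, not the actual implementation):

```cpp
#include <cstddef>
#include <vector>

/* Pass 1 output: for every result element, the source elements that
 * contribute to it. This depends only on topology, not on any attribute. */
struct MixEntry {
  std::vector<int> sources; /* Indices into the source domain. */
};

/* Pass 2: apply the map to one attribute. Every attribute reuses the same
 * map independently, so this loop parallelizes trivially. */
static void mix_attribute(const std::vector<float> &src,
                          const std::vector<MixEntry> &map,
                          std::vector<float> &dst)
{
  for (std::size_t i = 0; i < map.size(); i++) {
    float sum = 0.0f;
    for (const int source : map[i].sources) {
      sum += src[source];
    }
    dst[i] = map[i].sources.empty() ? 0.0f : sum / float(map[i].sources.size());
  }
}
```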
There is a small difference of a propagated byte color (250 vs. 251) in
the geometry nodes test due to a subtle difference in the mixing
implementation.
Part of #122398.
Pull Request: https://projects.blender.org/blender/blender/pulls/147367
Given a grid of velocities, this node advects the values of the input
grid. Different options for integration scheme and clamping are
exposed; currently their names match those defined in OpenVDB.
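The node presumably builds on OpenVDB's `openvdb::tools::VolumeAdvection`;
a hedged sketch of such a call (the exact wiring in Blender may differ):

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/tools/Interpolation.h>
#include <openvdb/tools/VolumeAdvect.h>

/* Hedged sketch: advect a float grid through a velocity field. The scheme
 * and limiter enums carry the OpenVDB names the node exposes. */
static openvdb::FloatGrid::Ptr advect_grid(const openvdb::Vec3SGrid &velocity,
                                           const openvdb::FloatGrid &input,
                                           const double time_step)
{
  openvdb::tools::VolumeAdvection<openvdb::Vec3SGrid> advection(velocity);
  advection.setIntegrator(openvdb::Scheme::MAC);  /* Integration scheme. */
  advection.setLimiter(openvdb::Scheme::REVERT);  /* Clamping. */
  return advection.advect<openvdb::FloatGrid, openvdb::tools::Sampler<1>>(input, time_step);
}
```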
Pull Request: https://projects.blender.org/blender/blender/pulls/147273
This PR gives access to OpenVDB's level set filtering operations
through individual **SDF Grid** nodes.
- **Mean Curvature**: Smooths high-curvature areas more than flatter
ones.
- **Laplacian**: Approximates mean curvature flow for true SDFs at lower
computational cost.
- **Median**: Reduces noise while preserving sharp features.
- **Mean**: Fast separable averaging filter for general-purpose
smoothing with linear computational complexity.
- **Offset**: Uniform dilation/erosion operation that shifts the SDF
surface by a world-space distance.
- **Fillet**: Rounds off concave internal corners by operating only on
regions with negative principal curvature.
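Under the hood these presumably map onto `openvdb::tools::LevelSetFilter`;
a hedged sketch of the corresponding calls (`fillet()` exists in recent
OpenVDB versions; the actual node code may differ):

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/tools/LevelSetFilter.h>

/* Hedged sketch: one LevelSetFilter call per node. */
static void filter_sdf(openvdb::FloatGrid &sdf)
{
  openvdb::tools::LevelSetFilter<openvdb::FloatGrid> filter(sdf);
  filter.meanCurvature();     /* Mean Curvature */
  filter.laplacian();         /* Laplacian */
  filter.median(/*width*/ 1); /* Median */
  filter.mean(/*width*/ 1);   /* Mean */
  filter.offset(0.1f);        /* Offset, world-space distance */
  filter.fillet();            /* Fillet */
}
```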
Pull Request: https://projects.blender.org/blender/blender/pulls/147224
Explicitly set the transform of a grid.
The new transform can fail to be applied if the input transform isn't
invertible, or for some extremes of scaling (zero, or combinations of
negative and positive) and numerical errors within `openvdb`. If the
transform is not applied, an error is raised and the `Is Valid` output
returns false.
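A hedged sketch of where such a failure can come from when the transform
is set via OpenVDB (the exact error handling in the node may differ):

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/math/Transform.h>

/* Hedged sketch: build a linear transform from a 4x4 matrix. OpenVDB throws
 * for e.g. non-invertible matrices, which maps to the node's error and the
 * false "Is Valid" output. */
static bool try_set_transform(openvdb::FloatGrid &grid, const openvdb::Mat4R &matrix)
{
  try {
    grid.setTransform(openvdb::math::Transform::createLinearTransform(matrix));
    return true;
  }
  catch (const openvdb::Exception &) {
    return false; /* Transform not applied. */
  }
}
```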
Pull Request: https://projects.blender.org/blender/blender/pulls/146824
Adds four new grid operator nodes that wrap OpenVDB's differential
operators. All nodes take a grid input and output a grid, potentially
with a different data type.
New nodes:
- Grid Curl: Calculate curl of vector field (Vec3 → Vec3)
- Grid Divergence: Calculate divergence of vector field (Vec3 → Float)
- Grid Gradient: Calculate gradient of scalar field (Float → Vec3)
- Grid Laplacian: Calculate Laplacian of scalar field (Float → Float)
These operators enable vector calculus operations on volume grids for
effects like flow analysis, vortex detection, etc.
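These presumably wrap the corresponding functions from
`openvdb/tools/GridOperators.h`; a hedged sketch (the actual node code may
differ):

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/tools/GridOperators.h>

/* Hedged sketch: one call per node, with the data type conversions noted
 * in the list above. */
static void differential_operators(const openvdb::FloatGrid &scalar,
                                   const openvdb::Vec3SGrid &vector)
{
  openvdb::Vec3SGrid::Ptr curl = openvdb::tools::curl(vector);      /* Vec3 -> Vec3 */
  openvdb::FloatGrid::Ptr div = openvdb::tools::divergence(vector); /* Vec3 -> Float */
  openvdb::Vec3SGrid::Ptr grad = openvdb::tools::gradient(scalar);  /* Float -> Vec3 */
  openvdb::FloatGrid::Ptr lapl = openvdb::tools::laplacian(scalar); /* Float -> Float */
}
```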
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/146845
Adds a Join Bundle node with a multi-input Bundle socket. The top-level
items of each input bundle are iterated over and added to the resulting
single bundle. While creating the final bundle, existing items have
priority, and new items with an already existing name are discarded.
There is an info message when there are duplicate keys in the input
bundles.
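A minimal sketch of that merge rule (hypothetical types, not the actual
bundle implementation):

```cpp
#include <map>
#include <string>
#include <vector>

/* Hypothetical sketch: the first item with a given name wins; later
 * duplicates are dropped and flagged for the info message. */
struct Item {
  std::string name;
  int value; /* Stand-in for an arbitrary socket value. */
};

static std::vector<Item> join_bundles(const std::vector<std::vector<Item>> &bundles,
                                      bool &r_has_duplicates)
{
  std::vector<Item> result;
  std::map<std::string, bool> seen;
  r_has_duplicates = false;
  for (const std::vector<Item> &bundle : bundles) {
    for (const Item &item : bundle) {
      if (seen.emplace(item.name, true).second) {
        result.push_back(item);
      }
      else {
        r_has_duplicates = true; /* Triggers the node's info message. */
      }
    }
  }
  return result;
}
```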
Co-authored-by: Brady Johnston
Co-authored-by: Jacques Lucke
Pull Request: https://projects.blender.org/blender/blender/pulls/146750
This node outputs tangent values for face corners. There are two methods:
- **Exact** is the same MikkTSpace calculation used elsewhere in Blender.
- **Fast** (from #131308) is over 4x faster and useful in many of the
same situations, though its result is not necessarily tangential to the
surface.
The reason to include both methods is that there are use cases where the
quality of the tangents doesn't matter (though the results are actually
very similar visually); we just need some continuous values across faces.
Pull Request: https://projects.blender.org/blender/blender/pulls/145813
On its own, the main functionality of the Radial Tiling node
is the ability to divide a 2D Cartesian coordinate system into
as many radial segments as specified by the "Segments" input.
Each segment has its own affinely transformed coordinate system,
provided through the "Segment Coordinates" output, which can be
used to tile textures in a radially symmetric manner.
Additionally, a few per-segment values are provided:
- a unique index for every segment, through the "Segment ID" output;
- the width of each segment, measured where the unnormalized
Y-coordinate of the "Segment Coordinates" output is 0, through the
"Segment Width" output;
- the rotation of the affine transformation of each segment's coordinate
system, through the "Segment Rotation" output.
The roundness of the coordinate lines of the "Segment Coordinates"
output can be controlled through the "Roundness" inputs.
This can be used to make the coordinate systems of the segments
a mix of Cartesian and polar coordinates.
Lastly, the lines of points of the "Segment Coordinates" output with
constant Y-coordinates have the shape of a polygon with rounded corners,
which can be used to procedurally create rounded polygons.
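As a rough illustration of the basic segment math (a hypothetical
sketch inferred from the description above, before any roundness
handling; not the node's actual code):

```cpp
#include <cmath>

/* Hypothetical sketch: divide the plane into N angular segments, each with
 * its own rotated coordinate system. */
struct Segment {
  int id;         /* "Segment ID" */
  float rotation; /* "Segment Rotation" */
  float x, y;     /* "Segment Coordinates" (before roundness) */
};

static Segment radial_segment(const float px, const float py, const int segments)
{
  const float tau = 6.2831853f;
  const float segment_angle = tau / float(segments);
  float angle = std::atan2(py, px);
  if (angle < 0.0f) {
    angle += tau;
  }
  Segment s;
  s.id = int(angle / segment_angle);
  /* Rotate each segment's coordinate system onto its angular center. */
  s.rotation = (float(s.id) + 0.5f) * segment_angle;
  const float c = std::cos(-s.rotation);
  const float sn = std::sin(-s.rotation);
  s.x = c * px - sn * py;
  s.y = sn * px + c * py;
  return s;
}
```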
Pull Request: https://projects.blender.org/blender/blender/pulls/127711
This adds a boolean output for each of the menu items. The output is
true if the passed-in menu value is that item. This avoids the need to
compare the output value to the input values to get a boolean for
whether a specific menu item was passed in.
Support is added for Geometry Nodes as well as the Compositor. Usage/Value
inferencing has been updated as well.
Pull Request: https://projects.blender.org/blender/blender/pulls/145712
Blender grid rendering interprets voxel transforms in such a way that the voxel
values are located at the center of a voxel. This is inconsistent with OpenVDB
where the values are located at the lower corners for the purpose of sampling
and related algorithms.
While it is possible to offset grids when communicating with the OpenVDB
library, this is also error-prone and does not add any major advantage.
Every time a grid is passed to OpenVDB we currently have to take care to
transform by half a voxel to ensure correct sampling weights are used that match
the density displayed by the viewport rendering.
This patch changes volume grid generation, conversion, and rendering code so
that grid transforms match the corner-located values in OpenVDB.
- The volume primitive cube node aligns the grid transform with the location of
the first value, which is now also the same as the min/max bounds input
of the node.
- Mesh<->Grid conversion no longer requires offsetting the grid
transform and mesh vertices respectively by 0.5 voxels.
- Texture space for viewport rendering is offset by half a voxel, so that it
covers the same area as before and voxel centers remain at the same texture
space locations.
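For illustration, converting between the two conventions amounts to a
half-voxel shift of the grid transform in index space (a hedged sketch,
not the actual conversion code):

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/math/Transform.h>

/* Hedged sketch: shift a transform by half a voxel in index space, moving
 * sample locations between voxel centers and lower corners (the sign
 * depends on which convention is converted from). */
static openvdb::math::Transform::Ptr center_to_corner(const openvdb::math::Transform &old)
{
  openvdb::math::Transform::Ptr result = old.copy();
  result->preTranslate(openvdb::Vec3d(0.5));
  return result;
}
```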
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/138449
PR https://projects.blender.org/blender/blender/pulls/144233.
Fix #142296, a performance regression caused by merging UV positions
based on face-area weights, by replacing that with a plain mean average
computation. The former method made it more likely that the subdivision
cache would be invalidated between frame updates, causing expensive
recreation of the blender::bke::subdiv::OpenSubdiv_Evaluator object.
While this fix doesn't directly answer the question of why specific UV
position updates would cause this re-evaluation of the subdivision
object, it does fix the performance regression caused by PR #139595.
Many nodes operate on all the instances that are passed into them. For example,
the Subdivision Surface node subdivides the mesh at the root but also instanced
meshes. This works well for most nodes, but there are a few nodes where
the old `modify_geometry_sets` function was not very well defined and it
was tricky to use correctly.
The fundamental problem was that it was not obvious what happens when a
node creates or modifies instances, and how those are integrated with
the already existing instances.
This patch solves this with the following changes:
* Remove the old `GeometrySet::modify_geometry_sets` and related
`*_during_modify` methods.
* Add a new `blender::geometry::foreach_real_geometry` function that is similar
to the old `modify_geometry_sets` but has a more well-defined interface:
* It never passes instances into the callback. So existing instances can't be
modified with it.
* The callback is allowed to create new instances. These will automatically be
merged back with potentially already existing instances. The callback does
not have to worry about accidentally invalidating existing instances like
before.
* A few existing usages used `modify_geometry_sets` to actually modify existing
instances (usually just removing attributes). Those can't use the new
`foreach_real_geometry`, so they just get a custom simple recursive
implementation instead of using a generic function.
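A hedged usage sketch of the new function (the header name and exact
callback signature are assumed here for illustration):

```cpp
#include "GEO_foreach_geometry.hh" /* Hypothetical header name. */

/* Hedged sketch: the callback receives only real geometry, never instances;
 * new instances created inside it are merged back afterwards. */
static void process_all_real_geometry(blender::bke::GeometrySet &geometry_set)
{
  blender::geometry::foreach_real_geometry(
      geometry_set, [](blender::bke::GeometrySet &geometry) {
        /* Modify `geometry` here; existing instances can't be invalidated. */
      });
}
```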
Pull Request: https://projects.blender.org/blender/blender/pulls/143898
This includes a new list structure type and socket shape, a node
to create lists, a node to retrieve values from lists, and a node to
retrieve the length of lists. It also implements multi-function support
so that function nodes work on lists.
There are three nodes included in this PR.
- **List** Creates a list of elements with a given size. The values
are computed with a field that can use the index as an input.
- **Get List Item** A field node that retrieves an element from a list
at a given index. The index input is dynamic, so if the input is a list,
the output will be a list too.
- **List Length** Just gives the length of a list.
When a function node is used with multiple list inputs, the shorter
lists are repeated to extend them to the length of the longest, as
sketched below.
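A minimal sketch of that rule, assuming "repeated" means cycling the
shorter inputs (hypothetical code, not the multi-function
implementation):

```cpp
#include <algorithm>
#include <vector>

/* Hypothetical sketch: element-wise addition of two lists, cycling the
 * shorter one until the longest length is reached. */
static std::vector<float> add_lists(const std::vector<float> &a, const std::vector<float> &b)
{
  if (a.empty() || b.empty()) {
    return {};
  }
  const std::size_t size = std::max(a.size(), b.size());
  std::vector<float> result(size);
  for (std::size_t i = 0; i < size; i++) {
    result[i] = a[i % a.size()] + b[i % b.size()];
  }
  return result;
}
```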
The list nodes and structure type are hidden behind an experimental
feature until we can be sure they're useful for an actual use case.
Pull Request: https://projects.blender.org/blender/blender/pulls/140679
This fixes the anonymous attribute lifetime inferencing for the closure zone and
the evaluate closure node. It also adds new regression tests for cases where a
closure outputs a field.
Pull Request: https://projects.blender.org/blender/blender/pulls/141925
In this report, adding the "resolution" attribute didn't clear the
evaluated positions cache. In some cases, capturing an attribute on the
mesh might just add the attribute to the mesh rather than using an
attribute writer.
This is a basic armature deformation test for #141535 using Lattices instead of
Mesh as the target object type. Lattice deformation was briefly broken, which is
caught by this test.
The test adds the general-purpose `unit_test_compare` function to lattice object
data. It only compares lattice point counts and positions for now, more data can
be added later if necessary.
The `MeshTest` class did not support lattice object types yet, so needed some
changes. The Curves case was already supported, but only by full conversion to
mesh data, without actually using the `unit_test_compare` function specific for
curves geometry. This is unchanged, because applying constructive modifiers on
curves does not work. If it were not for this limitation the test could do
actual curves comparisons now.
For lattice support the `MeshTest` class comparison function has been
generalized to all supported object data types. It runs the appropriate
`unit_test_compare` api function and validation where supported (only meshes at
this point).
Pull Request: https://projects.blender.org/blender/blender/pulls/141546
When beveling a vertex with only 2 connected edges, the resulting
geometry was not selected.
Resolve by not only selecting the faces created by the bevel operation,
but also selecting the vertices created.
Note: this applies the same change as before; the LFS data has been
manually pushed.
Ref !139691
Test for armature deform modifier settings that are not yet covered by other tests:
- Vertex Group vs. Envelope deformation and both combined.
- Vertex group masking ('vertex_group' setting of the modifier)
- Inverted vertex group masking
- Preserve Volume (dual quaternions)
- Vertex Group/Envelope influence mixing (bone option)
- B-Bone deformation
- Multi-modifier mixing
Each case has a new test/expected mesh pair in the modifiers test.
Pull Request: https://projects.blender.org/blender/blender/pulls/141054
This test replaces the existing multires modifier test with one that
subdivides the mesh and then compares the result. The existing one has
little purpose, as it applies a modifier with 0 subdivision levels.
Pull Request: https://projects.blender.org/blender/blender/pulls/140567
Fix #79163, a bug where the bevel operation produced disconnected UVs
for new bevel faces. This change replaces the previous approach, which
used scattered and selective calls to the functions bev_merge_uvs,
bev_merge_edge_uvs and bev_merge_end_uvs, with one coherent technique
for all stages of the bevel operation.
It uses the concept of loop (BMLoop) buckets to keep track of UV
vertices that should be merged at the end of the bevel operation by a
single call to the bevel_merge_uvs function. This approach doesn't touch
the initial UV position calculation done by the interpolation algorithm
in the bev_create_ngon function, and keeps the concept of representative
faces (called frep, facerep or rep_face in code) to help decide which
bucket specific loops should be assigned to.
This is from PR https://projects.blender.org/blender/blender/pulls/139595,
which has more explanation and discussion.
The 2D->2D, 3D->3D and 4D->4D hash functions used in the Voronoi node
were quite expensive. Switch these to dedicated 2D/3D/4D hash functions
(pcg2d, pcg3d, pcg4d) -- these are still of very good quality, but the
hash itself is 3x-4x faster. This makes the Voronoi node calculation
around 2x faster overall. In some cases when using OSL, the speedup is
even larger.
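For reference, this is pcg3d from Jarzynski & Olano, "Hash Functions for
GPU Rendering" (JCGT 2020), transcribed here as scalar C++ (a reference
sketch; the node's kernels use their own equivalents):

```cpp
#include <cstdint>

struct uint3 {
  uint32_t x, y, z;
};

/* pcg3d: three coupled PCG-style streams with two mixing rounds. */
static uint3 pcg3d(uint3 v)
{
  v.x = v.x * 1664525u + 1013904223u;
  v.y = v.y * 1664525u + 1013904223u;
  v.z = v.z * 1664525u + 1013904223u;
  v.x += v.y * v.z;
  v.y += v.z * v.x;
  v.z += v.x * v.y;
  v.x ^= v.x >> 16;
  v.y ^= v.y >> 16;
  v.z ^= v.z >> 16;
  v.x += v.y * v.z;
  v.y += v.z * v.x;
  v.z += v.x * v.y;
  return v;
}
```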
This visibly changes the output of the Voronoi noise, however. The
noise "behaves" the same, but if someone was depending on the noise
pattern being exactly as it was before, the pattern will change.
Images, more performance results and details wrt OSL are in the PR.
Pull Request: https://projects.blender.org/blender/blender/pulls/139520
Current NURBS evaluation handles corners and sharp angles poorly. Sharp
corners appear when a knot vector value is repeated `order - 1` times.
Users can make sharp corners by creating a NURBS curve with the `Bezier`
knot mode or by setting `order` to 2 for legacy curves. The problem
occurs because the current algorithm takes the curve's whole definition
interval, divides it into equal parts and evaluates at those points, but
the corners lie exactly on the repeated knots. To hit those, the
resolution has to be increased far beyond what the rest of the curve
requires.
The new algorithm divides non-zero-length intervals between adjacent
knots into equal parts. This way, corners are hit even with a resolution
of 1.
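A minimal sketch of that sampling scheme (hypothetical code; the real
implementation works on the curve's evaluation domain):

```cpp
#include <vector>

/* Hypothetical sketch: split each non-zero-length span between adjacent
 * knots into `resolution` parts, so parameter values land exactly on every
 * knot, and thus on every corner. */
static std::vector<float> sample_parameters(const std::vector<float> &knots,
                                            const int resolution)
{
  std::vector<float> params;
  for (std::size_t i = 0; i + 1 < knots.size(); i++) {
    const float a = knots[i];
    const float b = knots[i + 1];
    if (b <= a) {
      continue; /* Skip zero-length spans from repeated knots. */
    }
    for (int j = 0; j < resolution; j++) {
      params.push_back(a + (b - a) * float(j) / float(resolution));
    }
  }
  if (!knots.empty()) {
    params.push_back(knots.back()); /* Include the final knot. */
  }
  return params;
}
```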
This does change the evaluated points of NURBS curves, which is why some
test results have to be updated in this commit.
Pull Request: https://projects.blender.org/blender/blender/pulls/138565
This changes the way random integers are computed so that there is no
intermediate conversion to float, which loses accuracy.
This change breaks compatibility because it changes the generated random
numbers. Therefore this is done in Blender 5.0.
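As a rough illustration of the idea (a hypothetical sketch, not the
exact Blender code): the hash is mapped onto the output range entirely
in integer arithmetic, instead of going through a float in [0, 1), which
cannot represent all 32-bit values exactly:

```cpp
#include <cstdint>

/* Hypothetical sketch: map a 32-bit hash onto [min, max] without an
 * intermediate float conversion. */
static int random_int_in_range(const uint32_t hash, const int min, const int max)
{
  const uint64_t range = uint64_t(int64_t(max) - int64_t(min) + 1);
  /* The product is < 2^64, so the shift yields a value in [0, range). */
  return int(int64_t(min) + int64_t((uint64_t(hash) * range) >> 32));
}
```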
Performance seems to be about 6% better than before.
Pull Request: https://projects.blender.org/blender/blender/pulls/118795
When not all of the meshes recursively referenced by the instances are
realized because of the limit from the depth input, and those meshes
have vertex groups, a crash was possible because of an unchecked
VectorSet lookup: `all_meshes_info.order` includes all meshes regardless
of the depth and mask arguments.
Pull Request: https://projects.blender.org/blender/blender/pulls/138519
This change moves the test data files and the publish folder of the
assets repository to the main blender.git repository as LFS files.
The goal of this change is to eliminate the toil of modifying tests,
cherry-picking changes to LFS branches, and adding tests as part of a
PR which brings new features or fixes.
More detailed explanation and conversation can be found in the
design task.
Ref #137215
Pull Request: https://projects.blender.org/blender/blender/pulls/137219