The code added in commit ffc204d1fa to dissolve redundant 2-edged
vertices after a manifold boolean assumed that after dissolving such
vertices a valid face would remain. This is not true if the face
started out degenerate (all vertices on the same line).
Fixed by checking for such cases and, in any case, not creating
any faces with fewer than three vertices.
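A minimal standalone sketch of the added guard (names are illustrative, not the actual BMesh code):

```cpp
#include <vector>

struct FaceSketch {
  std::vector<int> vert_indices; /* Vertices remaining after the dissolve step. */
};

static bool can_create_face(const FaceSketch &face)
{
  /* A face that started out degenerate (all vertices collinear) can end up
   * with fewer than three vertices after its 2-edged vertices are dissolved;
   * such a face must be skipped instead of being created. */
  return face.vert_indices.size() >= 3;
}
```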
The calls to `to_geometry_set` in this file can create a temporary
Instances struct for collections. That instances component will contain
two attributes, which are currently referenced in the attributes map
even after the temporary component storage goes out of scope. A simple
fix is to avoid adding these attributes to the map in the first place.
A more efficient alternative would be to handle each
instance reference type explicitly, without converting it to a temporary
geometry set. That seems to significantly complicate the code though;
for now it doesn't seem worth it.
Pull Request: https://projects.blender.org/blender/blender/pulls/140999
This is discussed more in PR #140773.
The cause of the breakage was the change of the Manifold library
version from 3.0.1 to 3.1.0. That change is very positive otherwise
because we can remove the "use runids" workaround to prevent bad
face merging, and that removal is also part of this commit.
Removing that changes the time to do a big sphere-sphere test
from 660ms to 340ms.
The problem that needed fixing is that the new library version appears
not to do some aggressive simplification that the old version did,
and as a result, when we dissolve triangulation edges after the boolean
is done, it sometimes leaves valence-2 vertices on original edges.
To fix that, new code detects and then dissolves such vertices.
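As a rough standalone illustration of the detection step (not the actual implementation):

```cpp
#include <array>
#include <vector>

using EdgeSketch = std::array<int, 2>;

/* Count how many edges touch each vertex and return the valence-2 vertices,
 * which are the candidates for dissolving. */
static std::vector<int> find_valence_2_vertices(const int verts_num,
                                                const std::vector<EdgeSketch> &edges)
{
  std::vector<int> valence(verts_num, 0);
  for (const EdgeSketch &edge : edges) {
    valence[edge[0]]++;
    valence[edge[1]]++;
  }
  std::vector<int> result;
  for (int vert = 0; vert < verts_num; vert++) {
    if (valence[vert] == 2) {
      result.push_back(vert);
    }
  }
  return result;
}
```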
Small, constant, trivially copyable referencing objects should be passed by value.
The current state of the code is most likely a legacy from the time when there were optional strings.
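An illustrative example of the guideline (the struct below is made up, not an actual Blender type):

```cpp
#include <cstdint>

/* A small, trivially copyable reference object: two words that fit in registers. */
struct SmallRefSketch {
  const char *data = nullptr;
  int64_t size = 0;
};

/* Preferred: pass the small trivial object by value. */
static bool is_empty(const SmallRefSketch ref)
{
  return ref.size == 0;
}

/* Legacy style: a const reference adds an indirection without any benefit. */
static bool is_empty_legacy(const SmallRefSketch &ref)
{
  return ref.size == 0;
}
```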
Pull Request: https://projects.blender.org/blender/blender/pulls/138058
Replaces the pointer-based EXPECT_EQ_ARRAY with EXPECT_EQ_SPAN in most cases,
as those tests already used spans (or span-compatible data structures).
Currently EXPECT_EQ_ARRAY only takes in one size variable and doesn't
compare the number of elements between arguments (requiring an
additional line to do so).
This should make the code cleaner and safer. The goal is also to promote
the use of Spans in new test code.
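An illustrative before/after; the exact macro signatures may differ from the real test helpers:

```cpp
#include <array>

#include "BLI_span.hh"
#include "testing/testing.h"

TEST(example, span_comparison)
{
  const std::array<int, 3> expected = {1, 2, 3};
  const std::array<int, 3> result = {1, 2, 3};

  /* Before: pointer based, with an explicit count and a separate size check. */
  EXPECT_EQ(result.size(), expected.size());
  EXPECT_EQ_ARRAY(expected.data(), result.data(), expected.size());

  /* After: spans carry their size, so length and contents are checked together. */
  EXPECT_EQ_SPAN(blender::Span<int>(expected), blender::Span<int>(result));
}
```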
Pull Request: https://projects.blender.org/blender/blender/pulls/140340
The curve interpolation operator writes uninitialized data to vertex
attributes if there is more than one interpolated curve pair.
This is because _partial writes_ to `VArraySpan` wrappers only work
if the original VArray is already a span or if the wrapper span is
fully initialized with the original data beforehand.
In the case of the curve interpolation tool for Grease Pencil the
interpolation is invoked for each curve pair separately, creating a
new output attribute `VArraySpan` wrapper each time. This wrapper
is only filled for the curve pair in question, so uninitialized
data gets written to all the other curves' vertex weight attributes.
To prevent this from happening the simple solution is to use
`lookup_or_add_for_write_span` which initializes the entire span.
This causes quite a lot of unnecessary copying, but that is acceptable
for the Grease Pencil interpolation tool. The alternative is to change
the tool so that the destination GSpanAttributeWriter is only created
once, but that is a much bigger change.
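A simplified sketch of the safer pattern, using the attribute API function named above (the attribute name and surrounding details are illustrative):

```cpp
#include "BKE_attribute.hh"

using namespace blender;

static void write_weights_for_pair(bke::MutableAttributeAccessor attributes,
                                   const IndexRange pair_points,
                                   const Span<float> pair_weights)
{
  /* The whole span is initialized from the existing attribute values, so the
   * untouched curves keep their data when the writer is finished. */
  bke::SpanAttributeWriter<float> weights = attributes.lookup_or_add_for_write_span<float>(
      "example_weight", bke::AttrDomain::Point);
  if (!weights) {
    return;
  }
  weights.span.slice(pair_points).copy_from(pair_weights);
  weights.finish();
}
```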
Pull Request: https://projects.blender.org/blender/blender/pulls/140283
Adds a Shape Method parameter to the UV Pack Islands node, enabling
artists to choose between faster packing and more efficient space
utilization.
These are the three shape method options:
* Bounding Box: Fastest, less efficient space usage
* Convex Hull: Balanced performance and efficiency
* Exact Shape: Optimal packing, higher computational cost
This change consolidates arguments in `uv_parametrizer_pack()`. Now it
accepts `UVPackIsland_Params` instead of many different separate options.
This also makes it easier to expose more options in the future.
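A hypothetical call-site sketch; the field and enum names below are illustrative, not the exact API:

```cpp
/* All packing options now travel in one params struct instead of a growing
 * list of separate arguments. Field and enum names here are illustrative. */
blender::geometry::UVPackIsland_Params params;
params.margin = 0.001f;
params.shape_method = ED_UVPACK_SHAPE_CONVEX_HULL; /* Hypothetical enum value. */

uv_parametrizer_pack(handle, params);
```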
Pull Request: https://projects.blender.org/blender/blender/pulls/139110
This moves `PointCloud` to use the recently added `AttributeStorage`
at runtime. Mainly this involves implementing the higher level attribute
API on top, and implementing the RNA API as well. The attribute RNA type
is now backed by either CustomDataLayer or bke::Attribute. For now the
new code is specific to point clouds but next steps can reuse it for
Grease Pencil layer attributes, curves, and eventually meshes.
Point cloud attributes no longer have a name length limit.
Internally, the `AttributeStorage` API is extended with a few additions:
- The data structs have static constructors for convenience.
- A few functions give index-based access to attributes.
- A "rename" function is added.
The `Attribute` RNA type now exposes a `storage_type` property.
For now the "single value" option is still unused at runtime, and
accessing the single value data isn't implemented yet.
Pull Request: https://projects.blender.org/blender/blender/pulls/139165
The current strategy for dealing with operators that don't support custom
NURBS knots is to fall back to calculated knots for curves that use the
custom mode but have no `CurvesGeometry::custom_knots` allocated. Such curves are
the result of operators that copy only `Point` and `Curve` domains. This
way the problem is only postponed. It is not possible to add new custom
knot curves to such `CurvesGeometry` as custom knot offsets are
calculated all together and there is no way to distinguish between old
curves with lost knots and new ones. This is more of a future problem.
The actual problem in `main` can be shown with an attached blend file
(see PR) by applying `Subdivide` to some points and then adding a new
`Bezier` curve to the same object. This particular problem could be
addressed somewhere in `realize_instances.cc` but the actual problem
would persist.
This PR handles custom knots in all places where `BKE_defgroup_copy_list`
is used, and where `bke::curves::copy_only_curve_domain` is called.
Here the assumption is made that only these places can copy custom knot
modes without copying the custom knots themselves. Depending on the
operator logic, knots are most often handled in one of two ways:
- `bke::curves::nurbs::copy_custom_knots`:
copies custom knots for all curves excluding `selection`. Knot modes
for excluded curves are altered from the custom mode to calculated.
This way only curves modified by the operator will lose custom knots.
- `bke::curves::nurbs::update_custom_knot_modes`:
alters all curves to calculated mode.
In some places (e.g. `reorder.cc`) it is possible to deal with knots
without side effects.
The PR also adds a `BLI_assert` in the `load_curve_knots` function to check
that `CurvesGeometry::custom_knots` exists for custom mode curves. Thus
versioning code is needed to address the issue in saved files where it
already exists.
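A simplified stand-in for that check (variable names are illustrative; the real assert lives in `load_curve_knots` and uses the actual accessors):

```cpp
#include "BLI_assert.h"

/* A curve flagged with the custom knot mode must come with custom knot data;
 * otherwise saved files need versioning to repair them. */
static void assert_custom_knots_present(const bool curve_uses_custom_knot_mode,
                                        const bool custom_knots_exist)
{
  (void)curve_uses_custom_knot_mode;
  (void)custom_knots_exist;
  BLI_assert(!curve_uses_custom_knot_mode || custom_knots_exist);
}
```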
Pull Request: https://projects.blender.org/blender/blender/pulls/139554
Adds a new operator in Grease Pencil edit mode to convert between curve
types. This acts as a replacement for the `Set Curve Type` operator, as
the new operator better aligns with previous workflows and artist
expectations, specifically by using a threshold to adjust how well the
resulting curves fit the original.
It can be found in the `Stroke` > `Convert Type` menu.
This operator aims at keeping visual fidelity between the curves. When
converting to a non-poly curve type, there's a `threshold` parameter
that dictates how closely the shapes will match (a value of zero means
an almost perfect match, while higher values result in less accuracy
but a lower control point count).
The conversion to `Catmull-Rom` does not do an actual curve fitting.
For now, this will resample the curves and then do an adaptive
simplification of the line (using the threshold parameter)
to simulate a curve fitting.
The `Set Curve Type` operator is no longer exposed in the
`Stroke` menu.
This also adds a new `geometry::fit_curves` function.
The function will fit a selection of curves to Bézier curves. The
selected curves are treated as if they were poly curves.
The `thresholds` virtual array gives the error threshold distance
that the fit for each curve should stay within. The virtual
array is assumed to have the same size as the total number of
input curves.
The `corners` virtual array allows specific input points to be treated
as sharp corners. The resulting Bézier curve will keep this point, and
its handles will be set to "free".
There are two fitting methods:
* **Split**: Uses a least squares solver to find the control
points (faster, but less accurate).
* **Refit**: Iteratively removes knots with the least error starting
with a dense curve (slower, more accurate fit).
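A hypothetical sketch of how this description could map to a declaration; the real `geometry::fit_curves` signature may differ:

```cpp
#include "BKE_curves.hh"
#include "BLI_index_mask.hh"
#include "BLI_virtual_array.hh"

namespace blender::geometry {

enum class FitMethodSketch {
  Split, /* Least squares solve for the control points; faster, less accurate. */
  Refit, /* Start dense and iteratively remove the knots with the least error. */
};

/* Hypothetical declaration: fit the selected curves (treated as poly curves)
 * to Bezier curves, staying within a per-curve error threshold, with optional
 * per-point sharp corners. */
bke::CurvesGeometry fit_curves_sketch(const bke::CurvesGeometry &src_curves,
                                      const IndexMask &selection,
                                      const VArray<float> &thresholds,
                                      const VArray<bool> &corners,
                                      FitMethodSketch method);

}  // namespace blender::geometry
```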
Co-authored-by: Casey Bianco-Davis <caseycasey739@gmail.com>
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/137808
`blender::geometry::fillet_curves` should check for situations where empty
curves are passed in (this could happen in geometry nodes) and in those
cases it should not run.
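A minimal sketch of such an early-out (accessor names as in `CurvesGeometry`, surrounding code simplified):

```cpp
/* With no curves or no points there is nothing to fillet, so the input
 * geometry can be returned unchanged. */
if (src_curves.curves_num() == 0 || src_curves.points_num() == 0) {
  return src_curves;
}
```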
Pull Request: https://projects.blender.org/blender/blender/pulls/139787
When not all of the meshes recursively referenced by the instances
are realized because of the limit from the Depth input, and those
meshes have vertex groups, a crash is possible because of an
unchecked VectorSet lookup. `all_meshes_info.order` includes
all meshes regardless of the depth and mask arguments.
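A small sketch of a checked lookup in this spirit (simplified, not the exact fix):

```cpp
#include "BLI_vector_set.hh"

/* `index_of_try` returns -1 for a missing key instead of asserting, so meshes
 * skipped by the Depth/Selection inputs can be handled gracefully rather than
 * crashing on an unchecked lookup. */
static bool mesh_is_tracked(const blender::VectorSet<const void *> &meshes, const void *mesh)
{
  return meshes.index_of_try(mesh) != -1;
}
```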
Pull Request: https://projects.blender.org/blender/blender/pulls/138519
These attributes are handled specifically in execute_instances_tasks
anyway so the generic code path doesn't make sense. Referencing
those attributes is incorrect since the temporary geometry set built
for collections goes out of scope after foreach_geometry_in_reference.
Part of #138279
Pull Request: https://projects.blender.org/blender/blender/pulls/138508
OpenVDB has a voxel size limit defined by the determinant of the grid
transform, which is equivalent to a uniform voxel size of
`cbrt(3e-15) ~= 1.44e-5`.
The `mesh_to_density_grid` function was using an arbitrary threshold of
`1.0e-5` for the uniform voxel size.
In this case the voxel size is `~1.343e-5` so it passes the Blender
threshold but crashes in OpenVDB.
This fix adds some convenience functions to check for valid grid voxel
size and transform, based on the same determinant metric. These are now
used consistently in the mesh_to_density_grid, mesh_to_sdf_grid, and
points_to_sdf_grid functions to avoid exceptions in OpenVDB.
MOD_volume_to_mesh, node_geo_volume_to_mesh, BKE_mesh_remesh_voxel have
not been modified, since they have their own error checks with larger
thresholds.
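For reference, a standalone sketch of the determinant-based check described above:

```cpp
/* OpenVDB's limit is on the determinant of the grid transform; for a uniform
 * grid the determinant is the voxel size cubed, so the minimum uniform voxel
 * size is cbrt(3e-15) ~= 1.44e-5. */
static bool voxel_size_is_valid(const double voxel_size)
{
  const double determinant = voxel_size * voxel_size * voxel_size;
  return determinant > 3.0e-15;
}
```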
Pull Request: https://projects.blender.org/blender/blender/pulls/138481
Caused by 9d13e39585
Compared to meshes (which in that commit did a
`BKE_mesh_copy_parameters_for_eval` on the new Mesh
[mesh_new_no_attributes], which includes copying `vertex_group_names`),
Curves were missing the equivalent step.
It is now added.
Pull Request: https://projects.blender.org/blender/blender/pulls/138202
Custom normals are interpolated with the new Manifold boolean
code. In the other two solvers, there is no interpolation for the
CD_PROP_INT16_2D type, so custom normals were left at (0,0) when
an exact copy from the input corner isn't possible.
The new code for manifold first does a join mesh, which converts
custom normals to float3, which can be (and are) interpolated.
However, if the input face (or face fragment) has a flipped normal
in the boolean output, then the custom normal should be flipped too.
The fix is to do that for attributes called "custom_normal".
The general case of a "normal-like" thing with a different name
remains unsolved, but I doubt that case comes up often.
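A simplified sketch of the flipping step (not the full attribute handling):

```cpp
#include "BLI_math_vector_types.hh"
#include "BLI_span.hh"

/* When a face comes out of the boolean with its orientation flipped, the
 * interpolated "custom_normal" vectors of its corners are negated as well. */
static void flip_corner_normals(blender::MutableSpan<blender::float3> corner_normals)
{
  for (blender::float3 &normal : corner_normals) {
    normal = -normal;
  }
}
```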
The new manifold boolean operation would merge coplanar faces
even if they had different faceIDs. This doesn't fit user
expectations.
By using the manifold library's "OriginalID" mechanism, we can
prevent that from happening. However, it slows the code by
almost a factor of 2 on large examples.
The Manifold maintainers are going to work on a better solution.
The object to world transform was applied to the result (which was
meant to be in world space), rather than the inverse. However, the
processing of transforms is more complicated than necessary. Instead
of passing around a separate "target transform" that's meant to be used
inverted after the boolean operation, just make the input transforms
transform the input meshes into the desired transform space of the
output (object-local space for the modifier).
Pull Request: https://projects.blender.org/blender/blender/pulls/137919
Saves a tiny amount of time computing the loose vertices cache
after a boolean operation. Since the boolean output is just created
from faces, there are no loose vertices (there are also no loose
edges because of the call to `mesh_calc_edges`, which itself
tags the loose edge cache).
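A small sketch of the idea, assuming the mesh runtime tagging function named below (usage simplified):

```cpp
/* The boolean result is built purely from faces, so the loose vertices cache
 * can be tagged as "none" instead of being computed later. Loose edges are
 * already handled by `mesh_calc_edges`, which tags its own cache. */
result_mesh->tag_loose_verts_none();
```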
Adds the 'manifold' solver option to the Boolean geo node and to
the Boolean modifier. This solver is about as fast as, or faster
than, the current float solver, and is robust against floating
point issues like the Exact solver. But currently it only
works on mesh arguments that are strictly manifold.
See https://projects.blender.org/blender/blender/issues/120182
for many more details.
In the function `gather_attributes_to_propagate` all the instance
attributes were set to be propagated to the `Point` domain.
For Grease Pencil, we want to make sure to propagate these
attributes to the `Layer` domain instead.
Pull Request: https://projects.blender.org/blender/blender/pulls/137665
Add a "dumb vector" storage option for custom normals, with the
"custom_normal" attribute. Adjust the mesh normals caching to
provide this attribute if it's available, and add a geometry node to
store custom normals.
## Free Normals
They're called "free" in the sense that they're just direction vectors
in the object's local space, rather than the existing "smooth corner
fan space" storage. They're also "free" in that they make further
normals calculation very inexpensive, since we just use the custom
normals instead. That's a big improvement from the existing custom
normals storage, which usually significantly decreases
viewport performance. For example, in a simple test file just storing
the vertex normals on a UV sphere, using free normals gives 25 times
better playback performance and 10% lower memory usage.
Free normals are adjusted when applying a transformation to the entire
mesh or when realizing instances, but in general they're not updated for
vertex deformations.
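A generic sketch of that adjustment (not the exact Blender code): direction vectors are transformed with the inverse transpose of the 3x3 part of the matrix and then renormalized.

```cpp
#include "BLI_math_matrix.hh"
#include "BLI_math_vector.hh"

/* Transform a free normal when the whole mesh is transformed. */
static blender::float3 transform_free_normal(const blender::float3x3 &transform_3x3,
                                             const blender::float3 &normal)
{
  using namespace blender;
  const float3x3 normal_matrix = math::transpose(math::invert(transform_3x3));
  return math::normalize(normal_matrix * normal);
}
```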
## Set Mesh Normal Node
The new geometry node allows storing free custom normals as well as
the existing corner fan space normals. When free normals are chosen,
free normals can be stored on vertices, faces, or face corners. Using
the face corner domain is necessary to bake existing mixed sharp and
smooth edges into the custom normal vectors.
The node also has a mode for storing edge and mesh sharpness, meant
as a "soft" replacement to the "Set Shade Smooth" node that's a bit
more convenient.
## Normal Input Node
The normal node outputs free custom normals mixed to whatever domain is
requested. A "true normal" output that ignores custom normals and
sharpness is added as well.
Across Blender, custom normals are generally accessed via face and
vertex normals, when "true normals" are not requested explicitly.
In many cases that means they are mixed from the face corner domain.
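As a generic standalone illustration of that mixing (not the actual Blender code), corner normals can be accumulated per vertex and renormalized:

```cpp
#include <cmath>
#include <vector>

struct Vec3 {
  float x = 0.0f, y = 0.0f, z = 0.0f;
};

/* Accumulate the corner normals of each vertex, then normalize the sums. */
static std::vector<Vec3> mix_corner_normals_to_verts(const std::vector<int> &corner_verts,
                                                     const std::vector<Vec3> &corner_normals,
                                                     const int verts_num)
{
  std::vector<Vec3> vert_normals(verts_num);
  for (size_t corner = 0; corner < corner_verts.size(); corner++) {
    Vec3 &sum = vert_normals[corner_verts[corner]];
    sum.x += corner_normals[corner].x;
    sum.y += corner_normals[corner].y;
    sum.z += corner_normals[corner].z;
  }
  for (Vec3 &normal : vert_normals) {
    const float length = std::sqrt(normal.x * normal.x + normal.y * normal.y +
                                   normal.z * normal.z);
    if (length > 0.0f) {
      normal.x /= length;
      normal.y /= length;
      normal.z /= length;
    }
  }
  return vert_normals;
}
```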
## Future Work
1. There are many places where propagation of free normals could be
improved. They should probably be normalized after mixing, and it
may be useful to not just use 0 vectors for new elements. To keep
the scope of this change smaller, that sort of thing generally isn't
handled here. Searching `CD_NORMAL` gives a hint of where better
propagation could be useful.
2. Free normals are displayed properly in edit mode, but the existing
custom normal editing operators don't work with free normals yet.
This will hopefully be fairly straightforward since custom normals
are usually converted to `float3` for editing anyway. Edit mode
changes aren't included here because they're unnecessary for the
procedural custom normals use cases.
3. Most importers can probably switch to using free normals instead,
or at least provide an option for it. That will give a significant
import performance improvement, and an improvement of Blender's
FPS for imported scenes too.
Pull Request: https://projects.blender.org/blender/blender/pulls/132583
Adds a new NURBS knot mode, NURBS_KNOT_MODE_FREE. Knots are stored in
`CurvesGeometry::custom_knots`. Knot offsets binding them to their
curves are calculated at runtime and held in a cache. This commit adds
custom knots support to OBJ import, which is the only way to create such
curves for now. Legacy curve tools don't support this knot mode,
so on any modification the knots get regenerated and the whole shape changes.
If an imported legacy curve is converted to the new curves type, point
deletion, duplication and extrude preserve custom knots.
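For background, a generic standalone sketch of how per-curve knot offsets can be derived from standard NURBS sizing (a curve needs `points + order` knots); this is not the actual Blender cache code:

```cpp
#include <vector>

/* Build running offsets into a shared custom-knot array: curves that do not
 * use the custom knot mode contribute zero knots. */
static std::vector<int> custom_knot_offsets(const std::vector<int> &points_per_curve,
                                            const std::vector<int> &order_per_curve,
                                            const std::vector<bool> &uses_custom_knots)
{
  std::vector<int> offsets(points_per_curve.size() + 1, 0);
  for (size_t i = 0; i < points_per_curve.size(); i++) {
    const int knots_num = uses_custom_knots[i] ? points_per_curve[i] + order_per_curve[i] : 0;
    offsets[i + 1] = offsets[i] + knots_num;
  }
  return offsets;
}
```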
Rel #99891
Pull Request: https://projects.blender.org/blender/blender/pulls/130132
The main issue with 'type-less' standard C allocations is that no check on
the allocated type is possible.
This is a serious source of annoyance (and crashes) when making some
low-level structs non-trivial, as tracking down all usages of these
structs in higher-level other structs and their allocation is... really
painful.
MEM_[cm]allocN<T> templates on the other hand do check that the
given type is trivial, at build time (static assert), which makes such issues...
trivial to catch.
NOTE: New code should strive to use MEM_new (i.e. allocation and
construction) as much as possible, even for trivial PoD types.
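A simplified illustration of why the templated variants help (not the real implementation):

```cpp
#include <cstdlib>
#include <type_traits>

/* The trivial-type requirement is checked at compile time, so allocating a
 * non-trivial struct this way fails to build instead of silently skipping
 * construction. */
template<typename T> static T *mallocN_sketch(const char * /*allocation_name*/)
{
  static_assert(std::is_trivial_v<T>,
                "Non-trivial types need allocation plus construction (e.g. MEM_new).");
  return static_cast<T *>(std::malloc(sizeof(T)));
}
```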
Pull Request: https://projects.blender.org/blender/blender/pulls/136116
In the Grease Pencil Length modifier, if a stroke is not filtered, it
may not have a valid point count/offset. This fix ensures that we
get a valid point count by copying it beforehand.
Pull Request: https://projects.blender.org/blender/blender/pulls/135733
Previously in 43fde8c39c the point span
for calling `extend_curves_straight` was set to a fixed `{1}` with only
one element, which causes index access to go out of range and is the
cause of crash #135586. However, there are some other issues within the
code path, mainly:
- The meaning of `start/end_points` is unclear
- Behaviour of extending a 2-point stroke is undefined
This PR clarifies these by adding some comments and changing a bit of the
logic in the code so it's more understandable.
Pull Request: https://projects.blender.org/blender/blender/pulls/135608