My PR https://projects.blender.org/blender/blender/pulls/104535
was committed as 88f9c55f7f, but the logic was changed
while adding support for face sets, making it incorrect and
the warning system dysfunctional.
Restore the logic from the original PR with added support for face sets,
fix const correctness issues, improve variable naming, and remove a
check for empty names, since all attribute-type layers should have
names in a valid mesh.
Fixes #105780
Skip explicit binding location for samplers in OpenGL when not needed, since drivers can usually handle more sampler declarations this way (as long as they're not actually used by the shader).
Pull Request: https://projects.blender.org/blender/blender/pulls/105770
The loading of the "All" asset library would include the "Current File"
library as if it were a regular asset library on disk. Instead, make sure
the latter is loaded properly first and is skipped when recursively
reading on-disk libraries.
The transformations were applied twice, and the old fix was wrong because it needs to use the evaluated point, not the original one. Also includes a small code cleanup.
Pull Request: https://projects.blender.org/blender/blender/pulls/105202
Fix support for Wireframe and parametric nodes by resolving
compilation failures surrounding barycentric coordinates.
A final part of the Metal implementation for barycentric
coordinates was missing.
Feedback was also addressed by moving the barycentric calculation out
of code-gen and into surface_lib.
Authored by Apple: Michael Parkin-White
This also resolves #103606.
Ref #96261
Pull Request: https://projects.blender.org/blender/blender/pulls/105740
Now only dynamic function parameters that use ParameterDynAlloc support
dynamically sized parameter arrays.
Add tests for both dynamic arrays that don't support resizing
(Image.pixels) and dynamically sized arguments
(VertexGroup.add(index=[..])).
Regression in [0], which extended support for dynamically sized function
arguments.
[0]: dfb8c5974e
- DriverVariable.name's update function passed a DriverVar to
BKE_driver_invalidate_expression as a ChannelDriver.
- DriverTarget.name's update function passed a DriverTarget to
BKE_driver_invalidate_expression as a ChannelDriver.
- DriverVariable.type's update function accessed the DriverVar as a
ChannelDriver when clearing a flag.
This was exposed by [0], however the issue existed beforehand.
[0]: c26566ad27
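Illustrative sketch of this bug class, using hypothetical types (the `owner` back-pointer is an assumption of this sketch, not Blender's actual RNA callback code), to show why forwarding the member struct as the owning ChannelDriver is wrong:
```
/* Hypothetical types for illustration only. */
struct ChannelDriver {
  int flag;
};
struct DriverVar {
  ChannelDriver *owner; /* Assumed back-pointer, for the sketch only. */
  char name[64];
};

static void driver_invalidate(ChannelDriver *driver)
{
  driver->flag = 0; /* e.g. clear a "compiled expression is valid" flag. */
}

static void drivervar_name_update(DriverVar *dvar)
{
  /* Wrong (the bug class above): treats the DriverVar itself as the driver,
   * so the callee reads unrelated memory. */
  // driver_invalidate(reinterpret_cast<ChannelDriver *>(dvar));

  /* Right: pass the owning ChannelDriver instead. */
  driver_invalidate(dvar->owner);
}
```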
This is a non-recent regression that strangely went unreported.
It is expected that when snapping, only visible elements are considered,
which does not include faces in wireframe mode.
It worked like this before, and this change doesn't appear to have
been intentional.
Ref #105664
One of the advantages of separating this enum member from the others is
that mixing several members into a single one hinders debugging, since
the IDE cannot tell which individual flags were set.
Separating this item also makes the code more readable, as
`SCE_SNAP_MODE_GEOM` is not a snap mode but a combination of modes.
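Illustrative sketch (simplified names and values, not the actual DNA header) of keeping the combination outside the enum:
```
/* A debugger can resolve the individual flags of a value of this enum. */
enum eSnapMode {
  SCE_SNAP_MODE_VERTEX = (1 << 0),
  SCE_SNAP_MODE_EDGE = (1 << 1),
  SCE_SNAP_MODE_FACE = (1 << 2),
};

/* Kept outside the enum: this is a combination of modes, not a mode itself,
 * which makes it clear to readers (and tools) that it is a mask. */
constexpr int SCE_SNAP_MODE_GEOM =
    SCE_SNAP_MODE_VERTEX | SCE_SNAP_MODE_EDGE | SCE_SNAP_MODE_FACE;
```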
In most places where it appears in a menu, the operator would already
apply to all selected F-Curves. Now it is done consistently and explicitly
from all menu items. The default of the operator is now also set to 'all
selected', so that it also behaves like that when called from the operator
search menu.
- The Math node lost the headers of its operation type menu in
ee985fa925, because a translation context was assigned to the RNA
property, but the headers declaration was not updated to extract the
messages with matching contexts.
- The message "Group Input" had a trailing space; the space can instead be
added after translation.
Fix a regression that allowed creating several links between an
output socket and a multi-input socket, either by inserting
links or by using the link-swap feature.
This regression was caused by the link-swapping feature
introduced in commit 89aae4ac82.
Pull Request: https://projects.blender.org/blender/blender/pulls/105631
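Illustrative sketch (hypothetical types, not the node editor's actual link-handling code) of the kind of check that prevents duplicate links into a multi-input socket:
```
#include <vector>

struct Socket;
struct Link {
  const Socket *from;
  const Socket *to;
};

/* Reject inserting or swapping in a link if one from the same output
 * socket to the same multi-input socket already exists. */
static bool link_already_exists(const std::vector<Link> &links,
                                const Socket *from,
                                const Socket *to)
{
  for (const Link &link : links) {
    if (link.from == from && link.to == to) {
      return true;
    }
  }
  return false;
}
```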
Multiple user actions performed quickly could be blocked by undo
compacting memory - if the background compacting task was not complete
when the next undo step was pushed.
Notes:
- This and recent improvements to BLI_array_store give an over ~2x speedup
compared with 3.3x, and over 10x compared with 3.4x.
A sub-surfaced cube with the modifier applied (~1.5 million polys) was
used for testing; both randomized & non-randomized verts/edges/faces
were used to avoid the sub-surface memory layout biasing the results.
Tested transforming ~1/3rd of the mesh and inverting the selection.
- Without compacting mesh-data in parallel, the optimizations to
BLI_array_store can give similar performance to 3.3x, however there
are still cases where performance isn't quite as good - so compact the
arrays in parallel to ensure performance is at least as good as 3.3x.
Resolves #105046.
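Illustrative sketch of the design choice, assuming plain std::thread workers rather than Blender's task system: compacting the independent arrays of an undo step concurrently so the push doesn't pay the full serial cost.
```
#include <functional>
#include <thread>
#include <vector>

static void compact_all_in_parallel(std::vector<std::function<void()>> compact_tasks)
{
  std::vector<std::thread> workers;
  workers.reserve(compact_tasks.size());
  for (std::function<void()> &task : compact_tasks) {
    /* Each task compacts one array; they are independent, so run them concurrently. */
    workers.emplace_back(std::move(task));
  }
  for (std::thread &t : workers) {
    t.join();
  }
}
```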
The method of accumulating values to create a hash for each chunk has
been improved for ~16% better distribution of the resulting hashes.
Improve performance of array de-duplication, see: #105046.
Accumulating hashes with a byte/boolean array didn't include enough
information for a useful hash, creating hashes with many collisions.
This is the root cause of a performance regression since 3.3 where
mesh data (used for storing edit-mesh undo steps) was changed to store
selection in a boolean array, creating a bottleneck de-duplicating
chunks of that array for edit-mesh undo's custom-data de-duplication.
Resolve by increasing hash accumulation for arrays with smaller elements,
so each chunk of memory (a candidate for de-duplication) isn't as likely
to have hash collisions.
`char` (single byte) arrays now accumulate 22 values instead of 7;
taking more values into account was necessary as these are effectively
bits in the case of boolean arrays. 2-byte values accumulate 32 bytes,
4-byte elements accumulate 44 bytes, and larger structs accumulate
`sizeof(type) * 7` bytes (as before).
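Illustrative sketch of the accumulation rule described above (not the actual BLI_array_store implementation; the djb2-style mixing is an assumption for the example):
```
#include <cstddef>
#include <cstdint>

static uint32_t chunk_hash_accum(const uint8_t *data, size_t data_len, size_t elem_size)
{
  /* Accumulate more leading bytes when elements are small, since e.g.
   * boolean bytes carry very little entropy per element. */
  size_t accum_bytes;
  if (elem_size == 1) {
    accum_bytes = 22;
  }
  else if (elem_size == 2) {
    accum_bytes = 32;
  }
  else if (elem_size == 4) {
    accum_bytes = 44;
  }
  else {
    accum_bytes = elem_size * 7;
  }
  /* Never read past the chunk (the read-ahead clamp noted just below). */
  if (accum_bytes > data_len) {
    accum_bytes = data_len;
  }

  uint32_t hash = 5381; /* djb2-style accumulation, for illustration. */
  for (size_t i = 0; i < accum_bytes; i++) {
    hash = hash * 33 + data[i];
  }
  return hash;
}
```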
Also ensure the accumulation read-ahead never exceeds the chunk size -
technically a fix although this would only happen when passing a small
`chunk_count` to BLI_array_store_create (in the range of 1-16) so this
didn't happen in practice.
Improve performance of array de-duplication, see: #105046.
Use uint32_t since it's sufficient for hashing; using an int64_t was
especially inefficient when allocating one for every boolean
(when compacting an array of booleans).
Improve performance of array de-duplication, see: #105046.
This is a fix for the previous commit d7c023eb25.
Before, a copy of the BitVector was made every time the lambda was
called, which was very inefficient.
This is now fixed by passing the BitVector by reference (&) in
the lambda function.
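Generic sketch of the change (std::vector<bool> stands in for the real BitVector here), contrasting a per-call copy with passing by reference:
```
#include <cstddef>
#include <cstdint>
#include <vector>

void example(const std::vector<bool> &bits, int64_t n)
{
  /* Before (costly): a by-value parameter such as `(std::vector<bool> b, ...)`
   * copies the whole container on every call.
   * Taking the container by reference avoids the copy. */
  auto is_set = [](const std::vector<bool> &b, int64_t i) {
    return b[static_cast<std::size_t>(i)];
  };
  for (int64_t i = 0; i < n; i++) {
    (void)is_set(bits, i);
  }
}
```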
In very specific cases during intersection testing, `intersect` can
add polygons already checked as duplicates to the buffer that
holds the remaining polygons that can form groups of duplicates.
As the buffer cannot have repeated indices, re-adding these
duplicates, even temporarily, can cause a buffer overflow.
While this may have some impact on performance, it's difficult to
predict these cases and pad the buffer accordingly.
So the solution is to check whether the polygons are already duplicated.
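Illustrative sketch only (not the mesh-intersect code) of the guard described above: before re-adding a polygon index to the buffer of possible duplicates, check that it is not already there, since the buffer is sized for unique indices.
```
#include <algorithm>
#include <vector>

static void add_possible_duplicate(std::vector<int> &buffer, int poly_index)
{
  /* The buffer must not hold repeated indices, so skip known entries. */
  if (std::find(buffer.begin(), buffer.end(), poly_index) == buffer.end()) {
    buffer.push_back(poly_index);
  }
}
```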
The count wasn't clamped above zero in some newly optimized code.
Instead of adding the clamp there, move the clamping to the field
network, similar to some other nodes. That makes it so the rest of the code
doesn't have to deal with the clamping, and should be faster in the
single-value case.
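Illustrative sketch of the idea, using a plain std::function in place of the actual field types (which are not shown here): clamp the count once where the input is built, so every consumer can assume a non-negative value.
```
#include <algorithm>
#include <functional>
#include <utility>

using IntField = std::function<int(int /*index*/)>;

/* Hypothetical wrapper; downstream evaluation can assume count >= 0. */
static IntField clamp_count_field(IntField count)
{
  return [count = std::move(count)](int index) { return std::max(count(index), 0); };
}
```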
When subsampling was introduced in the VSE, it was disabled during
editing and only enabled when rendering. This led to the cache
being different between editing and rendering, which was confusing.
This PR enables subsampling during editing as well, making the
behavior more consistent.
Pull Request: https://projects.blender.org/blender/blender/pulls/105612
Some checks here are really critical and should assert, but that one is
more an indication that something is not going right, while the data
itself should still be mostly valid. So it is better to warn the user
with a LOG warning than to be silent in release builds and crash in
debug ones.
Invalid nodes are not added to the lazy-function graph. Therefore, their
outgoing links are also not added, which implies that the targets need
some default value.
The correction in bbc6bb3468 was still wrong because it disregarded
that `vert_ctx_len` does not necessarily indicate merges in
the same polygon.
Therefore, it is not safe to rely on `vert_ctx_len` to count possible
new polygons.
NOTE: It might be worth preempting part of the
`weld_poly_split_recursive` logic to identify what the new polygons are
in advance. But this can be left for a future refactor.
Node tree updates can crash if the tree contains a node group that points at an "undefined" tree type.
This can happen if the tree is linked from a library and the path is lost,
or if a custom (python) tree is used and the script is not run.
The fix is to check if the node group type is valid ("registered") and return an empty list otherwise.
Pull Request: https://projects.blender.org/blender/blender/pulls/105564
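Minimal sketch of the guard (hypothetical names, not the actual node API): skip gathering interface items from a group whose tree type was never registered, instead of dereferencing an undefined type.
```
#include <vector>

struct NodeTreeType;
struct NodeTree {
  const NodeTreeType *typeinfo = nullptr; /* Null/undefined when not registered. */
};
struct InterfaceItem;

static std::vector<InterfaceItem *> group_interface_items(const NodeTree *group)
{
  if (group == nullptr || group->typeinfo == nullptr) {
    /* Tree type not registered ("undefined"): return an empty list. */
    return {};
  }
  /* ... normal interface traversal would go here ... */
  return {};
}
```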
Pick select is only meant to change a single element from a single
data-block. However, the operator worked on each object individually
rather than first finding the closest point, then processing the
selection. Change the operator to find the closest point across all
objects, then deselect if necessary, then select the closest point.
Pull Request: https://projects.blender.org/blender/blender/pulls/105495
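Illustrative sketch (not the operator's actual code) of the described flow: first find the closest point across all objects, then change the selection once.
```
#include <cfloat>
#include <vector>

struct PickResult {
  int object_index = -1;
  int point_index = -1;
  float distance_sq = FLT_MAX;
};

/* Assumes precomputed squared distances per point, per object. */
static PickResult find_closest(const std::vector<std::vector<float>> &distances_sq_per_object)
{
  PickResult best;
  for (int ob = 0; ob < int(distances_sq_per_object.size()); ob++) {
    const std::vector<float> &distances = distances_sq_per_object[ob];
    for (int i = 0; i < int(distances.size()); i++) {
      if (distances[i] < best.distance_sq) {
        best = {ob, i, distances[i]};
      }
    }
  }
  return best;
}
/* Only after this single search does the operator deselect (if necessary)
 * and select the winning point on the winning object. */
```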
After ed870f87b9, panel headers displayed inside panels had their
label duplicated when translations were enabled. This is because a
string comparison was made against the original message, instead of
the translated message.
Pull Request: https://projects.blender.org/blender/blender/pulls/105151
- "Lines" in the sense of number of lines
- "Number" can mean "amount, count" or "index, offset"
- "Second" can be an ordinal number or a unit
- "Root": add the brush curve to the "square root falloff" sense
- "Strip" can be a sequence or a type of hair rendering
- "Constant" in the sense of a value, for the Geometry Nodes add
submenu (#105447).
Additionally, extract:
- "Press a key" from the Keymap preferences.
- "MaskLayer", upon new mask layer creation
Ref #43295, #105447
This was caused by an incorrect assumption in the solver:
It tries to solve both collision and length constraints simultaneously,
using the projected movement of a point as a slide direction along the surface.
This only works if the distance of the previous curve point to the surface
is less than the allowed segment length. Otherwise the segment will
exceed the allowed length even with zero slide, and NaN values are computed.
The case of larger surface distance can occur if the previous segment
solve was already stretching the current segment and then the point
moves further away. In this case we can simply clamp the segment length
without violating the contact constraint.
Pull Request #105499
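Illustrative sketch of the clamp described above (not the solver's code; the float3 struct here is local to the example): when the previous point is already farther from the surface than the allowed segment length, sliding cannot satisfy both constraints, so simply clamp the segment to its maximum length.
```
#include <cmath>

struct float3 {
  float x, y, z;
};

static float3 clamp_segment(const float3 &prev, const float3 &next, float max_length)
{
  const float3 d = {next.x - prev.x, next.y - prev.y, next.z - prev.z};
  const float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
  if (len <= max_length || len == 0.0f) {
    return next; /* Within the allowed length: nothing to do. */
  }
  /* Too long: keep the direction, clamp the length. */
  const float fac = max_length / len;
  return {prev.x + d.x * fac, prev.y + d.y * fac, prev.z + d.z * fac};
}
```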
3e5ce23c99 introduced a regression in case the freed Main was part of a
list, and was supposed to be removed from it, since calling
`BLI_remlink` does _not_ clear the `prev`/`next` pointers of the removed
link.
This commit also contains a few more tweaks to the recent related
commit b3f42d8e98.
Pull Request #105485
While this behavior can be useful in some cases, it can also create
issues (as in one of my own recent commits, 3e5ce23c99), since it
implicitly keeps the removed linknode 'linked' to the listbase.
At least warn about it in the documentation of `BLI_remlink`.