This commit improves DNA parsing resilience to data corruption in two ways:
* Detect and abort on failure to allocate the requested amount of memory.
* Detect multiple usages of the same `type_index` by different struct
definitions.
The second part fixes the `dna_genfile.cc:1918:40.blend` case reported
in #137870.
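A minimal sketch of the duplicate `type_index` check, assuming a simplified struct table (illustrative only, not the actual `dna_genfile.cc` code):
```
#include <cstdint>
#include <unordered_set>
#include <vector>

struct SDNAStruct {
  int16_t type_index; /* Index into the SDNA type table. */
};

/* A corrupt file can map two struct definitions onto the same type index;
 * treat that as fatal instead of silently mis-reading data. */
static bool sdna_struct_types_are_unique(const std::vector<SDNAStruct> &structs)
{
  std::unordered_set<int16_t> seen;
  for (const SDNAStruct &sdna_struct : structs) {
    if (!seen.insert(sdna_struct.type_index).second) {
      return false;
    }
  }
  return true;
}
```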
Pull Request: https://projects.blender.org/blender/blender/pulls/139803
The Movie distortion node crops its data if the movie size differs from
the input size. That's because boundary extensions do not take
calibration size into account. To fix this, we use the same coordinate
range as the distortion grid computation, which computes the distortion
in the space of the calibration size.
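As a rough illustration of working in calibration space (hypothetical helper, not the actual compositor code; the centering offset is an assumption):
```
struct float2 {
  float x, y;
};

/* Map a pixel in the input image into the normalized range used by the
 * distortion grid, which is defined in the space of the calibration size. */
static float2 to_calibration_space(const float2 pixel,
                                   const float2 input_size,
                                   const float2 calibration_size)
{
  const float2 offset = {(calibration_size.x - input_size.x) / 2.0f,
                         (calibration_size.y - input_size.y) / 2.0f};
  return {(pixel.x + offset.x) / calibration_size.x,
          (pixel.y + offset.y) / calibration_size.y};
}
```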
Pull Request: https://projects.blender.org/blender/blender/pulls/139822
Adds a new operator in Grease Pencil edit mode to convert between curve
types. This acts as a replacement for the `Set Curve Type` operator, as
the new operator better aligns with previous workflows and artist
expectations, specifically by using a threshold to adjust how well the
resulting curves fit the original.
It can be found in the `Stroke` > `Convert Type` menu.
This operator aims to keep visual fidelity between the curves. When
converting to a non-poly curve type, there is a `threshold` parameter
that dictates how closely the shapes will match: a value of zero means
an almost perfect match, while higher values result in less accuracy
but a lower control point count.
The conversion to `Catmull-Rom` does not do an actual curve fitting.
For now, this resamples the curves and then does an adaptive
simplification of the line (using the threshold parameter)
to simulate a curve fitting.
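For intuition, a minimal sketch of threshold-driven adaptive simplification in the Ramer-Douglas-Peucker style (an assumption about the approach; the actual implementation may differ):
```
#include <cmath>
#include <cstddef>
#include <vector>

struct float2 {
  float x, y;
};

static float dist_to_segment(const float2 &p, const float2 &a, const float2 &b)
{
  const float2 ab = {b.x - a.x, b.y - a.y};
  const float2 ap = {p.x - a.x, p.y - a.y};
  const float len_sq = ab.x * ab.x + ab.y * ab.y;
  float t = len_sq > 0.0f ? (ap.x * ab.x + ap.y * ab.y) / len_sq : 0.0f;
  t = std::fmax(0.0f, std::fmin(1.0f, t));
  const float2 d = {a.x + t * ab.x - p.x, a.y + t * ab.y - p.y};
  return std::sqrt(d.x * d.x + d.y * d.y);
}

/* Mark points to keep: a point survives if removing it would move the line
 * by more than `threshold`. A zero threshold keeps nearly every point. */
static void simplify(const std::vector<float2> &points, const size_t first,
                     const size_t last, const float threshold,
                     std::vector<bool> &keep)
{
  if (last <= first + 1) {
    return;
  }
  size_t worst = first;
  float worst_dist = -1.0f;
  for (size_t i = first + 1; i < last; i++) {
    const float d = dist_to_segment(points[i], points[first], points[last]);
    if (d > worst_dist) {
      worst_dist = d;
      worst = i;
    }
  }
  if (worst_dist > threshold) {
    keep[worst] = true;
    simplify(points, first, worst, threshold, keep);
    simplify(points, worst, last, threshold, keep);
  }
}
```
(The caller keeps the first and last points of each curve unconditionally.)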
The `Set Curve Type` operator is no longer exposed in the
`Stroke` menu.
This also adds a new `geometry::fit_curves` function.
The function fits a selection of curves to Bézier curves. The
selected curves are treated as if they were poly curves.
The `thresholds` virtual array gives the error threshold distance
for each curve that the fit should stay within. The virtual
array is assumed to have the same size as the total number of
input curves.
The `corners` virtual array allows specific input points to be treated
as sharp corners. The resulting Bézier curve will include these points,
with their handles set to "free".
There are two fitting methods:
* **Split**: Uses a least squares solver to find the control
points (faster, but less accurate).
* **Refit**: Iteratively removes knots with the least error starting
with a dense curve (slower, more accurate fit).
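A hypothetical usage sketch (namespaces and includes omitted; the argument order and the `FitMethod` enum are assumptions based on the description above, not the verified signature):
```
/* Hypothetical call shape for geometry::fit_curves. */
bke::CurvesGeometry fit_to_bezier(const bke::CurvesGeometry &curves,
                                  const IndexMask &selection,
                                  const VArray<float> &thresholds, /* Per curve. */
                                  const VArray<bool> &corners)     /* Per point. */
{
  /* Refit: slower, more accurate. Split: faster, less accurate. */
  return geometry::fit_curves(
      curves, selection, thresholds, corners, geometry::FitMethod::Refit);
}
```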
Co-authored-by: Casey Bianco-Davis <caseycasey739@gmail.com>
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/137808
Set the `UI_BLOCK_LIST_ITEM` flag for the block; this assigns the
`UI_BUT_LIST_ITEM` flag to tree view label buttons (see `uiItemL_()`).
That way `wcol_list_item` is used for tree views
(see `widget_state()` / `widget_state_label()`).
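In essence (a minimal sketch using existing helpers; `layout` stands in for the tree view's layout):
```
/* Enable the flag on the block hosting the tree view so its label buttons
 * pick up the list-item widget colors. */
uiBlock *block = uiLayoutGetBlock(layout);
UI_block_flag_enable(block, UI_BLOCK_LIST_ITEM);
```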
Pull Request: https://projects.blender.org/blender/blender/pulls/126026
This avoids having to compile specializations just-in-time and
uses the same API as subprocess compilation.
This bridges the gap between subprocess and threaded
compilation.
Pull Request: https://projects.blender.org/blender/blender/pulls/139702
It would end up using the same frame twice, which looks like "freeze
frames".
Apparently we had a similar issue before, see 3f8ec963e3
Just a PoC to show that this looks like a precision/rounding issue when
getting a "working" `UsdTimeCode`.
In the modifier code, we are doing a roundtrip going from frame >> time
(in seconds -- via `BKE_cachefile_time_offset`) and then back to frame
before we store that in `USDMeshReadParams`.
To avoid the precision loss, this PR introduces
`BKE_cachefile_frame_offset` to stay in the "frame" domain and
circumvent going through FPS altogether.
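A small, generic illustration of why the roundtrip is lossy (plain floating-point demo, not Blender code; how many frames are affected depends on the FPS):
```
#include <cstdio>

int main()
{
  /* Round-trip frame -> seconds -> frame, the pattern described above. */
  const double fps = 29.97;
  int inexact = 0;
  for (int frame = 1; frame <= 1000; frame++) {
    const double seconds = frame / fps;
    const double roundtrip = seconds * fps;
    if (roundtrip != static_cast<double>(frame)) {
      /* e.g. 14.999999999999998: truncating such a value lands one
       * frame early, so the same frame gets sampled twice. */
      inexact++;
    }
  }
  std::printf("%d of 1000 frames do not round-trip exactly\n", inexact);
  return 0;
}
```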
There might be better ways to let USD handle the "slightly off"
`UsdTimeCode`, though.
Pull Request: https://projects.blender.org/blender/blender/pulls/139793
Duplicating collections didn't work when multiple collections were
selected; instead, only the first selected collection was duplicated. This is
now fixed by iterating over the list of selected collections returned by
`outliner_collect_selected_parent_collections` after traversing. In that
function, child collections are skipped if their parent collection is already
selected, which avoids generating extra copies (i.e. it creates one
copy of nested collections).
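A simplified sketch of that parent-selection filtering (hypothetical data structures, not the actual outliner code):
```
#include <unordered_set>
#include <vector>

struct Collection {
  Collection *parent = nullptr;
};

/* Keep only collections whose ancestors are not themselves selected, so a
 * nested selection produces a single copy of the outermost collection. */
static std::vector<Collection *> filter_topmost_selected(
    const std::unordered_set<Collection *> &selected)
{
  std::vector<Collection *> result;
  for (Collection *collection : selected) {
    bool ancestor_selected = false;
    for (Collection *parent = collection->parent; parent; parent = parent->parent) {
      if (selected.count(parent)) {
        ancestor_selected = true;
        break;
      }
    }
    if (!ancestor_selected) {
      result.push_back(collection);
    }
  }
  return result;
}
```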
Resolves #139651
Pull Request: https://projects.blender.org/blender/blender/pulls/139719
This commit adds a feature to the bug fixes per release tool to ensure
the version numbers of the backported Blender versions are in ascending
order.
Before: 4.2.9, 4.4.1, 3.6.22
After: 3.6.22, 4.2.9, 4.4.1
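The ordering check amounts to a numeric tuple sort; a minimal sketch of the comparison (the tool itself is a script, this just shows the idea):
```
#include <algorithm>
#include <array>
#include <cstdio>
#include <string>
#include <vector>

/* Parse "major.minor.patch" into a tuple that compares numerically. */
static std::array<int, 3> parse_version(const std::string &version)
{
  std::array<int, 3> out{0, 0, 0};
  std::sscanf(version.c_str(), "%d.%d.%d", &out[0], &out[1], &out[2]);
  return out;
}

int main()
{
  std::vector<std::string> versions = {"4.2.9", "4.4.1", "3.6.22"};
  std::sort(versions.begin(), versions.end(),
            [](const std::string &a, const std::string &b) {
              return parse_version(a) < parse_version(b);
            });
  /* Prints: 3.6.22 4.2.9 4.4.1 */
  for (const std::string &version : versions) {
    std::printf("%s ", version.c_str());
  }
  return 0;
}
```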
Pull Request: https://projects.blender.org/blender/blender/pulls/139011
In a previous commit (1) a feature was added to the bug fixes per
release script that prompted users to choose whether to sort revert commits.
This commit removes that feature, instead preferring that revert
commits are sorted in the overrides task (blender/blender#137983).
This was done because:
- The overrides task allows for greater control over how revert commits
are classified, and it's synced between all users.
- The prompt to sort the commits appearing each time the script is run
is annoying for triagers who may run the script multiple times in one
day as they sort through various commits.
(1) blender/blender@9679d9a3eb
Pull Request: https://projects.blender.org/blender/blender/pulls/138505
Comparing the object with "edit_object" isn't correct as
multiple objects may be in edit-mode across multiple scenes.
Check the object for edit-mode data instead.
The annotation `Callable[[Any, ...], str | None]` is not supported by the Python
typing system; the `...` will be misinterpreted as an unknown type instead of
an option to provide a variable number of arguments (that is spelled
`Callable[..., str | None]`).
Ref !138804
Regression in [0] which would re-highlight gizmos when they had been
tagged for highlighting.
This caused highlighting to be recalculated unexpectedly while
blocking modal operators that used a timer were running.
The timer events would be passed through to the gizmo handler, which
then re-evaluated the highlighted gizmo based on the cursor position.
Resolve by skipping pass-through for gizmos.
[0]: f839847d3b4849425c3b06a52aae4361d384fea4
Object::actcol assignments from edit-mode data weren't clamping
the index to the valid range. This caused an out-of-bounds read when
accessing Object::matbits.
While material indices should typically be within the material bounds,
this isn't guaranteed. Selecting a face with a material index outside
the range, for example, was crashing.
Add a utility function that sets the active material index to replace
existing inline checks.
Follow up to the fix for #139369.
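A minimal sketch of such a utility, with a simplified `Object` struct and a hypothetical name (Object::actcol is 1-based):
```
#include <algorithm>

struct Object {
  int totcol; /* Number of material slots. */
  int actcol; /* Active material slot, 1-based. */
};

/* Clamp the active material index to the valid range so that later
 * Object::matbits reads stay in bounds. */
static void object_active_material_index_set(Object &ob, const int index)
{
  ob.actcol = std::clamp(index, 1, std::max(ob.totcol, 1));
}
```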
This changes how tooltips for dragging multiple files are shown: a
`Documents` icon is shown with a counter of how many files are dragged.
When multiple files are dragged from Blender's internal file browser,
this avoids showing the thumbnail of the file selected to start
dragging; if the selection is a single file, its thumbnail remains visible.
Pull Request: https://projects.blender.org/blender/blender/pulls/136276
This is required to make ray differentials work correctly for OSL custom
cameras.
But it also lets us simplify the implementation, and makes the OSL
functionality more complete, for example by implementing all noise types.
Pull Request: https://projects.blender.org/blender/blender/pulls/138161
Keep around the dummy BVH for lights, even if it serves no purpose for now.
Previously I assumed it was not needed, but there is some device-specific
code that assumes it exists, and there is not much point trying to refactor
that now when we will actually want to create a BVH for lights in the future.
Pull Request: https://projects.blender.org/blender/blender/pulls/139798
With these changes, we can now mark devices which are expected to perform as
well as possible, and devices which were not optimized for some reason --
for example, because the device was released after the Blender release,
making it impossible for developers to optimize for it in already
released, unchangeable code. This is primarily relevant for the LTS versions,
which are supported for two years and require proper communication about
the optimization status of new devices released during this time.
This is implemented for oneAPI devices. Other device types currently are
marked as optimized for compatibility with old behavior, but may implement
the same in the future.
Pull Request: https://projects.blender.org/blender/blender/pulls/139751
A small number of USD files in the wild contain invalid face index data
for some of their meshes. This leads to asserts in debug builds and
(sometimes) crashes for users in release builds. There is already an
import option to Validate Meshes, but it turns out that we, and most
other importers, perform validation too late. We crash before getting to
that validate option (see notes).
This PR implements a cheap detection mechanism and will auto-fix if we
detect broken data. The detection may not find all types of bad data, but
it will detect what is known to fail today: duplicate vertex indices.
We immediately validate/fix before loading in the rest of the data. The
downside is that no additional data will be loaded:
normals, edge creases, velocities, UVs, and all other attributes will be
lost because the incoming data arrays will no longer align.
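A minimal sketch of a duplicate-index check of this kind (illustrative only, not the actual `usd_reader_mesh.cc` code):
```
#include <cstddef>
#include <vector>

/* Return true if any face refers to the same vertex more than once, the
 * kind of broken data known to fail today. */
static bool faces_have_duplicate_verts(const std::vector<int> &face_counts,
                                       const std::vector<int> &face_indices)
{
  size_t start = 0;
  for (const int count : face_counts) {
    for (size_t i = start; i < start + count; i++) {
      for (size_t j = i + 1; j < start + count; j++) {
        if (face_indices[i] == face_indices[j]) {
          return true;
        }
      }
    }
    start += count;
  }
  return false;
}
```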
Note that Alembic has also chosen this approach. Its
check is significantly weaker though, and can be improved separately if
needed.
If auto-fix is triggered, it will typically appear as one trace on the
terminal.
```
WARN (io.usd): <path...>\io\usd\intern\usd_reader_mesh.cc:684
read_mesh_sample: Invalid face data detected for mesh
'/degenerate/m_degenerate'. Automatic correction will be used.
```
A more general downside of these fixes is that this applies to each
frame of animated mesh data. The mesh will be fixed, and re-fixed, on
every frame update when the frame in question contains bad data.
For well-behaved USD scenes, the penalty for this check is between 2% and 4%.
For broken USD scenes, it depends on how many meshes need the fixup. In
the case of the Intel 4004 Moore Lane scene, the penalty is a 2.7x
slowdown in import time (4.5 s to 12.5 s).
Pull Request: https://projects.blender.org/blender/blender/pulls/138633
The checks and related warnings detecting usage of blendfiles generated
by newer versions of Blender were not fully behaving as expected for
libraries. In particular, opening an older main blendfile linking
against newer library ones would not always detect and report the
`has_forward_compatibility_issues` status properly.
Found out while working on 'longer ID names' compatibility PR for 4.5
(!139336).
It seems only lz4 is used by openvdb. The zstd support was causing
runtime issues because of version mismatches with our other libraries.
In addition to this, blosc doesn't seem to properly link to static
libraries. The resulting static library has undefined symbols in it from
both zlib and zstd. This wasn't caught before as openvdb links to zlib.
Pull Request: https://projects.blender.org/blender/blender/pulls/139792
Compilation constants are constants defined in the create info.
They cannot be changed after the shader is created.
They are a replacement for macros, with added type safety.
This reuses most of the logic from specialization constants.
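A hedged sketch of what declaring one might look like; the `compilation_constant` builder name is an assumption modeled on the existing `specialization_constant` one:
```
/* In a shader create info file. */
GPU_SHADER_CREATE_INFO(my_shader)
    /* Fixed at shader creation; typed, unlike a macro. */
    .compilation_constant(Type::INT, "sample_count", 4)
    .fragment_source("my_shader_frag.glsl")
    .do_static_compilation(true);
```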
Pull Request: https://projects.blender.org/blender/blender/pulls/139703
The "All Libraries" library didn't free its assets correctly on refresh,
so the asset previews didn't refresh correctly either. That's because it
didn't forward the removal request to the asset library that actually
owns the asset. It only freed assets from its own storage, which is
always empty.
This might make refreshing the "All Libraries" library feel a little slower,
since previews are now refreshed too. But in general this is still fairly
fast, and there's an optimization to only load visible previews too.
This adds a new function to query the GPUTexture from an
Image datablock without actually creating it.
This makes it possible to keep track of all the
textures that need to be loaded and to defer their
loading to end_sync. The textures are then only used
in the next sync, because staging a texture for
drawing would require it to already be valid.
Multithreading is used to load the textures from disk
as soon as possible. Loading is still blocking, but
much faster (depending on hardware).
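A hedged sketch of the deferral flow with hypothetical types (the real code uses the draw manager's sync machinery):
```
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

struct Image; /* Stand-in for the Image datablock. */

class DeferredTextureLoader {
  std::mutex mutex_;
  std::vector<Image *> pending_;

 public:
  /* Called during sync: record the request without creating the texture. */
  void defer(Image *image)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_.push_back(image);
  }

  /* Called in end_sync: decode pixels on worker threads (a real
   * implementation would use a thread pool). The GPU upload stays on the
   * calling thread, since each worker would otherwise need a valid
   * GPUContext. The textures are only used in the next sync. */
  void load_all(const std::function<void(Image *)> &decode_pixels,
                const std::function<void(Image *)> &upload)
  {
    std::vector<std::thread> workers;
    for (Image *image : pending_) {
      workers.emplace_back(decode_pixels, image);
    }
    for (std::thread &worker : workers) {
      worker.join();
    }
    for (Image *image : pending_) {
      upload(image);
    }
    pending_.clear();
  }
};
```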
Before: 5.7 s. After: 2.5 s.
On a Linux workstation: 2.28x speedup in texture loading.
On an M1 MacBook Pro: 2.72x speedup in texture loading.
This includes redraw overhead, but it is not super significant.
Having a vector of all the textures to be loaded
will eventually be helpful in making the
texture uploading multi-threaded. Currently, that is
a bit difficult given the need for a valid GPUContext
per thread.
- [x] Bypass deferred loading on animated textures
- [x] Add throttling to only load a few textures per frame
- [x] Do not delay for viewport render
Pull Request: https://projects.blender.org/blender/blender/pulls/139644
Legacy curves can carry material information, and the fill material is
especially useful for grease pencil. This patch converts the base color of
materials from legacy curves when converting to grease pencil.
Limitations:
- This patch does not take node-based materials into account.
- Neither legacy curves nor grease pencil supports a per-stroke fill
attribute yet, thus the converted grease pencil will be shown as
either all fills or all strokes, depending on the configuration in the
original legacy curve object.
Pull Request: https://projects.blender.org/blender/blender/pulls/139212
Apparently the vertex group list was missing when converting meshes to
grease pencil, while all the attributes seem to transfer just
fine. This is because of a missing `BKE_defgroup_copy_list` call. Now
all vertex group names show up correctly in the list after conversion.
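A sketch of the missing call, with illustrative variable names (`BKE_defgroup_copy_list` copies the vertex-group name list):
```
/* Copy vertex group names alongside the attribute data. */
BKE_defgroup_copy_list(&grease_pencil->vertex_group_names,
                       &mesh->vertex_group_names);
```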
Pull Request: https://projects.blender.org/blender/blender/pulls/139786
"Transfer Mesh Data" operator only works on meshes, however it's `poll`
call doesn't do complete checks for all selected objects because that
would be too slow. Now we add an error message when invalid objects are
encountered during data transfer (e.g. target object type is not mesh)
it will give a report to notify users that some errors have occured. So
there will be less confusion.
Pull Request: https://projects.blender.org/blender/blender/pulls/139568
In the Geometry Nodes workspace, the viewport has a default value of 0.0
for `gpencil_vertex_paint_opacity`, which causes the material preview to not
show proper vertex colors even when strokes have color. Considering this
property is a bit obscure, setting a default value of 1.0 here makes
sense, and it's also consistent with the rest of the viewport editors in
other workspaces.
Pull Request: https://projects.blender.org/blender/blender/pulls/139356