When buffers/images were allocated using larger limits than supported
by the GPU, Blender would crash. This PR adds a safety mechanism to
allow Blender to recover from allocation errors.
This has been tested on NVIDIA drivers.
Pull Request: https://projects.blender.org/blender/blender/pulls/139876
- Navigation modes have been redefined a bit and introduced in the form
of an enum so that new ones can be implemented in the future.
Additionally, switching between modes shouldn't require any additional
configuration, like inverting all the axes.
Currently there are only 2 modes implemented,
but 2 more are planned and will be proposed in follow-up PRs.
Implemented modes are:
- Object: works like the "Orbit" option,
but has all axes implicitly inverted.
- Fly: works the same as "Free".
- "Turntable" option has been turned into "Lock Horizon".
This single option works for both normal navigation and Fly/Walk
modes now.
- Pan and Rotation axes inversion has been removed from default
configuration.
- UI has been simplified following the design from #136880.
- Zoom Invert has been removed since it looks like a duplication of
`NDOF_PANZ_INVERT`.
Ref !139343
This allows for more granular requests, reducing the engine
startup time in case sub-process compilation is not enabled
(when it is enabled, gains are not substantial).
This also makes engine startup less blocking.
The batches are only requested if needed.
Some of the batches can only be requested after object sync.
Given we don't have a priority system for the shader compilation
queue, the engine shaders end up compiling after the scene
ones.
The remaining blocking parts are texture loading and geometry loading.
The world compilation is still blocking in this patch to avoid making it
more complex. But this can be another optimization we can do later on.
See PR for performance numbers.
Pull Request: https://projects.blender.org/blender/blender/pulls/139454
This PR removes the filtering when gathering texels for HiZ. The
algorithm that we follow doesn't use it; its author only needed to
enable filtering when experimenting with textureGather:
```
I was experimenting with using texture gather lookups to reduce
the number of texture fetches from 4-to-7 fetches per fragment
down to 1-to-3 fetches per fragment (see the extension
ARB_texture_gather) it seems that texture gather works only if
the image is linearly sampled and to avoid the additional burden
involved by switching filtering state during rendering I stuck
to simple texture lookups as using texture gather lookups did not
show any visible effect on the construction time of the Hi-Z map.
```
https://www.rastergrid.com/blog/2010/10/hierarchical-z-map-based-occlusion-culling/
After testing, we got identical results with filtering turned off.
Turning off filtering allows supporting devices that don't support
linear filtering on depth-stencil textures (WoA) using the Vulkan
backend.
Pull Request: https://projects.blender.org/blender/blender/pulls/139868
For linked Grease Pencil data, layer operators are currently
accessible. Grey them out in the UI by adjusting poll functions. Also
disable the individual layer rows of the tree-view.
See images in PR description
Pull Request: https://projects.blender.org/blender/blender/pulls/137946
The versioning code for the new `Scale` input (a92b68939a)
always added new versioning nodes connected to the `Scale` input to ensure
that the node behaves as before.
But these versioning nodes are only necessary when a profile curve is connected.
Otherwise, they don't have any effect at all, since the node just outputs
a wire mesh in this case.
This skips adding the versioning nodes in case the profile socket
is unused. The default scale value will be 1.
Pull Request: https://projects.blender.org/blender/blender/pulls/138968
The argument `-a` is used twice (render animation & animation playback).
Parsing logic works since they're handled in different passes; however,
printing the help text could return either, since passes aren't used
for matching.
Work around the problem using a deterministic lookup when printing
help text which skips arguments that have already been handled.
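The deterministic lookup can be sketched as follows (a minimal illustration in Python; names and data layout are hypothetical, not Blender's actual argument-parsing code):

```python
def find_help_text(flag, passes):
    """Return the help text for `flag`, scanning passes in their fixed order.

    `passes` is an ordered list of {flag: help_text} dicts, one per parsing
    pass. Arguments already handled by an earlier pass are skipped, so a flag
    registered twice (like `-a`) deterministically resolves to its first
    definition.
    """
    handled = set()
    for arg_pass in passes:
        for name, help_text in arg_pass.items():
            if name in handled:
                continue  # Already handled in an earlier pass: skip.
            handled.add(name)
            if name == flag:
                return help_text
    return None
```

With two passes both registering `-a`, the lookup always returns the first pass's text, mirroring the order the parser actually handles arguments in.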
When dissolving an edge merges faces, use an angle threshold before
dissolving vertices from the face which have become chains as a result
of the merge (connected to 2 edges).
Also fix edge-flag handling when dissolving multiple edges
from a chain into a single edge; previously, flags from the
resulting edge were effectively random.
Now flags from all edges are merged.
Resolves #100184.
Ref !134017
- Place doc-strings before arguments (avoid over long lines).
- Enable clang-format for BMesh operator definitions.
- Remove invalid comments (likely left in from copy-paste).
- Use double back-ticks for RST.
- Use full sentences.
No functional changes.
We attempt to enforce a minimum area height so that areas cannot be
made smaller than the header height. This works correctly while
drag-resizing areas, but not when loading blend files, since the check
is skipped when resizing from a smaller to a bigger vertical size. This
PR makes it so the minimum size is always enforced.
Pull Request: https://projects.blender.org/blender/blender/pulls/139804
In 5.0, we plan to change the brush size from representing radius to
diameter. This means that for 5.0 files loaded in 4.5, we need to
scale the stored value when reading the relevant brush fields.
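As a rough sketch of that conversion (illustrative only; the function name is hypothetical, not the actual DNA versioning code), reading a 5.0 file in 4.5 means halving the stored diameter to recover a radius:

```python
def brush_size_for_4_5(stored_size: float, file_is_5_0: bool) -> float:
    """5.0 files store the brush size as a diameter; 4.5 uses a radius.

    Hypothetical sketch: when reading a 5.0 file in 4.5, the stored
    diameter is halved so the brush behaves the same as before.
    """
    return stored_size / 2.0 if file_is_5_0 else stored_size
```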
Related to #134204
Pull Request: https://projects.blender.org/blender/blender/pulls/139561
The value isn't created unless jitter is enabled, but it was always
retrieved for a function argument. The uninitialized memory wasn't read,
but it caused a crash in debug builds.
Pull Request: https://projects.blender.org/blender/blender/pulls/139854
Current strategy to deal with operators not supporting custom NURBS
knots is to fall back to calculated knots for curves of the custom mode
but with no `CurvesGeometry::custom_knots` allocated. Such curves are
the result of operators that copy only `Point` and `Curve` domains. This
way the problem is only postponed. It is not possible to add new custom
knot curves to such `CurvesGeometry` as custom knot offsets are
calculated all together and there is no way to distinguish between old
curves with lost knots and new ones. This is more of a future problem.
The actual problem in `main` can be shown with an attached blend file
(see PR) by applying `Subdivide` to some points and then adding new
`Bezier` curve to the same object. This particular problem could be
addressed somewhere in `realize_instances.cc` but the actual problem
would persist.
This PR handles custom knots in all places where `BKE_defgroup_copy_list`
is used, and where `bke::curves::copy_only_curve_domain` is called.
Here the assumption is made that only these places can copy custom knots
modes without copying custom knots. Depending on operator logic knots are
handled most often in one of two ways:
- `bke::curves::nurbs::copy_custom_knots`:
copies custom knots for all curves excluding `selection`. Knot modes
for excluded curves are altered from the custom mode to calculated.
This way only curves modified by the operator will lose custom knots.
- `bke::curves::nurbs::update_custom_knot_modes`:
alters all curves to calculated mode.
In some places (e.g. `reorder.cc`) it is possible to deal with knots
without side effects.
The PR also adds a `BLI_assert` in the `load_curve_knots` function to
check that `CurvesGeometry::custom_knots` exists for custom-mode
curves. Thus versioning code is needed to address the issue in files
where such a case already exists.
Pull Request: https://projects.blender.org/blender/blender/pulls/139554
This commit improves DNA parsing resilience to data corruption in two ways:
* Detect and abort on failure to allocate requested amount of memory.
* Detect multiple usages of the same `type_index` by different struct
definitions.
The second part fixes the `dna_genfile.cc:1918:40.blend` case reported
in #137870.
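The duplicate `type_index` detection can be illustrated like this (a hypothetical Python sketch, not the actual `dna_genfile.cc` code):

```python
def validate_struct_type_indices(structs):
    """Abort (raise) if two struct definitions claim the same type_index.

    `structs` is a list of (struct_name, type_index) pairs as parsed from a
    potentially corrupted DNA block. In a healthy file every struct
    definition refers to a distinct type_index.
    """
    seen = {}
    for name, type_index in structs:
        if type_index in seen:
            raise ValueError(
                f"type_index {type_index} used by both "
                f"{seen[type_index]!r} and {name!r}")
        seen[type_index] = name
```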
Pull Request: https://projects.blender.org/blender/blender/pulls/139803
The Movie distortion node crops its data if the movie size differs from
the input size. That's because boundary extensions do not take
calibration size into account. To fix this, we use the same coordinate
range as the distortion grid computation, which computes the distortion
in the space of the calibration size.
Pull Request: https://projects.blender.org/blender/blender/pulls/139822
Adds a new operator in Grease Pencil edit mode to convert between curve
types. This acts as a replacement for the `Set Curve Type` operator, as
the new operator better aligns with previous workflows and artist
expectations. Specifically, it uses a threshold to adjust how well the
resulting curves fit the original.
It can be found in the `Stroke` > `Convert Type` menu.
This operator aims to keep visual fidelity between the curves. When
converting to a non-poly curve type, there's a `threshold` parameter
that dictates how closely the shapes will match (a value of zero means
an almost perfect match, while higher values result in less accuracy
but a lower control point count).
The conversion to `Catmull-Rom` does not do an actual curve fitting.
For now, this will resample the curves and then do an adaptive
simplification of the line (using the threshold parameter)
to simulate a curve fitting.
The `Set Curve Type` operator is no longer exposed in the
`Stroke` menu.
This also adds a new `geometry::fit_curves` function.
The function will fit a selection of curves to bézier curves. The
selected curves are treated as if they were poly curves.
The `thresholds` virtual array is the error threshold distance
for each curve that the fit should be within. The virtual array is
assumed to have the same size as the total number of input curves.
The `corners` virtual array allows specific input points to be treated
as sharp corners. The resulting Bézier curve will include these points,
with their handles set to "free".
There are two fitting methods:
* **Split**: Uses a least squares solver to find the control
points (faster, but less accurate).
* **Refit**: Iteratively removes knots with the least error starting
with a dense curve (slower, more accurate fit).
Co-authored-by: Casey Bianco-Davis <caseycasey739@gmail.com>
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/137808
Set the `UI_BLOCK_LIST_ITEM` flag for the block; this assigns the
`UI_BUT_LIST_ITEM` flag to tree-view label buttons (see `uiItemL_()`).
That way `wcol_list_item` is used for the tree-view
(see `widget_state()` / `widget_state_label()`).
Pull Request: https://projects.blender.org/blender/blender/pulls/126026
This avoids having to compile specializations JIT and
uses the same API as subprocess compilation.
This bridges the gap between subprocess and threaded
compilation.
Pull Request: https://projects.blender.org/blender/blender/pulls/139702
It would end up using the same frame twice, looking like "freeze
frames".
Apparently we had a similar issue before, see 3f8ec963e3
Just a PoC to show that this looks like a precision/rounding issue when
getting a "working" `UsdTimeCode`.
In the modifier code, we are doing a roundtrip going from frame >> time
(in seconds -- via `BKE_cachefile_time_offset`) and then back to frame
before we store that in `USDMeshReadParams`.
To avoid the precision loss, this PR introduces
`BKE_cachefile_frame_offset` to stay in the "frame" domain and
circumvent going through FPS altogether.
There might be better ways to let USD handle the "slightly off"
`UsdTimeCode` though.
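The roundtrip above can be sketched like this (illustrative Python; the helper names are hypothetical, only `BKE_cachefile_frame_offset` is from the actual patch):

```python
def frame_to_seconds(frame: float, fps: float) -> float:
    """The old path's first leg: frame number -> time in seconds."""
    return frame / fps

def seconds_to_frame(seconds: float, fps: float) -> float:
    """The old path's return leg: seconds back to a frame number."""
    return seconds * fps

def frame_offset_direct(frame: float, offset: float) -> float:
    # Staying in the frame domain (the BKE_cachefile_frame_offset
    # approach) never touches FPS, so no rounding can creep in.
    return frame + offset

fps = 24.0
frame = 47.0
# Old path: frame -> seconds -> frame. The result can be off by an ulp,
# which is enough for the derived UsdTimeCode to snap to the wrong sample.
roundtrip = seconds_to_frame(frame_to_seconds(frame, fps), fps)
# New path: stays in the frame domain and is exact by construction.
direct = frame_offset_direct(frame, 0.0)
```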
Pull Request: https://projects.blender.org/blender/blender/pulls/139793
Duplicating collections doesn't work when multiple collections are
selected; instead, only the first selected collection is duplicated. Now
fixed by iterating over the list of selected collections returned by
`outliner_collect_selected_parent_collections` after traversing. In that
function, child collections are skipped if their parent collection is
already selected, which avoids extra copies being generated (i.e. it
creates one copy of nested collections).
Resolves #139651
Pull Request: https://projects.blender.org/blender/blender/pulls/139719
Comparing the object with "edit_object" isn't correct as
multiple objects may be in edit-mode across multiple scenes.
Check the object for edit-mode data instead.
The annotation `Callable[[Any, ...], str | None]` is not supported by
the Python typing system; `...` will be misinterpreted as an unknown
type instead of an option to provide a variable number of arguments.
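For illustration (the function and alias names here are made up), the supported spelling for "any arguments" in `Callable` is a bare `...` in place of the whole parameter list:

```python
from typing import Callable

# Not supported: Callable[[Any, ...], str | None] -- inside a parameter
# list, `...` is read as a type, not as "variable arguments".
# Supported: a bare `...` as the entire parameter list.
Describe = Callable[..., "str | None"]

def describe(value: object, *extra: object) -> "str | None":
    """Example callable matching the `Describe` alias."""
    return repr(value) if value is not None else None

fn: Describe = describe  # Type checkers accept any signature here.
```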
Ref !138804
Regression in [0] which would re-highlight gizmos when they had been
tagged for highlighting.
This caused highlighting to be recalculated unexpectedly while
blocking modal operators that used a timer were running.
The timer events would be passed through to the gizmo handler which
then re-evaluated the highlighted gizmo based on the cursor position.
Resolve by skipping pass-through for gizmos.
[0]: f839847d3b4849425c3b06a52aae4361d384fea4
Object::actcol assignments from edit-mode data weren't clamping
the index to the valid range. This caused an out-of-bounds read when
accessing Object::matbits.
While material indexes should typically be within the material bounds,
this isn't guaranteed. Selecting a face with a material index outside
the range, for example, would crash.
Add a utility function that sets the active material index to replace
existing inline checks.
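A minimal sketch of such a clamping utility (illustrative Python under assumed semantics, not the actual Blender function):

```python
def set_active_material_index(index: int, material_count: int) -> int:
    """Clamp `index` into [0, material_count - 1].

    Keeping the active material index in range ensures subsequent
    Object::matbits-style array accesses stay in bounds, even when the
    incoming index comes from untrusted edit-mode data.
    """
    if material_count <= 0:
        return 0
    return min(max(index, 0), material_count - 1)
```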
Follow up to the fix for #139369.
This changes how tooltips for dragging multiple files are shown: a
`Documents` icon is displayed together with a counter of how many files
are dragged.
When multiple files are dragged from Blender's internal file browser,
this avoids showing the thumbnail of the file selected to start
dragging; if only a single file is selected, its thumbnail will be
visible.
Pull Request: https://projects.blender.org/blender/blender/pulls/136276
A small number of USD files in the wild contain invalid face index data
for some of their meshes. This leads to asserts in debug builds and
crashes for users in retail builds (sometimes). There is already an
import option to Validate Meshes but it turns out that we, and most
other importers, perform validation too late. We crash before getting to
that validate option (see notes).
This PR implements a cheap detection mechanism and will auto-fix if we
detect broken data. The detection may not find all types of bad data but
it will detect what is known to fail today for duplicate vertex indices.
We immediately validate/fix before loading in the rest of the data. The
downside is that this will mean no additional data will be loaded.
Normals, edge creases, velocities, UVs, and all other attributes will be
lost because the incoming data arrays will no longer align.
It should also be noted that Alembic has chosen this approach. Its
check is significantly weaker though, and can be improved separately if
needed.
If auto-fix is triggered, it will typically appear as one trace on the
terminal.
```
WARN (io.usd): <path...>\io\usd\intern\usd_reader_mesh.cc:684
read_mesh_sample: Invalid face data detected for mesh
'/degenerate/m_degenerate'. Automatic correction will be used.
```
A more general downside of these fixes is that this applies to each
frame of animated mesh data. The mesh will be fixed, and re-fixed, on
every frame update when the frame in question contains bad data.
For well-behaved USD scenes, the penalty for this check is between 2-4%.
For broken USD scenes, it depends on how many meshes need the fixup. In
the case of the Intel 4004 Moore Lane scene, the penalty is a 2.7x
slowdown in import time (4.5 s to 12.5 s).
Pull Request: https://projects.blender.org/blender/blender/pulls/138633
The checks and related warnings detecting usage of blendfiles generated
by newer versions of Blender were not fully behaving as expected for
libraries. In particular, opening an older main blendfile linking
against newer library ones would not always detect and report the
`has_forward_compatibility_issues` status properly.
Found out while working on 'longer ID names' compatibility PR for 4.5
(!139336).
Compilation constants are constants defined in the create info.
They cannot be changed after the shader is created.
They are a replacement for macros, with added type safety.
This reuses most of the logic from specialization constants.
Pull Request: https://projects.blender.org/blender/blender/pulls/139703
The "All Libraries" library didn't free its assets correctly on refresh,
so the asset previews didn't refresh correctly either. That's because it
didn't forward the removal request to the asset library that actually
owns the asset. It only freed assets from its own storage, which is
always empty.
This might make refreshing the "All Libraries" library feel a little
slower, since previews are now refreshed too. But in general this is
still fairly fast, and there's an optimization to only load visible
previews.
This adds a new function to query `GPUTexture`s from an
Image datablock without actually creating them.
This allows keeping track of all the textures that
need to be loaded and deferring their loading to
`end_sync`. The textures are then only used in the
next sync, because we do not want to stage a
texture for drawing before it is valid.
Multithreading is used to load the textures from disk
as soon as possible. It is still blocking, but much
faster (depending on hardware).
Before (5.7s):
After (2.5s):
On Linux workstation: 2.28x speedup in texture loading
On M1 MacBook Pro: 2.72x speedup in texture loading
This includes redraw overhead but it is not super significant.
Having a vector of all the textures to be loaded
will eventually be helpful in making the
texture uploading multi-threaded. Currently, it is
a bit difficult given the need of a valid GPUContext
per thread.
- [x] Bypass deferred loading on animated textures
- [x] Add throttling to only load a few textures per frame
- [x] Do not delay for viewport render
Pull Request: https://projects.blender.org/blender/blender/pulls/139644
Legacy curves can carry material information, and the fill material is
especially useful for Grease Pencil. This patch converts the base color
of materials from legacy curves when converting to Grease Pencil.
Limitations:
- This patch does not take node materials into account.
- Neither legacy curves nor Grease Pencil supports a per-stroke fill
attribute yet, thus the converted Grease Pencil will be shown as
either all fills or all strokes, depending on the configuration in the
original legacy curve object.
Pull Request: https://projects.blender.org/blender/blender/pulls/139212