Paddings were applied a bit oddly, resulting in a double padding for the
drawing. Each "tile" (the space to draw the file in) already had a padding
applied (`FileLayout.tile_border_x`), yet another one was applied on top of
it. This second padding should only be used for spacing individual items
*inside* the tile.
The code makes more sense now too, with fewer arbitrary/eyeballed numbers
to make things "look good".
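A minimal sketch of the intent, with hypothetical names (the actual file
browser layout code differs): the tile rect already accounts for
`FileLayout.tile_border_x`, so only the inner item padding is applied when
placing content inside the tile.

```cpp
/* Hypothetical sketch: the tile rect already includes the tile border, so
 * only the padding between items *inside* the tile is applied here. */
struct Rect {
  int xmin, xmax, ymin, ymax;
};

static Rect tile_content_rect(const Rect &tile, const int item_padding)
{
  Rect content = tile;
  content.xmin += item_padding;
  content.xmax -= item_padding;
  content.ymin += item_padding;
  content.ymax -= item_padding;
  return content;
}
```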
Pull Request: https://projects.blender.org/blender/blender/pulls/135915
Fix an issue when the NLA is used in conjunction with a directly-
assigned Action.
When the directly-assigned Action is also used in an NLA strip, that
strip would not be evaluated anymore. This was even the case when
different slots were used, and the strip was muted entirely when no
slot was directly assigned. Now the "this has been handled already"
logic considers both the Action and the slot.
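A minimal sketch of the idea, with stand-in types rather than the actual NLA
evaluation code: the bookkeeping keys on the (Action, slot) pair instead of
the Action alone, so a strip using a different slot of the same Action is
still evaluated.

```cpp
#include <set>
#include <utility>

struct Action; /* Stand-in. */
struct Slot;   /* Stand-in. */

struct EvaluationState {
  /* Keyed on the (Action, slot) pair, not the Action alone. */
  std::set<std::pair<const Action *, const Slot *>> already_handled;

  /* Returns true if this combination still has to be evaluated. */
  bool mark_handled(const Action *action, const Slot *slot)
  {
    return already_handled.insert({action, slot}).second;
  }
};
```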
Pull Request: https://projects.blender.org/blender/blender/pulls/135911
The "Merge Animation" operator now properly tags the affected Actions
with `ID_RECALC_ANIMATION_NO_FLUSH` so that new 'evaluated copies' are
created. Because this was missing, the animation evaluation code didn't
see the moved channelbags yet, which caused animation playback to be
broken.
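A sketch of the pattern, assuming the operator tags each affected Action via
the dependency graph; the stand-in function below represents the
`DEG_id_tag_update(..., ID_RECALC_ANIMATION_NO_FLUSH)` call.

```cpp
#include <vector>

struct Action; /* Stand-in. */

/* Stand-in for DEG_id_tag_update(&action->id, ID_RECALC_ANIMATION_NO_FLUSH). */
static void tag_animation_recalc_no_flush(Action * /*action*/) {}

static void merge_animation_tag_actions(const std::vector<Action *> &affected_actions)
{
  for (Action *action : affected_actions) {
    /* Without this tag, animation evaluation keeps using the old evaluated
     * copy and never sees the moved channelbags. */
    tag_animation_recalc_no_flush(action);
  }
}
```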
Pull Request: https://projects.blender.org/blender/blender/pulls/135906
Vulkan handles are currently only requested once. In the future, OpenXR
will also need access to these handles, and additional handles will be
needed when introducing copy queues and async compute.
This PR collects the handles in a struct so the GHOST interface doesn't
need to be altered for every change.
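An illustrative sketch of the approach; the member list is an assumption and
not the actual GHOST struct. The point is that new consumers only extend the
struct instead of the GHOST interface.

```cpp
#include <cstdint>

/* Hypothetical bundle of Vulkan handles requested from GHOST in one go.
 * Opaque pointers stand in for the real Vulkan handle types. */
struct VulkanHandlesSketch {
  void *instance = nullptr;        /* VkInstance */
  void *physical_device = nullptr; /* VkPhysicalDevice */
  void *device = nullptr;          /* VkDevice */
  uint32_t graphics_queue_family = 0;
  void *graphics_queue = nullptr;  /* VkQueue */
};
```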
Pull Request: https://projects.blender.org/blender/blender/pulls/135905
This patch refactors the GPU implicit conversion code by generating the
names of the shaders on the fly using fmt. This reduces boilerplate and makes
it easier to add new types.
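A small sketch of the approach; the function and naming scheme below are
illustrative, not the actual conversion shader code.

```cpp
#include <fmt/format.h>
#include <string>

/* Build the conversion shader name from the source/destination types instead
 * of hard-coding one name per combination, e.g. "convert_float_to_vec4". */
static std::string conversion_shader_name(const std::string &from_type,
                                          const std::string &to_type)
{
  return fmt::format("convert_{}_to_{}", from_type, to_type);
}
```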
When sync select is enabled, the data layers aren't accessed
so there is no need to add them.
Also skip adding a "pin" layer when clearing pinned UVs.
The context path in the header of the material properties tab always
indicated that the material is on the object. Include `object.data` in the
context path when the material is on the mesh, for example.
Pull Request: https://projects.blender.org/blender/blender/pulls/134968
Similar to changes done elsewhere (#116901), replace usage of the GPU
API's `GPU_indexbuf_add_generic_vert` function with simply writing the
index data that we need. This avoids a function call and min/max tests
for every index added.
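A standalone sketch of the trade-off, with stand-in types rather than the
real GPU module API: the per-index helper does min/max bookkeeping for every
call, while direct writes fill the index array without it.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct IndexBufBuilderSketch {
  std::vector<uint32_t> data;
  uint32_t index_min = UINT32_MAX;
  uint32_t index_max = 0;
};

/* Helper-style path: a call plus min/max tests for every index added. */
static void add_index_with_bookkeeping(IndexBufBuilderSketch &builder, const uint32_t v)
{
  builder.index_min = std::min(builder.index_min, v);
  builder.index_max = std::max(builder.index_max, v);
  builder.data.push_back(v);
}

/* Direct-write path: indices are written straight into the buffer, and the
 * known min/max range is set once by the caller afterwards. */
static void write_triangle_direct(uint32_t *dst, uint32_t v0, uint32_t v1, uint32_t v2)
{
  dst[0] = v0;
  dst[1] = v1;
  dst[2] = v2;
}
```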
Pull Request: https://projects.blender.org/blender/blender/pulls/135404
Instead of computing an index mask for all curves, then returning an
intersection with the visible curves, just use the visible curves as
a universe for the original calculation. Also add another early out
for when there are no NURBS curves.
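A generic sketch of the change, with `std::vector` standing in for Blender's
`IndexMask`: the visible curves are used directly as the universe for the
selection test, instead of selecting from all curves and intersecting with
the visible set afterwards.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

static std::vector<int64_t> selected_visible_curves(
    const std::vector<int64_t> &visible_curves,
    const std::function<bool(int64_t)> &is_selected)
{
  std::vector<int64_t> result;
  /* Hidden curves are never even tested. */
  for (const int64_t curve : visible_curves) {
    if (is_selected(curve)) {
      result.push_back(curve);
    }
  }
  return result;
}
```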
Currently the drawing data extraction code uses the offset indices API
quite inefficiently, copying the size of every selected curve, then
accumulating those sizes. Instead just use the existing API function that
counts the size of all selected curves. Also for the weight overlay, avoid
doing the same calculation twice.
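A standalone sketch of the counting, standing in for the
`OffsetIndices`/`IndexMask` based code: sum the point counts of the selected
curves in one pass, instead of copying each size into a temporary array and
accumulating it afterwards.

```cpp
#include <cstdint>
#include <vector>

/* `offsets` has curves_num + 1 entries; curve i spans
 * [offsets[i], offsets[i + 1]). */
static int64_t count_selected_points(const std::vector<int64_t> &offsets,
                                     const std::vector<int64_t> &selected_curves)
{
  int64_t total = 0;
  for (const int64_t curve : selected_curves) {
    total += offsets[curve + 1] - offsets[curve];
  }
  return total;
}
```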
There is no real reason to keep these read-only: they are not exposed in
the UI anyway, and being able to edit them can become necessary in a
pipeline management context.
The main issue with 'type-less' standard C allocations is that no check on
the allocated type is possible.
This is a serious source of annoyance (and crashes) when making some
low-level structs non-trivial, as tracking down all usages of these
structs in other, higher-level structs and their allocation is... really
painful.
The MEM_[cm]allocN<T> templates, on the other hand, do check that the
given type is trivial at build time (static assert), which makes such
issues... trivial to catch.
NOTE: New code should strive to use MEM_new (i.e. allocation and
construction) as much as possible, even for trivial PoD types.
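A minimal illustration of the idea, not the actual MEM_guardedalloc
implementation: a typed allocation template can reject non-trivial types at
compile time, which a raw `malloc(sizeof(T))` cannot.

```cpp
#include <cstdlib>
#include <type_traits>

template<typename T> T *typed_calloc(const char * /*allocation_name*/)
{
  static_assert(std::is_trivial_v<T>,
                "Use MEM_new for non-trivial types so that constructors run");
  return static_cast<T *>(calloc(1, sizeof(T)));
}
```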
Pull Request: https://projects.blender.org/blender/blender/pulls/135870
This patch refactors the result class to replace proxy results with the
possibility of sharing data through a shared, heap-allocated reference
count. This is more robust and simpler, since proxy results no longer
need to be handled as a special case in a lot of the results code.
Additionally, it allows stronger const correctness, since inputs to
operations can now be const.
This is somewhat similar to implicit sharing used in other parts of
Blender, so we can look into using that in the future.
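A simplified sketch of the scheme with illustrative names: several results
point at the same heap-allocated data block, and a shared reference count
frees it with the last user.

```cpp
#include <atomic>

struct SharedResultData {
  float *pixels = nullptr;
  std::atomic<int> users{1};
};

static void result_data_add_user(SharedResultData *data)
{
  data->users.fetch_add(1);
}

static void result_data_remove_user(SharedResultData *data)
{
  /* The last user frees the shared block. */
  if (data->users.fetch_sub(1) == 1) {
    delete[] data->pixels;
    delete data;
  }
}
```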
Pull Request: https://projects.blender.org/blender/blender/pulls/135778
It seems that this was 'fine', as the non-trivial data in
`BVHTreeFromMesh` appears to be 'safe' when simply zero-initialized
instead of being properly constructed.
Note that this 'calloc'ed data was already 'MEM_delete'd; unfortunately
this is currently considered a valid use-case, otherwise the issue would
have been detected earlier.
Directly use 'copy' `MEM_new` code instead.
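A sketch of the replacement pattern, with a stand-in allocator for
`MEM_new`: allocate and copy-construct in one step instead of `calloc` plus
`memcpy` of a struct that is no longer trivially copyable.

```cpp
#include <utility>

/* Stand-in for MEM_new<T>(__func__, ...): allocate and run the constructor. */
template<typename T, typename... Args> T *typed_new(const char * /*name*/, Args &&...args)
{
  return new T(std::forward<Args>(args)...);
}

struct NonTrivialData {
  /* ... non-trivial members ... */
};

static NonTrivialData *duplicate(const NonTrivialData &src)
{
  return typed_new<NonTrivialData>(__func__, src); /* Copy-construct. */
}
```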
Pull Request: https://projects.blender.org/blender/blender/pulls/135862
The main issue with 'type-less' standard C allocations is that no check on
the allocated type is possible.
This is a serious source of annoyance (and crashes) when making some
low-level structs non-trivial, as tracking down all usages of these
structs in other, higher-level structs and their allocation is... really
painful.
The MEM_[cm]allocN<T> templates, on the other hand, do check that the
given type is trivial at build time (static assert), which makes such
issues... trivial to catch.
NOTE: New code should strive to use MEM_new (i.e. allocation and
construction) as much as possible, even for trivial PoD types.
Pull Request: https://projects.blender.org/blender/blender/pulls/135855
The main issue with 'type-less' standard C allocations is that no check on
the allocated type is possible.
This is a serious source of annoyance (and crashes) when making some
low-level structs non-trivial, as tracking down all usages of these
structs in other, higher-level structs and their allocation is... really
painful.
The MEM_[cm]allocN<T> templates, on the other hand, do check that the
given type is trivial at build time (static assert), which makes such
issues... trivial to catch.
NOTE: New code should strive to use MEM_new (i.e. allocation and
construction) as much as possible, even for trivial PoD types.
Pull Request: https://projects.blender.org/blender/blender/pulls/135852
The main issue with 'type-less' standard C allocations is that no check on
the allocated type is possible.
This is a serious source of annoyance (and crashes) when making some
low-level structs non-trivial, as tracking down all usages of these
structs in other, higher-level structs and their allocation is... really
painful.
The MEM_[cm]allocN<T> templates, on the other hand, do check that the
given type is trivial at build time (static assert), which makes such
issues... trivial to catch.
NOTE: New code should strive to use MEM_new (i.e. allocation and
construction) as much as possible, even for trivial PoD types.
Pull Request: https://projects.blender.org/blender/blender/pulls/135813
This patch adds the ability to snap to the frame range bounds in the VSE
timeline, enabled by default. The end frame snap location is offset by 1
to ensure the snap point aligns with the visible end frame boundary
(otherwise e.g. the right handle of a strip would be one frame short).
Timeline and preview snapping is also turned on by default using the
same versioning block.
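A minimal sketch of the snap targets with hypothetical names: the end frame
point is shifted by one so, for example, a strip's right handle snaps to the
visible end of the range instead of ending one frame short.

```cpp
static void frame_range_snap_points(const int frame_start,
                                    const int frame_end,
                                    int r_points[2])
{
  r_points[0] = frame_start;
  /* Offset by 1 so the snap point aligns with the visible end boundary. */
  r_points[1] = frame_end + 1;
}
```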
Pull Request: https://projects.blender.org/blender/blender/pulls/135753
There is a bug in OpenEXR where it requests at least 4096 bytes even
if the file is smaller than that. Work around it by padding with zeros.
This was fixed upstream in commit 97f857131e6b4c43ab after the 3.3.2
release, but is not in any official release yet.
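A standalone sketch of the workaround, not the actual image reader code:
hand OpenEXR a zero-padded copy when the in-memory file is smaller than
4096 bytes.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

static std::vector<uint8_t> pad_exr_buffer(const uint8_t *data, const size_t size)
{
  constexpr size_t min_size = 4096;
  std::vector<uint8_t> padded(std::max(size, min_size), 0);
  /* The tail stays zero-filled when the file is smaller than min_size. */
  std::memcpy(padded.data(), data, size);
  return padded;
}
```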
Pull Request: https://projects.blender.org/blender/blender/pulls/135796
This replaces the implicit use of the `radius` attribute on the input
curves with an input field `Scale` that gets evaluated on the point
domain of the input curves to scale the profile.
It wasn't super intuitive that the `radius` would actually act as
a scale of the profile. E.g. if the radius of the input curve was
`1 meter` the resulting profile was unscaled (scaled by 1), but
wouldn't necessarily have a size of `1 meter` (only if the profile
also had a size of 1m)! If imperial units were used, `3.28084 ft`
would correspond to a scale of 1.
This change makes this behavior much clearer and potentially
removes the need for the assumption that the default curve radius
is `1.0f` (ideally, the default curve radius should be `0.01f`).
While we did consider making the `Scale` input use the `radius`
field implicitly by default, we decided against it, because it again
"hides" the dependency on the radius and the fact that the radius
is used as a scale. Letting the user make this decision seems better.
Pull Request: https://projects.blender.org/blender/blender/pulls/134187
The issue was that the `FCURVE_DISABLED` flag is set when an
RNA path cannot be resolved, and that skips evaluation.
However, in this specific case that is actually expected, as
the property only exists on one ID and not the other.
Pull Request: https://projects.blender.org/blender/blender/pulls/135782
Part of #134755 / #134766.
Add the blend file name and directory path to the tooltip of assets in
custom asset libraries (as configured in the Preferences), for the asset
browser and asset shelf. This information is often important when
dealing with production assets. In the asset shelf it wasn't available
at all.
Based on feedback from the Blender Studio.