The Separate operator uses the ubiquitous `gather_attributes` function
to fill in attributes of layers. However, it only makes sure the layer
exists _after_ calling `gather_attributes`, which crashes if the layer
does not already exist.
Pull Request: https://projects.blender.org/blender/blender/pulls/121013
Remove another use of the `BKE_pbvh_vertex_iter_begin` macro
and significantly simplify hot loops used when cancelling a brush
stroke or calculating an anchored stroke.
To avoid mixing different abstraction levels, remove the conversion
of the brush tool into `undo::Type`. This makes it simpler to specialize
the implementation further for separate PBVH types later.
Also fix a race condition when retrieving write access to an attribute
from multiple threads at the same time, and reduce per-PBVH-node
overhead.
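The race-condition part follows a common fix pattern: resolve write access once, before going wide. A standalone sketch with hypothetical names (not the actual attribute or task API):
```
#include <map>
#include <string>
#include <thread>
#include <vector>

/* Stand-in for attribute storage; requesting write access may lazily create
 * the layer, which is why doing it concurrently from several threads races. */
static std::vector<float> &attribute_for_write(std::map<std::string, std::vector<float>> &storage,
                                               const std::string &name,
                                               const int size)
{
  std::vector<float> &layer = storage[name]; /* May insert a new layer. */
  layer.resize(size);
  return layer;
}

int main()
{
  std::map<std::string, std::vector<float>> storage;

  /* Fix: resolve write access once on the calling thread... */
  std::vector<float> &mask = attribute_for_write(storage, "mask", 8);

  /* ...then let worker threads write only into disjoint ranges. */
  std::vector<std::thread> workers;
  for (int t = 0; t < 2; t++) {
    workers.emplace_back([&mask, t]() {
      for (int i = t * 4; i < (t + 1) * 4; i++) {
        mask[i] = float(i);
      }
    });
  }
  for (std::thread &worker : workers) {
    worker.join();
  }
  return 0;
}
```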
Design updates as per #118288:
- Tweak text labels (colors, drop shadows)
- Strip border colors, inset outlines
- Muted strips are mostly gray, and their thumbnails are faded
- Overlapping strips are not semitransparent anymore
- Locked stripes only in content area
- Missing data blocks
- Updates to meta strips w/ missing data blocks
Pull Request: https://projects.blender.org/blender/blender/pulls/118581
There are two ways the first stroke point is moved after initial
placement:
- `process_extension_sample` overwrites the initial values of the sample
point set by `process_start_sample`. It keeps copying the _new_ sample
position until a threshold (3 pixels) is reached; only then is the 2nd
point created and the 1st point left stationary.
- After initial placement the point may still get shifted due to the
resampling and interpolation used. Long sections between stroke
samples may get subdivided and the positions of the two samples are
linearly interpolated. However, the interpolation starts at 1/n,
meaning the first interpolated point never matches the first sample
(see the sketch below).
This is correct for later samples where the last point should not be
repeated, but it ends up moving the first curve point again when the
2nd sample is processed.
This patch fixes both issues by keeping the first generated point
stationary and never touching its position again.
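For illustration, a tiny standalone sketch of the interpolation issue (hypothetical values, not the actual Grease Pencil code):
```
#include <cstdio>

int main()
{
  const float first_sample = 0.0f;
  const float second_sample = 10.0f;
  const int n = 4; /* Number of subdivisions between the two samples. */

  /* Interpolation starts at 1/n: correct for later segments, where the
   * previous point must not be repeated, but it never reproduces t = 0,
   * so it would overwrite the first curve point with a shifted value. */
  for (int i = 1; i <= n; i++) {
    const float t = float(i) / float(n);
    std::printf("t = %.2f -> %.2f\n", t, first_sample + t * (second_sample - first_sample));
  }

  /* Fix: the first curve point keeps the value of `first_sample` and is
   * never overwritten by the interpolated values above. */
  return 0;
}
```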
Pull Request: https://projects.blender.org/blender/blender/pulls/121011
Some uses of `points_in_planes` may require different epsilons for the
parallel/intersection determination. This adds those epsilon values as
arguments to the bpy function so that script users benefit as well.
Pull Request: https://projects.blender.org/blender/blender/pulls/120910
Both flags shared the same value, so repeating a transform action
always recalculated the orientation matrix instead of reusing the value
from the initial execution. This is most noticeable when repeating a
transform that used a view matrix: after orbiting the view, the repeated
transform doesn't use the view orientation from the original one.
Cleans up the following issues:
- USD arrays were passed by value instead of by reference
- UsdGeomBasisCurves and UsdGeomNurbsCurves were potentially sliced when
assigning to their parent UsdGeomCurves object in a non-polymorphic way
- Make more parameters and functions const
- Align with how Alembic validates the curve_types and cyclic values
- Standardize CurvesGeometry naming to match what was done in 5ed9c8c9dd
(Curves data-block is called "curve_id", CurvesGeometry is called
"curves")
Pull Request: https://projects.blender.org/blender/blender/pulls/120760
This was due to using a different normal than the deferred
pipeline for light facing attenuation.
Use the same heuristic as the deferred pipeline for
consistency and a smoother look.
Fix #119750
During Export, we were accidentally duplicating the `velocity` attribute
data: once inside the `write_surface_velocity` function (which was
correct) and again while writing out all "custom" attributes inside
`write_custom_data` (which was incorrect). Fixed by excluding the
"velocity" attribute inside `write_custom_data`.
During Import, we were only loading back in those "custom" primvars, so
things happened to work by accident, but only for USD files produced by
Blender. Now we import just the Velocities attribute, which should work
with all files.
This should fully address #96182
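For reference, a minimal sketch of reading the standard `velocities` attribute through the USD schema API on import (illustrative helper; only the `pxr` calls are real):
```
#include <pxr/base/vt/types.h>
#include <pxr/usd/usdGeom/pointBased.h>

/* Read velocities from the schema attribute instead of a Blender-specific
 * primvar, so files produced by other applications work as well. */
static pxr::VtVec3fArray read_velocities(const pxr::UsdGeomPointBased &point_based,
                                         const pxr::UsdTimeCode time)
{
  pxr::VtVec3fArray velocities;
  point_based.GetVelocitiesAttr().Get(&velocities, time);
  return velocities;
}
```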
Pull Request: https://projects.blender.org/blender/blender/pulls/120771
The normalization factor can divide by zero. Add a small
bias to avoid this. Since the bias is the same on both
the numerator and the denominator, the result converges
to 1 as the denominator reaches zero.
This also adds a `saturate` to avoid lighting being
weirdly increased in some parts of the volume probe.
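A minimal sketch of the idea (illustrative; the actual bias value and shader code differ):
```
#include <algorithm>

/* Adding the same small bias to both terms keeps the division defined.
 * When numerator and denominator both approach zero, the ratio approaches
 * bias / bias = 1. The clamp is the C++ equivalent of GLSL saturate(). */
static float normalized_weight(const float numerator, const float denominator)
{
  const float bias = 1e-5f; /* Hypothetical bias value. */
  return std::clamp((numerator + bias) / (denominator + bias), 0.0f, 1.0f);
}
```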
Fix #119799
Add support for converting between Blender custom properties and
USD user-defined custom attributes. Custom attributes on Xforms, many
data types, and materials are all supported for round-tripping.
Please see the USD attributes documentation for more information on
custom attributes.
Properties are exported with a userProperties: namespace for simple
filtering in external apps. This namespace is stripped on import,
but other namespaces are allowed to persist.
An "Import Attributes" parameter has been added with options "None" (do
not import attributes), "User" (import attributes in the 'userProperties'
namespace only), and "All custom" (import all USD custom attributes, the
default).
An "Export Custom Properties" export option has been added.
The property conversion code handles float, double, string and bool
types, as well as tuples of size 2, 3 and 4. Note that USD quaternions
and arrays of arbitrary length are not yet supported.
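As an example, a sketch of how one float3 custom property could be authored as a user-defined attribute in the userProperties: namespace (illustrative helper; the function name and the choice of `Float3` are assumptions, only the `pxr` calls are real USD API):
```
#include <string>

#include <pxr/base/gf/vec3f.h>
#include <pxr/usd/sdf/types.h>
#include <pxr/usd/usd/attribute.h>
#include <pxr/usd/usd/prim.h>

/* Author one float3 custom property as a user-defined (custom) attribute in
 * the userProperties: namespace. */
static void write_float3_user_property(const pxr::UsdPrim &prim,
                                       const std::string &prop_name,
                                       const pxr::GfVec3f &value)
{
  const pxr::TfToken attr_name("userProperties:" + prop_name);
  pxr::UsdAttribute attr = prim.CreateAttribute(
      attr_name, pxr::SdfValueTypeNames->Float3, /*custom=*/true);
  attr.Set(value);
}
```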
There is currently no attempt to set the Blender property subtype based
on the USD type "role" (e.g., specifying Color or XYZ vector subtypes).
This can be addressed in future work.
In addition to exporting custom properties, the original Blender object
and data names are now saved as USD custom string attributes
"userProperties:blender:object_name" and "userProperties:blender:data_name",
respectively, on the corresponding USD prims. This feature is enabled
with the "Author Blender Name" export option.
If a Blender custom string property is named "displayName", it is handled
specially on export: its value is used to set the USD prim's
"displayName" metadata.
Co-authored-by: kiki <charles@skeletalstudios.com>
Co-authored-by: Michael Kowalski <makowalski@nvidia.com>
Co-authored-by: Charles Wardlaw <kattkieru@users.noreply.github.com>
Pull Request: https://projects.blender.org/blender/blender/pulls/118938
This PR implements the viewport overlay for Weight Paint mode in GPv3.
In Weight Paint mode, the stroke points are colored according to their
weights in the active vertex group.
Pull Request: https://projects.blender.org/blender/blender/pulls/118273
This PR adds support for compute shaders to the render graph. Only direct
dispatch is supported; indirect dispatch will be added in a future PR.
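A minimal sketch of what a direct-dispatch node could record and later replay into a command buffer (illustrative struct and names, not the actual render graph types; only the `vkCmd*` calls are real Vulkan API):
```
#include <vulkan/vulkan.h>

struct DispatchNode {
  VkPipeline pipeline = VK_NULL_HANDLE;
  uint32_t group_count_x = 1, group_count_y = 1, group_count_z = 1;

  /* Replay the recorded dispatch when the graph is flushed. */
  void build_commands(VkCommandBuffer command_buffer) const
  {
    vkCmdBindPipeline(command_buffer, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdDispatch(command_buffer, group_count_x, group_count_y, group_count_z);
  }
};
```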
This change enables the following test cases to pass when using the render graph:
- `GPUVulkanTest.push_constants*`
- `GPUVulkanTest.shader_compute_*`
- `GPUVulkanTest.buffer_texture`
- `GPUVulkanTest.specialization_constants_compute`
- `GPUVulkanTest.compute_direct`
```
[==========] 95 tests from 2 test suites ran. (24059 ms total)
[ PASSED ] 95 tests.
```
Specialization constants are supported when using the render graph. This should
conclude the conversion of the render graph prototype.
Pull Request: https://projects.blender.org/blender/blender/pulls/120963
The VKPipeline class is deprecated and will be phased out in the near future.
This PR moves the push constants to VKShader, as they were wrongly placed in
the pipeline.
Pull Request: https://projects.blender.org/blender/blender/pulls/120980
Add a simple node to compute the intersection, difference, or union
between SDF grids. This should be the first new use case for the
new volume grid nodes that wasn't possible before.
For naming and multi-inputs, the node uses the same design as the
mesh boolean node. We considered splitting each operation into a
separate node, but thought most users would consider these different
"modes" of the same operation.
One thing to keep in mind is that it's important for the grids to
have exactly the same transform. If they have different transforms,
the second grid must be resampled to match the first, because the
OpenVDB CSG tools have that requirement. Resampling is expensive
(for SDF grids it means a grid -> mesh -> grid round trip) and should
be avoided.
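For reference, a hedged sketch of the underlying OpenVDB calls for the union case (illustrative, not the node implementation; the resample branch is the expensive path mentioned above):
```
#include <openvdb/openvdb.h>
#include <openvdb/tools/Composite.h>
#include <openvdb/tools/GridTransformer.h>

static openvdb::FloatGrid::Ptr sdf_union(openvdb::FloatGrid::Ptr a, openvdb::FloatGrid::Ptr b)
{
  if (a->transform() != b->transform()) {
    /* The CSG tools require matching transforms, so resample `b` into the
     * transform of `a` first. This is the expensive path. */
    openvdb::FloatGrid::Ptr resampled = openvdb::FloatGrid::create(b->background());
    resampled->setTransform(a->transform().copy());
    openvdb::tools::resampleToMatch<openvdb::tools::BoxSampler>(*b, *resampled);
    b = resampled;
  }
  /* csgUnion() modifies `a` in place and empties `b`. */
  openvdb::tools::csgUnion(*a, *b);
  return a;
}
```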
Pull Request: https://projects.blender.org/blender/blender/pulls/118879
The fade and facing overlays use a depth-equal test. EEVEE Next uses
multiple samples to construct the depth, which can differ from the
pixel-center depth.
This PR uses depth-less tests instead, but for the facing overlay this
can produce some artifacts at sharp edges where the normals bleed.
Another solution would be to render a pixel-center depth buffer when
one of these overlays is turned on, but that adds overhead, as it
would most likely be redrawn for each draw loop.
Pull Request: https://projects.blender.org/blender/blender/pulls/120976
Blender expects that only the filename is provided, which is true
for internal shader sources.
Metal shaders can fail with an error message that contains a full path
to a system shader. This doesn't fit inside the memory that Blender
reserves for logging the filename and line number.
This out-of-bounds write can be triggered when using `min`
where the parameters aren't of the same type,
for example `uint min(uint, int)`.
This PR reserves more space to store the filename.
Pull Request: https://projects.blender.org/blender/blender/pulls/120967
In Vulkan, a Blender shader is organized into multiple
objects. A VkPipeline is the highest-level concept and represents
roughly what we call a shader. A pipeline is a device/platform-optimized
version of the shader that is uploaded to and executed on the GPU device.
A key difference with shaders is that the usage is also compiled
in: when using the same shader with different blending, a new pipeline
needs to be created.
In the current implementation of the Vulkan backend the pipeline is
re-created when any pipeline parameter changes. This triggers many
pipeline compilations, especially when common shaders are used in
different parts of the drawing code.
A requirement of our render graph implementation is that changes
of the pipeline can be detected based on the VkPipeline handle.
We only want to rebind the pipeline when the handle actually
changes. This improves performance, especially on NVIDIA devices,
where pipeline binds are known to be costly.
The solution in this PR is to add a pipeline pool. It holds all
pipelines and can find an already created pipeline based on the pipeline
info. So far, only compute pipeline support has been added.
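A condensed sketch of the idea (illustrative key and pool class, not the Blender Vulkan backend code; only the Vulkan types and calls are real; error handling omitted):
```
#include <vulkan/vulkan.h>

#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

/* Everything that influences compute pipeline creation goes into the key. */
struct ComputePipelineKey {
  VkShaderModule shader_module = VK_NULL_HANDLE;
  VkPipelineLayout layout = VK_NULL_HANDLE;

  bool operator==(const ComputePipelineKey &other) const
  {
    return shader_module == other.shader_module && layout == other.layout;
  }
};

struct ComputePipelineKeyHash {
  std::size_t operator()(const ComputePipelineKey &key) const
  {
    return std::hash<uint64_t>()(uint64_t(key.shader_module)) ^
           (std::hash<uint64_t>()(uint64_t(key.layout)) << 1);
  }
};

class ComputePipelinePool {
  VkDevice device_ = VK_NULL_HANDLE;
  std::unordered_map<ComputePipelineKey, VkPipeline, ComputePipelineKeyHash> pipelines_;

 public:
  explicit ComputePipelinePool(VkDevice device) : device_(device) {}

  /* Return the cached pipeline for this key, or create and cache a new one.
   * Callers only rebind when the returned handle differs from the last bound one. */
  VkPipeline get_or_create(const ComputePipelineKey &key)
  {
    const auto it = pipelines_.find(key);
    if (it != pipelines_.end()) {
      return it->second;
    }
    VkComputePipelineCreateInfo create_info = {};
    create_info.sType = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO;
    create_info.stage.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    create_info.stage.stage = VK_SHADER_STAGE_COMPUTE_BIT;
    create_info.stage.module = key.shader_module;
    create_info.stage.pName = "main";
    create_info.layout = key.layout;

    VkPipeline pipeline = VK_NULL_HANDLE;
    vkCreateComputePipelines(device_, VK_NULL_HANDLE, 1, &create_info, nullptr, &pipeline);
    pipelines_.emplace(key, pipeline);
    return pipeline;
  }
};
```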
# Future enhancements
- Recent drivers replace `VkShaderModule` with pipeline libraries.
  This improves sharing of pipeline stages and reduces pipeline creation times.
- GPUMaterials should be removed from the pipeline pool when they are
  destroyed. Details on this will become clearer when EEVEE support is
  added.
Pull Request: https://projects.blender.org/blender/blender/pulls/120899