Adds two modes of GPU frame capture support for
enhanced debugging. GPU frame capture begin/end
allow instantaneous frame capture of all GPU commands
within the capture boundary.
GPU frame capture scopes allow several user-defined capture
regions which can wrap key parts of code. These scopes are
exposed to connected GPU tools allowing the user to manually
trigger a capture of a known scope at the desired time.
This is currently integrated with the Metal backend for
support with Xcode.
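A minimal usage sketch of the two modes; the function names below are assumptions based on this description, and the `draw_*()` helpers are placeholders for real GPU work:

```cpp
/* Hypothetical usage sketch; names are assumed from the description above. */

/* Mode 1: capture everything submitted between begin and end. */
GPU_debug_capture_begin();
draw_suspect_passes(); /* Any GPU commands issued here end up in the capture. */
GPU_debug_capture_end();

/* Mode 2: a named scope that a connected GPU tool (e.g. Xcode) can use to
 * trigger a capture of this known region at the desired time. */
static void *capture_scope = GPU_debug_capture_scope_create("Overlay Pass");
GPU_debug_capture_scope_begin(capture_scope);
draw_overlays();
GPU_debug_capture_scope_end(capture_scope);
```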
Related to #105591
Pull Request: https://projects.blender.org/blender/blender/pulls/105717
Optimized node graphs are not cached and were not correctly freed once their reference count reached zero, because they were excluded from the GPUPass garbage collection.
Also suppress Metal shader warnings, which are prevalent
during material optimization.
Authored by Apple: Michael Parkin-White
Pull Request: https://projects.blender.org/blender/blender/pulls/105795
While refactoring mesh code, it has become clear that local cleanups and simplifications are limited by the need to keep a public C API for mesh functions. This change makes code more obvious and makes further refactoring much easier.
- Add a new `BKE_mesh.hh` header for a C++ only mesh API
- Introduce a new `blender::bke::mesh` namespace, documented here:
https://wiki.blender.org/wiki/Source/Objects/Mesh#Namespaces
- Move some functions to the new namespace, cleaning up their arguments
- Move code to `Array` and `float3` where necessary to use the new API
- Define existing inline mesh data access functions in the new header
- Keep some C API functions where necessary because of RNA
- Move all C++ files to use the new header, which includes the old one
In the future it may make sense to split up `BKE_mesh.hh` more, but for
now keeping the same name as the existing header keeps things simple.
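As an illustration of the intended style only, with hypothetical function and argument names (not the actual API):

```cpp
/* BKE_mesh.hh -- C++ only mesh API (illustrative sketch, hypothetical names). */
#include "BLI_math_vector_types.hh"
#include "BLI_span.hh"

namespace blender::bke::mesh {

/* New-style functions take spans of exactly the data they need instead of a
 * whole Mesh, which keeps arguments explicit and avoids the old C conventions. */
float3 poly_center_calc(Span<float3> vert_positions, Span<int> poly_verts);

}  // namespace blender::bke::mesh
```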
Pull Request: https://projects.blender.org/blender/blender/pulls/105416
The previous API for clearing storage buffers followed the OpenGL API. OpenGL has many options to support data conversion, striding, and swizzling. Metal and Vulkan don't have these features, so we have to handle them ourselves.
Blender internally only uses a tiny subset of what is possible in OpenGL, which made the current API too difficult to implement on our future platforms, as we would have to implement all cases, most of which are never used.
By changing the API we make future development easier, as we only need to implement what we actually use.
**New API**
`GPU_storagebuf_clear(GPUStorageBuf* ssbo, uint32_t clear_value)`
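Example usage (assuming, as the signature suggests, that the 32-bit value is replicated across the buffer; `ssbo` is an already-created `GPUStorageBuf`):

```cpp
/* Clear every 32-bit word of the SSBO to zero. */
GPU_storagebuf_clear(ssbo, 0u);

/* Or fill it with a sentinel bit pattern. */
GPU_storagebuf_clear(ssbo, 0xFFFFFFFFu);
```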
Related issue: #105492
Pull Request: https://projects.blender.org/blender/blender/pulls/105521
With the goal of clearly differentiating between arrays and single
elements, improving consistency across Blender, and using wording
that's easier to read and say, change variable names for Mesh edges
and polygons/faces.
Common renames are the following, with some extra prefixes, etc.
- `mpoly` -> `polys`
- `mpoly`/`mp`/`p` -> `poly`
- `medge` -> `edges`
- `med`/`ed`/`e` -> `edge`
`MLoop` variables aren't affected because they will be replaced
when they're split up into two arrays in #104424.
This pull request adds a new type of resource handle (thin handles).
These are intended for cases where a resource buffer with more than one
entry for each object is needed (for example, one entry per material
slot).
While it's already possible to have multiple regular handles for the
same object, they have a non-trivial overhead in terms of uploaded
data (matrix, bounds, object info) and computation (visibility
culling).
Thin handles store an indirection buffer pointing to their "parent"
regular handle, therefore multiple thin handles can share the same
per-object data and visibility culling computation.
Thin handles can only be used in their own Pass type (PassMainThin),
so passes that don't need them don't have to pay the overhead.
This pull request also includes the update of the Workbench Next
pre-pass to use PassMainThin, which is the main reason for the
implementation of this feature.
The main change from the previous PR is that the thin handles are now
stored directly in the main resource_id_buf, to avoid wasting an extra
bind slot.
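Conceptually the indirection looks like this (a simplified, self-contained sketch with made-up names, not the draw manager code):

```cpp
#include <cstdint>
#include <vector>

/* Conceptual sketch only; names are hypothetical, not the draw manager API. */
struct ObjectData {
  float matrix[16]; /* Per-object matrix, bounds, info... uploaded once per regular handle. */
};

/* Each thin handle stores only an index to its parent regular handle, so many thin
 * handles (e.g. one per material slot) share the same ObjectData and the visibility
 * culling result computed for that parent. */
const ObjectData &resolve_thin_handle(const std::vector<ObjectData> &object_data_buf,
                                      const std::vector<uint32_t> &resource_id_buf,
                                      uint32_t thin_handle_id)
{
  const uint32_t parent_id = resource_id_buf[thin_handle_id];
  return object_data_buf[parent_id];
}
```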
Pull Request #105261
GPencil 3D stroke rendering uses a geometry shader.
Geometry shaders are unsupported by the Metal backend, so fix
the failing compilation by moving the geometry shader logic
into the vertex shader for the Metal backend.
Authored by Apple: Michael Parkin-White
Ref #96261
Pull Request #105143
Resolves an issue with nearest filtering on UI icons. Note that as
Metal does not support LOD bias as a parameter on a sampler
object, the original code has been modified to perform LOD
biasing at the shader level.
As GPU_SAMPLER_ICON is not widely used, it is more efficient to
apply the bias directly in the affected shaders than to work
around it by passing the sampler LOD bias in as a separate value,
e.g. a uniform or push constant.
Original PR feedback was also addressed by refactoring the ICON
shaders to use a consistent style for single and multi-icon
rendering.
Authored by Apple: Michael Parkin-White
Ref #96261
Pull Request #105145
This removes default cases from the `switch` statements to catch where
cases are missing, and uncomments unimplemented cases for the sake of
completeness, improving the overall API.
This makes the format conversion lists exhaustive and documented.
This replaces `validate_data_format_mtl` with the common version, as they
no longer differ at all.
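The benefit of dropping the `default:` labels is the standard C++ one: the compiler (e.g. with `-Wswitch`) flags any enumerator that is not handled. A generic illustration, not the Metal backend code:

```cpp
/* Generic example of an exhaustive switch without a default case. */
enum class DataFormat { Float, Int, UByte };

/* No `default:` label: if a new enumerator is added, -Wswitch reports this
 * switch as non-exhaustive instead of silently falling through. */
static const char *data_format_name(DataFormat format)
{
  switch (format) {
    case DataFormat::Float:
      return "GPU_DATA_FLOAT";
    case DataFormat::Int:
      return "GPU_DATA_INT";
    case DataFormat::UByte:
      return "GPU_DATA_UBYTE";
  }
  return "unknown";
}
```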
For all other texture types the data format is expected to be implicitly
`GPU_DATA_FLOAT`; there is only one case where this does not hold.
I believe this was previously needed because the data type conditioned
how the texture was created, which is no longer the case.
The stroke shader of grease pencil uses a geometry shader stage. Apple
devices don't support shaders with a geometry stage. The OpenGL driver
implemented a pass-through so it didn't fail, but with the Metal backend
this needs to be solved more explicitly.
This change patches the grease pencil shader to support both backends
that have a geometry stage and those that don't.
Fixes #105059
Pull Request #105116
Fix an erroneous cache warming case where the generated material is
identical to the default material and the cached shader is re-used,
resulting in a case where the parent shader is identical to the
source.
Authored by Apple: Michael Parkin-White
Ref #96261
This patch adds initial support for compute shaders to the Vulkan
backend. As development is oriented towards the test cases we have,
the implementation is limited to what is used there.
It has been validated that, with this patch, the following test
cases run as expected:
- `GPUVulkanTest.gpu_shader_compute_vbo`
- `GPUVulkanTest.gpu_shader_compute_ibo`
- `GPUVulkanTest.gpu_shader_compute_ssbo`
- `GPUVulkanTest.gpu_storage_buffer_create_update_read`
- `GPUVulkanTest.gpu_shader_compute_2d`
This patch includes (a sketch of this flow at the GPU API level follows the list):
- Allocating a VkBuffer on the device.
- Uploading data from the CPU to the VkBuffer.
- Binding the VkBuffer as an SSBO to a compute shader.
- Executing the compute shader and altering the VkBuffer.
- Downloading the VkBuffer to CPU RAM.
- Validating that it worked.
- Using device-only vertex buffers as SSBOs.
- Using device-only index buffers as SSBOs.
- Using device-only image buffers.
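A rough sketch of that flow at the GPU API level (the shader info name is made up, and the helper signatures are approximate rather than quoted from the tests):

```cpp
/* Approximate sketch; requires the GPU module headers inside Blender. */
constexpr int SIZE = 128;

GPUShader *shader = GPU_shader_create_from_info_name("gpu_compute_ssbo_test");
GPU_shader_bind(shader);

GPUStorageBuf *ssbo = GPU_storagebuf_create(sizeof(uint32_t) * SIZE);
GPU_storagebuf_bind(ssbo, 0); /* Bind the VkBuffer as an SSBO to the compute shader. */

GPU_compute_dispatch(shader, SIZE, 1, 1); /* Compute shader alters the VkBuffer. */
GPU_memory_barrier(GPU_BARRIER_SHADER_STORAGE);

uint32_t data[SIZE] = {0};
GPU_storagebuf_read(ssbo, data); /* Download to CPU RAM, then validate the contents. */

GPU_storagebuf_free(ssbo);
GPU_shader_free(shader);
```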
The GHOST API has been changed, as the original design was created before
we even had support for compute shaders in Blender. `GHOST_getVulkanBackbuffer`
has been split so the command buffer can be retrieved without a backbuffer
(`GHOST_getVulkanCommandBuffer`). To do correct command buffer processing we
needed access to the queue owned by GHOST; this is now returned as part of the
`GHOST_getVulkanHandles` function.
Open topics (not considered part of this patch):
- Memory barriers & command buffer encoding.
- Indirect compute dispatching.
- Rest of the test cases.
- Data conversions when the requested data format differs from the on-device format.
- `GPUVulkanTest.gpu_shader_compute_1d`: supported on AMD devices, but NVIDIA doesn't seem to support 1D textures.
Pull-request: #104518
This reverts commit 19222627c6.
Something went wrong here; it seems this commit merged the main branch
into the release branch, which should never be done.
Certain material node graphs can be very expensive to run. This feature aims to produce secondary GPUPass shaders within a GPUMaterial that provide optimal runtime performance. Such optimizations include baking constant data directly into the shader source, allowing the compiler to propagate constants and perform aggressive optimization upfront.
As optimizations can reduce shader editor and animation interactivity, optimized pass generation and compilation are deferred until all outstanding compilations have completed. Optimization is also delayed until a material has remained unmodified for a set period of time, to reduce excessive compilation. The original variant of the material shader is kept to maintain interactivity.
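The deferral heuristic can be pictured roughly like this (a conceptual sketch with made-up names and grace period, not the actual GPUMaterial code):

```cpp
#include <chrono>

/* Conceptual sketch; names and the grace period are invented for illustration. */
using Clock = std::chrono::steady_clock;

struct MaterialOptimizationState {
  Clock::time_point last_modified = Clock::now();
  bool optimized_pass_requested = false;
};

constexpr std::chrono::seconds GRACE_PERIOD{5};

/* Only queue the optimized GPUPass compilation when no other compilations are
 * outstanding and the material has been left untouched for the grace period. */
static bool should_compile_optimized_pass(const MaterialOptimizationState &state,
                                          int outstanding_compilations)
{
  if (state.optimized_pass_requested || outstanding_compilations > 0) {
    return false;
  }
  return (Clock::now() - state.last_modified) >= GRACE_PERIOD;
}
```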
This also adds a new concept to gpu::Shader allowing assignment of a parent shader from which a shader can pull PSO descriptors and any required metadata for asynchronous shader cache warming. This enables fully asynchronous shader optimization without runtime hitching, while also reducing runtime hitching for standard materials by using PSO descriptors from default materials ahead of rendering.
Further shader graph optimizations are likely possible with this architecture. Certain scenes, such as Wanderer, benefit significantly: viewport performance for that scene is 2-3x faster on Apple Silicon GPUs.
Authored by Apple: Michael Parkin-White
Ref T96261
Pull Request #104536
This replaces `GPU_SHADER_3D_POINT_FIXED_SIZE_VARYING_COLOR` with
`GPU_SHADER_2D_POINT_UNIFORM_SIZE_UNIFORM_COLOR_OUTLINE_AA`.
None of the usages had a reason not to use the AA shader.
Scale the point size to account for the rounded shape.
The GPU module had two different styles for reading back data from
GPU buffers. SSBOs used a memcpy to copy the data to a pre-allocated
buffer; IndexBuf/VertBuf gave back a driver/platform-controlled pointer
to the memory.
Readback is mostly done for test cases, and returning mapped pointers is
not safe. For this reason we settled on using the same approach as the SSBO:
copy the data into a caller pre-allocated buffer.
The reason this API is changed now is that the Vulkan API is stricter
about mapping/unmapping buffers, which can lead to potential issues
down the road.
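The resulting pattern for every buffer type is the caller-allocated one, e.g. for a vertex buffer (hypothetical call; name and signature not confirmed here):

```cpp
/* Caller owns the destination; no driver-mapped pointer is ever handed out. */
float positions[VERT_LEN * 3] = {0.0f};
GPU_vertbuf_read(vbo, positions); /* Hypothetical signature following the SSBO style. */
```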
Pull Request #104571
Keep using the 3-evaluation dF_branch method for the Displacement output.
The optimized 2-evaluation method used by node_bump is now in its own macro (dF_branch_incomplete).
displacement_bump modifies the normal that nodetree_exec uses, so even with a refactor it wouldn't be possible to re-use the computation anyway.
Avoid computing the non-derivative height twice.
The height is now computed as part of the main function, while the heights at the x and y offsets are still computed in a separate function.
The differentials are now computed directly in node_bump.
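As I read the description, the 3-evaluation method samples the height at the shading point $P$ and at the two offset positions, and takes forward differences (paraphrased, not quoted from the macro):

$$ dH_x \approx H(P + \Delta x) - H(P), \qquad dH_y \approx H(P + \Delta y) - H(P) $$

which is why computing $H(P)$ once in the main function avoids evaluating the non-derivative height twice.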
Co-authored-by: Miguel Pozo <pragma37@gmail.com>
Pull Request #104595
Documented all functions, adding use cases and side effects.
Also replaced the use of shortened argument names with more meaningful ones.
Renamed `GPU_batch_instbuf_add_ex` and `GPU_batch_vertbuf_add_ex` to remove
the `ex` suffix, as they are the main versions used (removed the few usages
of the other versions).
Renamed `GPU_batch_draw_instanced` to `GPU_batch_draw_instance_range` to
make it consistent with `GPU_batch_draw_range`.
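After the rename the two range calls mirror each other (usage sketch; argument names are approximate):

```cpp
/* Draw a sub-range of the batch's vertices/elements. */
GPU_batch_draw_range(batch, elem_first, elem_count);
/* Draw a sub-range of instances, now named consistently with the call above. */
GPU_batch_draw_instance_range(batch, inst_first, inst_count);
```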