test/source/blender/gpu/intern/gpu_storage_buffer_private.hh

/* SPDX-FileCopyrightText: 2022 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

/** \file
 * \ingroup gpu
 */

#pragma once

#include "BLI_span.hh"
#include "BLI_sys_types.h"

struct GPUStorageBuf;

namespace blender {
namespace gpu {
class VertBuf;

#ifndef NDEBUG
#  define DEBUG_NAME_LEN 64
#else
#  define DEBUG_NAME_LEN 8
#endif
/**
 * Implementation of Storage Buffers.
 * Base class which is then specialized for each implementation (GL, VK, ...).
 */
class StorageBuf {
 protected:
  /** Data size in bytes. */
  size_t size_in_bytes_;
  /** Continuous memory block to copy to GPU. This data is owned by the StorageBuf. */
  void *data_ = nullptr;
  /** Debugging name. */
  char name_[DEBUG_NAME_LEN];

 public:
  StorageBuf(size_t size, const char *name);
  virtual ~StorageBuf();

  virtual void update(const void *data) = 0;
  virtual void bind(int slot) = 0;
  virtual void unbind() = 0;
  virtual void clear(uint32_t clear_value) = 0;
  virtual void copy_sub(VertBuf *src, uint dst_offset, uint src_offset, uint copy_size) = 0;
  virtual void read(void *data) = 0;
  virtual void async_flush_to_host() = 0;
};
/* Syntactic sugar. */
static inline GPUStorageBuf *wrap(StorageBuf *storage_buf)
{
  return reinterpret_cast<GPUStorageBuf *>(storage_buf);
}
static inline StorageBuf *unwrap(GPUStorageBuf *storage_buf)
{
  return reinterpret_cast<StorageBuf *>(storage_buf);
}
static inline const StorageBuf *unwrap(const GPUStorageBuf *storage_buf)
{
  return reinterpret_cast<const StorageBuf *>(storage_buf);
}
#undef DEBUG_NAME_LEN

}  // namespace gpu
}  // namespace blender