/* SPDX-FileCopyrightText: 2001-2002 NaN Holding BV. All rights reserved.
*
* SPDX-License-Identifier: GPL-2.0-or-later */
/** \file
* \ingroup GHOST
* Declaration of GHOST_IContext interface class.
*/
#pragma once
#include "GHOST_Types.h"
#ifdef WITH_VULKAN_BACKEND
# include <functional>
#endif
/**
* Interface for GHOST context.
*
* You can create an off-screen context (windowless) with the system's
* #GHOST_ISystem::createOffscreenContext method.
* \see GHOST_ISystem#createOffscreenContext
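*
* A minimal usage sketch. The creation step is an assumption: the exact name of the
* settings struct passed to #GHOST_ISystem::createOffscreenContext may differ between
* versions; the activate/release calls only use the interface declared below.
*
* \code{.cc}
* GHOST_ISystem *system = GHOST_ISystem::getSystem();
* GHOST_GPUSettings gpu_settings = {};  // Assumed settings struct, see GHOST_Types.h.
* GHOST_IContext *context = system->createOffscreenContext(gpu_settings);
* if (context && context->activateDrawingContext() == GHOST_kSuccess) {
*   // Issue GPU commands on the calling thread.
*   context->releaseDrawingContext();
* }
* system->disposeContext(context);
* \endcode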
*/
class GHOST_IContext {
public:
/**
* Destructor.
*/
virtual ~GHOST_IContext() = default;
/**
* Returns the thread's currently active drawing context.
*/
static GHOST_IContext *getActiveDrawingContext();
/**
* Activates the drawing context.
* \return A boolean success indicator.
*/
virtual GHOST_TSuccess activateDrawingContext() = 0;
/**
* Release the drawing context of the calling thread.
* \return A boolean success indicator.
*/
virtual GHOST_TSuccess releaseDrawingContext() = 0;
/**
* Gets the OpenGL frame-buffer associated with the OpenGL context.
* \return The ID of an OpenGL frame-buffer object.
*/
virtual unsigned int getDefaultFramebuffer() = 0;
/**
* Acquire the next buffer for drawing.
* \return A boolean success indicator.
*/
virtual GHOST_TSuccess swapBufferAcquire() = 0;
/**
* Swaps front and back buffers of a window.
* \return A boolean success indicator.
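*
* A minimal sketch of a frame using the acquire/release pair declared here,
* given a `GHOST_IContext *context`:
*
* \code{.cc}
* if (context->swapBufferAcquire() == GHOST_kSuccess) {
*   // Render the frame into the default framebuffer / acquired buffer.
*   context->swapBufferRelease();
* }
* \endcode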
*/
virtual GHOST_TSuccess swapBufferRelease() = 0;
#ifdef WITH_VULKAN_BACKEND
/**
* Get Vulkan handles for the given context.
*
* These handles are the same for a given context.
* Should only be called when using a Vulkan context.
* Other context types will not return any handles and leave
* \a r_handles unmodified.
*
* \param r_handles: Structure that is filled with the Vulkan handles
* of the context when this function returns.
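*
* A minimal sketch of querying the handles, given a Vulkan-backed
* `GHOST_IContext *context` (only the signature declared below is assumed):
*
* \code{.cc}
* GHOST_VulkanHandles handles = {};
* if (context->getVulkanHandles(handles) == GHOST_kSuccess) {
*   // Use the Vulkan instance/device/queue stored in `handles`.
* }
* \endcode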
*/
virtual GHOST_TSuccess getVulkanHandles(GHOST_VulkanHandles &r_handles) = 0;
/**
* Get the current swap-chain format.
*
* \param r_swap_chain_data: After calling this function the structure
* referenced by this parameter contains the surface format and the extent
* (in pixels) of the swap-chain images that are passed to the swap buffer
* draw callback.
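*
* A minimal sketch, given a Vulkan-backed `GHOST_IContext *context` and assuming
* #GHOST_VulkanSwapChainData can be value-initialized:
*
* \code{.cc}
* GHOST_VulkanSwapChainData swap_chain_data = {};
* if (context->getVulkanSwapChainFormat(&swap_chain_data) == GHOST_kSuccess) {
*   // Inspect the surface format and extent stored in `swap_chain_data`.
* }
* \endcode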
*/
virtual GHOST_TSuccess getVulkanSwapChainFormat(
GHOST_VulkanSwapChainData *r_swap_chain_data) = 0;
/**
* Set the pre and post callbacks for the Vulkan swap-chain in the given context.
*
* \param swap_buffer_draw_callback: Function to be called when the acquired swap buffer
* is released, allowing the Vulkan backend to update the swap-chain.
* \param swap_buffer_acquired_callback: Function to be called when a swap buffer is
* acquired, allowing the Vulkan backend to update the framebuffer. It is also called
* when no swap-chain exists, indicating that the window was minimized.
* \param openxr_acquire_framebuffer_image_callback: Function to be called when an image
* needs to be acquired to be drawn to an OpenXR swap-chain.
* \param openxr_release_framebuffer_image_callback: Function to be called after an image
* has been drawn to the OpenXR swap-chain.
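*
* A minimal sketch of wiring the callbacks with lambdas, given a Vulkan-backed
* `GHOST_IContext *context` (the callback bodies are placeholders; only the
* signature declared below is assumed):
*
* \code{.cc}
* context->setVulkanSwapBuffersCallbacks(
*     [](const GHOST_VulkanSwapChainData *swap_chain_data) {
*       // Blit the render result into the swap-chain image.
*     },
*     []() {
*       // Update the framebuffer after a swap buffer was acquired (or none exists).
*     },
*     [](GHOST_VulkanOpenXRData *openxr_data) {
*       // Acquire an image to draw into for the OpenXR swap-chain.
*     },
*     [](GHOST_VulkanOpenXRData *openxr_data) {
*       // Release the image after drawing to the OpenXR swap-chain.
*     });
* \endcode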
*/
virtual GHOST_TSuccess setVulkanSwapBuffersCallbacks(
std::function<void(const GHOST_VulkanSwapChainData *)> swap_buffer_draw_callback,
std::function<void(void)> swap_buffer_acquired_callback,
std::function<void(GHOST_VulkanOpenXRData *)> openxr_acquire_framebuffer_image_callback,
std::function<void(GHOST_VulkanOpenXRData *)> openxr_release_framebuffer_image_callback) = 0;
#endif
MEM_CXX_CLASS_ALLOC_FUNCS("GHOST:GHOST_IContext")
};