Fix #124720: Metal: Crash when trying to allocate large 3d textures

The Metal backend assumes that textures can always be allocated. When
Metal later detects that the texture cannot be 'baked', this leads to
undesired behavior, as Blender thinks it has a texture with allocated
memory.

This PR only changes the GPU_TEXTURE_3D case, as that leads to issues
when loading large volumetric data. Arrayed textures will still fail,
as they require different checks to be added. I'd rather review the
current implementation in the future.

> NOTE: The max depth is hardcoded to 2048

This change should be backported to 4.2.

Pull Request: https://projects.blender.org/blender/blender/pulls/128365
Jeroen Bakker
2024-10-03 10:11:23 +02:00
parent 06a4198329
commit 52dfb4aa8f


@@ -2054,6 +2054,11 @@ uint gpu::MTLTexture::gl_bindcode_get() const
 bool gpu::MTLTexture::init_internal()
 {
   this->prepare_internal();
+  /* TODO(jbakker): Other limit checks should be added as well. When a texture violates a limit it
+   * is not backed by a texture and will crash when used. */
+  if (type_ == GPU_TEXTURE_3D && d_ > GPU_max_texture_3d_size()) {
+    return false;
+  }
   return true;
 }