Cycles: Compress GPU kernels to reduce file size

Precompiled Cycles kernels make up a considerable fraction of the total size of
Blender builds nowadays. As we add more features and support for more
architectures, this will only continue to increase.

However, since these kernels tend to be quite compressible, we can save a lot
of storage by storing them in compressed form and decompressing the required
kernel(s) during loading.

By using Zstandard compression at a high level, we can get decent compression
ratios (~5x for the current kernels) while keeping decompression time low
(about 30ms in the worst case in my tests). And since Blender already requires
zstd, this doesn't introduce a new dependency.

While the main improvement is to the size of the extracted Blender installation
(which is reduced by ~400-500MB currently), this also shrinks the download on
Windows, since .zip's deflate compression is less effective. It doesn't help on
Linux since we're already using .tar.xz there, but the smaller installed size
is still a good thing.

See #123522 for initial discussion.

Pull Request: https://projects.blender.org/blender/blender/pulls/123557
Author: Lukas Stockner
Date: 2024-06-23 00:52:30 +02:00
Committed by: Lukas Stockner
Parent: cf73897690
Commit: 4bde68cdd6
9 changed files with 152 additions and 20 deletions

@@ -19,6 +19,8 @@ OIIO_NAMESPACE_USING
#include <sys/stat.h>
#include <zstd.h>
#if defined(_WIN32)
# define DIR_SEP '\\'
# define DIR_SEP_ALT '/'
@@ -704,6 +706,36 @@ bool path_read_binary(const string &path, vector<uint8_t> &binary)
return true;
}
bool path_read_compressed_binary(const string &path, vector<uint8_t> &binary)
{
if (!string_endswith(path, ".zst")) {
return path_read_binary(path, binary);
}
vector<uint8_t> compressed;
if (!path_read_binary(path, compressed)) {
return false;
}
const size_t full_size = ZSTD_getFrameContentSize(compressed.data(), compressed.size());
if (full_size == ZSTD_CONTENTSIZE_ERROR) {
/* Potentially corrupted file? */
return false;
}
if (full_size == ZSTD_CONTENTSIZE_UNKNOWN) {
/* Technically this is an optional field, but we can expect it to be set for now.
* Otherwise we'd need streaming decompression and repeated resizing of the vector. */
return false;
}
binary.resize(full_size);
size_t err = ZSTD_decompress(binary.data(), binary.size(), compressed.data(), compressed.size());
return ZSTD_isError(err) == 0;
}
bool path_read_text(const string &path, string &text)
{
vector<uint8_t> binary;
@@ -719,6 +751,21 @@ bool path_read_text(const string &path, string &text)
return true;
}
bool path_read_compressed_text(const string &path, string &text)
{
vector<uint8_t> binary;
if (!path_exists(path) || !path_read_compressed_binary(path, binary)) {
return false;
}
const char *str = (const char *)&binary[0];
size_t size = binary.size();
text = string(str, size);
return true;
}
uint64_t path_modified_time(const string &path)
{
path_stat_t st;