This changes how the lazy-loading and unloading of volume grids works. With that, it should also fix #124164. The cache is moved to a deeper and more global level, which allows reloadable volume grids to be unloaded automatically when a memory limit is reached. The previous system for automatically unloading grids only worked in fairly specific cases and also did not interact well with caching (parts of) volume sequences.

At its core, this patch adds a general cache system in `BLI_memory_cache.hh`. It has a simple interface of the form `get(key, compute_if_not_cached_fn) -> value`. To avoid growing the cache indefinitely, it uses the new `BLI_memory_counter.hh` API to detect when the cache size limit is reached, in which case it can automatically free some cached values. Currently this uses an LRU scheme, where the items that have not been used for the longest time are removed first (a standalone sketch of this pattern follows below). Other heuristics can be implemented too, but especially for caches that load files from disk this already works well.

The new memory cache is used internally by `volume_grid_file_cache.cc` for loading individual volume grids and their simplified variants. It could potentially also be used to cache which grids are stored in a file. Additionally, it could serve as a caching layer in more places, such as loading bakes or import geometry nodes. It's not clear yet whether this will need an extension to the API, which is currently fairly minimal.

To allow different systems to use the same memory cache, it has to support arbitrary identifiers for the cached data. Therefore, this patch also introduces `GenericKey`, an abstract base class for any kind of key that is comparable, hashable and copyable.

The implementation of the cache currently relies on a new `ConcurrentMap` data structure, a thin wrapper around `tbb::concurrent_hash_map` with a fallback implementation for when `tbb` is not available (a rough sketch of this wrapper also follows below). This data structure allows concurrent reads and writes to the cache. Note that adding data to the cache is still serialized because of the memory counting.

The size of the cache is controlled by the `memory_cache_limit` property that is already shown in the user preferences. While it has a generic name, that property is currently only used by the VSE, which relies on the `MEM_CacheLimiter` API. That API has a similar purpose but seems to be less automatic and less thread-safe, and it has no notion of implicit sharing. It also seems to be designed around creating multiple "cache limiters", each with its own limit. Longer term, we should probably strive towards unifying these systems, which seems feasible but a bit out of scope right now. While it's not ideal that these cache systems don't use a shared memory limit, that is essentially what we already have for all cache systems in Blender, so it's nothing new.

Some tests for lazy-loading had to be removed because this behavior is more implicit now and is not as easily observable from the outside.

Pull Request: https://projects.blender.org/blender/blender/pulls/126411
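To make the `get(key, compute_if_not_cached_fn)` pattern and LRU eviction concrete, here is a minimal standalone sketch. It is illustrative only, not the actual `BLI_memory_cache.hh` implementation: the names `SketchCache`, `Entry` and `logical_time_` are hypothetical, a plain `std::mutex` stands in for the concurrent map plus serialized memory counting described above, and the hard-coded size would really come from the cached value's `count_memory()` callback.

#include <cstdint>
#include <functional>
#include <memory>
#include <mutex>
#include <unordered_map>

/* Hypothetical sketch of a size-bounded cache with LRU eviction. */
class SketchCache {
 private:
  struct Entry {
    std::shared_ptr<int> value;
    int64_t size_in_bytes = 0;
    uint64_t last_use = 0; /* Logical time stamp used for LRU eviction. */
  };

  std::mutex mutex_;
  std::unordered_map<int, Entry> map_;
  uint64_t logical_time_ = 0;
  int64_t size_in_bytes_ = 0;
  int64_t capacity_in_bytes_;

 public:
  explicit SketchCache(const int64_t capacity) : capacity_in_bytes_(capacity) {}

  /* Return the value for `key`, computing and caching it first if necessary. */
  std::shared_ptr<int> get(const int key, const std::function<std::shared_ptr<int>()> &compute_fn)
  {
    std::lock_guard lock{mutex_};
    if (auto it = map_.find(key); it != map_.end()) {
      /* Cache hit: refresh the LRU time stamp. */
      it->second.last_use = logical_time_++;
      return it->second.value;
    }
    /* Cache miss: compute, insert, then evict old entries if over the limit. */
    std::shared_ptr<int> value = compute_fn();
    const int64_t size = sizeof(int); /* Real code would use a memory counter. */
    map_.emplace(key, Entry{value, size, logical_time_++});
    size_in_bytes_ += size;
    this->evict_if_needed();
    return value;
  }

 private:
  /* Free least-recently-used entries until the size limit is satisfied. */
  void evict_if_needed()
  {
    while (size_in_bytes_ > capacity_in_bytes_ && !map_.empty()) {
      auto lru = map_.begin();
      for (auto it = map_.begin(); it != map_.end(); ++it) {
        if (it->second.last_use < lru->second.last_use) {
          lru = it;
        }
      }
      size_in_bytes_ -= lru->second.size_in_bytes;
      map_.erase(lru);
    }
  }
};

Because values are handed out as shared pointers, an entry evicted while still in use stays alive until its last user releases it; eviction only drops the cache's own reference.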
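The `ConcurrentMap` mentioned above could look roughly like the sketch below: with TBB available it forwards to `tbb::concurrent_hash_map`, otherwise a mutex-guarded `std::unordered_map` stands in, which is correct but serializes all accesses. This is an assumption-laden illustration, not the real `BLI_concurrent_map.hh` API; the method names `lookup` and `add_overwrite` and the exact use of the `WITH_TBB` guard here are hypothetical.

#ifdef WITH_TBB
#  include <tbb/concurrent_hash_map.h>

template<typename Key, typename Value> class ConcurrentMap {
 private:
  tbb::concurrent_hash_map<Key, Value> map_;

 public:
  /* Copy the value for `key` into `r_value`; return false if the key is absent. */
  bool lookup(const Key &key, Value &r_value)
  {
    typename tbb::concurrent_hash_map<Key, Value>::const_accessor accessor;
    if (!map_.find(accessor, key)) {
      return false;
    }
    r_value = accessor->second;
    return true;
  }

  /* Insert the value, replacing any existing value for `key`. */
  void add_overwrite(const Key &key, const Value &value)
  {
    typename tbb::concurrent_hash_map<Key, Value>::accessor accessor;
    map_.insert(accessor, key);
    accessor->second = value;
  }
};

#else

#  include <mutex>
#  include <unordered_map>

/* Fallback without TBB: a single mutex guards a plain hash map. */
template<typename Key, typename Value> class ConcurrentMap {
 private:
  std::mutex mutex_;
  std::unordered_map<Key, Value> map_;

 public:
  bool lookup(const Key &key, Value &r_value)
  {
    std::lock_guard lock{mutex_};
    auto it = map_.find(key);
    if (it == map_.end()) {
      return false;
    }
    r_value = it->second;
    return true;
  }

  void add_overwrite(const Key &key, const Value &value)
  {
    std::lock_guard lock{mutex_};
    map_[key] = value;
  }
};

#endif

The test file added by the patch, shown below, exercises the public API end to end: `GenericIntKey` implements the `GenericKey` interface (`hash`, `equal_to`, `to_storable`), `CachedInt` implements `memory_cache::CachedValue::count_memory`, and the test verifies hit/miss behavior around `memory_cache::clear()`.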
/* SPDX-FileCopyrightText: 2024 Blender Authors
 *
 * SPDX-License-Identifier: Apache-2.0 */

#include "BLI_hash.hh"
#include "BLI_memory_cache.hh"
#include "BLI_memory_counter.hh"

#include "testing/testing.h"

#include "BLI_strict_flags.h" /* Keep last. */

namespace blender::memory_cache::tests {

/* Minimal #GenericKey implementation used to identify cached values in the test. */
class GenericIntKey : public GenericKey {
 private:
  int value_;

 public:
  GenericIntKey(int value) : value_(value) {}

  uint64_t hash() const override
  {
    return get_default_hash(value_);
  }

  bool equal_to(const GenericKey &other) const override
  {
    if (const auto *other_typed = dynamic_cast<const GenericIntKey *>(&other)) {
      return other_typed->value_ == value_;
    }
    return false;
  }

  std::unique_ptr<GenericKey> to_storable() const override
  {
    return std::make_unique<GenericIntKey>(*this);
  }
};

/* Cached value wrapping a single int that reports its size to the memory counter. */
class CachedInt : public memory_cache::CachedValue {
 public:
  int value;

  CachedInt(int initial_value) : value(initial_value) {}

  void count_memory(MemoryCounter &memory) const override
  {
    memory.add(sizeof(int));
  }
};

TEST(memory_cache, Simple)
{
  memory_cache::clear();
  {
    /* The first lookup has to compute the value. */
    bool newly_computed = false;
    EXPECT_EQ(4, memory_cache::get<CachedInt>(GenericIntKey(0), [&]() {
                   newly_computed = true;
                   return std::make_unique<CachedInt>(4);
                 })->value);
    EXPECT_TRUE(newly_computed);
  }
  {
    /* A second lookup with an equal key is served from the cache. */
    bool newly_computed = false;
    EXPECT_EQ(4, memory_cache::get<CachedInt>(GenericIntKey(0), [&]() {
                   newly_computed = true;
                   return std::make_unique<CachedInt>(4);
                 })->value);
    EXPECT_FALSE(newly_computed);
  }
  memory_cache::clear();
  {
    /* After clearing the cache, the value has to be computed again. */
    bool newly_computed = false;
    EXPECT_EQ(4, memory_cache::get<CachedInt>(GenericIntKey(0), [&]() {
                   newly_computed = true;
                   return std::make_unique<CachedInt>(4);
                 })->value);
    EXPECT_TRUE(newly_computed);
  }
}

}  // namespace blender::memory_cache::tests