test/source/blender/blenlib/BLI_memory_cache.hh
Jacques Lucke 354a097ce0 Volumes: improve file cache and unloading
This changes how the lazy-loading and unloading of volume grids works. With that
it should also fix #124164.

The cache is now moved to a deeper and more global level. This allows reloadable
volume grids to be unloaded automatically when a memory limit is reached. The
previous system for automatically unloading grids only worked in fairly specific
cases and also did not work all that well with caching (parts of) volume
sequences.

At its core, this patch adds a general cache system in `BLI_memory_cache.hh`. It
has a simple interface of the form `get(key, compute_if_not_cached_fn) ->
value`. To avoid growing the cache indefinitely, it uses the new
`BLI_memory_counter.hh` API to detect when the cache size limit is reached. In
this case it can automatically free some cached values. Currently, this uses an
LRU system, where the items that have not been used in a while are removed
first. Other heuristics can be implemented too, but especially for caches that
load files from disk, this already works well.
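
As a rough usage sketch (the value type, loader and key below are illustrative
and not part of this patch; `MemoryCounter::add()` is assumed from
`BLI_memory_counter.hh`):

```cpp
#include <memory>
#include <string>

#include "BLI_array.hh"
#include "BLI_memory_cache.hh"
#include "BLI_memory_counter.hh"

/* Hypothetical cached value: a loaded voxel buffer. */
class CachedVoxels : public blender::memory_cache::CachedValue {
 public:
  blender::Array<float> voxels;

  void count_memory(blender::MemoryCounter &memory) const override
  {
    /* Report the payload size so the cache can tell when its limit is reached. */
    memory.add(int64_t(this->voxels.size() * sizeof(float)));
  }
};

/* Hypothetical loader, e.g. reading a grid from a file on disk. */
std::unique_ptr<CachedVoxels> load_voxels_from_disk(const std::string &file_path);

std::shared_ptr<const CachedVoxels> get_voxels(const blender::GenericKey &key,
                                               const std::string &file_path)
{
  /* The lambda only runs on a cache miss; its result is cached for next time. */
  return blender::memory_cache::get<CachedVoxels>(
      key, [&]() { return load_voxels_from_disk(file_path); });
}
```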

The new memory cache is used internally by `volume_grid_file_cache.cc` for
loading individual volume grids and their simplified variants. It could
potentially also be used to cache which grids are stored in a file, and it
could serve as a caching layer in more places, such as loading bakes or the
import geometry nodes. It's not clear yet whether this will require extending
the API, which is currently fairly minimal.

To allow different systems to use the same memory cache, it has to support
arbitrary identifiers for the cached data. Therefore, this patch also introduces
`GenericKey`, which is an abstract base class for any kind of key that is
comparable, hashable and copyable.
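
For illustration, a key for a grid in a file could look roughly like this (a
hypothetical subclass; the exact set of virtual methods is defined in
`BLI_generic_key.hh` and is assumed here to be a hash plus an equality
override):

```cpp
#include <cstdint>
#include <string>

#include "BLI_generic_key.hh"
#include "BLI_hash.hh"

/* Hypothetical key identifying one grid inside a volume file. */
class GridFileKey : public blender::GenericKey {
 public:
  std::string file_path;
  std::string grid_name;

  uint64_t hash() const override
  {
    /* Assumes the variadic get_default_hash() from BLI_hash.hh. */
    return blender::get_default_hash(this->file_path, this->grid_name);
  }

  bool equal_to(const blender::GenericKey &other) const override
  {
    if (const auto *other_key = dynamic_cast<const GridFileKey *>(&other)) {
      return this->file_path == other_key->file_path &&
             this->grid_name == other_key->grid_name;
    }
    return false;
  }
};
```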

The implementation of the cache currently relies on a new `ConcurrentMap` data
structure, which is a thin wrapper around `tbb::concurrent_hash_map` with a
fallback implementation for when `tbb` is not available. This data structure
allows concurrent reads and writes to the cache. Note that adding data to the
cache is still serialized because of the memory counting.
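
For reference, the accessor-based locking pattern of the underlying
`tbb::concurrent_hash_map` looks like this (a standalone TBB sketch, not the
actual `ConcurrentMap` interface):

```cpp
#include <string>

#include <tbb/concurrent_hash_map.h>

using Map = tbb::concurrent_hash_map<std::string, int>;

void use_value(int value); /* Hypothetical consumer. */

void example(Map &map)
{
  {
    /* A write accessor locks the element for the duration of its scope. */
    Map::accessor acc;
    map.insert(acc, "answer");
    acc->second = 42;
  }
  {
    /* A const_accessor permits multiple concurrent readers of one element. */
    Map::const_accessor acc;
    if (map.find(acc, "answer")) {
      use_value(acc->second);
    }
  }
}
```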

The size of the cache depends on the `memory_cache_limit` property that's
already shown in the user preferences. Despite its generic name, that property
is currently only used by the VSE, which relies on the `MEM_CacheLimiter` API.
That API has a similar purpose but seems less automatic, is not thread-safe,
and has no notion of implicit sharing. It also seems to be designed so that one
creates multiple "cache limiters", each with its own limit. Longer term, we
should probably strive towards unifying these systems, which seems feasible but
is a bit out of scope right now. While it's not ideal that these cache systems
don't share a memory limit, that is essentially what we already have for all
cache systems in Blender, so it's nothing new.
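
Wiring the preference into the new cache is then a single call (a sketch; the
name of the helper and the unit handling are assumptions):

```cpp
#include "BLI_memory_cache.hh"

/* Sketch: apply a user-configured limit given in megabytes, which is how the
 * existing preference is stored. */
static void apply_cache_limit_preference(const int limit_in_megabytes)
{
  blender::memory_cache::set_approximate_size_limit(int64_t(limit_in_megabytes) * 1024 * 1024);
}
```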

Some tests for lazy-loading had to be removed because this behavior is more
implicit now and is not as easily observable from the outside.

Pull Request: https://projects.blender.org/blender/blender/pulls/126411
2024-08-19 20:39:32 +02:00


/* SPDX-FileCopyrightText: 2024 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

#include <memory>

#include "BLI_function_ref.hh"
#include "BLI_generic_key.hh"
#include "BLI_memory_counter_fwd.hh"

namespace blender::memory_cache {

/**
 * A value that is stored in the cache. It may be freed automatically when the cache is full. This
 * is expected to be subclassed by users of the memory cache.
 */
class CachedValue {
 public:
  virtual ~CachedValue() = default;

  /**
   * Gather the memory used by this value. This allows the cache system to determine when it is
   * full.
   */
  virtual void count_memory(MemoryCounter &memory) const = 0;
};

/**
 * Returns the value that corresponds to the given key. If it's not cached yet, #compute_fn is
 * called and its result is cached for the next time.
 *
 * If the cache is full, older values may be freed.
 */
template<typename T>
std::shared_ptr<const T> get(const GenericKey &key, FunctionRef<std::unique_ptr<T>()> compute_fn);

/**
 * A non-templated version of the main entry point above.
 */
std::shared_ptr<CachedValue> get_base(const GenericKey &key,
                                      FunctionRef<std::unique_ptr<CachedValue>()> compute_fn);

/**
 * Set how much memory the cache is allowed to use. This is only an approximation because counting
 * the memory is not 100% accurate, and for some types the memory usage may even change over time.
 */
void set_approximate_size_limit(int64_t limit_in_bytes);

/**
 * Remove all elements from the cache. Note that this does not guarantee that no elements are in
 * the cache after the function returned. This is because another thread may have added a new
 * element right after the clearing.
 */
void clear();

/* -------------------------------------------------------------------- */
/** \name Inline Functions
 * \{ */

template<typename T>
inline std::shared_ptr<const T> get(const GenericKey &key,
                                    FunctionRef<std::unique_ptr<T>()> compute_fn)
{
  return std::dynamic_pointer_cast<const T>(get_base(key, compute_fn));
}

/** \} */

}  // namespace blender::memory_cache