test2/source/blender/blenlib/BLI_shared_cache.hh
Hans Goudey 81a63153d0 Depsgraph: Rename "copy-on-write" to "copy-on-evaluation"
The depsgraph CoW mechanism is a bit of a misnomer. It creates an
evaluated copy of data-blocks regardless of whether the copy will
actually be written to. The point is to have a physical separation
between original and evaluated data. This is in contrast to the commonly
used performance optimization of keeping a user count and copying data
implicitly only when it needs to be changed. In Blender code we call
this "implicit sharing" instead. Importantly, the dependency graph has
no idea about the _actual_ copy-on-write behavior in Blender.

Renaming this functionality in the depsgraph removes some of the
confusion that comes up when talking about it, and will hopefully also
make the depsgraph easier to understand initially. Wording like "the
evaluated copy" (as opposed to the original data-block) has also become
common anyway.
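The "implicit sharing" pattern contrasted above (keep a user count, copy only when a shared value is about to be changed) can be sketched in isolation. This is a minimal, hypothetical illustration; `SharedString` is not a Blender type, and Blender's actual implicit-sharing implementation differs:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

/* Hypothetical copy-on-write value type: the payload lives behind a
 * shared_ptr, copies of `SharedString` are cheap, and the payload is
 * physically copied only when a writer would otherwise mutate data
 * that other owners can still observe (user count > 1). */
class SharedString {
 public:
  explicit SharedString(std::string value)
      : data_(std::make_shared<std::string>(std::move(value)))
  {
  }

  const std::string &get() const
  {
    return *data_;
  }

  /* Copy-on-write: only copy when the payload is actually shared. */
  void set(std::string value)
  {
    if (data_.use_count() > 1) {
      data_ = std::make_shared<std::string>(std::move(value));
    }
    else {
      *data_ = std::move(value);
    }
  }

  bool shares_data_with(const SharedString &other) const
  {
    return data_ == other.data_;
  }

 private:
  std::shared_ptr<std::string> data_;
};

bool demo()
{
  SharedString a("original");
  SharedString b = a; /* Cheap copy: both point at the same payload. */
  assert(a.shares_data_with(b));
  b.set("changed"); /* The write triggers the actual copy. */
  assert(!a.shares_data_with(b));
  return a.get() == "original" && b.get() == "changed";
}
```

The depsgraph's "copy-on-evaluation", by contrast, always makes the evaluated copy up front, whether or not evaluation ends up writing to it.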

Pull Request: https://projects.blender.org/blender/blender/pulls/118338
2024-02-19 15:54:08 +01:00


/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */

#pragma once

#include "BLI_cache_mutex.hh"

namespace blender {
/**
 * A `SharedCache` is meant to share lazily computed data between equivalent objects. It allows
 * saving unnecessary computation by making a calculated value accessible from any object that
 * shares the cache. Unlike `CacheMutex`, the cached data is embedded inside of this object.
 *
 * When data is copied (by copy-on-evaluation before changing a mesh, for example), the cache is
 * shared, allowing its calculation on either the original or the copy to make the result
 * available on both objects. As soon as either object is changed in a way that invalidates the
 * cache, the data is "un-shared", and they will no longer influence each other.
 *
 * One important use case is a typical copy-on-evaluation update loop of a persistent geometry
 * data-block in `Main`. Even if bounds are only calculated on the evaluated *copied* geometry, if
 * nothing changes them, they only need to be calculated on the first evaluation, because the same
 * evaluated bounds are also accessible from the original geometry.
 *
 * The cache is implemented with a shared pointer, so it is relatively cheap to copy, but to avoid
 * unnecessary overhead it should only be used for relatively expensive computations.
 */
template<typename T> class SharedCache {
  struct CacheData {
    CacheMutex mutex;
    T data;
    CacheData() = default;
    CacheData(const T &data) : data(data) {}
  };
  std::shared_ptr<CacheData> cache_;

 public:
  SharedCache()
  {
    /* The cache should be allocated to trigger sharing of the cached data as early as possible. */
    cache_ = std::make_shared<CacheData>();
  }

  /** Tag the data for recomputation and stop sharing the cache with other objects. */
  void tag_dirty()
  {
    if (cache_.unique()) {
      cache_->mutex.tag_dirty();
    }
    else {
      cache_ = std::make_shared<CacheData>();
    }
  }

  /**
   * If the cache is dirty, trigger its computation with the provided function which should set
   * the proper data.
   */
  void ensure(FunctionRef<void(T &data)> compute_cache)
  {
    cache_->mutex.ensure([&]() { compute_cache(this->cache_->data); });
  }

  /**
   * Represents a combination of "tag dirty" and "update cache for new data." Existing cached
   * values are kept available (copied from the shared data if necessary). This can be helpful
   * when the recalculation is only expected to make a small change to the cached data, since
   * using #tag_dirty() and #ensure() separately may require rebuilding the cache from scratch.
   */
  void update(FunctionRef<void(T &data)> compute_cache)
  {
    if (cache_.unique()) {
      cache_->mutex.tag_dirty();
    }
    else {
      cache_ = std::make_shared<CacheData>(cache_->data);
    }
    cache_->mutex.ensure([&]() { compute_cache(this->cache_->data); });
  }

  /** Retrieve the cached data. */
  const T &data() const
  {
    BLI_assert(cache_->mutex.is_cached());
    return cache_->data;
  }

  /**
   * Return true if the cache currently does not exist or has been invalidated.
   */
  bool is_dirty() const
  {
    return cache_->mutex.is_dirty();
  }

  /**
   * Return true if the cache exists and is valid.
   */
  bool is_cached() const
  {
    return cache_->mutex.is_cached();
  }
};
} // namespace blender
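The sharing lifecycle that the header documents can be demonstrated with a standalone, single-threaded sketch. `MiniSharedCache` below is a hypothetical stand-in, not the Blender API: `CacheMutex` is replaced by a plain dirty flag and `FunctionRef` by `std::function`, so only the sharing/un-sharing behavior carries over:

```cpp
#include <cassert>
#include <functional>
#include <memory>

/* Minimal single-threaded stand-in for SharedCache, for illustration only. */
template<typename T> class MiniSharedCache {
  struct CacheData {
    bool dirty = true;
    T data{};
  };
  std::shared_ptr<CacheData> cache_ = std::make_shared<CacheData>();

 public:
  void tag_dirty()
  {
    if (cache_.use_count() == 1) {
      cache_->dirty = true; /* Sole owner: invalidate in place. */
    }
    else {
      cache_ = std::make_shared<CacheData>(); /* Shared: un-share instead. */
    }
  }

  void ensure(const std::function<void(T &)> &compute)
  {
    if (cache_->dirty) {
      compute(cache_->data);
      cache_->dirty = false;
    }
  }

  const T &data() const
  {
    return cache_->data;
  }

  bool shares_cache_with(const MiniSharedCache &other) const
  {
    return cache_ == other.cache_;
  }
};

int count_computations()
{
  int computations = 0;
  MiniSharedCache<int> original;
  MiniSharedCache<int> copy = original; /* E.g. the evaluated copy: cache is shared. */

  /* Computing on the copy makes the result available on the original too,
   * so the original's ensure() finds the cache valid and does no work. */
  copy.ensure([&](int &bounds) { bounds = 42; ++computations; });
  original.ensure([&](int &bounds) { bounds = 42; ++computations; });
  assert(original.data() == 42);

  /* Invalidating the copy un-shares the cache; the original keeps its value. */
  copy.tag_dirty();
  assert(!copy.shares_cache_with(original));
  assert(original.data() == 42);

  return computations; /* Only the first ensure() actually computed. */
}
```

This mirrors the bounds example from the header comment: even though bounds are computed on the evaluated copy, an unchanged original gets them for free.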