Rebuilding the tree immediately after changes could cause the tree to be
rebuilt multiple times. More importantly, it made it harder to reason
about thread safety, since the tree was touched in a whole bunch of API
functions. Now tree building is simplified and managed in a single
place, so it can be made thread safe trivially in a follow-up.
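A rough sketch of the pattern (hypothetical names, not the actual asset
catalog code): mutating API functions only tag the tree as dirty, and the
rebuild happens lazily in a single function.

```cpp
/* Hypothetical sketch of the "rebuild in one place" pattern; not the actual
 * asset catalog code. */
class CatalogTree {
  /* Built from the catalog definitions. */
};

class CatalogService {
 private:
  CatalogTree tree_;
  bool tree_dirty_ = true;

 public:
  /* Mutating API functions only tag the tree instead of rebuilding it. */
  void tag_catalog_tree_dirty()
  {
    tree_dirty_ = true;
  }

  /* The single place where the tree is (re)built. Making this thread safe
   * later only requires guarding this one function, e.g. with a mutex. */
  const CatalogTree &catalog_tree()
  {
    if (tree_dirty_) {
      tree_ = CatalogTree();
      tree_dirty_ = false;
    }
    return tree_;
  }
};
```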
Note that this means the initial catalog tree building no longer happens
in a background thread together with loading the asset library and
catalogs. But since all further rebuilds were already done on the main
thread anyway, this shouldn't have any notable impact.
The objective is to be able to create your own GLSL shaders in Blender.
This improves the workflow since all shader programming can be done
directly in Blender. In addition, GLSL is a very popular shading
language in the video game industry and beyond.
Ref !116793
Co-authored-by: Clément Foucault <foucault.clem@gmail.com>
This optimizes a few loops that become significant bottlenecks during
viewport rendering of scenes with large numbers of curves.
To render a curves object, Blender needs to generate a potentially
very large (but trivial) index buffer. As previously implemented,
this index buffer was generated in an extremely inefficient manner:
a single-threaded loop with an explicit function call per entry.
The buffer then had to be pushed onto the GPU, which is also a fairly
slow task.
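For illustration, a minimal sketch of the kind of CPU-side generation
described above (hypothetical buffer layout and names, not Blender's actual
code), assuming each curve with N points is drawn as N - 1 line segments:

```cpp
/* Illustrative sketch only; not Blender's actual buffer layout or API. */
#include <cstdint>
#include <vector>

std::vector<uint32_t> build_curves_index_buffer(const std::vector<int> &points_per_curve)
{
  std::vector<uint32_t> indices;
  uint32_t point_offset = 0;
  for (const int points_num : points_per_curve) {
    for (int i = 0; i + 1 < points_num; i++) {
      /* One entry pair per segment: cheap individually, but there can be
       * millions of entries, so a single-threaded CPU loop (plus the upload
       * of the result) becomes the bottleneck. */
      indices.push_back(point_offset + uint32_t(i));
      indices.push_back(point_offset + uint32_t(i) + 1);
    }
    point_offset += uint32_t(points_num);
  }
  return indices;
}
```

Every entry is a pure function of its position, so the same work maps
naturally onto one compute-shader invocation per entry, removing both the
CPU loop and the upload.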
The PR generates the index buffer directly on the GPU with a compute
shader.
Pull Request: https://projects.blender.org/blender/blender/pulls/116617
The standard `threading::parallel_for` function tries to split the range into
uniformly sized subranges. This is great if each element takes approximately
the same amount of time to compute.
However, there are also situations where the time required to process a
single index differs significantly between indices. In such cases, it's
better to split the range into segments while taking the size of each task
into account.
This patch implements `threading::parallel_for_weighted` which allows passing
in an additional callback that returns the size of each task.
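A minimal, self-contained sketch of the idea (hypothetical helper, not the
actual `BLI_task.hh` API or signature): split the range into segments of
roughly equal total weight, as reported by a per-index callback, and run
each segment as one task.

```cpp
/* Hypothetical sketch of weighted splitting; not Blender's actual
 * threading::parallel_for_weighted implementation or signature. */
#include <algorithm>
#include <cstdint>
#include <functional>
#include <future>
#include <vector>

static void parallel_for_weighted_sketch(const int64_t begin,
                                         const int64_t end,
                                         const std::function<int64_t(int64_t)> &weight_fn,
                                         const std::function<void(int64_t, int64_t)> &range_fn)
{
  /* Total weight of all indices, used to decide how much weight each task gets. */
  int64_t total_weight = 0;
  for (int64_t i = begin; i < end; i++) {
    total_weight += weight_fn(i);
  }
  const int64_t tasks_num = 8; /* Assumption: fixed task count for the sketch. */
  const int64_t weight_per_task = std::max<int64_t>(1, total_weight / tasks_num);

  std::vector<std::future<void>> tasks;
  int64_t segment_begin = begin;
  int64_t segment_weight = 0;
  for (int64_t i = begin; i < end; i++) {
    segment_weight += weight_fn(i);
    if (segment_weight >= weight_per_task || i + 1 == end) {
      const int64_t segment_end = i + 1;
      /* Segments carry roughly the same total weight, even if the number of
       * indices per segment differs a lot. */
      tasks.push_back(std::async(std::launch::async, [&range_fn, segment_begin, segment_end]() {
        range_fn(segment_begin, segment_end);
      }));
      segment_begin = segment_end;
      segment_weight = 0;
    }
  }
  for (std::future<void> &task : tasks) {
    task.wait();
  }
}
```

A caller processing curves could, for example, pass the point count of each
curve as the weight, so a few very long curves don't end up in the same
segment as thousands of short ones.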
Pull Request: https://projects.blender.org/blender/blender/pulls/118348
The goal of this task is to remove noise in the most common material
layering configuration.
This also splits the evaluation of the different closures into their own
buffers to avoid discontinuities when denoising them.
This commit does a few things:
- [x] Removes the use of a global for the closure random number.
- [x] Refactors the forward evaluation to be closure-type agnostic.
- [x] Refactors the gbuffer lib to be closure-type agnostic.
- [x] Reduces the number of picked closures to a maximum of 3.
- [x] Uses GPU_MATFLAG_COAT to tag the use of multiple glossy BSDFs.
- [x] Uses two closure bins for Glossy when there is more than one.
- [x] Sets the closure bin per type for the best noise level in most materials.
- [x] Changes the gbuffer header to store closures at their bin index.
- [x] Adds a method to get a closure from the gbuffer for a specific bin.
- [x] Splits lighting passes per closure.
Pull Request: https://projects.blender.org/blender/blender/pulls/118079
The basic motivation is that `AssetCatalogService::get_catalog_tree()`
should return a const tree, since this tree is internal state and
shouldn't be modified from outside. This exposed a whole bunch of const
incorrectness and generally allows making much more of the API const
(as it should be).
Also use references instead of pointers in testing functions, where null
is not an expected value.
The asset catalog selector tree-view would store a copy of each of the
items in the catalog tree, including all of its sub-hierarchy. This can
be avoided; it can simply hold a reference instead.
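A minimal sketch of the design change (made-up names, not the actual
tree-view code):

```cpp
#include <string>
#include <vector>

/* Hypothetical stand-in for a catalog tree item; the real type lives in the
 * asset system and owns its child hierarchy. */
struct CatalogItemSketch {
  std::string name;
  std::vector<CatalogItemSketch> children;
};

class CatalogSelectorItemSketch {
 private:
  /* Before: a full copy of the catalog item (including all children):
   *   CatalogItemSketch catalog_item_;
   * After: a reference to the item owned by the catalog tree, assuming the
   * tree outlives the tree-view. */
  const CatalogItemSketch &catalog_item_;

 public:
  explicit CatalogSelectorItemSketch(const CatalogItemSketch &catalog_item)
      : catalog_item_(catalog_item)
  {
  }

  const std::string &name() const
  {
    return catalog_item_.name;
  }
};
```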
Solves the issue of the script potentially sitting for a long time
without reporting any progress. Confusingly, this behavior depends on
the Git version.
In older Git versions (< 2.33) progress was reported, but this changed
with the following Git commit:
7a132c628e
Delayed checkout is exactly how Git LFS integrates into the Git
process. Another factor affecting the behavior is that a submodule
configured to use the "checkout" update policy is forced to have the
quiet flag passed to "git checkout":
https://github.com/git/git/blob/v2.43.2/builtin/submodule--helper.c#L2258
This is done to avoid the long message at the end of the checkout about
the detached HEAD state, with instructions on how to resolve it.
There are two possible solutions: either use the "rebase" update policy
for submodules, or skip the Git LFS download during the submodule
update. Changing the update policy is possible, but it needs to be done
with a bit of care, and possibly a revised process for updating/merging
test data.
This change follows the second idea of delaying the LFS download to a
later step, so the process is the following:
- Run `git submodule update`, but tell Git LFS not to resolve the links
by using the GIT_LFS_SKIP_SMUDGE=1 environment variable.
- Run `git lfs pull` for the submodule to resolve the links.
Doing so bypasses the hardcoded silencing in Git. It also potentially
allows recovering from an aborted download process.
The `git lfs pull` seems to be the nominal step to resolve the LFS links
after smudging has been skipped. It is also how some Windows limitations
were bypassed in earlier Git versions:
https://www.mankier.com/7/git-lfs-faq
The submodule update now also receives the "--progress" flag, which
logs the initial Git repository checkout process and further improves
the feedback.
A byproduct of this change is that an error during the precompiled
libraries and test data update is no longer considered fatal.
This seems to fit better with the other update steps, and allows
reusing some code more easily.
There is also a cosmetic change: the messages at the end of the
update process now have their own header, making them easier to spot
in the wall of text.
Pull Request: https://projects.blender.org/blender/blender/pulls/118673
This fixes an issue when the number of viewport samples is set to 1
and reprojection is deactivated. In this case the sample that has the
data to update the probes is ignored, as all samples were already
rendered. A tweak in the viewport was needed to fix this.
Pull Request: https://projects.blender.org/blender/blender/pulls/118654
Oversight in e3d31b8dfb
While most situations would have other vertex groups set anyway (so this
probably wasn't noticed; it was only ignored if it is the only vertex
group used), at least in theory `cloth_uses_vgroup` could return false
even when `vgroup_shear` is set (thus skipping actually setting these
weights later).
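A hypothetical sketch of the kind of check described (made-up struct and
field names, not the actual cloth code):

```cpp
/* Hypothetical sketch with made-up names; not the actual cloth code. */
struct ClothSettingsSketch {
  int vgroup_mass = 0;
  int vgroup_struct = 0;
  int vgroup_bend = 0;
  int vgroup_shear = 0;
};

static bool cloth_uses_vgroup_sketch(const ClothSettingsSketch &settings)
{
  /* The bug: the `vgroup_shear` term was missing from this check, so if the
   * shear group was the only vertex group in use, the function returned false
   * and the shear weights were never applied. Shown here with the fix. */
  return settings.vgroup_mass != 0 || settings.vgroup_struct != 0 ||
         settings.vgroup_bend != 0 || settings.vgroup_shear != 0;
}
```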
When disabling/enabling reflection probes, the atlas texture can be
recreated, discarding the existing content of the texture. When this
happens, the world probe needs to be re-rendered.
Pull Request: https://projects.blender.org/blender/blender/pulls/118656
`refresh_catalogs()` for the "All" asset library effectively does the
same as iterating over all other asset libraries and calling
`get_asset_library()` on them. So doing both just performs the same work
twice.
Mistake in #118463.
Updating the catalogs of the "All" asset library would also reload
catalog data of the other asset libraries from disk. This wasn't
intended; it should only be done with an explicit load request (and on
a thread, to not block the main thread).
Area lights with zero spread were introduced in bf18032977. Such paths can
only be sampled with NEE (forward/BSDF sampling has zero probability of
generating them), so MIS should not be used.
This fixes the discrepancy when Direct Light Sampling is set to MIS vs. NEE.
Pull Request: #118584
Likewise, skip the tests update when --use-tests is not provided.
It was a bit of an ambiguous situation because the libraries and tests
are technically submodules. After some feedback, it seems better to
ignore the submodules for libraries and tests unless explicitly
requested.
Pull Request: https://projects.blender.org/blender/blender/pulls/118631
Caused by 0a633a4e07
NLA and driver tree-elements were not added to the outliner when the
"action" is unlinked from the object. This was due to a wrong `if`
condition preventing the execution of `expand_drivers`/`expand_NLA_tracks`.
Pull Request: https://projects.blender.org/blender/blender/pulls/118597