The selection engine uses several complex tricks to improve performance:
- Only draw objects whose bounding box intersects the selection
threshold;
- If neither the viewport nor the objects are "dirty", do not clear the
texture IDs, and only add objects that have not yet been drawn;
- Only update the depth buffer when a new object is drawn;
- Skip drawing entirely if no object is found.
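As a rough illustration (simplified to 2D, with hypothetical names
rather than the engine's actual code), the first trick boils down to a
rectangle intersection test per object:
```
/* Hypothetical sketch of the bounding-box culling trick: an object is
 * only drawn when its screen-space bounds overlap the selection area. */
struct Rect2D {
  int xmin, xmax, ymin, ymax;
};

static bool rects_intersect(const Rect2D &a, const Rect2D &b)
{
  return a.xmin <= b.xmax && a.xmax >= b.xmin &&
         a.ymin <= b.ymax && a.ymax >= b.ymin;
}

static bool object_needs_select_draw(const Rect2D &object_bounds,
                                     const Rect2D &select_area)
{
  return rects_intersect(object_bounds, select_area);
}
```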
These tricks were initially implemented so that this engine could also
be used for snapping.
That initial idea has since changed, and the engine is now only used to
select Vertices, Edges or Faces.
Given this limited use, the tricks bring no real benefit.
In fact, they are actively harmful with the Retopology Overlay, as they
force the depth buffer to be redrawn.
This commit removes these tricks and only keeps the ones that indicate
whether the drawing needs to be updated.
Pull Request: https://projects.blender.org/blender/blender/pulls/113308
This PR adds support for Intel ARC GPUs. Barriers inside non-uniform
control flow can stall the whole system on Intel ARC.
The cause is that a barrier is reached while some threads in the shader
have already exited. The barrier may keep waiting for a signal from the
exited threads, stalling the system.
Although some implementations can handle this, it is safer to limit the
number of HiZ levels.
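A minimal sketch of the kind of clamp involved (the constant and the
function name are illustrative assumptions, not the actual values):
```
#include <algorithm>

/* Hypothetical sketch: cap the number of HiZ mip levels so the
 * downsample shader never depends on barriers being reached by threads
 * that may have already exited. */
static int hiz_safe_level_count(int requested_levels)
{
  constexpr int max_safe_levels = 5; /* Assumed cap; the real limit may differ. */
  return std::min(requested_levels, max_safe_levels);
}
```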
Pull Request: https://projects.blender.org/blender/blender/pulls/113447
Use the typical combination of an "array utils" function called by an
attribute interpolation function. This helps move us towards a more
centralized implementation of attribute propagation that can be changed
and optimized more easily.
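For context, the "array utils" pattern referenced here is essentially a
gather through an index map. A generic sketch with illustrative names
(not Blender's actual API):
```
#include <vector>

/* Hypothetical sketch: copy source values into the destination through
 * an index map, the core step of most attribute interpolation and
 * propagation functions. `dst` must already be sized to match
 * `index_map`. */
template<typename T>
static void gather(const std::vector<T> &src,
                   const std::vector<int> &index_map,
                   std::vector<T> &dst)
{
  for (size_t i = 0; i < index_map.size(); i++) {
    dst[i] = src[index_map[i]];
  }
}
```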
When the first planar probe is added to the scene, or the last probe
is removed from it, the samples need to be reset. This removes
artifacts when only a single sample is used.
Pull Request: https://projects.blender.org/blender/blender/pulls/113440
The function `BLI_findlinkfrom` returns the link that is n steps
after the given start link. This did not work for negative steps.
This change makes it so that both positive and negative step
values work.
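A minimal standalone sketch of the fixed behavior (not the actual BLI
implementation):
```
struct Link {
  Link *next, *prev;
};

/* Hypothetical sketch: walk `step` nodes from `start`, following `next`
 * for positive steps and `prev` for negative ones; a step of 0 returns
 * `start` itself. */
static Link *findlink_from(Link *start, int step)
{
  Link *link = start;
  if (step >= 0) {
    while (link != nullptr && step-- > 0) {
      link = link->next;
    }
  }
  else {
    while (link != nullptr && step++ < 0) {
      link = link->prev;
    }
  }
  return link;
}
```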
Code to insert a new keyframe when drawing was included in
7e87435cf4, but the current condition fails to add the new frame:
`frame_key_at()` returns the last or the next drawing (instead of -1),
so no new drawing/frame was added in the draw-invoke function.
Now a keyframe is added if none exists yet at `current_number`.
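A sketch of the fixed check (the frame container and names are
illustrative stand-ins for the actual Grease Pencil data):
```
#include <map>

struct Drawing { /* drawing payload */ };

/* Hypothetical sketch: only insert a keyframe when no drawing exists
 * exactly at the current frame number, instead of reusing the last or
 * next drawing. */
static bool ensure_keyframe_at(std::map<int, Drawing> &frames, int current_number)
{
  if (frames.find(current_number) != frames.end()) {
    return false; /* A keyframe already exists exactly at this frame. */
  }
  frames.emplace(current_number, Drawing{});
  return true;
}
```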
Pull Request: https://projects.blender.org/blender/blender/pulls/113408
Move the three current 'status variables' (stop, update and progress)
into a single 'WorkerStatus' struct. This is cleaner and will allow for
future work in this area without having to edit tens of 'startjob'
callback signatures all the time.
No functional change expected here.
Note: jobs' specific internal code has been modified as little as
possible; in many cases the job's own data still just stores pointers
to these three values. Ideally, future refactoring will use a single
pointer to the shared `wmJobWorkerStatus` data instead.
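A sketch of the consolidated struct (the member names follow the commit
text; the actual definition may contain more fields):
```
/* Hypothetical sketch of the shared worker status, replacing the three
 * separate pointers previously passed to every 'startjob' callback. */
struct wmJobWorkerStatus {
  bool stop;      /* Set by the main thread to request cancellation. */
  bool do_update; /* Set by the worker when new results are available. */
  float progress; /* Worker progress in the [0, 1] range. */
};
```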
Pull Request: https://projects.blender.org/blender/blender/pulls/113343
Curve type conversion can change the size of some curves.
Changing the curve type can also delete built-in attributes that are
no longer needed and create new ones. All of this makes sense only
for the converted curves. The other, unselected curves should simply
copy all of their old attributes, both built-in and not. This fix
replaces the final copy, which incorrectly covered only non-built-in
attributes, with a correct one covering all attributes.
Pull Request: https://projects.blender.org/blender/blender/pulls/110683
When deleting the last planar probe in the scene, the color and
depth textures are resized to 0 layers, which isn't allowed.
This is fixed by adding an early exit and creating dummy textures.
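A minimal sketch of the guard (illustrative only):
```
/* Hypothetical sketch: never size the probe texture arrays to zero
 * layers; fall back to a single dummy layer when no planar probe
 * remains in the scene. */
static int probe_texture_layer_count(int planar_probe_count)
{
  if (planar_probe_count == 0) {
    return 1; /* Dummy layer: a zero-layer texture is not allowed. */
  }
  return planar_probe_count;
}
```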
Pull Request: https://projects.blender.org/blender/blender/pulls/113437
This PR contains the initial capture pipeline for planar probes.
Work is still required to generate the correct view to capture and to
include the result during ray tracing. These will be developed in a
separate PR.
This PR detects if a planar probe is active in the scene. If this is
the case the planar probe pipeline will be activated. During rendering
this is done by querying the depsgraph; during viewport drawing this
is done during sync. If a planar probe is detected and the pipeline
wasn't active yet, the pipeline will be activated and the sampling
will be reset to ensure the pipeline is filled with all objects.
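A sketch of that activation logic (the function and flags are
illustrative, not the actual code):
```
/* Hypothetical sketch: activate the planar probe pipeline on demand and
 * reset sampling so the pipeline gets filled with all objects. */
static void sync_planar_probe_pipeline(bool probe_detected,
                                       bool &pipeline_active,
                                       bool &reset_sampling)
{
  if (probe_detected && !pipeline_active) {
    pipeline_active = true;
    reset_sampling = true;
  }
}
```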
Per object, the user can set whether the object is visible in planar
reflections.

For a reflection plane, the resolution and clipping offset can be set.
EDIT: The resolution option was removed because it was too complex to
implement in the little time we have at the moment.

Related to #112966
Co-authored-by: Clément Foucault <foucault.clem@gmail.com>
Pull Request: https://projects.blender.org/blender/blender/pulls/113203
Nodes that are scheduled can, in theory, be executed in any order. So
when there are many scheduled nodes, it can be beneficial to start
evaluating them in parallel.
Note that it is not very common in typical setups for many nodes to be
scheduled at the same time, because the evaluator uses a depth-first
heuristic to decide in which order to evaluate nodes. It can happen
more easily in generated node trees though.
Also, this change only has an effect in practice if none of the
scheduled nodes uses multi-threading internally, as that would also
trigger the use of multiple threads in the graph executor.
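As an illustrative sketch (not the executor's actual code), the
dispatch decision amounts to something like this, where
`run_in_task_pool` stands in for the real threading backend:
```
#include <functional>
#include <vector>

/* Hypothetical sketch: evaluate a single ready node on the current
 * thread, but hand multiple ready nodes to a task pool so they can run
 * in parallel. */
static void dispatch_scheduled_nodes(
    std::vector<std::function<void()>> &scheduled,
    void (*run_in_task_pool)(std::function<void()> fn))
{
  if (scheduled.size() == 1) {
    scheduled[0]();
  }
  else {
    for (std::function<void()> &node_fn : scheduled) {
      run_in_task_pool(std::move(node_fn));
    }
  }
  scheduled.clear();
}
```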
The goal here is to make it easier to use the socket declaration builder
for cases where the actual socket type is not known at compile time.
For that purpose, all the methods that are not dependent on the specific
socket type are moved to the base socket declaration builder.
A nice side effect of this is reduced template boilerplate, and more
code can be moved out of the header.
With this patch, one is now forced to put type-specific method calls before
generic method calls in a chain. For example `.default_value(...).supports_field()`
instead of `supports_field().default_value(...)`. In theory, we could keep
support for both orders but that would involve a lot of additional boilerplate
code. Enforcing this order is simple enough. Note that this limitation only
applies when chaining multiple method calls. This is still possible:
```
auto &decl = b.add_input<decl::Vector>("Value");
decl.supports_field();
decl.default_value(...);
```
Pull Request: https://projects.blender.org/blender/blender/pulls/113410
In some cases, processing events was modifying them; since there can be
multiple event consumers, manipulating events isn't correct.
Even though in practice this didn't cause issues, it's straightforward
not to do it, and it makes the logic easier to reason about.
IME editing would cast GHOST_TEventImeData to wmIMEData, then read and
write an additional member that doesn't exist in GHOST_TEventImeData.
In practice, struct padding likely prevented this from showing up as a
bug. Nevertheless, it's bad practice to rely on this.
- Make GHOST_TEventImeData read-only, move the is_ime_composing boolean
into the window.
- Add a static assert to ensure both structs are the same size (sketched below).
- Correct code comments.
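A sketch of that guard; the struct bodies here are placeholders, only
the assert pattern reflects what the commit adds:
```
/* Hypothetical stand-ins for the real GHOST/wm struct definitions. */
struct GHOST_TEventImeData { /* GHOST-side members... */ };
struct wmIMEData { /* Mirrored wm-side members... */ };

/* The kind of guard added: keep both structs layout-compatible. */
static_assert(sizeof(wmIMEData) == sizeof(GHOST_TEventImeData),
              "wmIMEData must remain the same size as GHOST_TEventImeData");
```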
This moves the thickness-from-shadow-map approximation out of the
lighting and shadowing loop. Instead of using the thickness from the
shadow tracing of each individual light before SSS evaluation, we
precompute the average thickness from all shadowed lights.
This is then mixed with the node tree thickness.
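A sketch of the precomputation (illustrative; the exact mix with the
node tree thickness is an assumption):
```
/* Hypothetical sketch: average the shadow-map thickness over all
 * shadowed lights once, instead of per light inside the lighting loop.
 * The 50/50 mix with the node tree thickness is an illustrative
 * assumption. */
static float precomputed_thickness(const float *light_thickness,
                                   int shadowed_light_count,
                                   float nodetree_thickness)
{
  float sum = 0.0f;
  for (int i = 0; i < shadowed_light_count; i++) {
    sum += light_thickness[i];
  }
  const float average =
      (shadowed_light_count > 0) ? sum / shadowed_light_count : 0.0f;
  return 0.5f * (average + nodetree_thickness);
}
```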
## SSS Translucency
This adds back SSS transmission support by using the aforementioned
thickness computation and applying the transmission profile to it.
The result is then applied on top of a flipped-normal LTC computation.
Pull Request: https://projects.blender.org/blender/blender/pulls/113401
This allows splitting shadow and light evaluation.
This is the first step towards deferred shadowing.
The closure types to evaluate can be set dynamically,
which means we can have arbitrary BSDF
evaluation inside the same shader.
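An illustrative sketch of dynamically selectable closure types (the
enum values are assumptions, not the actual ones):
```
/* Hypothetical sketch: a runtime bitmask selecting which closure types
 * a single evaluation shader handles. */
enum eClosureBits : unsigned int {
  CLOSURE_NONE = 0,
  CLOSURE_DIFFUSE = 1u << 0,
  CLOSURE_REFLECTION = 1u << 1,
  CLOSURE_REFRACTION = 1u << 2,
  CLOSURE_TRANSLUCENT = 1u << 3,
};

static bool closure_enabled(unsigned int active_closures, eClosureBits type)
{
  return (active_closures & type) != 0;
}
```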
This also contains some refactoring of `light_lib.glsl`
for more consistency and less clutter.
Note that this breaks SSS translucency,
as the shadow evaluation changes for it.
A new solution for this feature has yet to be found.
Pull Request: https://projects.blender.org/blender/blender/pulls/113257
A copy has to compare equal to itself and have the same hash
when it is supposed to be used as a reference in a hash table
like `VectorSet`.
Just making the hash not change during a copy, by hashing the
geometry component pointers instead of the geometry-set pointer,
does not work because of `geometry_set_from_reference`, which
assumes that changing the geometry set does not change the
hash of the reference.
For now, the solution is to just not use a hash table, as this
makes it easier to get correctness right. Instead, a
regular `Vector` is used to store all the references, which avoids
the need for a hash function.
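A sketch of deduplication without the hash table (illustrative; the
reference type and its equality are stand-ins):
```
#include <vector>

/* Hypothetical stand-in for the real reference type. */
struct InstanceReference {
  const void *data; /* e.g. an Object or Collection pointer. */
  bool operator==(const InstanceReference &other) const
  {
    return data == other.data;
  }
};

/* Hypothetical sketch: find-or-append by linear scan over a plain
 * vector, avoiding the need for a hash that stays stable across
 * copies. */
static int add_reference(std::vector<InstanceReference> &references,
                         const InstanceReference &reference)
{
  for (size_t i = 0; i < references.size(); i++) {
    if (references[i] == reference) {
      return int(i);
    }
  }
  references.push_back(reference);
  return int(references.size() - 1);
}
```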
This can now lead to some O(n^2) behavior when adding many
references. Fortunately, this is not too common yet, as usually
one has few references but many instances that use those.
It's still something that has to be solved at some point. It's
not clear yet what approach would work best:
* Reintroduce `VectorSet` for the references and properly update
the reference positions in the hash table after the references
have changed.
* Add a separate `Map<Object*/Collection*, int>` for the
deduplication.
* Do deduplication on the call-site of `add_reference` by building
a temporary map there.