Although the normal was normalized when evaluating the lighting, it is
often used for other purposes. In those cases, using the non-normalized
normal may be a source of errors.
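For illustration, a minimal sketch of the kind of re-normalization that
avoids this; the float3 type and helper here are placeholders, not
Blender's actual math API:

  // Illustrative only: re-normalize a normal before reusing it
  // outside the lighting evaluation.
  #include <cmath>

  struct float3 {
    float x, y, z;
  };

  static float3 normalized(const float3 &v)
  {
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    // Guard against degenerate normals to avoid propagating NaNs.
    if (len < 1e-8f) {
      return {0.0f, 0.0f, 1.0f};
    }
    return {v.x / len, v.y / len, v.z / len};
  }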
This was temporarily replaced with code that used the tool-system;
bring back the logic which cycles and toggles brushes.
Tool slots are used for the initial brush;
after that, toggle or cycle is used.
* Area and Workspace duplicate.
* Toggle Area Fullscreen
* Operator Search
* Workspace reorder to front/back (arrows help to know which direction means front/back)
Follow-up to my own changes in that area... Now we also enforce handling
shortcuts in case the relevant drawflag of the search button is set.
This should cover all cases, hopefully.
The Nearest Surface Point shrink method, while fast, is neither
smooth nor continuous: as the source point moves, the projected
point can both stop and jump. This causes distortions in the
deformation produced by the shrinkwrap modifier, and in the motion
of an animated object with a shrinkwrap constraint.
This patch implements a new mode which, instead of using the simple
nearest-point search, iteratively solves an equation for each triangle
to find a point whose interpolated normal points toward or away from
the original vertex. Non-manifold boundary edges are treated as
infinitely thin cylinders that cast normals in all perpendicular
directions.
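For illustration, here is a minimal sketch of one way such an iteration
can work: a fixed-point scheme over barycentric coordinates. The types,
names, and the exact scheme are assumptions for this sketch, not the
actual solver, which also has to clamp to triangle bounds and handle
the boundary-edge case:

  #include <cmath>

  struct float3 { float x, y, z; };

  static float3 sub(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
  static float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
  static float3 cross(float3 a, float3 b)
  {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
  }
  static float3 madd(float3 a, float3 b, float t)
  {
    return {a.x + b.x * t, a.y + b.y * t, a.z + b.z * t};
  }

  // Iterate: interpolate the normal at the current barycentric guess,
  // project the source point along that normal onto the triangle plane,
  // recompute barycentric coordinates, repeat. At convergence the vector
  // from the projected point to the source is parallel to the
  // interpolated normal there (pointing to or from the source).
  static bool project_to_smooth_normal(const float3 v[3], const float3 n[3],
                                       float3 source, float w[3])
  {
    w[0] = w[1] = w[2] = 1.0f / 3.0f;
    const float3 e0 = sub(v[1], v[0]), e1 = sub(v[2], v[0]);
    const float3 ng = cross(e0, e1); /* Geometric normal (unnormalized). */
    const float d00 = dot(e0, e0), d01 = dot(e0, e1), d11 = dot(e1, e1);
    const float denom = d00 * d11 - d01 * d01;
    for (int iter = 0; iter < 32; iter++) {
      /* Interpolated (smooth) normal at the current guess. */
      const float3 ni = {w[0] * n[0].x + w[1] * n[1].x + w[2] * n[2].x,
                         w[0] * n[0].y + w[1] * n[1].y + w[2] * n[2].y,
                         w[0] * n[0].z + w[1] * n[1].z + w[2] * n[2].z};
      const float ni_dot_ng = dot(ni, ng);
      if (std::fabs(ni_dot_ng) < 1e-12f) {
        return false; /* Normal parallel to the plane: no projection. */
      }
      /* Project the source along ni onto the triangle plane. */
      const float t = dot(sub(source, v[0]), ng) / ni_dot_ng;
      const float3 p = madd(source, ni, -t);
      /* Barycentric coordinates of p (may leave [0,1]; real code clamps). */
      const float3 e2 = sub(p, v[0]);
      const float d20 = dot(e2, e0), d21 = dot(e2, e1);
      const float w1 = (d11 * d20 - d01 * d21) / denom;
      const float w2 = (d00 * d21 - d01 * d20) / denom;
      const float change = std::fabs(w1 - w[1]) + std::fabs(w2 - w[2]);
      w[0] = 1.0f - w1 - w2;
      w[1] = w1;
      w[2] = w2;
      if (change < 1e-6f) {
        return true;
      }
    }
    return false;
  }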
Since this is useful for the constraint, and having multiple
objects with constraints targeting the same guide mesh is quite a
reasonable use case, the mesh boundary edge data is precomputed and
cached in the mesh rather than being calculated over and over again.
Reviewers: mont29
Differential Revision: https://developer.blender.org/D3836
Not all three sides of a tessellated mesh triangle are guaranteed
to be original mesh edges, so a somewhat complicated check is
required to detect which ones are real. It seems that until now
there was no utility function for that, only some example code.
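As a sketch of the idea (not the new utility function itself): an edge
of a tessellated triangle is a real mesh edge only if its two vertices
are consecutive in the original polygon's loop cycle. The function and
parameters below are hypothetical:

  // Illustrative only: `poly_verts` is the original polygon's vertex
  // cycle; (a, b) is one edge of a triangle produced by tessellation.
  static bool tri_edge_is_real(const int *poly_verts, int poly_len, int a, int b)
  {
    for (int i = 0; i < poly_len; i++) {
      const int v_curr = poly_verts[i];
      const int v_next = poly_verts[(i + 1) % poly_len];
      if ((v_curr == a && v_next == b) || (v_curr == b && v_next == a)) {
        return true;
      }
    }
    return false; /* Internal diagonal introduced by tessellation. */
  }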
Simple find_nearest relies on a heuristic for efficient culling of
the BVH tree, which involves a fast callback that always updates the
result, and the caller reusing the result of the previous find_nearest
to prime the process for the next vertex.
If the callback is slow and/or applies significant restrictions on
what kind of nodes can qualify for the result, the heuristic can't
work. Thus for such tasks it is necessary to order and prune nodes
before the callback at the BVH tree level, using a priority queue.
Since, according to code history, for simple find_nearest the
heuristic approach is faster, this mode has to be an option.
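A condensed sketch of such an ordered traversal; the node layout,
distance helper, and callback signature below are illustrative
assumptions, not the actual BVH tree API:

  #include <functional>
  #include <queue>
  #include <utility>
  #include <vector>

  struct BVHNode {
    float bb_min[3], bb_max[3];
    std::vector<const BVHNode *> children; /* Empty for leaf nodes. */
    int index;                             /* Primitive index for leaves. */
  };

  /* Squared distance from a point to an AABB (0 if inside). */
  static float dist_sq_to_aabb(const float co[3], const BVHNode *node)
  {
    float d = 0.0f;
    for (int i = 0; i < 3; i++) {
      if (co[i] < node->bb_min[i]) {
        const float t = node->bb_min[i] - co[i];
        d += t * t;
      }
      else if (co[i] > node->bb_max[i]) {
        const float t = co[i] - node->bb_max[i];
        d += t * t;
      }
    }
    return d;
  }

  /* Best-first search: nodes are popped in order of increasing
   * lower-bound distance, so traversal can stop as soon as the nearest
   * pending node cannot beat the current best -- even when the leaf
   * callback is slow or rejects most candidates. */
  static void find_nearest_ordered(
      const BVHNode *root, const float co[3],
      const std::function<void(int index, float &best_dist_sq)> &leaf_cb,
      float &best_dist_sq)
  {
    using Entry = std::pair<float, const BVHNode *>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> queue;
    queue.push({dist_sq_to_aabb(co, root), root});

    while (!queue.empty()) {
      const auto [dist, node] = queue.top();
      queue.pop();
      if (dist >= best_dist_sq) {
        break; /* Everything still queued is at least this far: prune all. */
      }
      if (node->children.empty()) {
        leaf_cb(node->index, best_dist_sq); /* May tighten the bound. */
      }
      else {
        for (const BVHNode *child : node->children) {
          queue.push({dist_sq_to_aabb(co, child), child});
        }
      }
    }
  }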
This reverts to the 2.79 behavior. The better solution would be to make
physics caching somehow simulate the skipped frames. But for now the more
important thing is to have working physics.
Ref T54943, T56352.
RNA's ViewLayer would present the 'first level' of the layer collection
as a list (collection property), when it is actually now only a single
item, same as the scene's master collection.
Note: did not try to update the view_layer python tests, those have
already been fully broken for quite some time I guess (they still assume
view_layer.collections to be mutable e.g.)...
"Type of active data to display and edit" is a bit too descriptive and takes
away from reading which context we're actually viewing. Also minor tweaks to
context description.
Now it is formulated in terms of deltas, and consists of the
following steps:
- Original displacement at the reshape level is backed up.
- New displacement is written by the reshape routines.
- The delta of the displacement is calculated.
- Deltas are propagated to the higher levels, which also includes
  their interpolation.
- Original displacement is restored.
- New interpolated displacements are added to the original ones.
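A condensed, self-contained sketch of this flow over flat per-level
arrays; interpolate_up() is a stand-in for the real displacement
interpolation, and all names here are illustrative:

  #include <cstddef>
  #include <vector>

  using Level = std::vector<float>;

  /* Placeholder nearest-value upsample; the real code interpolates
   * displacement grids. */
  static Level interpolate_up(const Level &src, std::size_t dst_size)
  {
    Level dst(dst_size);
    for (std::size_t i = 0; i < dst_size; i++) {
      dst[i] = src[(i * src.size()) / dst_size];
    }
    return dst;
  }

  /* `new_displacement` must match the reshape level's size. */
  static void reshape_propagate(std::vector<Level> &levels,
                                std::size_t reshape_level,
                                const Level &new_displacement)
  {
    Level &base = levels[reshape_level];
    const Level orig = base;  /* 1. Back up the original displacement. */
    base = new_displacement;  /* 2. Write the new displacement. */
    Level delta(base.size()); /* 3. Delta = new - original. */
    for (std::size_t i = 0; i < delta.size(); i++) {
      delta[i] = base[i] - orig[i];
    }
    for (std::size_t lvl = reshape_level + 1; lvl < levels.size(); lvl++) {
      /* 4. Propagate to higher levels via interpolation. */
      const Level d = interpolate_up(delta, levels[lvl].size());
      for (std::size_t i = 0; i < levels[lvl].size(); i++) {
        /* 5+6. Higher-level originals are kept, interpolated deltas
         * are added on top. */
        levels[lvl][i] += d[i];
      }
    }
  }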
This lays the groundwork for an upcoming change to use
Catmull-Clark smoothing for the deltas instead of linear
interpolation. Currently there are no changes for artists, this is
just preparation for upcoming work.
This is a nasty bug: the node does not get properly tagged as SSS
(sss_id is 0) but is still evaluated (so it tags the GPUMaterial as
having SSS). The sssProfile UBO is still declared, and we need to bind
something to it.
This is a quick workaround, but I don't see the point of making the
lighting functions more complex than they are now in order to optimize
this rather uncommon case.
The main use one can imagine for this is adding tweak controls to
parts of a model that are already deformed by multiple other major
bones. It is natural to expect such locations to deform as if the
tweaks aren't there by default; however currently there is no easy
way to make a bone follow multiple other bones.
This adds a new constraint that implements the math behind the Armature
modifier, with support for explicit weights, bone envelopes, and dual
quaternion blending. It can also access bones from multiple armatures
at the same time (mainly because it's easier to code it that way).
This also fixes dquat_to_mat4, which wasn't used anywhere before.
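For reference, a minimal sketch of the weighted matrix blending such
deformation performs, linear path only (envelopes and dual quaternion
blending omitted); the types and names are illustrative, not the
constraint's actual code:

  #include <cstring>

  struct Mat4 { float m[4][4]; };

  /* Blend per-bone deform matrices (each mapping rest space to posed
   * space) by their weights. */
  static void blend_bone_matrices(const Mat4 *deform_mats, const float *weights,
                                  int num_bones, Mat4 *r_result)
  {
    float total = 0.0f;
    std::memset(r_result, 0, sizeof(*r_result));
    for (int b = 0; b < num_bones; b++) {
      total += weights[b];
      for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
          r_result->m[i][j] += weights[b] * deform_mats[b].m[i][j];
        }
      }
    }
    /* Any leftover weight keeps the point at its rest position
     * (blend with the identity matrix). */
    if (total < 1.0f) {
      for (int i = 0; i < 4; i++) {
        r_result->m[i][i] += 1.0f - total;
      }
    }
  }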
Differential Revision: https://developer.blender.org/D3664
- Vertex & weight paint now use the 'blend' setting.
- Weight paint now has its own tool setting,
  since weight paint doesn't deal with color - we'll likely
  support different tools eventually.
Previously the brush names were used which had the limit that:
- Brush names that were deleted wouldn't show up in the toolbar.
- Naming collisions between user defined brushes and existing tools
broke tool selection.
Now brushes are created as needed when tools are selected.
Note: vertex/weight paint combine tool and blend modes;
this should be split out into a separate enum.
When you check for pending work before acquiring a lock that ensures you
are the only one actually doing the work, you have to redo the check
*after* acquiring said lock.
Otherwise, there is room for nasty random race condition issues...
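The pattern, sketched with standard C++ primitives (names are
illustrative):

  #include <atomic>
  #include <mutex>

  static std::mutex work_mutex;
  static std::atomic<bool> work_done{false};

  static void do_the_work() { /* ... expensive one-time setup ... */ }

  void ensure_work_done()
  {
    /* Unlocked fast path. */
    if (work_done.load(std::memory_order_acquire)) {
      return;
    }
    std::lock_guard<std::mutex> lock(work_mutex);
    /* Re-check *after* acquiring the lock: another thread may have
     * finished the work while we were waiting. */
    if (work_done.load(std::memory_order_relaxed)) {
      return;
    }
    do_the_work();
    work_done.store(true, std::memory_order_release);
  }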
If the user only needs insertion and removal from the top, there is
no need to allocate and manage separate HeapNode objects: the
data can be stored directly in the main tree array.
This measured a 24% FPS increase on a ~50% heap-heavy workload.
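A sketch of the idea: a binary min-heap whose entries live directly in
one contiguous array, with no per-node allocation. This mirrors the
concept only, not the actual implementation:

  #include <utility>
  #include <vector>

  struct SimpleHeap {
    struct Entry {
      float value;
      void *data;
    };
    /* Implicit binary tree: children of index i are 2i+1 and 2i+2. */
    std::vector<Entry> tree;

    void insert(float value, void *data)
    {
      tree.push_back({value, data});
      size_t i = tree.size() - 1;
      while (i > 0) { /* Sift up. */
        const size_t parent = (i - 1) / 2;
        if (tree[parent].value <= tree[i].value) {
          break;
        }
        std::swap(tree[parent], tree[i]);
        i = parent;
      }
    }

    /* Caller must ensure the heap is non-empty. */
    void *pop_min()
    {
      void *data = tree[0].data;
      tree[0] = tree.back();
      tree.pop_back();
      size_t i = 0;
      while (true) { /* Sift down. */
        const size_t l = 2 * i + 1, r = 2 * i + 2;
        size_t smallest = i;
        if (l < tree.size() && tree[l].value < tree[smallest].value) smallest = l;
        if (r < tree.size() && tree[r].value < tree[smallest].value) smallest = r;
        if (smallest == i) {
          break;
        }
        std::swap(tree[i], tree[smallest]);
        i = smallest;
      }
      return data;
    }
  };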
Reviewers: brecht
Differential Revision: https://developer.blender.org/D3898
Now that the 3d grid is infinite, antialiased, not occluded, and overlaid
on top of rendered view, being able to decrease its opacity to reduce
visual strain is a must.