The reason is mostly that dealing with type conversion in calling code is
not great: it makes the code less readable, and can generate hidden bugs in
case the original type changes and the atomic primitive calls are not
updated accordingly.
This reverts commit d749320e3b.
It's possible the container struct is larger; we could do sizeof() checks
that fall back to memmove, but would rather avoid complicating things.
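As a hypothetical illustration of the hidden-bug class mentioned above
(the counter type and call site are invented for the example, not taken
from the actual change):

  #include <stdint.h>

  /* Hypothetical example: a counter that used to be 32-bit. */
  typedef uint64_t counter_t; /* was uint32_t before a refactor */

  /* 32-bit atomic primitive, in the spirit of atomic_ops.h
   * (implemented here with a GCC/Clang builtin to stay self-contained). */
  static uint32_t atomic_add_and_fetch_u32(uint32_t *p, uint32_t x)
  {
    return __atomic_add_fetch(p, x, __ATOMIC_SEQ_CST);
  }

  void bump_counter(counter_t *counter)
  {
    /* The cast compiled silently while counter_t was uint32_t; after the
     * type change it only updates half of the 64-bit value, a classic
     * hidden bug that no compiler warning catches. */
    atomic_add_and_fetch_u32((uint32_t *)counter, 1);
  }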
Just removing it; such cases are not bottlenecks and are not worth the
complication of doing real threading with our own BLI_task.
Other (remaining) usages may be relevant and need a case-by-case check.
This was expected behavior for over-exposed lamps when the mode was
originally created for Tears of Steel. Turns out, there can be really bad
green screens in real production footage where only the green (or rather,
the screen) channel is over-exposed.
Tweaked the condition so we now use the least bright channel to check
whether the area has proper exposure or not.
Seems to work fine in tests, but further tweaks are possible.
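In rough terms, the tweak amounts to something like the following sketch
(not the actual keying code; the function and threshold are illustrative):

  #include <math.h>

  /* Sketch: treat a pixel as over-exposed only when even its least
   * bright channel exceeds the threshold, so footage where only the
   * screen channel is blown out is still keyed normally. */
  static int is_over_exposed(float r, float g, float b, float threshold)
  {
    const float min_channel = fminf(r, fminf(g, b));
    return min_channel > threshold;
  }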
CUDA 9.0.176 apparently caused some slowdown on high-end Pascal cards,
which can be mitigated by increasing the number of registers.
See https://developer.blender.org/F1142667 for a detailed comparison.
Was giving a visible difference when using a sharpness of 1.0 versus 0.999,
even though the results were expected to be really close to each other.
This SSS profile will probably be removed in the future in favor of the
more physically based Burley one, but for the time being there is nothing
wrong with fixing the existing code.
Regression from rB823bcf1689a3 (VPaint 2017 GSoC; this is not in the 2.79
release).
Also cleanup: using fake-array-ification to access struct members is
generally not a great idea, but when we already have a totally confusing,
broken struct layout, it is pure evil, as demonstrated here!
Found while investigating T53341.
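For reference, "fake-array-ification" refers to this kind of access
pattern (the struct and names here are hypothetical, purely for
illustration):

  /* Hypothetical struct, for illustration only. */
  typedef struct Weights {
    float weight_a;
    float weight_b;
    float weight_c;
  } Weights;

  static float get_weight(const Weights *w, int index)
  {
    /* Treats three consecutive members as an array. This relies on the
     * exact struct layout: any reordering, padding change or added
     * member silently turns this into a wrong-member or out-of-bounds
     * read. */
    return (&w->weight_a)[index];
  }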
- initialize the cube-size from the bounding box when it's not set.
- no longer wrap faces to keep them in 0-1 bounds;
  other projection methods don't do this, and calculating the scale
  prevents the UVs from being too far outside the view.
Was using the cursor position from within the menu, clicking on the same
position for every selected item (toggling).
Now operate on each selected outliner element, without toggling.
This commit introduces the following changes:
* Modified the poll callback on the "Update Paths" operator for bones
so that it only checks whether there are any bones that have motion paths
(instead of checking whether the active bone has paths); see the sketch
after this list.
This makes it easier to update paths without having to first select one
that has them - useful when the paths are all on hidden/hard-to-select bones.
* Added a read-only property, "has_motion_paths", to the animviz.motion_path
RNA struct, providing easier access to the internal flag used above.
This makes it possible for the UI to display the "Update" button without
having to check various bones for motion paths.
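Roughly, the new poll check boils down to something like this (a sketch
assuming Blender's editor headers; the function name is invented, while
the context calls and the bake flag follow the existing conventions):

  /* Sketch: poll succeeds when any baked bone paths exist on the pose,
   * rather than requiring the active bone to have them. */
  static int pose_update_paths_poll(bContext *C)
  {
    Object *ob = CTX_data_active_object(C);
    if (ob == NULL || ob->pose == NULL) {
      return 0;
    }
    /* internal flag, set whenever bone paths have been baked */
    return (ob->pose->avs.path_bakeflag & MOTIONPATH_BAKE_HAS_PATHS) != 0;
  }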
Notes:
* The flag being used in these changes already existed, and was only really
intended for internal use. However, since it was already used in many places
for determining if auto-update of all bone paths was needed (e.g. after certain
editing ops), it should be safe to use here too.
* The update_paths operator currently bakes all paths when activated, so
there's no loss of functionality from no longer checking whether the active
bone has any paths (i.e. previously we couldn't update only the active bone
either). That is still listed as a todo in the code.
There were two issues here (the first was the one reported):
1) Curve shape changes if multiple consecutive pairs of keyframes
are selected. The problem is that after the first pair is handled,
subsequent pairs get sampled on the basis of the modified curve.
2) With multiple separate "islands" selected, unselected points in between
would get ignored, causing the entire curve to get sampled.
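The general shape of the fix for issue (1) is a two-pass evaluation:
compute all new values from the unmodified curve first, then write them
back. A simplified sketch, where the Key struct and the neighbor average
are stand-ins for the real keyframe data and resampling:

  #include <stdlib.h>

  typedef struct Key { float time, value; int selected; } Key;

  static void smooth_selected(Key *keys, int count)
  {
    float *new_values = malloc(sizeof(float) * (size_t)count);

    /* Pass 1: compute from the original values only, so later keys
     * never see already-modified neighbors. */
    for (int i = 0; i < count; i++) {
      if (keys[i].selected && i > 0 && i < count - 1) {
        new_values[i] = 0.5f * (keys[i - 1].value + keys[i + 1].value);
      }
      else {
        new_values[i] = keys[i].value;
      }
    }
    /* Pass 2: apply, leaving unselected keys untouched. */
    for (int i = 0; i < count; i++) {
      keys[i].value = new_values[i];
    }
    free(new_values);
  }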
Previously, Mikktspace just bucketed the vertices based on one spatial coordinate and then ran full pairwise comparisons inside each bucket.
However, since models are three-dimensional, the bucketing has a massive false-positive rate, and since pairwise comparison is O(n^2), the merging process is very slow.
But since we only care about exactly identical vertices, there is a much
more efficient approach: we can just hash all values belonging to each
vertex and form buckets based on the hash.
Since the hash has 32 bits and considers all values, false positives are
very unlikely - and since both the hashing and the radix sort used for
bucketing are O(n), both asymptotic and real-world performance (as well as
code complexity) are significantly improved.
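A minimal sketch of the hashing idea (FNV-1a is used here as an example
hash; the actual implementation may use a different one):

  #include <stdint.h>
  #include <string.h>

  /* Hash every float belonging to a vertex byte-by-byte, so that only
   * hash-equal vertices need an exact comparison afterwards. */
  static uint32_t hash_vertex(const float *values, int count)
  {
    uint32_t h = 2166136261u; /* FNV-1a offset basis */
    for (int i = 0; i < count; i++) {
      unsigned char bytes[sizeof(float)];
      memcpy(bytes, &values[i], sizeof(bytes));
      for (size_t b = 0; b < sizeof(bytes); b++) {
        h = (h ^ bytes[b]) * 16777619u; /* FNV prime */
      }
    }
    return h;
  }

Vertices are then radix-sorted by hash value, which keeps the whole
grouping step O(n).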