This reverts commit 19222627c6.
Something went wrong here: it seems this commit merged the main branch
into the release branch, which should never be done.
This reverts commit 68181c2560.
I merged 3.6 into 3.5 by mistake. Basically I had a PR against main,
then changed it at the last minute to be against 3.5 via the
web interface, unaware that I shouldn't do that without updating the
patch.
Original Pull Request: #104889
Note that the node group has its socket names
translated, while the built-in nodes don't.
So we need to use `data_` for the built-in node names
and for the sockets of the created node groups.
Pull Request #104889
This better aligns with OSX/Linux warnings.
Although `__pragma(warning(suppress:4100))` is not the same as
`__attribute__((__unused__))` in gcc (which only affects the declaration
it is attached to, rather than the whole line), it still seems better to
use it than to hide the warning entirely.
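For illustration, a per-compiler wrapper could look roughly like this (a
minimal sketch with a hypothetical macro name, not Blender's actual macro):

```
/* Hypothetical macro, for illustration only. MSVC's suppress pragma applies
 * to the next line of code; GCC's attribute applies only to the declaration
 * it is attached to. */
#if defined(_MSC_VER)
#  define SUPPRESS_UNUSED_WARNING __pragma(warning(suppress : 4100))
#else
#  define SUPPRESS_UNUSED_WARNING
#endif
```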
Using larger integer types allows for more efficient code, because we
can use the hardware better. Instead of working on individual bytes,
the code can now work on 8 bytes at a time. We don't really benefit
from this immediately but I'm planning to implement some more optimized
bit vector operations for #104629.
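As a rough sketch of the benefit (hypothetical function, not the actual BLI
bit vector code), a whole 64-bit word can be handled per iteration:

```
/* Hypothetical sketch: count set bits 8 bytes at a time using C++20's
 * std::popcount, instead of looping over individual bytes. */
#include <bit>
#include <cstdint>

static int count_set_bits(const uint64_t *words, const int64_t word_count)
{
  int total = 0;
  for (int64_t i = 0; i < word_count; i++) {
    total += std::popcount(words[i]); /* One instruction for 8 bytes. */
  }
  return total;
}
```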
Pull Request #104658
Add a test to address the issue raised in #103913, where zero-area
triangles could be created from polygons that have co-linear edges
but are not degenerate.
During hair grooming in curves sculpt mode, it is very useful to prevent hair
strands from intersecting with the surface mesh. Unfortunately, this also
decreases performance significantly, so we don't want it to be turned on all the time.
The surface collision is used by the Comb, Pinch and Puff brushes currently.
It can be turned on or off on a per-geometry basis.
The intersection prevention quality of this patch is not perfect yet. This can
be improved over time using a better solver. Overall, perfect collision detection
at the cost of bad performance is not necessary for interactive sculpting,
because the user can fix small mistakes very quickly. Nevertheless, the quality
can probably still be improved significantly without large slow-downs, depending
on the use case. This can be done separately from this patch.
Pull Request #104469
Implements virtual shadow mapping as EEVEE-Next's primary shadow solution.
This technique aims to deliver very high-precision shadowing for many
lights while keeping the cost relatively low.
The technique works by splitting each shadow into tiles that are only
allocated & updated on demand by visible surfaces and volumes.
Local lights use cubemap projection with mipmap level of detail to adapt
the resolution to the receiver distance.
Sun lights use a clipmap distribution or a cascade distribution (depending on
which is better) to select the level of detail based on the distance to
the camera.
Current maximum shadow precision for local lights is about 1 pixel per 0.01
degrees.
For sun lights, the maximum resolution is based on the camera far clip
distance, which sets the coarsest clipmap.
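Roughly, the level selection can be pictured like this (a hedged sketch with
made-up names, not the actual EEVEE-Next code):

```
/* Hypothetical sketch: choose a clipmap level from the receiver's distance
 * to the camera, so farther receivers get coarser shadow tiles. */
#include <algorithm>
#include <cmath>

static int shadow_clipmap_level(const float distance_to_camera,
                                const float first_level_extent,
                                const int level_min,
                                const int level_max)
{
  /* Each successive clipmap level covers twice the extent of the previous. */
  const float ratio = std::max(distance_to_camera / first_level_extent, 1.0f);
  const int level = int(std::ceil(std::log2(ratio)));
  return std::clamp(level, level_min, level_max);
}
```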
## Limitations:
Alpha Blended surfaces might not get correct shadowing in some corner
cases. This is to be fixed in another commit.
While the resolution is greatly increased, it is still finite. It is virtually
equivalent to one 8K shadow map per shadow cube face and per clipmap level.
There is no filtering present for now.
## Parameters:
Shadow Pool Size: The amount of GPU memory, in bytes, to dedicate to the
shadow pool (allocated per viewport).
Shadow Scaling: Scale the shadow resolution. Base resolution should
target subpixel accuracy (within the limitation of the technique).
Related to #93220
Related to #104472
The ear clipping method used by polyfill_2d only excluded concave ears,
which meant that ears with exactly co-linear edges created zero-area
triangles even when convex ears were available.
While polyfill_2d prioritizes performance over *pretty* results,
there is no need to pick degenerate triangles when other candidates
are available. As noted in code-comments, callers that require higher
quality tessellation should use BLI_polyfill_beautify.
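The core of the idea, as a sketch (not the actual polyfill_2d
implementation):

```
/* Hypothetical sketch: detect a degenerate (zero-area) ear so it can be
 * skipped while convex, non-degenerate candidates remain. */
static float cross_tri_v2(const float a[2], const float b[2], const float c[2])
{
  return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

static bool ear_is_degenerate(const float a[2], const float b[2], const float c[2])
{
  /* Exactly co-linear edges produce a zero cross product. */
  return cross_tri_v2(a, b, c) == 0.0f;
}
```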
Straightforward port. I took the opportunity to remove some C vector
functions (ex: `copy_v2_v2`).
This makes some changes to DRWView to accommodate the alignment
requirements of the float4x4 type.
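For context, a minimal sketch of the alignment issue (assuming a 16-byte
aligned matrix type; not the actual definition):

```
/* Hypothetical sketch: a SIMD-friendly matrix type with stricter alignment
 * than a plain float[4][4]. Any struct embedding it (such as DRWView's
 * storage) must then satisfy the same alignment. */
struct alignas(16) float4x4 {
  float values[4][4];
};

static_assert(alignof(float4x4) == 16, "Embedding structs must respect this");
```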
`9c14039a8f4b5f` broke blenlib tests in release builds, due to how
`EXPECT_BLI_ASSERT` works (in release builds it just calls the given
function, so if that crashes then the test fails).
For now remove that check in the test.
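For reference, the macro behaves roughly like this (assumed shape, not the
exact definition):

```
/* Assumed shape of the macro: in debug builds the death test expects the
 * assert to fire; in release builds asserts compile out, so the expression
 * is simply evaluated and must not crash. */
#ifndef NDEBUG
#  define EXPECT_BLI_ASSERT(expr, msg) EXPECT_DEATH((expr), (msg))
#else
#  define EXPECT_BLI_ASSERT(expr, msg) (expr)
#endif
```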
If GetFileAttributesW returns an error, debug-assert that the failure
reason is file-not-found; any other reason is unexpected.
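A sketch of the intended behavior (simplified, not the actual Blender code):

```
/* Hypothetical sketch: treat file-not-found as a normal outcome, and
 * debug-assert on any other failure reason. */
#include <windows.h>
#include <cassert>

static bool path_exists(const wchar_t *path)
{
  const DWORD attrs = GetFileAttributesW(path);
  if (attrs == INVALID_FILE_ATTRIBUTES) {
    const DWORD reason = GetLastError();
    assert(reason == ERROR_FILE_NOT_FOUND || reason == ERROR_PATH_NOT_FOUND);
    return false;
  }
  return true;
}
```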
See D17204 for more details.
Differential Revision: https://developer.blender.org/D17204
Reviewed by Ray Molenkamp
Straightforward port. I took the opportunity to remove some C vector
functions (ex: `copy_v2_v2`).
This makes some changes to DRWView to accommodate the alignment
requirements of the float4x4 type.
After the Win32 API call GetFileAttributesW, check for
INVALID_FILE_ATTRIBUTES, which is returned on error,
usually when the file is not found.
See D17176 for more details.
Differential Revision: https://developer.blender.org/D17176
Reviewed by Ray Molenkamp
These are added for better component masking syntax.
This avoids the harder-to-read `float2(my_vec4)`.
This is not meant to be fully usable swizzling support but could be
extended in the future.
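For illustration, the accessors look roughly like this (simplified sketch,
not the actual shader-compat types):

```
/* Hypothetical sketch: an explicit xy() accessor for component masking. */
struct float2 {
  float x, y;
};

struct float4 {
  float x, y, z, w;
  float2 xy() const
  {
    return {x, y};
  }
};

/* Usage: `my_vec4.xy()` instead of `float2(my_vec4)`. */
```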
- Don't call exit() when memory allocation fails; while unlikely,
internal failures should not exit the application.
- Don't print a message when the directory is empty, as it's
unnecessarily noisy.
- Print errors to stderr & include the reason for opendir failing.
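The error reporting follows this general shape (a sketch with simplified
names, not the actual code):

```
/* Hypothetical sketch: include the reason opendir failed in the message. */
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <dirent.h>

static DIR *open_dir_or_report(const char *dirname)
{
  DIR *dir = opendir(dirname);
  if (dir == nullptr) {
    fprintf(stderr, "Failed to open dir \"%s\": %s\n", dirname, strerror(errno));
  }
  return dir;
}
```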
Regression in [0] caused by a change where path joining would
replace a forward slash with a back-slash when joining paths on WIN32.
Now the directory is always used as a prefix for the paths returned
by BLI_filelist_dir_contents which resolves the regression.
[0]: 9f6a045e23
The code in question comes from Shewchuk's triangle code, which
hasn't been updated to fix the out-of-buffer access problem
that ASAN finds in the delaunay unit test. The problem is benign:
the code would exit the loop before using the value fetched from
beyond the end of the buffer, but to make ASAN happy, I put in
a couple of extra tests to avoid fetching values that aren't going to be used.
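The shape of the fix, sketched with made-up names:

```
/* Hypothetical sketch: guard the speculative fetch so the value beyond the
 * end of the buffer is never read, even though the old loop exited before
 * actually using it. */
static float sum_adjacent(const float *buffer, const int n)
{
  float total = 0.0f;
  for (int i = 0; i < n; i++) {
    /* Extra test: skip the fetch that would read past the end. */
    if (i + 1 >= n) {
      break;
    }
    total += buffer[i] + buffer[i + 1];
  }
  return total;
}
```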
This can improve performance in some circumstances when there are
vectorized and/or unrolled loops. I especially noticed that this helps
a lot while working on D16970 (got a 10-20% speedup there by avoiding
running into the non-vectorized fallback loop too often).
This adds a new `Interpolate Curves` node. It allows generating new curves
between a set of existing guide curves. This is essential for procedural hair.
Usage:
- One has to provide a set of guide curves and a set of root positions for
the generated curves. New curves are created starting from these root
positions. The N closest guide curves are used for the interpolation
(see the sketch after this list).
- An additional up vector can be provided for every guide curve and
root position. This is typically a surface normal or nothing. This allows
generating child curves that are properly oriented based on the
surface orientation.
- Sometimes a point should only be interpolated using a subset of the
guides. This can be achieved using the `Guide Group ID` and
`Point Group ID` inputs. The curve generated at a specific point will
only take the guides with the same id into account. This allows e.g.
for hair parting.
- The `Max Neighbors` input limits how many guide curves are taken
into account for every interpolated curve.
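The interpolation weighting can be pictured like this (a hedged sketch with
illustrative names, not the actual node code):

```
/* Hypothetical sketch: weight the N closest guide curves by inverse
 * distance to the root position, then blend guide shapes by these weights. */
#include <algorithm>
#include <vector>

struct GuideCandidate {
  int curve_index;
  float distance_to_root;
};

static std::vector<float> guide_weights(std::vector<GuideCandidate> guides,
                                        const int max_neighbors)
{
  std::sort(guides.begin(), guides.end(), [](const GuideCandidate &a, const GuideCandidate &b) {
    return a.distance_to_root < b.distance_to_root;
  });
  const int n = std::min(max_neighbors, int(guides.size()));
  std::vector<float> weights(n);
  float sum = 0.0f;
  for (int i = 0; i < n; i++) {
    weights[i] = 1.0f / std::max(guides[i].distance_to_root, 1e-6f);
    sum += weights[i];
  }
  for (float &w : weights) {
    w /= sum; /* Normalize so the weights sum to 1. */
  }
  return weights;
}
```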
Differential Revision: https://developer.blender.org/D16642
The same logic from D17025 is used in other places in the curve code.
This patch uses the class for the evaluated point offsets and the Bezier
control point offsets. This helps to standardize the behavior and make
it easier to read.
Previously the Bezier control point offsets used a slightly different convention
where the first point was the first offset, just so the offsets could have the
same size as the number of points. However, two nodes used a helper function
to share the same `OffsetIndices` system, so switch to that there too.
That requires removing the subtraction by one to find the actual offset.
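For reference, the convention works like this (simplified sketch of the
`OffsetIndices` idea, not the real class):

```
/* Simplified sketch: `num + 1` offsets where group i spans
 * [offsets[i], offsets[i + 1]), so sizes come from a subtraction and the
 * first offset is always 0 rather than the first point. */
struct OffsetIndicesSketch {
  const int *offsets; /* Size: group count + 1; last element = total size. */

  int size_of_group(const int i) const
  {
    return offsets[i + 1] - offsets[i];
  }
};
```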
Also add const when accessing data arrays from curves, for consistency.
Differential Revision: https://developer.blender.org/D17038