BLI defines an ENUM_OPERATORS macro in BLI_utildefines.h for enums that
are meant to be used as "bit flags". This cleans up things related to
that macro:
- Move it into a separate BLI_enum_flags.hh header, instead of the
"random bag of things" that is its current home.
- Update it to no longer need a manually specified highest individual
bit value. That requirement was originally added in a31a87f89 (Oct
2020) to silence some UBSan warnings coming from GPU-related
structures (looking at the current GPU code, I don't think this
happens anymore). However, it caused actual user-visible bugs due to
incorrectly specified maximum enum bit values, and today 14% of all
usages spell out an incorrect highest individual bit value.
- I have reviewed all usages of operator ~ and none of them are used
for directly producing a DNA-serialized value; all the usages mask out
other bits, for which the new ~ behavior of simply flipping all bits
is fine.
- Make the macro define a flag_is_set() function to ease checking
whether bits are set on C++ enum class values; update existing code to
use it instead of the three different approaches that were in use.
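For illustration, a hedged sketch of what such a macro can expand to
for an enum class, including the flag_is_set() helper (simplified and
hypothetical, not the exact BLI_enum_flags.hh definition):

  #include <type_traits>

  /* Illustrative sketch only, not the actual Blender macro. */
  #define ENUM_OPERATORS_SKETCH(Enum) \
    inline Enum operator|(Enum a, Enum b) \
    { \
      using T = std::underlying_type_t<Enum>; \
      return Enum(T(a) | T(b)); \
    } \
    inline Enum operator&(Enum a, Enum b) \
    { \
      using T = std::underlying_type_t<Enum>; \
      return Enum(T(a) & T(b)); \
    } \
    inline Enum operator~(Enum a) \
    { \
      /* No highest-bit argument needed: simply flip all bits. */ \
      using T = std::underlying_type_t<Enum>; \
      return Enum(~T(a)); \
    } \
    inline bool flag_is_set(Enum flags, Enum flag) \
    { \
      using T = std::underlying_type_t<Enum>; \
      return (T(flags) & T(flag)) != 0; \
    }

  enum class MyFlag : int { A = 1 << 0, B = 1 << 1 };
  ENUM_OPERATORS_SKETCH(MyFlag)
  /* Usage: flag_is_set(MyFlag::A | MyFlag::B, MyFlag::A) == true. */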
Pull Request: https://projects.blender.org/blender/blender/pulls/148230
Improve mmap handling of IO errors on WIN32.
Make mmap gracefully handle IO errors on Windows by replacing the
mapping with zeros using a vectored exception handler when an
EXCEPTION_IN_PAGE_ERROR is raised. This is similar to how such errors
are handled on non-Windows platforms.
On Windows, this is implemented by first creating a placeholder
allocation and then mapping the file into it. When an error occurs, the
exception handler unmaps the file, keeping the placeholder intact, and
creates an anonymous mapping into it, after which execution can
continue.
Since some required functions don't exist on older Windows versions,
the error handling will only work on Windows 10, version 1803 or newer.
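A hedged sketch of the mechanism, assuming a single mapping tracked in
globals (the real code would look the mapping up from the fault
address; VirtualAlloc2, MapViewOfFile3 and UnmapViewOfFile2 are the
Windows 10 1803+ functions mentioned above):

  #include <windows.h>

  static void *g_map_base = nullptr;
  static size_t g_map_size = 0;

  static LONG CALLBACK in_page_error_handler(PEXCEPTION_POINTERS info)
  {
    if (info->ExceptionRecord->ExceptionCode != EXCEPTION_IN_PAGE_ERROR) {
      return EXCEPTION_CONTINUE_SEARCH;
    }
    const char *fault = (const char *)info->ExceptionRecord->ExceptionInformation[1];
    const char *base = (const char *)g_map_base;
    if (fault < base || fault >= base + g_map_size) {
      return EXCEPTION_CONTINUE_SEARCH; /* Not our mapping. */
    }
    /* Unmap the failing file view, keeping the placeholder reserved. */
    UnmapViewOfFile2(GetCurrentProcess(), g_map_base, MEM_PRESERVE_PLACEHOLDER);
    /* Replace the placeholder with zero-filled anonymous memory so the
     * faulting access can be retried and will now succeed. */
    VirtualAlloc2(GetCurrentProcess(), g_map_base, g_map_size,
                  MEM_RESERVE | MEM_COMMIT | MEM_REPLACE_PLACEHOLDER,
                  PAGE_READWRITE, nullptr, 0);
    return EXCEPTION_CONTINUE_EXECUTION;
  }

  static void *map_with_fallback(HANDLE file_mapping, size_t size)
  {
    /* Reserve a placeholder, then map the file view into it. */
    g_map_base = VirtualAlloc2(GetCurrentProcess(), nullptr, size,
                               MEM_RESERVE | MEM_RESERVE_PLACEHOLDER,
                               PAGE_NOACCESS, nullptr, 0);
    g_map_size = size;
    AddVectoredExceptionHandler(1, in_page_error_handler);
    return MapViewOfFile3(file_mapping, GetCurrentProcess(), g_map_base,
                          0, size, MEM_REPLACE_PLACEHOLDER, PAGE_READONLY,
                          nullptr, 0);
  }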
Ref !139739
Mesh invariants imply that edges and faces must be unique, so things
like reversed edges or duplicate faces with equal vertices are invalid.
For this reason, every time we generate new elements we have to ensure
that all new elements are unique among each other and against existing
elements. The recent refactor (ea875f6f32) introduced a new
algorithm to generate new mesh elements, and deduplication of new
elements was also a part of it. The problem is that the deduplication
only guaranteed that the original elements and new elements don't
overlap; deduplication among the new elements themselves was not
complete.
To solve the problem, both new triangles and new edges have to be
deduplicated; even when there are no duplicates, we have to build hash
sets just to know that.
Triangle deduplication is a special part of the triangulation code, but
edges are already handled elsewhere in the code base.
This refactor fixes this by replacing the original approach with one
which guarantees distinct faces and edges in the result.
Unfortunately, this fix increases the runtime of the node by 10x for a
simple cube with 500-vertex sides. It should be possible to improve the
performance again, but that requires more work.
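As a minimal sketch of the hash-set deduplication idea (illustrative
types and names, not the actual mesh code): new edges are canonicalized
so reversed duplicates collapse, then checked against both existing and
previously accepted elements:

  #include <algorithm>
  #include <cstdint>
  #include <unordered_set>
  #include <vector>

  struct Edge {
    int v1, v2;
  };

  /* Canonical key: (a, b) and (b, a) hash identically. */
  static uint64_t edge_key(const Edge &e)
  {
    const uint32_t lo = uint32_t(std::min(e.v1, e.v2));
    const uint32_t hi = uint32_t(std::max(e.v1, e.v2));
    return (uint64_t(hi) << 32) | lo;
  }

  static std::vector<Edge> deduplicate_edges(const std::vector<Edge> &existing,
                                             const std::vector<Edge> &added)
  {
    std::unordered_set<uint64_t> seen;
    for (const Edge &e : existing) {
      seen.insert(edge_key(e));
    }
    std::vector<Edge> result;
    for (const Edge &e : added) {
      /* Keep a new edge only if it duplicates neither an existing edge
       * nor a previously accepted new edge. */
      if (seen.insert(edge_key(e)).second) {
        result.push_back(e);
      }
    }
    return result;
  }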
Other work had to be done to enable this, so this depends on:
- [x] 157e7e0351
- [x] fa8574b80b
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/147634
The issue here was that I wrongly assumed the locale would always be
available. Instead, just use our built-in function to add thousands
separators (which I didn't find last time).
Pull Request: https://projects.blender.org/blender/blender/pulls/147515
Regression in [0] removed checks for indices referencing themselves,
which need to be kept but can still be used as targets.
Restore this logic and fix another problem (#147022) where auto-merge
would not merge into the nearest vertex. This was especially noticeable
when the threshold was set to a large value, but it happened at smaller
values too.
[0]: bdae3e28a2
On its own, the main functionality of the Radial Tiling node
is the ability to divide a 2D Cartesian coordinate system into
as many radial segments as specified by the "Segments" input.
Each segment has its own affinely transformed coordinate system,
provided through the "Segment Coordinates" output, which can be
used to tile textures in a radially symmetric manner.
Additionally, several per-segment values are provided:
- "Segment ID": a unique index for every segment.
- "Segment Width": the width of each segment where the Y coordinate of
the non-normalized "Segment Coordinates" output is 0.
- "Segment Rotation": the rotation value of the affine transformation
of each segment's coordinate system.
The roundness of the coordinate lines of the "Segment Coordinates"
output can be controlled through the "Roundness" inputs.
This can be used to make the coordinate systems of the segments
a mix of Cartesian and polar coordinates.
Lastly, the lines of points of the "Segment Coordinates" output with
constant Y-coordinates have the shape of a polygon with rounded
corners, which can be used to procedurally create rounded polygons.
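A hedged illustration of the core segmentation math (my reading of the
node's behavior, not its actual implementation): map a 2D coordinate to
a segment index, the segment's rotation, and segment-local coordinates:

  #include <cmath>

  struct SegmentSample {
    int segment_id;
    float segment_rotation; /* Rotation of this segment's frame. */
    float x, y;             /* Coordinates in the segment-local frame. */
  };

  static SegmentSample radial_segment(const float x, const float y,
                                      const int segments)
  {
    constexpr float pi = 3.14159265358979f;
    const float segment_angle = 2.0f * pi / float(segments);
    const float angle = std::atan2(y, x); /* In (-pi, pi]. */
    int id = int(std::floor((angle + pi) / segment_angle));
    if (id >= segments) {
      id = 0; /* Wrap the angle == pi boundary into the first segment. */
    }
    /* Rotate the point back by the segment's center angle so that
     * every segment shares the same local frame. */
    const float rotation = -pi + (float(id) + 0.5f) * segment_angle;
    const float c = std::cos(rotation), s = std::sin(rotation);
    return {id, rotation, x * c + y * s, -x * s + y * c};
  }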
Pull Request: https://projects.blender.org/blender/blender/pulls/127711
Previously the method of picking the "target" duplicate wasn't
deterministic from a user perspective.
The behavior has been changed so:
- For a cluster of 3 or more vertices,
use the vertex closest to the centroid.
- For a cluster of 2 use the lowest index.
This mitigates #78916, solving some cases where clusters have a
central vertex, although it can't be considered fixed since the
2-vertex case doesn't work as well.
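A minimal sketch of that selection rule (illustrative types, not the
actual implementation):

  #include <algorithm>
  #include <cstddef>
  #include <limits>
  #include <vector>

  struct Float3 {
    float x, y, z;
  };

  /* For clusters of 3+ vertices pick the one closest to the centroid;
   * for pairs pick the lowest index. */
  static size_t pick_survivor(const std::vector<Float3> &points,
                              const std::vector<size_t> &indices)
  {
    if (indices.size() == 2) {
      return std::min(indices[0], indices[1]);
    }
    Float3 centroid = {0.0f, 0.0f, 0.0f};
    for (const Float3 &p : points) {
      centroid.x += p.x;
      centroid.y += p.y;
      centroid.z += p.z;
    }
    const float n = float(points.size());
    centroid = {centroid.x / n, centroid.y / n, centroid.z / n};

    size_t best = indices[0];
    float best_dist_sq = std::numeric_limits<float>::max();
    for (size_t i = 0; i < points.size(); i++) {
      const float dx = points[i].x - centroid.x;
      const float dy = points[i].y - centroid.y;
      const float dz = points[i].z - centroid.z;
      const float dist_sq = dx * dx + dy * dy + dz * dz;
      if (dist_sq < best_dist_sq) {
        best_dist_sq = dist_sq;
        best = indices[i];
      }
    }
    return best;
  }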
Added a BLI_kdtree_{N}d_calc_duplicates_cb function that lets the
caller choose the index to keep from a cluster of duplicates.
Refactored from !145851.
Ref !146492
Co-authored-by: tariqsulley <tariqsulley3c@gmail.com>
The previous implementation used KDTree duplicate search with
BLI_kdtree_3d_calc_duplicates_fast(). The survivor was always
one of the input vertices, not the centroid of the cluster.
This caused cases where merging a line of vertices did not
collapse to their average position, resulting in jagged loops.
Now vertices within the threshold are clustered, their centroid
is computed, and the chosen survivor is snapped to this centroid.
This ensures predictable and consistent merge results.
Ref !145851
This adds a `VArray::from_std_func` next to the existing
`VArray::from_func`. It does almost the same but inserts some type
erasure by using `std::function`, which avoids the need to instantiate
the full virtual array implementation for every callable type. This is
a bit slower at run-time, but in practice there are cases where that
doesn't matter. Currently, this patch reduces binary size by ~35kb. Not
too much, but still good for such a simple change.
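A simplified stand-in for the idea (not the real VArray API): once the
callable is type-erased, two different lambdas share one instantiation:

  #include <cstdint>
  #include <functional>

  template<typename T> class FuncArray {
   public:
    FuncArray(int64_t size, std::function<T(int64_t)> fn)
        : size_(size), fn_(std::move(fn)) {}

    T operator[](int64_t index) const { return fn_(index); }
    int64_t size() const { return size_; }

   private:
    int64_t size_;
    std::function<T(int64_t)> fn_; /* Erases the lambda's type. */
  };

  /* Both arrays use the same FuncArray<int> instantiation, whereas a
   * template-on-callable design would generate code for each lambda. */
  int main()
  {
    FuncArray<int> squares(10, [](int64_t i) { return int(i * i); });
    FuncArray<int> evens(10, [](int64_t i) { return int(2 * i); });
    return squares[3] + evens[3]; /* 9 + 6 */
  }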
Pull Request: https://projects.blender.org/blender/blender/pulls/146011
When reporting a crash with an unknown exception we just display
UNKNOWN EXCEPTION, which is not as helpful as it could be. This
commit adds the actual exception code to all reports.
Also adds an MSVC-specific exception to the known list for easy
identification.
The exception record from #145762 goes from
ExceptionCode : UNKNOWN EXCEPTION
to
ExceptionCode : Microsoft C++ Exception (0xe06d7363)
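A small illustrative sketch of this kind of code-to-name mapping (not
the actual crash-report code; 0xE06D7363 is the code MSVC raises for
C++ exceptions):

  #include <windows.h>
  #include <cstdio>

  static void print_exception_code(const DWORD code)
  {
    switch (code) {
      case EXCEPTION_ACCESS_VIOLATION:
        printf("ExceptionCode : EXCEPTION_ACCESS_VIOLATION\n");
        break;
      case 0xE06D7363:
        printf("ExceptionCode : Microsoft C++ Exception (0x%08lx)\n",
               (unsigned long)code);
        break;
      default:
        /* Print the raw code instead of "UNKNOWN EXCEPTION". */
        printf("ExceptionCode : Unknown (0x%08lx)\n",
               (unsigned long)code);
        break;
    }
  }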
These don't really work as scene linear with an sRGB transfer function
for e.g. ACEScg; there are not enough bits. If you want wide gamut you
need to use float colors.
Pull Request: https://projects.blender.org/blender/blender/pulls/145763
Functions for converting between the color types, as well as ostream
support, are now outside the classes.
Many files were changed to fix cases where direct includes for headers
were missing.
Pull Request: https://projects.blender.org/blender/blender/pulls/145756
This PR adds code for setting the Quality of Service (QoS) level of the
process on Windows. This can, e.g., make sure that on hybrid systems
P-cores are utilized even when the app window is out of focus.
In the following cases, it is adjusted from the default behavior:
- In wm_jobs.cc the QoS level is raised while a priority job is running.
- The command line option `--qos [high|eco|default]` can be used to
change the QoS level of the process.
- By default, the QoS level is raised for the EEVEE performance tests,
as they check viewport rendering performance and would otherwise be
reliant on never going out of focus to not get a downgraded QoS level.
At the time of landing this PR, they are created with an out-of-focus
window by default. This PR makes sure that they actually measure the
animation replay performance attainable during real-world use.
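A hedged sketch of one way such a QoS level can be set on Windows,
via process power throttling (whether the PR uses exactly this API is
an assumption on my part):

  #include <windows.h>

  enum class QoSLevel { High, Eco, Default };

  static bool set_process_qos(const QoSLevel level)
  {
    PROCESS_POWER_THROTTLING_STATE state = {};
    state.Version = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
    switch (level) {
      case QoSLevel::High:
        /* Opt out of throttling: prefer performance (P) cores. */
        state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
        state.StateMask = 0;
        break;
      case QoSLevel::Eco:
        /* Opt in to throttling: allow efficiency cores. */
        state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
        state.StateMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
        break;
      case QoSLevel::Default:
        /* Let the system manage QoS, e.g. based on window focus. */
        state.ControlMask = 0;
        state.StateMask = 0;
        break;
    }
    return SetProcessInformation(GetCurrentProcess(),
                                 ProcessPowerThrottling,
                                 &state, sizeof(state)) != 0;
  }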
Pull Request: https://projects.blender.org/blender/blender/pulls/144224
The pattern of transforming many position vectors at once is quite
common, both with separate source and result arrays, and when modifying
an array in place. In some cases at least we used a separate function
with a consistent name across files, but there were also many duplicate
parallel transform implementations.
This commit adds these utilities to the BLI_math_matrix.hh API and uses
them where many positions from contiguous arrays are transformed at
once. While there might be a more ideal location for these utilities,
it's consistent with 3936d7a93e, and certainly better
than duplicating them.
This also reduces the binary size of my build by 15 KB.
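A minimal sketch of the pattern these utilities cover (simplified
types; the real API lives in BLI_math_matrix.hh and parallelizes the
loop over chunks):

  #include <array>
  #include <vector>

  struct Float3 {
    float x, y, z;
  };
  /* Column-major 4x4 matrix: m[column][row]. */
  using Float4x4 = std::array<std::array<float, 4>, 4>;

  static Float3 transform_point(const Float4x4 &m, const Float3 &p)
  {
    return {m[0][0] * p.x + m[1][0] * p.y + m[2][0] * p.z + m[3][0],
            m[0][1] * p.x + m[1][1] * p.y + m[2][1] * p.z + m[3][1],
            m[0][2] * p.x + m[1][2] * p.y + m[2][2] * p.z + m[3][2]};
  }

  /* In-place variant; a src/dst variant looks the same with separate
   * input and output arrays. */
  static void transform_positions(const Float4x4 &m,
                                  std::vector<Float3> &positions)
  {
    for (Float3 &p : positions) {
      p = transform_point(m, p);
    }
  }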
Pull Request: https://projects.blender.org/blender/blender/pulls/145352
Previously, `VArrayImpl` had a `materialize` and a
`materialize_to_uninitialized` function. Now both are merged into one
with an additional `bool dst_is_uninitialized` parameter. The same is
done for the `materialize_compressed` method, as well as for
`GVArrayImpl`.
While this kind of merging is typically not ideal, it reduces the binary size by
~200kb while being basically free performance wise. The cost of this predictable
boolean check is expected to be negligible even if only very few indices are
materialized. Additionally, in most cases, this parameter does not even have to
be checked, because for trivial types it does not matter if the destination
array is already initialized or not when overwriting it.
It saves this much binary size because there are quite a few
implementations generated with e.g. `VArray::from_func`, and a lot of
code was duplicated for each instantiation.
This changes only the actual `(G)VArrayImpl`, but not the `VArray` and `GVArray`
API which is typically used to work with virtual arrays.
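A hedged, simplified stand-in for the merged method (not the real
(G)VArrayImpl interface):

  #include <cstdint>
  #include <new>

  template<typename T> struct SpanImplSketch {
    const T *data;

    /* One method with a flag replaces two nearly identical virtual
     * methods; for trivial T both branches compile to the same copy. */
    void materialize(const int64_t size, T *dst,
                     const bool dst_is_uninitialized) const
    {
      for (int64_t i = 0; i < size; i++) {
        if (dst_is_uninitialized) {
          new (dst + i) T(data[i]); /* Construct into raw memory. */
        }
        else {
          dst[i] = data[i]; /* Assign over an existing value. */
        }
      }
    }
  };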
Pull Request: https://projects.blender.org/blender/blender/pulls/145144
When creating a directory recursively (via
`BLI_file_ensure_parent_dir_exists()` or `BLI_dir_create_recursive()`,
filter out errors that indicate that the directory already exists.
This is now also covered by a unit test.
I've also removed `BLI_assert(BLI_exists(dirname) == 0);` from
`dir_create_recursive()` (the workhorse for the above-mentioned
functions), for two reasons:
- The function is only used in an "ensure this directory exists"
context, so the directory already existing should be fine, and
- the assertion doesn't guarantee that the subsequent call to
`mkdir()` actually succeeds. Race conditions between Blender and
other processes (potentially on other computers, when using
networked filesystems) will make such a precondition test
unreliable.
This is a followup of 00abaa571a.
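A minimal POSIX-flavored sketch of the error filtering described above
(illustrative only; the real logic lives in BLI_dir_create_recursive()
and also handles Windows):

  #include <cerrno>
  #include <sys/stat.h>

  static bool ensure_dir(const char *path)
  {
    if (mkdir(path, 0755) == 0) {
      return true;
    }
    /* Another process may have created the directory in the meantime;
     * that is fine for "ensure it exists" semantics. */
    return errno == EEXIST;
  }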
Pull Request: https://projects.blender.org/blender/blender/pulls/145173
Replacing the "cyclic offsets" cache from cfb8696a73.
That was more information than was necessary in the end.
This is implemented by de-duplicating the existing "contains"
function that's implemented twice for boolean virtual arrays.
Pull Request: https://projects.blender.org/blender/blender/pulls/144761
The existing suppression from [0] wasn't working for release builds.
Explicitly suppress the warning instead of using a dummy check.
Ref !144898
[0]: 9ad46ebd2f
`asset_shelf::regiondata_duplicate()` first creates a shallow copy of
the `AssetShelf`, including its `AssetShelfSettings` member. So the
contained pointers point to the same memory.
While this is a rather unusual case for a copy assignment operator to
consider, I think this is fine since the API allows these shallow copies.
This is a bit of a consequence of mixing C and C++ style memory
management.
Pull Request: https://projects.blender.org/blender/blender/pulls/144613
Point Caches (used by the particle system, cloth, boids etc.) are now
always compressed, using zstd coupled with lossless data filtering.
- This produces both smaller cache files _and_ faster performance than
the old "Heavy" compression mode,
- and smaller data files at the same or slightly faster speed than
using no compression at all.
- There was not much difference between compression levels once data
filtering was added, so the option to pick them was removed.
- So there's no downside to always using the compression, which makes
for a simpler UI.
- RNA change: removes the PointCache.compression property.
More details and cache size / performance numbers in the PR.
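A hedged sketch of the general approach: run a simple lossless filter
that makes the data more compressible, then compress with zstd. The
byte-transpose filter here is an illustrative choice, not necessarily
the filter the point cache uses:

  #include <cstdint>
  #include <vector>
  #include <zstd.h>

  static std::vector<uint8_t> filter_and_compress(
      const std::vector<float> &values)
  {
    const size_t n = values.size();
    const uint8_t *src = reinterpret_cast<const uint8_t *>(values.data());
    /* Byte-transpose: group byte 0 of every float, then byte 1, etc.
     * Neighboring values in simulation data are similar, so this
     * produces long, highly compressible runs. */
    std::vector<uint8_t> filtered(n * sizeof(float));
    for (size_t i = 0; i < n; i++) {
      for (size_t b = 0; b < sizeof(float); b++) {
        filtered[b * n + i] = src[i * sizeof(float) + b];
      }
    }
    std::vector<uint8_t> out(ZSTD_compressBound(filtered.size()));
    /* Error handling (ZSTD_isError) omitted for brevity. */
    const size_t compressed_size = ZSTD_compress(
        out.data(), out.size(), filtered.data(), filtered.size(),
        ZSTD_CLEVEL_DEFAULT);
    out.resize(compressed_size);
    return out;
  }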
Pull Request: https://projects.blender.org/blender/blender/pulls/144356
Now that every socket type uses `SocketValueVariant` since #144355, the
parameter access code can be simplified a bit.
This also adds a new `GeoNodesMultiInput<T>` type that's used to access
the list of input values of multi-input sockets.
Pull Request: https://projects.blender.org/blender/blender/pulls/144409
Depend only on `getcwd` and `chdir` rather than attempting to also
consider the `PWD` environment variable.
There are situations where these differ, most easily seen when running
the Python tests. Calling `pathlib.Path.cwd()` will return something
like `/home/blender/git/blender-vexp/build_asserts/tests/python` while
PWD is `/home/blender/git/blender-vexp/build_asserts`. In this case,
calling `getcwd` will match what Python shows.
Additionally, this now matches what Windows and macOS do for these
affected APIs.
Pull Request: https://projects.blender.org/blender/blender/pulls/144235