Improve mmap handling of IO errors on WIN32.
Make mmap gracefully handle IO errors on Windows by replacing the
mapping with zeros, using a vectored exception handler when an
EXCEPTION_IN_PAGE_ERROR is raised. This is similar to how such errors
are handled on non-Windows platforms.
On Windows, this is implemented by first creating a placeholder
allocation and then mapping the file into it. When an error occurs, the
exception handler unmaps the file, keeping the placeholder intact, and
creates an anonymous mapping into it, after which execution can
continue.
Since some required functions don't exist on older Windows versions,
the error handling will only work on Windows 10, version 1803 or newer.
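Roughly, the mechanism looks like the following sketch, simplified
from the real code (which tracks all of its mappings rather than a
single global one):

```cpp
/* Simplified sketch; requires Windows 10, version 1803 or newer. */
#include <windows.h>

static void *g_base = nullptr; /* Base address of the mapping. */
static size_t g_size = 0;      /* Size of the mapping. */

static void *map_file_with_placeholder(HANDLE file_mapping, const size_t size)
{
  /* Reserve address space as a placeholder only. */
  g_base = VirtualAlloc2(GetCurrentProcess(),
                         nullptr,
                         size,
                         MEM_RESERVE | MEM_RESERVE_PLACEHOLDER,
                         PAGE_NOACCESS,
                         nullptr,
                         0);
  g_size = size;
  /* Map the file into the placeholder. */
  return MapViewOfFile3(file_mapping,
                        GetCurrentProcess(),
                        g_base,
                        0,
                        size,
                        MEM_REPLACE_PLACEHOLDER,
                        PAGE_READONLY,
                        nullptr,
                        0);
}

static LONG WINAPI io_error_handler(EXCEPTION_POINTERS *info)
{
  if (info->ExceptionRecord->ExceptionCode != EXCEPTION_IN_PAGE_ERROR) {
    return EXCEPTION_CONTINUE_SEARCH;
  }
  /* Unmap the failing file view, keeping the placeholder intact. */
  UnmapViewOfFile2(GetCurrentProcess(), g_base, MEM_PRESERVE_PLACEHOLDER);
  /* Replace it with zero-filled anonymous memory. */
  VirtualAlloc2(GetCurrentProcess(),
                g_base,
                g_size,
                MEM_RESERVE | MEM_COMMIT | MEM_REPLACE_PLACEHOLDER,
                PAGE_READWRITE,
                nullptr,
                0);
  /* Execution continues by retrying the faulting access. */
  return EXCEPTION_CONTINUE_EXECUTION;
}

/* Registered once via AddVectoredExceptionHandler(1, io_error_handler). */
```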
Ref !139739
The issue here was that I wrongly assumed the locale would always be
available. Instead, just use our built-in function to add thousands
separators (which I didn't find last time).
Pull Request: https://projects.blender.org/blender/blender/pulls/147515
A regression in [0] removed checks for indices referencing themselves,
which need to be kept but can still be used as targets.
Restore this logic and fix another problem (#147022) where auto-merge
would not merge into the nearest vertex. This was especially noticeable
when the threshold was set to a large value, but would happen at
smaller values too.
[0]: bdae3e28a2
On its own, the main functionality of the Radial Tiling node
is the ability to divide a 2D Cartesian coordinate system into
as many radial segments as specified by the "Segments" input.
Each segment has its own affinely transformed coordinate system,
provided through the "Segment Coordinates" output, which can be
used to tile textures in a radially symmetric manner.
Additionally, several per-segment values are provided:
- The "Segment ID" output provides a unique index for every segment.
- The "Segment Width" output provides the width of each segment where
the Y-coordinate of the non-normalized "Segment Coordinates" output
is 0.
- The "Segment Rotation" output provides the rotation value of the
affine transformation of each segment's coordinate system.
The roundness of the coordinate lines of the "Segment Coordinates"
output can be controlled through the "Roundness" inputs.
This can be used to make the coordinate systems of the segments
a mix of Cartesian and polar coordinates.
Lastly, the lines of points of the "Segment Coordinates" output with
constant Y-coordinates have the shape of a polygon with rounded
corners, which can be used to procedurally create rounded polygons.
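For intuition, a minimal sketch of the core segment math (my names;
not the node's actual implementation):

```cpp
/* Minimal sketch of dividing a 2D coordinate system into N radial
 * segments; each point gets a segment index and is rotated into that
 * segment's local frame. Not the node's actual implementation. */
#include <cmath>

struct Segment {
  int id;     /* Unique index, matching the "Segment ID" idea. */
  float x, y; /* The point in the segment's local coordinate system. */
};

static Segment radial_segment(const float x, const float y, const int segments)
{
  constexpr float two_pi = 6.28318530718f;
  const float segment_angle = two_pi / segments;
  /* Polar angle in [0, two_pi). */
  float angle = std::atan2(y, x);
  if (angle < 0.0f) {
    angle += two_pi;
  }
  const int id = int(angle / segment_angle);
  /* Rotate the point into the segment's frame; this rotation is what
   * a "Segment Rotation" style output would expose. */
  const float rotation = (id + 0.5f) * segment_angle;
  const float c = std::cos(-rotation);
  const float s = std::sin(-rotation);
  return {id, x * c - y * s, x * s + y * c};
}
```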
Pull Request: https://projects.blender.org/blender/blender/pulls/127711
Previously the method of picking the "target" duplicate wasn't
deterministic from a user perspective.
The behavior has been changed so that:
- For a cluster of 3 or more vertices,
use the vertex closest to the centroid.
- For a cluster of 2, use the vertex with the lowest index.
This mitigates #78916, solving some cases where clusters have a
central vertex, although it can't be considered fully fixed as the
2-vertex case still doesn't work so well.
Added a BLI_kdtree_{N}d_calc_duplicates_cb function that lets the
caller choose the index to keep from a cluster of duplicates.
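A hypothetical usage sketch (the real signature in BLI_kdtree.h may
differ):

```cpp
#include <algorithm>

/* Hypothetical callback: given one cluster of duplicates, return the
 * index of the vertex to keep. */
static int choose_keep_index(void * /*user_data*/, const int *indices, const int indices_num)
{
  /* Keep the lowest index; a caller could instead pick the vertex
   * closest to the cluster's centroid. */
  return *std::min_element(indices, indices + indices_num);
}

/* Hypothetical call site:
 * BLI_kdtree_3d_calc_duplicates_cb(tree, distance, false, nullptr,
 *                                  choose_keep_index, duplicates); */
```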
Refactored from !145851.
Ref !146492
Co-authored-by: tariqsulley <tariqsulley3c@gmail.com>
The previous implementation used KDTree duplicate search with
BLI_kdtree_3d_calc_duplicates_fast(). The survivor was always
one of the input vertices, not the centroid of the cluster.
This caused cases where merging a line of vertices did not
collapse to their average position, resulting in jagged loops.
Now vertices within the threshold are clustered, their centroid
is computed, and the chosen survivor is snapped to this centroid.
This ensures predictable and consistent merge results.
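A minimal sketch of the new behavior (not the actual implementation):

```cpp
/* Snap the surviving vertex of a duplicate cluster to the cluster's
 * average position, so a merged line of vertices collapses to its
 * centroid instead of an arbitrary member. */
static void snap_survivor_to_centroid(float (*positions)[3],
                                      const int *cluster,
                                      const int cluster_num,
                                      const int survivor)
{
  float centroid[3] = {0.0f, 0.0f, 0.0f};
  for (int i = 0; i < cluster_num; i++) {
    for (int axis = 0; axis < 3; axis++) {
      centroid[axis] += positions[cluster[i]][axis];
    }
  }
  for (int axis = 0; axis < 3; axis++) {
    positions[survivor][axis] = centroid[axis] / cluster_num;
  }
}
```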
Ref !145851
When reporting a crash with an unknown exception we just display
UNKNOWN EXCEPTION, which is not as helpful as it could be. This
commit adds the actual exception code to all reports.
Also adds an MSVC-specific exception to the known list for easy
identification.
The exception record from #145762 goes from
ExceptionCode : UNKNOWN EXCEPTION
to
ExceptionCode : Microsoft C++ Exception (0xe06d7363)
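Schematically, the reporting becomes (simplified sketch; 0xE06D7363 is
the code MSVC uses internally for C++ exceptions):

```cpp
#include <windows.h>
#include <cstdio>

/* Map known SEH exception codes to a readable label, falling back to
 * printing the raw code instead of a bare "UNKNOWN EXCEPTION". */
static void report_exception_code(const DWORD code)
{
  const char *label = nullptr;
  switch (code) {
    case EXCEPTION_ACCESS_VIOLATION:
      label = "EXCEPTION_ACCESS_VIOLATION";
      break;
    case 0xE06D7363:
      label = "Microsoft C++ Exception";
      break;
    /* ... other known codes ... */
  }
  if (label) {
    printf("ExceptionCode : %s (0x%08lx)\n", label, (unsigned long)code);
  }
  else {
    printf("ExceptionCode : 0x%08lx\n", (unsigned long)code);
  }
}
```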
Functions for converting between the color types, as well as ostream
support, are now outside the classes.
Many files were changed to fix cases where direct includes for headers
were missing.
Pull Request: https://projects.blender.org/blender/blender/pulls/145756
This PR adds code for setting the Quality of Service (QoS) level of the
process on Windows. This can, e.g., make sure that on hybrid systems
P-cores are utilized even when the app window is out of focus.
In the following cases, it is adjusted from the default behavior:
- In wm_jobs.cc the QoS level is raised while a priority job is running.
- The command line option `--qos [high|eco|default]` can be used to
change the QoS level of the process.
- By default, the QoS level is raised for the EEVEE performance tests,
as they check viewport rendering performance and would otherwise rely
on never going out of focus to avoid a downgraded QoS level. At the
time of landing this PR, they are created with an out-of-focus window
by default, so this change makes sure that they actually measure the
animation replay performance attainable during real-world use.
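For reference, a minimal sketch of the Windows call involved (the
helper name and enum are mine; the flag semantics are the Win32 API's):

```cpp
#include <windows.h>

enum class QoSLevel { High, Eco, Default };

static bool set_process_qos(const QoSLevel level)
{
  PROCESS_POWER_THROTTLING_STATE state = {};
  state.Version = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
  switch (level) {
    case QoSLevel::High:
      /* Control the flag but leave it off: never throttle execution speed. */
      state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
      state.StateMask = 0;
      break;
    case QoSLevel::Eco:
      /* Enable throttling: the process runs at EcoQoS. */
      state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
      state.StateMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
      break;
    case QoSLevel::Default:
      /* Empty mask: let the system manage QoS heuristically. */
      break;
  }
  return SetProcessInformation(
             GetCurrentProcess(), ProcessPowerThrottling, &state, sizeof(state)) != 0;
}
```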
Pull Request: https://projects.blender.org/blender/blender/pulls/144224
The pattern of transforming many position vectors at once is quite
common, both with separate source and result arrays, and when modifying
an array in place. In at least some cases we used a separate function
with a consistent name across files, but there were also many duplicate
parallel transform implementations.
This commit adds these utilities to the BLI_math_matrix.hh API and uses
them where many positions from contiguous arrays are transformed at
once. While there might be a more ideal location for these utilities,
it's consistent with 3936d7a93e, and certainly better
than duplicating them.
This also reduces the binary size of my build by 15 KB.
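A sketch of what such a utility can look like (names assumed; see
BLI_math_matrix.hh for the actual API):

```cpp
#include "BLI_math_matrix.hh"
#include "BLI_span.hh"
#include "BLI_task.hh"

namespace blender::math {

/* Transform many positions at once, parallelized over the array. */
inline void transform_points(const float4x4 &matrix,
                             const Span<float3> src,
                             MutableSpan<float3> dst)
{
  threading::parallel_for(src.index_range(), 4096, [&](const IndexRange range) {
    for (const int64_t i : range) {
      dst[i] = math::transform_point(matrix, src[i]);
    }
  });
}

}  // namespace blender::math
```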
Pull Request: https://projects.blender.org/blender/blender/pulls/145352
Previously, `VArrayImpl` had a `materialize` and `materialize_to_uninitialized`
function. Now both are merged into one with an additional `bool
dst_is_uninitialized` parameter. The same is done for the
`materialize_compressed` method, as well as for `GVArrayImpl`.
While this kind of merging is typically not ideal, it reduces the binary
size by ~200 KB while being basically free performance-wise. The cost of
this predictable boolean check is expected to be negligible even if only
very few indices are materialized. Additionally, in most cases, this
parameter does not even have to be checked, because for trivial types it
does not matter whether the destination array is already initialized
when overwriting it.
It saves this much binary size because quite a few implementations are
generated (e.g. with `VArray::from_func`) and a lot of code was
duplicated for each instantiation.
This changes only the actual `(G)VArrayImpl`, but not the `VArray` and `GVArray`
API which is typically used to work with virtual arrays.
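Schematically (heavily simplified from the actual `VArrayImpl`
interface, which uses `IndexMask`):

```cpp
#include <cstdint>
#include <new>
#include <type_traits>

template<typename T> class VArrayImplSketch {
 public:
  virtual T get(int64_t index) const = 0;

  /* One method now covers assigning over existing values as well as
   * constructing into uninitialized memory. */
  virtual void materialize(const int64_t size, T *dst, const bool dst_is_uninitialized) const
  {
    if constexpr (std::is_trivial_v<T>) {
      /* For trivial types, assignment and construction are identical,
       * so the parameter never has to be checked. */
      for (int64_t i = 0; i < size; i++) {
        dst[i] = this->get(i);
      }
    }
    else if (dst_is_uninitialized) {
      for (int64_t i = 0; i < size; i++) {
        new (dst + i) T(this->get(i));
      }
    }
    else {
      for (int64_t i = 0; i < size; i++) {
        dst[i] = this->get(i);
      }
    }
  }
};
```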
Pull Request: https://projects.blender.org/blender/blender/pulls/145144
When creating a directory recursively (via
`BLI_file_ensure_parent_dir_exists()` or `BLI_dir_create_recursive()`),
filter out errors that indicate that the directory already exists.
This is now also covered by a unit test.
I've also removed `BLI_assert(BLI_exists(dirname) == 0);` from
`dir_create_recursive()` (the workhorse for the above-mentioned
functions), for two reasons:
- The function is only used in an "ensure this directory exists"
context, so the directory existing should be fine, and
- the assertion doesn't guarantee that the subsequent call to
`mkdir()` actually succeeds. Race conditions between Blender and
other processes (potentially on other computers, when using
networked filesystems) will make such a precondition test
unreliable.
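The filtering itself is simple (POSIX-flavored sketch; the helper name
is hypothetical, and the real code additionally verifies that the
existing path is actually a directory):

```cpp
#include <cerrno>
#include <sys/stat.h>

/* Create a directory, treating "already exists" as success. */
static bool dir_create_if_needed(const char *dirname)
{
  if (mkdir(dirname, 0777) == 0) {
    return true;
  }
  /* Filter out the error indicating the directory already exists; any
   * other error is a real failure. */
  return errno == EEXIST;
}
```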
This is a followup of 00abaa571a.
Pull Request: https://projects.blender.org/blender/blender/pulls/145173
Replacing the "cyclic offsets" cache from cfb8696a73.
That was more information than was necessary in the end.
This is implemented by de-duplicating the existing "contains"
function that's implemented twice for boolean virtual arrays.
Pull Request: https://projects.blender.org/blender/blender/pulls/144761
Point Caches (used by the particle system, cloth, boids etc.) are now
always compressed, using zstd coupled with lossless data filtering.
- This produces smaller cache files _and_ is faster than the old
"Heavy" compression mode,
- and produces smaller data files at the same or slightly faster speed
than using no compression at all.
- There was not much difference between compression levels once data
filtering was added, so the option to pick them was removed.
- So there's no downside to always using compression, which makes for
a simpler UI.
- RNA change: removes the PointCache.compression property.
More details and cache size / performance numbers in the PR.
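As an illustration of the general idea only (Blender's actual filter
may differ), a lossless byte-transpose filter followed by zstd looks
like this:

```cpp
/* A byte-transpose groups the i-th byte of every float together, which
 * makes slowly-varying simulation data compress much better with zstd.
 * The filter is lossless: the transpose is trivially invertible. */
#include <zstd.h>
#include <cstdint>
#include <vector>

static std::vector<uint8_t> filter_and_compress(const float *values,
                                                const size_t values_num,
                                                const int level)
{
  const auto *src = reinterpret_cast<const uint8_t *>(values);
  std::vector<uint8_t> shuffled(values_num * sizeof(float));
  for (size_t i = 0; i < values_num; i++) {
    for (size_t b = 0; b < sizeof(float); b++) {
      shuffled[b * values_num + i] = src[i * sizeof(float) + b];
    }
  }
  std::vector<uint8_t> out(ZSTD_compressBound(shuffled.size()));
  const size_t out_size = ZSTD_compress(
      out.data(), out.size(), shuffled.data(), shuffled.size(), level);
  if (ZSTD_isError(out_size)) {
    return {};
  }
  out.resize(out_size);
  return out;
}
```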
Pull Request: https://projects.blender.org/blender/blender/pulls/144356
Depend only on `getcwd` and `chdir` rather than attempting to also
consider the `PWD` environment variable.
There are situations where these differ, most easily seen when running
the Python tests. Calling `pathlib.Path.cwd()` will return something
like `/home/blender/git/blender-vexp/build_asserts/tests/python` while
PWD is `/home/blender/git/blender-vexp/build_asserts`. In this case,
calling `getcwd` will match what Python shows.
Additionally, this now matches what Windows and Mac do for these
affected APIs.
Pull Request: https://projects.blender.org/blender/blender/pulls/144235
Waveform, Parade and Vectorscopes were calculated by copying the
rendered image, transforming it into display space, and calculating
the scope from that. On large resolutions, this copy+transform+free
of the image was taking up the majority of the time, especially for
the default case where the display transform is a no-op.
Change the code so that display transform, if needed, is done directly
inside scope calculation, without needing a full-size temporary image.
Additionally, the vectorscope calculation was single-threaded.
Multi-thread it by doing a parallel reduction, where each job
calculates its own scope image, and the results are merged. Since the
job payload is fairly large (512x512 bytes), make jobs pretty large
(256k pixels each).
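The reduction pattern, sketched with assumed names (the binning
stand-in is not the real chroma math):

```cpp
#include "BLI_math_vector_types.hh"
#include "BLI_task.hh"
#include <algorithm>
#include <cstdint>
#include <vector>

using blender::float4;
using blender::IndexRange;
using blender::Span;
namespace threading = blender::threading;

/* Stand-in for the actual RGB -> chroma (UV) binning. */
static int bin_of(const float4 &pixel)
{
  const int u = std::clamp(int(pixel.x * 511.0f), 0, 511);
  const int v = std::clamp(int(pixel.y * 511.0f), 0, 511);
  return v * 512 + u;
}

static std::vector<uint32_t> calc_vectorscope(const Span<float4> pixels)
{
  const std::vector<uint32_t> identity(512 * 512, 0);
  return threading::parallel_reduce(
      pixels.index_range(),
      256 * 1024, /* Large grain size: each job carries a whole scope image. */
      identity,
      [&](const IndexRange range, std::vector<uint32_t> scope) {
        for (const int64_t i : range) {
          scope[bin_of(pixels[i])]++;
        }
        return scope;
      },
      [](std::vector<uint32_t> a, const std::vector<uint32_t> &b) {
        /* Merge two partial scope images by summing their bins. */
        for (size_t i = 0; i < a.size(); i++) {
          a[i] += b[i];
        }
        return a;
      });
}
```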
Time (in ms) taken to calculate scope at 4K resolution (Ryzen 5950X,
Windows). Default color management settings:
- Waveform, PNG/SDR: 5.5 -> 5.2
- Waveform, EXR/HDR: 33.5 -> 10.3
- Vectorscope, PNG/SDR: 32.4 -> 4.5
- Vectorscope, EXR/HDR: 53.2 -> 9.8
Timings when additional color space management is needed (display
space set to Display P3, sequencer kept at sRGB):
- Waveform, PNG/SDR: 29.5 -> 10.9
- Waveform, EXR/HDR: 67.6 -> 10.9
- Vectorscope, PNG/SDR: 56.8 -> 12.0
- Vectorscope, EXR/HDR: 85.9 -> 13.4
This also fixes calculation of the waveform / vectorscope on float
(HDR) images that have an alpha channel; the scope was wrongly
calculated on premultiplied color values, which was not consistent
with how it was calculated on byte images.
Pull Request: https://projects.blender.org/blender/blender/pulls/144059
This fixes an edge case in how the max acceptable edit distance is calculated
for deletions in fuzzy search, wherein the padding added to the max error count
could be negative when the query was longer than the matched term, producing a
max distance of zero.
This came up in chat a while back, where someone noted that, in the compositor,
searching for "bluir" wouldn't return the blur nodes; "blu" worked, "blui"
worked, but "bluir" returned no results, despite all three having equal edit
distances (1 insertion, 1 substitution, and 1 deletion, respectively). The edit
distance metrics themselves are calculated correctly; the issue was just with
how the distance threshold was set.
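Schematically, the fix amounts to clamping (all names here are
hypothetical, not the actual search code):

```cpp
#include <algorithm>

/* Hypothetical sketch of the threshold logic. Previously the deletion
 * padding could go negative when the query was longer than the matched
 * term, collapsing the max acceptable distance to zero. */
static int max_acceptable_distance(const int query_len, const int term_len)
{
  const int base_errors = query_len / 4; /* Hypothetical error budget. */
  const int deletion_padding = std::max(0, term_len - query_len);
  return base_errors + deletion_padding;
}
```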
Pull Request: https://projects.blender.org/blender/blender/pulls/143741
Since [0], removing degenerate points at the beginning of the hull
would re-order points so the last were moved to the beginning.
While this isn't an error, having the resulting hull *sometimes*
re-order its result based on internal error correction isn't ideal.
Document that the first point in the hull has the lowest Y value and
update tests to ensure this.
Also correct the doc-string regarding the hull's cross-product
and test that this works as documented.
[0]: 87f9fd8fb3
Resolve an error in `BLI_convexhull_2d` where *almost* overlapping
points could result in the hull including *concave* points.
This tended to happen with larger polygons in the range of 100-500.
The regression was likely caused by [0], since that optimization
relies on the input not having any concave coordinates.
[0]: 888c4d0766