This indirectly also fixes `blend2json.py` because it's built on top of
`blendfile.py`.
Main changes:
* Update .blend file header parsing to support the different header types.
* Support unpacking `LargeBHead8`.
* Use `namedtuple` instead of index access when accessing block headers,
because the order of items now differs between header types (see the
sketch after this list).
* Update comments that were either fully redundant or already outdated.
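A minimal sketch of the `namedtuple` approach (the field names and struct
layout below are illustrative only, not the actual `blendfile.py`
definitions):
```python
import struct
from collections import namedtuple

# Hypothetical block header layout, for illustration only; the real
# BHead / LargeBHead8 layouts differ.
BHead = namedtuple("BHead", ("code", "length", "old_address", "sdna_index", "count"))

def read_block_header(fileobj, fmt="<4sIQII"):
    data = fileobj.read(struct.calcsize(fmt))
    # Fields are accessed by name (header.code, header.count, ...), so a
    # different field order per header type can't silently break callers.
    return BHead(*struct.unpack(fmt, data))
```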
Pull Request: https://projects.blender.org/blender/blender/pulls/140195
This reduces the time needed to get to the first pixel
on screen by multithreading the builtin shader compilation.
We avoid doing this if subprocess compilation is on, as the overhead of
potentially partially starting all subprocesses is far greater than the
benefit of parallel compilation.
For some reason, the compilation is much slower when done async for
these shaders (on Metal ~200ms > ~1.2ms), so the savings might not be
substantial.
Mac M1: First frame 6s > 5s.
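A conceptual sketch of the threading decision (the real change is in
Blender's C++ GPU module; `compile_shader` below is a stand-in):
```python
from concurrent.futures import ThreadPoolExecutor

def compile_shader(source):
    # Stand-in for the real shader compiler.
    return f"compiled:{source}"

def compile_builtin_shaders(shader_sources, use_subprocess_compilation):
    if use_subprocess_compilation:
        # Partially starting every subprocess would cost more than the
        # parallel compilation saves, so keep the serial path.
        return [compile_shader(src) for src in shader_sources]
    # Otherwise compile the builtin shaders from a thread pool.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compile_shader, shader_sources))
```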
Pull Request: https://projects.blender.org/blender/blender/pulls/139627
Edit the language list to make it simpler to scan.
- Display languages in the form "Language (Variant)", such as
"English (US)" instead of "American English" and
"Portuguese (Brazil)" instead of "Brazilian Portuguese".
This allows alphabetical sorting by language first (see the sketch
after this list). This does not apply to endonyms (language names in
their own language).
- Use a dash instead of parentheses to separate the endonyms.
- Deduplicate languages (Automatic, American English, British
English), which all are in English and don't appear in another
language.
- Remove language categories as headers. They are replaced with
percentages in the language tooltips. The percentages are
generated in utils_languages_menu.py and stored in
locale/languages.
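A small illustration of why the "Language (Variant)" form sorts better
than the old names:
```python
new_style = ["Portuguese (Brazil)", "English (UK)", "English (US)"]
old_style = ["Brazilian Portuguese", "British English", "American English"]

print(sorted(new_style))
# ['English (UK)', 'English (US)', 'Portuguese (Brazil)']
print(sorted(old_style))
# ['American English', 'Brazilian Portuguese', 'British English']
# With the old names, the two English variants don't end up next to
# each other.
```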
Co-authored-by: Bastien Montagne <bastien@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/140087
The Extend Bounds input has no effect when the Fast Gaussian filter is
used. Similarly, it has no effect if the Bokeh Blur node is using
variable size. This is a known limitation that was simply never
implemented.
To fix this, we implement a general solution that works globally across
the node by pre-padding the inputs of the blur. This uses more memory
but also speeds up the base case when Extend Bounds is disabled, while
also reducing the binary size due to fewer blur specializations.
The variable size Bokeh Blur test was updated since Extend Bounds was
previously silently ignored.
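A minimal sketch of the pre-padding idea, assuming a generic `blur`
function (the actual change is in Blender's C++ compositor code):
```python
import numpy as np

def blur_with_extended_bounds(image, radius, blur):
    # Pad the input by the blur radius up front so the blurred result
    # can extend past the original bounds, instead of compiling a
    # dedicated "extend bounds" variant of every blur.
    padded = np.pad(image, pad_width=radius, mode="constant", constant_values=0.0)
    return blur(padded, radius)
```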
Pull Request: https://projects.blender.org/blender/blender/pulls/140192
Instead of allowing leaks when parsing arguments, always clean up
before calling exit(). This impacts the -a (animation player), --help &
--version arguments, as well as scripts executed via --python, which
meant tests that ran scripts could leak memory without raising an error
as intended.
This avoids having to suppress warnings & rationalize in code comments
when leaking memory is/isn't acceptable; any leaks from the animation
player are now reported as well.
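A minimal sketch of the pattern (the object and method names are
hypothetical, not Blender's actual argument-handling code):
```python
import sys

def handle_version_argument(app):
    try:
        print(app.version_string())
    finally:
        # Clean up before exiting instead of leaking and then having to
        # suppress or rationalize the leak warnings.
        app.free_all()
    sys.exit(0)
```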
This change exposed leaks: !140182, !140116.
Ref !140098
When switching render output between different formats (e.g.
ffmpeg video and png images), the previously used ffmpeg settings
were lost if audio codec was set to "No Audio" (which is the default).
Pull Request: https://projects.blender.org/blender/blender/pulls/139878
When creating a Combine Bundle node using link-drag-search from a
bundle-input-socket, the new node will be initialized to have all the sockets
that linked Separate Bundle nodes have. This uses the same mechanism that's used
for creating closure zones, so it works even if the Separate Bundle node is in
some nested node group.
Pull Request: https://projects.blender.org/blender/blender/pulls/140185
When calculating the cavity factor, it is possible for the relative
distance of all traversed connected vertices to be zero. This results in
a division by zero which does not get clamped correctly to the expected
[0.0, 1.0] bounds. Prior to 4.2, this would have had no effect, as the
processing of this vertex would have been skipped entirely. Due to
changes during the brush refactor, this flaw in the existing code was
exposed.
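A minimal sketch of the failure mode and the guard (not the actual
sculpt code; the neutral return value is an assumption):
```python
def cavity_factor(accumulated_offset, accumulated_distance):
    # If every traversed connected vertex contributed a zero relative
    # distance, dividing would blow up instead of staying within the
    # expected [0.0, 1.0] bounds.
    if accumulated_distance == 0.0:
        return 1.0  # assumed neutral value for the degenerate case
    factor = accumulated_offset / accumulated_distance
    return min(max(factor, 0.0), 1.0)
```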
Pull Request: https://projects.blender.org/blender/blender/pulls/140168
Prior to this commit, the sculpt scene and object were initialized only
once, at the very beginning of the test. This has the downside of slowly
changing the performance characteristics as more and more strokes are
performed, as the number of vertices within a given stroke may
eventually approach 1. To ensure a consistent measurement each time,
rebuild the scene between each brush stroke.
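The loop structure now looks roughly like this (the helpers are
stand-ins, not the actual test code):
```python
import time

def build_sculpt_scene():
    pass  # stand-in: rebuild the test mesh and enter sculpt mode

def perform_brush_stroke():
    pass  # stand-in: apply one brush stroke

def benchmark_strokes(num_strokes):
    timings = []
    for _ in range(num_strokes):
        # Rebuild the scene so every stroke starts from the same state;
        # previously this was done once, before the loop.
        build_sculpt_scene()
        start = time.perf_counter()
        perform_brush_stroke()
        timings.append(time.perf_counter() - start)
    return timings
```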
Pull Request: https://projects.blender.org/blender/blender/pulls/139960
This commit lowers the size of the mesh from approximately 2 million
verts to 250 thousand verts. This brings the graph more in line with the
mesh and multires use case, and is more helpful in measuring and
detecting real performance issues.
While the upper end of dyntopo can certainly be massively improved,
strokes that take seconds to complete are already unusable from a
user perspective.
This patch removes the older handle selection and transformation logic,
which was accessible through user preferences by switching off the
default "simple tweaking" mode in Editing -> Video Sequencer.
There was some initial bugginess with the new handle behavior, hence
the option to revert to legacy mode, but with recent updates this no
longer applies. With the new system, all selection workflows
are still possible -- this just drops older code to make things simpler.
No functional changes intended.
Pull Request: https://projects.blender.org/blender/blender/pulls/140031
Usually, this enum should only ever be compared with `==`.
Confusingly, however, to check if a strip is an effect, one must
'bitwise-and' it instead. This can backfire if e.g. one tries to do
`(strip->type & STRIP_TYPE_IMAGE)`, which doesn't work.
Make sure we only ever 'and' against the type enum when checking if
it is an effect. There was only this one case that didn't adhere.
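An illustration of why the bitwise check backfires (the values below
are made up, not Blender's actual `STRIP_TYPE_*` numbers):
```python
STRIP_TYPE_IMAGE = 1
STRIP_TYPE_META = 3  # made-up ordinal value

strip_type = STRIP_TYPE_META
print(bool(strip_type & STRIP_TYPE_IMAGE))  # True, yet the strip is not an image
print(strip_type == STRIP_TYPE_IMAGE)       # False: equality is the correct check
```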
Prior to this commit, the `object.subdivision_set` operator would
prevent actually applying a subdivision to a multires modifier when the
relative option was set. This is commonly accessed via Alt-1 / Alt-2 or
D / Shift-D in Sculpt mode.
However, when the multires modifier did not already exist, pressing
these keybinds would still create the modifier and further subdivide the
mesh.
To fix this, this commit adds a hidden property, `ensure_modifier`, to
the operator to indicate whether the keybind should create the modifier
or not.
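A hedged usage sketch of the operator with the new property (parameter
values chosen for illustration):
```python
import bpy

# Keymap-style call: subdivide relative to the current level, but do not
# create a Multires modifier if the object doesn't already have one.
bpy.ops.object.subdivision_set(level=1, relative=True, ensure_modifier=False)
```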
Pull Request: https://projects.blender.org/blender/blender/pulls/130254
Partial revert of 5102880f51. That commit decreased the pen tablet drag
threshold as pressure is added. Unfortunately, although this allowed a
quicker response to the start of a drag, it also increased the
difficulty of initiating a press when both actions are available. With
numerical inputs, for example, although dragging was more responsive,
clicking into the input to edit it was more difficult.
Pull Request: https://projects.blender.org/blender/blender/pulls/140066
Introduced with 23951e1b12
The multiplane scrape brush uses two separate distances, one in world
space and the other in local brush space. Both need to be filtered on
when determining the brush strength.
Pull Request: https://projects.blender.org/blender/blender/pulls/140143
The tabs in the sidebar are aligned to the content, which looks great
when region overlap is off, but when it isn't, it looks like the tabs
are attached to nothing. Solve this by drawing sidebar tabs as pills
when using region overlap. This matches the top-bar workspace tabs.
Pull Request: https://projects.blender.org/blender/blender/pulls/139951
Similar to the recent fix 457cccd964.
There was another code path which could pass in the wrong time into USD
for certain Mesh Sequence Cache scenarios.
I was not able to craft a faulty scenario by hand to observe a real
problem though. The scenario begins by importing a USD file needing a
Mesh Sequence Cache modifier and then attaching a particle system (like
Hair) to the object. This will trigger the specific check calling into
`can_use_mesh_for_orco_evaluation` with the wrong time.
This makes the code path more explicit and now passes the correct time
to USD in all cases, e.g. frame 28 vs. time 1.166666666666667 (24 fps).
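For reference, the two values in that example describe the same moment
in different units:
```python
fps = 24
frame = 28
print(frame / fps)  # 1.1666666666666667
```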
Pull Request: https://projects.blender.org/blender/blender/pulls/140092