Doing such writes leaves a dangling file in the installation directory, which
then gets packaged as well. Not only does this cause a random file to be
packaged for installation, it also makes the notarization process fail for a
not-so-clear reason.
The `ply_exporter_ply_data_test.SuzanneLoadPLYDataUV` fixture seems to be
unreliable and fails at random, even before this change. This makes it
hard to reliably get a green light on all tests.
Pull Request #105504
Don't use recursion where it's redundant. The recursive algorithm
can be dangerous due to stack growth and overflow. The
probability is low for something like frame nodes, but using a loop
is cheap, providing constant memory cost. A loop through the links
in a singly linked list is sufficient. The use of 2D vectors for
location mapping and other things can be handled separately.
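For illustration only, a minimal sketch of the recursion-to-loop change, using made-up node types rather than the actual structures touched by this commit:

```cpp
/* Hypothetical singly linked list node, for illustration only. */
struct Node {
  Node *next;
  int value;
};

/* Recursive traversal: each element adds a stack frame, which can overflow
 * for very long chains. */
static int sum_recursive(const Node *node)
{
  if (node == nullptr) {
    return 0;
  }
  return node->value + sum_recursive(node->next);
}

/* Iterative traversal: same result, but constant extra memory. */
static int sum_iterative(const Node *node)
{
  int total = 0;
  for (; node != nullptr; node = node->next) {
    total += node->value;
  }
  return total;
}
```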
Pull Request #105394
3e5ce23c99 introduced a regression in the case where the freed Main was part
of a list and was supposed to be removed from it, since calling
`BLI_remlink` does _not_ clear the `prev`/`next` pointers of the removed
link.
This commit also contains a few more tweaks to recent related b3f42d8e98
commit.
Pull Request #105485
While this behavior can be useful in some cases, it can also create
issues (as in one of my own recent commits, 3e5ce23c99), since it
implicitly keeps the removed linknode 'linked' to the listbase.
At least warn about it in the documentation of `BLI_remlink`.
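A minimal sketch of the pitfall, using stand-in types that mirror BLI's Link/ListBase layout (not the actual call site of the commit):

```cpp
/* Stand-ins mirroring BLI's Link/ListBase layout, for illustration only. */
struct Link {
  Link *next, *prev;
};
struct ListBase {
  Link *first, *last;
};

/* Behaves like `BLI_remlink`: detaches `link` from `list`, but does NOT
 * clear `link->prev` / `link->next`. */
static void remlink(ListBase *list, Link *link)
{
  if (link->prev) {
    link->prev->next = link->next;
  }
  else {
    list->first = link->next;
  }
  if (link->next) {
    link->next->prev = link->prev;
  }
  else {
    list->last = link->prev;
  }
  /* `link->prev` and `link->next` still point at the old neighbors here. */
}

/* Callers that keep the removed link around need to clear the pointers
 * themselves, otherwise the link stays implicitly 'linked' to the listbase. */
static void remlink_and_clear(ListBase *list, Link *link)
{
  remlink(list, link);
  link->prev = link->next = nullptr;
}
```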
This commit introduces a new Main boolean flag that marks it as invalid.
Higher-level file reading code checks this flag to abort the reading
process if needed.
This is an implementation of the #105083 design task.
Given the extent of the change, I do not think this should be
considered for 3.5 and previous LTS releases.
Drivers: Introduce Context Properties
The goal: allow accessing context-dependent data, such as the active scene
camera, without linking to a specific scene data-block. This is useful when,
for example, a geometry nodes setup needs to be aware of the camera position.
A possible work-around without changes like this is to have some scene
evaluation hook which updates driver variables for the currently evaluating
scene. But this raises an issue of linking: it is undesirable for the asset
scene to be linked into the shot file.
Surely, it is possible to have a post-evaluation handler clear the variables,
but it all starts to become quite messy, not to mention possible threading
conflicts.
Another way to achieve the goal would be to have the dependency graph somehow
parse the Python expression, where artists can (and already try to) type
something like:
depsgraph.scene.camera.matrix_world.col[3][0]
Not only is this tricky to implement properly and reliably, it also hits two
limitations:
- Currently the dependency graph can only easily resolve dependencies to an
RNA property.
- Some property accesses which are valid in Python are not considered valid
RNA properties by the existing property resolution functions:
`camera.matrix_world[3][0]` is a valid RNA property, but
`camera.matrix_world.col[3][0]` is not.
Using driver variables gives visual feedback when the path resolution
fails; there is no way to visualize errors in the Python expression itself.
This change introduces a new variable type: Context Property. This variable
type allows choosing between Active Scene and Active View Layer. The scene
and view layer are resolved at driver evaluation time, based on the current
dependency graph.
This allows creating a driver variable with the following configuration:
- Type: Context Property
- Context Property: Active Scene
- Path: camera.matrix_world[3][0]
The naming is a bit confusing. I tried my best to keep it clear, keeping two
aspects in mind: use UI naming when possible, and follow the existing
naming.
A lot of the changes are related to making the required data available
from the variable evaluation functions. It wasn't really clear what the data
would be, or what the scope of the changes would be, so this is done together
with the functional changes.
It seems that there is some variable evaluation logic duplicated in
`bpy_rna_driver.c`. This change does not touch it. It is not really clear why
this separate code path, with a much more limited set of supported target
types, is even needed.
There is also a possible change in the behavior of the dependency graph: it
now uses the ID of the resolved path when building relations for driver
variables. It used to use the variable's ID. In common cases they match, but
when going into nested data-blocks it is actually correct to create the
relation to the resolved ID. Not sure if there was some code to ensure that,
which can now be resolved. Also not sure whether it is still needed to ensure
the ID specified in the driver target is built as well; intuitively it is not
needed.
Pull Request #105132
Currently, curves have a default offset of 1.0, while the initial (and
expected) value is 0.0. When resetting this value to its default, the
curve is now modified unexpectedly. This is most noticeable with text
objects: when resetting the offset of a new text, it will look very
broken.
Internally the value is stored with an offset of 1.0, with a custom
setter and getter adding and subtracting 1.0 respectively. To give
this property a default of 0.0, we also need to add 1.0 to the initial
value upon curve creation.
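A sketch of the idea with hypothetical accessor names (the actual RNA callbacks and struct layout in the commit differ):

```cpp
/* Hypothetical curve data, for illustration only. The stored value is the
 * UI-facing value shifted by +1.0. */
struct CurveData {
  float offset;
};

static float curve_offset_get(const CurveData *cu)
{
  /* Expose the value shifted back, so the property default can be 0.0. */
  return cu->offset - 1.0f;
}

static void curve_offset_set(CurveData *cu, float value)
{
  cu->offset = value + 1.0f;
}

static void curve_init(CurveData *cu)
{
  /* The initial stored value must also be shifted by 1.0, so that the
   * exposed property starts at its 0.0 default. */
  cu->offset = 0.0f + 1.0f;
}
```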
Pull Request #105182
In order to properly translate UI messages, they sometimes need to be
disambiguated using translation contexts. Until now, node sockets had
no way to specify contexts, and collisions occurred.
This commit adds a way to declare contexts for each socket using:
`.translation_context()`
If no context is specified, the default null context is used.
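For illustration, a declaration could look roughly like this; the socket names and the translation context constant are made up, only `.translation_context()` itself is the new API:

```cpp
/* Hypothetical node declaration showing a per-socket translation context. */
static void node_declare(NodeDeclarationBuilder &b)
{
  /* Disambiguate "Scale" for translators; sockets without an explicit
   * context keep using the default null context. */
  b.add_input<decl::Float>("Scale").translation_context(BLT_I18NCONTEXT_AMOUNT);
  b.add_output<decl::Geometry>("Geometry");
}
```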
Pull Request #105195
This has the effect that the message is cut off at the end of the
first line. I copied the solution from other similar docstrings
elsewhere in the code.
As far as my regex-fu can tell, there are no other occurrences of this
in the codebase.
Issue reported by Joan Pujolar in #43295.
Pull Request #105474
**What are push constants?**
Push constants are a way to quickly provide a small amount of uniform data to shaders.
They should be much quicker than UBOs, but a huge limitation is the size of the data:
the spec only requires 128 bytes to be available for a push constant range.
**What are the challenges with push constants?**
The challenge with push constants is the limited available size. According to
the Vulkan spec, each platform must have at least 128 bytes reserved for push
constants. Current Mesa/AMD drivers support 256 bytes, but Mesa/Intel only
supports 128 bytes.
**What is our solution?**
Some of Blender's shaders use more than these limits. When more data is needed,
push constants will not be used; instead, the shader will be patched to use a
uniform buffer. This mechanism will be part of the Vulkan backend, and shader
developers should not see any difference at the API level.
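A rough sketch of the fallback decision, with made-up names rather than the actual backend code:

```cpp
#include <cstddef>

/* Vulkan only guarantees 128 bytes of push constant storage; devices may
 * expose more (e.g. 256 bytes on current Mesa/AMD). */
enum class PushConstantStorage { PUSH_CONSTANTS, UNIFORM_BUFFER };

static PushConstantStorage choose_push_constant_storage(size_t required_bytes,
                                                        size_t device_limit_bytes)
{
  /* When the packed uniforms fit within the device limit, use real push
   * constants; otherwise the shader is patched to read the same data from a
   * uniform buffer, transparently to the shader developer. */
  if (required_bytes <= device_limit_bytes) {
    return PushConstantStorage::PUSH_CONSTANTS;
  }
  return PushConstantStorage::UNIFORM_BUFFER;
}
```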
**Known limitations**
The current state of the Vulkan backend does not track resources that are in
the command queue. This patch includes some test cases that identified this
issue as well. See #104771.
Pull Request #104880
The up_axis_update/forward_axis_update logic was the same between
the two, so factor it out.
Also use the same time reporting logic in PLY as in OBJ/USD/Alembic.
When a render is triggered from Python and the render result is displayed,
it isn't updated because it wasn't tagged as invalid.
Pull Request #105480
If the texture image path in the MTL is a "quoted" absolute path, the importer will fail to find the
file. It was only attempting to un-quote the path for the relative case. Now we attempt to un-quote
in all cases.
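A small sketch of the idea (the helper name is made up; the importer's actual code differs):

```cpp
#include <string>

/* Strip one pair of surrounding double quotes, regardless of whether the
 * remaining path is absolute or relative. */
static std::string unquote_mtl_path(const std::string &path)
{
  if (path.size() >= 2 && path.front() == '"' && path.back() == '"') {
    return path.substr(1, path.size() - 2);
  }
  return path;
}
```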
Pull Request #105478
- Add missing braces for if statements
- Tweak variable naming to use snake case
- Use more common name for `MLoop`s of a face
- Use `std::move` when appending an array
- Use const for a few variable declarations
This commit implements three OSL microfacet closures that are needed to support
MaterialX: dielectric_bsdf, conductor_bsdf and generalized_schlick_bsdf.
Internally these map to existing microfacet closures; only the Fresnel term is
different.
Currently, we use the closure type to encode the type of microfacet distribution
(GGX/Beckmann/Sharp/MultiGGX), the lobes we're interested in
(Reflection/Refraction/both) AND the Fresnel type (None or Principled v1).
This results in the mess of dozens of options that we currently have. Since
adding Principled v2 and the MaterialX OSL closures will involve adding more
Fresnel types, this clearly doesn't scale.
But since the earlier Fresnel rework (D17101), the Fresnel type now only
matters in one place. This allows significantly cleaning up the closure type
handling. To do this, MicrofacetBsdfs now separately store their Fresnel type,
and instead of a single MicrofacetExtra we have one struct per Fresnel type
(unless no extra data is needed).
Further, instead of having one _setup() function per combination, the Fresnel
setup is also split into separate functions. This decouples the implementation
of new Fresnel terms from most of the Microfacet logic, and makes it a very
simple and clean operation.
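Schematically, the split looks something like this; the types and function names are illustrative stand-ins, not the actual Cycles API:

```cpp
/* Illustrative stand-ins only. */
enum class FresnelType { NONE, CONDUCTOR, GENERALIZED_SCHLICK };

struct MicrofacetBsdf {
  float roughness = 0.0f;
  FresnelType fresnel_type = FresnelType::NONE;
  void *fresnel_data = nullptr;
};

/* Distribution setup: knows nothing about the Fresnel term. */
static void microfacet_ggx_setup(MicrofacetBsdf &bsdf, float roughness)
{
  bsdf.roughness = roughness;
}

/* Fresnel setup: a separate step, so adding a new Fresnel term doesn't touch
 * the distribution code. */
static void microfacet_set_fresnel_conductor(MicrofacetBsdf &bsdf, void *conductor_data)
{
  bsdf.fresnel_type = FresnelType::CONDUCTOR;
  bsdf.fresnel_data = conductor_data;
}

/* A "conductor GGX" closure is then just the combination of the two calls,
 * instead of needing a dedicated _setup() function for every combination. */
static void setup_conductor_ggx(MicrofacetBsdf &bsdf, float roughness, void *conductor_data)
{
  microfacet_ggx_setup(bsdf, roughness);
  microfacet_set_fresnel_conductor(bsdf, conductor_data);
}
```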
This commit replaces the current Glass approach, where Glass is a virtual closure
that gets replaced with a Glossy and a Refractive closure, with a combined
closure that handles Fresnel after sampling the microfacet. That way, the Fresnel
term is more accurate since it accounts for the microfacet normal, not the
shading normal.
Also updates the BSDF sampling to use a 3D sampler now, since we need two
dimensions to pick the microfacet normal and then a third dimension to pick
reflection/refraction. This can also be used to get rid of the LCG in the
Principled Hair BSDF, which means we can remove it altogether once MultiGGX is
gone.
Also, "sharp" is now supported as a microfacet distribution in OSL, and 2
is supported as the "refract" argument to microfacet() in order to get glass.
Address some issues discussed in PR #104404:
- Vertex color options changed to None/sRGB/Linear, default is sRGB
to match the existing Python addon.
- Change name to "Stanford PLY" from "PLY" in the menu item.
- Default "Export UVs" to on.
- After importing vertex colors, they are set as enabled for render.
New (experimental) Stanford PLY importer and exporter written in C++.
Handles: vertices, faces, edges, vertex colors, normals, UVs. Both
binary and ASCII formats are supported.
Usually 10-20x faster than the existing Python-based PLY
importer/exporter.
Additional notes compared to the previous Python addon:
- Importing point clouds with vertex colors now works
- Importing PLY files with non-standard line endings
- Exporting multiple objects (previous exporter didn't take the vertex
indices into account)
- The importer has the option to merge vertices
- The exporter supports exporting loose edges and vertices along with
UV map data
This is a squashed commit of PR #104404
Reviewed By: Hans Goudey, Aras Pranckevicius
Co-authored-by: Arjan van Diest
Co-authored-by: Lilith Houtjes
Co-authored-by: Bas Hendriks
Co-authored-by: Thomas Feijen
Co-authored-by: Yoran Huzen
msgfmt has a TBB dependency through bf_blenlib. For a release build
the MSVC linker is smart enough to realize none of the TBB code is
actually used and discards it. In debug mode the linker is a bit more
conservative and doesn't, leaving msgfmt with a runtime dependency
on TBB. The problem here is that we only copy the runtime DLLs during
the install phase, and msgfmt runs long before that.
For this reason, when we run msgfmt we should make sure any runtime
needs it may have are met in the PATH; there is already a handy
variable for that, since oslc has similar requirements.
Pull Request #105048
During install, all DLLs should be copied to the blender.shared
folder regardless of whether the dependency is on or off. Creator's
CMakeLists.txt already did this correctly, but for Boost
the BOOST_POSTFIX and BOOST_DEBUG_POSTFIX variables were
not set, causing the Boost DLLs not to be copied.
This change moves the setting of these variables out of the
WITH_BOOST block, but still guards it with a
WITH_WINDOWS_FIND_MODULES block so we don't break the build
for people building with that on.