This makes it easy to get the bpy paths for bpy.context.property, for
example. Before this change, the easiest way was to use the clipboard
bpy.ops function. However, this made Python scripts relying on it fragile,
as we do not have full control over what is stored in the system
clipboard.
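For reference, a minimal sketch of the clipboard-based approach described
above (assuming the operator meant is `ui.copy_data_path_button`; run it
with the cursor over a property):
```python
import bpy

# Fragile pre-change workflow: copy the data path of the property under the
# cursor to the system clipboard, then read the clipboard back. Anything
# else touching the clipboard in between silently breaks this.
bpy.ops.ui.copy_data_path_button(full_path=True)
data_path = bpy.context.window_manager.clipboard
print(data_path)
```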
Pull Request: https://projects.blender.org/blender/blender/pulls/147116
Each name has to be unique within a group, so when renaming an ID property,
one has to make sure that the parent group does not contain duplicates
afterwards.
This patch raises a `NameError` when setting the name to one that already
exists. Alternatively, one could delete the already-existing property, but
that would be unexpected; the user should rather do that explicitly.
This also adds a new unit test for this case.
Pull Request: https://projects.blender.org/blender/blender/pulls/146892
Python API functions would both report an error through CLOG and throw an
exception. However, this is a problem when --debug-exit-on-error is used,
as tests and other scripts do not get the opportunity to catch the exception
before Blender exits.
Such reports are now no longer printed through CLOG. Note that these usages
of BKE_reports_init have their print level set to error, so they were
already not printing info and warning reports (which is not something we
generally want Python API calls to do; they should be silent).
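For illustration, the kind of script this matters for (any Python API call
that fails behaves the same way; the operator here is just an example):
```python
import bpy

# API failures raise an exception that a test can catch and handle. Before
# this change, the same failure was also reported through CLOG, so with
# --debug-exit-on-error Blender would exit before the except block runs.
try:
    bpy.ops.object.mode_set(mode='EDIT')  # fails e.g. without an active object
except RuntimeError as ex:
    print("handled:", ex)
```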
Ref #146256
Pull Request: https://projects.blender.org/blender/blender/pulls/146801
This adds support for packed linked data. This is a key part of an improved
asset workflow in Blender.
Packed IDs are still considered linked data (i.e. they cannot be edited),
but they are stored in the current blendfile. This means that they:
* Are not lost in case the library data becomes unavailable.
* Are not changed in case the library data is updated.
These packed IDs are de-duplicated across blend-files, so e.g. if a shot
file and several of its dependencies all use the same util geometry node,
there will be a single copy of that geometry node in the shot file.
In case there are several versions of the same ID (e.g. linked at different
times from the same library, which was modified in between), there will be
several packed IDs.
Name collisions are avoided by storing these packed IDs in a new type of
'archive' library (each with its own namespace). These libraries:
* Only contain packed IDs.
* Are owned and managed by their 'real' library data-block, called an
'archive parent'.
In-depth technical design: #132167
UI/UX design: #140870
Co-authored-by: Bastien Montagne <bastien@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/133801
The naming was confusing as only some selection flushing functions
were intended to be used when elements had been selected or de-selected.
Replace these with a single function that takes a "select" argument.
Adds a comprehensive logging system for the `temp_override` context manager
to help developers debug "context is incorrect" operator poll failures.
The logging tracks all context member access during `temp_override`
sessions and provides detailed summaries to identify context
availability issues.
Features:
- Command-line logging: `./blender --log-level trace --log "context"`
- Python programmatic control: `temp_override_handle.logging_set(True)`
- C-level API: `CTX_temp_override_logging_set/get()` functions
- Tracks individual member access
- Uses CLOG logging infrastructure
The logging helps identify which context members are accessed during
temp_override sessions and whether they return valid data, making it
easier to debug operator poll functions that fail with context errors.
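A minimal sketch of the Python control named above; exactly where
`logging_set()` may be called on the handle is an assumption here, and a
3D Viewport is assumed to exist in the current screen:
```python
import bpy

# Enable logging on a single temp_override handle, then run an operator so
# its context member accesses are tracked and summarized via CLOG.
area = next(a for a in bpy.context.screen.areas if a.type == 'VIEW_3D')
region = next(r for r in area.regions if r.type == 'WINDOW')
override = bpy.context.temp_override(area=area, region=region)
override.logging_set(True)
with override:
    bpy.ops.view3d.view_all()
```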
Ref !144810
Currently, when a Python error is encountered while rendering the UI, the
corresponding message is printed to stdout / stderr via `PyErr_Print`.
This patch modifies the behavior so that a cursory message is also printed
with CLOG.
This has the benefit of allowing for testing via
`--debug-exit-on-error`, which aborts Blender when an error message is
printed.
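For illustration, a hypothetical panel whose draw function raises, which is
the situation this logging targets:
```python
import bpy

class EXAMPLE_PT_broken_draw(bpy.types.Panel):
    """Hypothetical panel used only to trigger an error during UI drawing."""
    bl_label = "Broken Draw"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "Example"

    def draw(self, context):
        # Printed via PyErr_Print as before, and now also reported through
        # CLOG, so --debug-exit-on-error can catch it in automated tests.
        raise RuntimeError("error while drawing the panel")

bpy.utils.register_class(EXAMPLE_PT_broken_draw)
```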
Pull Request: https://projects.blender.org/blender/blender/pulls/146296
The Vectorcall protocol avoids creating an argument tuple and also provides
the number of arguments in advance, giving a ~1.6x speedup for the creation
of mathutils types.
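For example, code that constructs many mathutils values in a tight loop is
the main beneficiary:
```python
from mathutils import Vector

# Construction now goes through the vectorcall protocol, so no intermediate
# argument tuple is created for each of these calls.
points = [Vector((i * 0.1, 0.0, 0.0)) for i in range(100_000)]
```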
Ref !146237
This makes the shader node inlining from #141936 available to external renderers
which use the Python API. Existing external renderer add-ons need to be updated
to get the inlined node tree from a material like below instead of using the
original node tree of the material directly.
The main contributions are these three methods: `Material.inline_shader_nodes()`,
`Light.inline_shader_nodes()` and `World.inline_shader_nodes()`.
In theory, there could be an inlining API for node trees more generally, but
some aspects of the inlining are currently specific to shader nodes, for
example the detection of output nodes and implicit input handling.
Furthermore, having the method on e.g. `Material` instead of on the node tree
might be more future-proof for the case where we want to store input
properties of the material on the `Material` itself, which are then passed
into the shader node tree.
Example from API docs:
```python
import bpy
# The materials should be retrieved from the evaluated object to make sure that
# e.g. edits of Geometry Nodes are applied.
depsgraph = bpy.context.view_layer.depsgraph
ob = bpy.context.active_object
ob_eval = depsgraph.id_eval_get(ob)
material_eval = ob_eval.material_slots[0].material
# Compute the inlined shader nodes.
# Important: Do not lose the reference to this object while accessing the inlined
# node tree. Otherwise there will be a crash due to a dangling pointer.
inline_shader_nodes = material_eval.inline_shader_nodes()
# Get the actual inlined `bpy.types.NodeTree`.
tree = inline_shader_nodes.node_tree
for node in tree.nodes:
    print(node.name)
```
Pull Request: https://projects.blender.org/blender/blender/pulls/145811
Stored undo step data for position changes in sculpt mode are now
automatically compressed. Compression is run in background threads,
reducing memory consumption during sculpting sessions while adding
little performance overhead.
For testing and benchmarks, memory usage is now available through
`bpy.app.undo_memory_info()`. Undo memory usage is now tracked by the
existing automated benchmark tests. Some changes to the web benchmark
visualization present the data a bit better.
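For example, a benchmark script can sample it like this (the structure of
the returned data is not described here, so it is just printed for
inspection):
```python
import bpy

# Query undo memory usage, e.g. before and after a series of sculpt strokes.
print(bpy.app.undo_memory_info())
```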
ZSTD compression is run asynchronously in a background task pool.
Compression is only blocking if the data is requested immediately for
undo/redo.
Co-authored-by: Hans Goudey <hans@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/141310
Improve handling of runtime-defined Python RNA properties. Mainly:
* Add new `get_transform` and `set_transform` callbacks.
These allow editing the value while still using the default
(IDProperty-based) storage system (see the sketch after this list).
* Read-only properties should now be defined using a new `options` flag,
`READ_ONLY`.
* `get`/`set` should now only be used when storing data outside of the
default system.
* Having a `get` without a `set` defined forces the property to be
read-only (same behavior as before).
* Having a `set` without a `get` is now an error.
* Just like with existing `get/set` callbacks, `get_/set_transform`
callbacks must always generate values matching the constraints defined
by their `bpy.props` property definition (same type, within required
range, same dimensions/sizes for the `Vector` properties, etc.).
* To simplify handling of non-statically sized strings, the relevant
RNA API has been modified to use `std::string` instead of
(allocated) char arrays.
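A minimal sketch of the new callbacks and flag, assuming the transform
callbacks receive the stored value and return the adjusted one (the exact
signatures are not spelled out here, so treat the details as illustrative):
```python
import bpy

# Hypothetical property: values are still kept in the default IDProperty
# storage, the callbacks only adjust them on the way in and out, and they
# must stay within the declared range.
def level_get_transform(self, value):
    return max(value, 0)

def level_set_transform(self, value):
    return min(value, 100)

bpy.types.Object.level = bpy.props.IntProperty(
    name="Level",
    min=0,
    max=100,
    get_transform=level_get_transform,
    set_transform=level_set_transform,
)

# Read-only properties now use the new READ_ONLY option flag instead of
# defining only a `get` callback.
bpy.types.Object.label = bpy.props.StringProperty(
    name="Label",
    options={'READ_ONLY'},
)
```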
Relevant unit tests and benchmarks have been added or updated as part
of this project.
Note: From initial benchmarking, 'transform' versions of get/set are
several times faster than 'real' get/set.
Implements #141042.
Pull Request: https://projects.blender.org/blender/blender/pulls/141303
Update the call signature in the docstring so that it is clear that the
arguments to `bpy.data.user_map()` are all optional:
```
.. method:: user_map(*, subset=None, key_types=None, value_types=None)
```
This is important for Python stubs that are generated from the
docstrings; see https://github.com/michaelgold/buildbpy/issues/3.
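For illustration, all of the parameters can simply be left out
(`key_types`/`value_types` accept sets of ID type names):
```python
import bpy

# With no arguments, every data-block is mapped to the set of IDs using it.
all_users = bpy.data.user_map()

# Optionally restrict the keys and/or values to specific ID types.
mesh_to_objects = bpy.data.user_map(key_types={'MESH'}, value_types={'OBJECT'})
```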
This change feels a bit weird, as the function behaves differently
depending on whether the arguments are passed or not. For example,
`subset=None` is not actually valid, as it should be a dictionary and
the function will raise a `TypeError`. But documenting `subset=dict()`
is also incorrect, because explicitly passing that will make the
function behave differently: it returns an empty dictionary instead of
reporting everything. The same goes for the other parameters.
A better approach (IMO) would be to either add "empty dict/set means
'everything'" logic, or to allow `None` values for these parameters so
that the documented call can actually be made.
Note that this is the case for other calls as well, such as
`bpy.data.file_path_map()`. As such, I don't want to address this in
this commit; this is just for making the docstring reflect the optional
nature of the parameters.
No functional changes.
Pull Request: https://projects.blender.org/blender/blender/pulls/144933
* PROP_COLOR_GAMMA is sRGB, not display space
* Hex colors are always sRGB
* Image byte buffers are in byte_buffer.colorspace
Fixes for sequencer text, image painting, render stamp and tooltips.
The default display space is sRGB, so this change will not be noticed
in most files.
Ref #144911
Pull Request: https://projects.blender.org/blender/blender/pulls/144565