- De-duplicated code used by color management panel drawing,
moved this panel to a utility file in bl_ui
- Added support for per-window color management control,
which means view transform, exposure and gamma could be
set per window and all spaces would use these settings.
This is the default behavior for older files now.
- Added support for color managed display of movie clips
in the clip editor.
Both the texture buffer and fallback draw methods are supported.
- Fixed default values for exposure and gamma
- Moved per-space settings (such as view transform) into their own
DNA and RNA structure to avoid code duplication in some areas
and to save some arguments on the display buffer acquiring function.
Also added some utility functions to BKE to manipulate these
settings.
- Replaced the statically sized color managed buffer flags array with
a dynamically sized array which matches the actual number of displays.
These flags would probably better be transposed so they support
any number of view transforms and 32 displays (currently it's
the other way around). They are runtime flags only, so this would
be simple to change at any time.
- Added support for configurable exposure and gamma
(see the exposure/gamma sketch after this list).
Changing these settings doesn't generate a new item in the cache;
it affects the buffer with the same color space conversion.
It also currently runs the full color transform from scratch on
every change. This could be changed so that the color managed
buffer is re-used, but from a quick glance that doesn't give a
really noticeable boost.
Currently these settings are stored as a pointer in the ImBuf
structure itself. It probably makes sense to remove them from ImBuf
and make the movie cache able to store some kind of tags associated
with a cached ImBuf.
- Made the color management cache safe for situations where one area
requests a display buffer, then some changes are made which
invalidate the cache, and another area requests a display buffer
which changes the cached buffer.
Such a case could be fatal for the first user of the display buffer.
It didn't happen yet because the cache is only accessed from the
main thread, but it's better to keep these things completely
thread safe to avoid headaches in the future.
- Baked RRT transformations, which gives a ~3-4x speedup,
hopefully without visible artifacts.
- Added support for partial updates of display buffers
(see the partial-update sketch after this list).
This creates a special context which holds the display buffer
the ImBuf had at the time the context was created; later this
context allows running a color correction from a given linear
buffer within a given region.
This is used by the compositor to enable more real-time
display updates while compositing.
- Added support for a color managed backdrop in the node editor.
There's now a Display Properties panel in the N-panel
of the node editor.
This would probably better be de-duplicated somehow, but it's not
clear yet how. Currently it's not so harmful to have a panel for
two spaces which contains only two properties.
There's currently one unsolved issue with the backdrop:
it's not updated progressively when just loading a
file -- that's simply because there's no color managed display
buffer for the backdrop yet, and the compositor doesn't actually
know which color space to generate the preview for.
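To illustrate the exposure/gamma item above: a minimal sketch of how
such settings could be applied to a linear RGBA float buffer before
the view transform, assuming exposure is measured in stops. The
function and buffer layout here are hypothetical, not the actual
ImBuf code.

    #include <math.h>

    /* Hypothetical helper: apply exposure (in stops) and an extra gamma
     * curve to a linear RGBA float buffer in place, before the display
     * transform runs. Alpha is left untouched. */
    void apply_exposure_gamma(float *rect, int width, int height,
                              float exposure, float gamma)
    {
        const float scale = exp2f(exposure);
        const float inv_gamma = 1.0f / gamma;
        int i, c;

        for (i = 0; i < width * height; i++) {
            float *pixel = rect + 4 * i;

            for (c = 0; c < 3; c++) {
                float v = pixel[c] * scale;

                /* clamp negative values so powf() stays well defined */
                pixel[c] = (v > 0.0f) ? powf(v, inv_gamma) : 0.0f;
            }
        }
    }

With exposure 0.0 and gamma 1.0 this is effectively a no-op, which is
the expected neutral default.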
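And a rough sketch of the partial-update context mentioned above,
with hypothetical structure and function names; the real
implementation lives in the IMB color management code and the
display transform here is reduced to a clamp-and-quantize placeholder.

    /* Hypothetical context: keeps the display buffer the ImBuf had at the
     * moment the context was created, so the compositor can later color
     * correct only the region it has re-rendered. */
    typedef struct PartialDisplayUpdate {
        unsigned char *display_buffer;  /* byte display buffer, RGBA */
        int width, height;
    } PartialDisplayUpdate;

    /* Run the color correction for pixels inside [xmin, xmax) x [ymin, ymax),
     * taking them from a linear float buffer of the same dimensions. */
    void partial_update_apply(PartialDisplayUpdate *ctx, const float *linear_rect,
                              int xmin, int ymin, int xmax, int ymax)
    {
        int x, y, c;

        for (y = ymin; y < ymax; y++) {
            for (x = xmin; x < xmax; x++) {
                const float *src = linear_rect + 4 * (y * ctx->width + x);
                unsigned char *dst = ctx->display_buffer + 4 * (y * ctx->width + x);

                for (c = 0; c < 4; c++) {
                    /* stand-in for the real display transform */
                    float v = src[c] < 0.0f ? 0.0f : (src[c] > 1.0f ? 1.0f : src[c]);
                    dst[c] = (unsigned char)(v * 255.0f + 0.5f);
                }
            }
        }
    }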
The transform operators in nodes will now use the unselected nodes to generate snapping points. Unlike object snapping, node snapping works for the x/y axes separately and snaps node borders to the same borders of unselected nodes. The sensitive area for node borders extends over the whole view2D range, to enable simple alignment of nodes in both the x and y directions.
For snap points in the node editor an additional enum value is stored to indicate the type of node border (left/right/top/bottom). This works as a constraint on possible node alignments: only same border types align with each other.
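A simplified sketch of that border-type constraint, using hypothetical
names (the actual transform/snap code is structured differently):
left/right borders only constrain x, top/bottom only constrain y, and
only identical border types are allowed to align.

    #include <math.h>
    #include <stdbool.h>

    typedef enum NodeBorder {
        NODE_BORDER_LEFT,
        NODE_BORDER_RIGHT,
        NODE_BORDER_TOP,
        NODE_BORDER_BOTTOM
    } NodeBorder;

    /* Snap point generated from a border of an unselected node. */
    typedef struct NodeSnapPoint {
        float x, y;
        NodeBorder border;
    } NodeSnapPoint;

    /* Try to snap the given border of the transformed node to a snap point.
     * Returns true and adjusts the matching axis when the point is close
     * enough and of the same border type. */
    bool node_border_snap(const NodeSnapPoint *point, NodeBorder border,
                          float *x, float *y, float threshold)
    {
        if (point->border != border)
            return false;  /* only same border types align with each other */

        if (border == NODE_BORDER_LEFT || border == NODE_BORDER_RIGHT) {
            if (fabsf(*x - point->x) <= threshold) {
                *x = point->x;
                return true;
            }
        }
        else if (fabsf(*y - point->y) <= threshold) {
            *y = point->y;
            return true;
        }
        return false;
    }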
The tool works OK, except it was messing up the vertex order of polys, often giving ugly results! Now the sorted list of vertex indices is only used to find matching polys.
Snapping was actually working already, but the grid spacing was set to 1.0, which is basically the pixel size in the node editor. Increased this to 1x the grid step for fine snapping and 5x the grid step for rough snapping.
Grid drawing in the node editor now draws two levels in slightly different shades to better indicate the different snapping modes.
The node editor also supports the general use_snap tool setting to enable automatic snapping during transform. For now only incremental snapping is supported; in the future this could be extended to enable alignment between nodes in a number of ways.
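The incremental snapping itself amounts to rounding to a multiple of
the grid step; a tiny sketch with made-up names, assuming the 1x/5x
steps mentioned above:

    #include <math.h>
    #include <stdbool.h>

    /* Snap a view2D coordinate to the nearest multiple of the grid step;
     * 'rough' switches from the fine 1x step to the coarse 5x step. */
    float node_grid_snap(float value, float grid_step, bool rough)
    {
        const float step = rough ? grid_step * 5.0f : grid_step;

        return floorf(value / step + 0.5f) * step;
    }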
- Cleaned up some files -- it seems there were some wrongly resolved
conflicts which resulted in duplicated code in space_image.py
and some build configuration files.
- Store all color space related data (such as the display device, view
transform and so on) as strings, so it can easily be ported to new
OCIO configuration files and is much more portable between
different configurations.
This required adding some look-ups to the RNA associated with such
settings, but it's indeed the only way to do this (see the look-up
sketch after this list). If it turns out such look-ups cause
performance issues, it's possible to optimize this further using a
hash. So far there are only a few elements in the list to be looked up.
- Added support for display device transformations from OCIO
configuration files. The display device is a per-window setting and
different windows can have different display devices, so it's
possible to have one Blender window opened on an sRGB monitor and
another one on an XYZ projector.
The display device is ignored when using the ACES ODT Tonecurve view
transform because it's not an OCIO transformation. It'll probably
be possible to get rid of this tone curve soon (if it proves
useless or gets implemented as part of an OCIO LUT).
- Movie Cache now supports deleter functions for user keys, so
such keys can hold allocated data which is freed as soon
as the element is removed from the cache.
- Movie Cache now supports callbacks to check whether a given
cache element can be removed from the cache because it won't
be accessed anymore (see the callback sketch after this list).
- Re-wrote the caching of ImBuf display buffers. It now uses the
Movie Cache, which is global for all ImBufs.
It's probably not implemented in the fastest way; this will be
investigated further and probably changed if performance isn't
good enough.
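To illustrate the look-ups mentioned for the string-based settings:
a hypothetical sketch that resolves a stored display name back to a
run-time index (the real code asks the OCIO configuration instead of
a fixed table).

    #include <string.h>

    /* Hypothetical list of display device names from the OCIO config. */
    static const char *display_names[] = {"sRGB", "XYZ", "None"};

    /* Map a stored name to its run-time index, falling back to a default
     * when the name is unknown to the current configuration. With only a
     * few entries a linear scan is fine; a hash could replace it later. */
    int display_index_from_name(const char *name, int fallback)
    {
        int i;

        for (i = 0; i < (int)(sizeof(display_names) / sizeof(*display_names)); i++) {
            if (strcmp(display_names[i], name) == 0)
                return i;
        }
        return fallback;
    }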
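And a sketch of the two new Movie Cache hooks described above, with
hypothetical typedef and struct names: a deleter that frees data
attached to a user key, and a predicate deciding whether an element
may be evicted.

    #include <stdlib.h>

    /* Called when an element is removed from the cache, so data allocated
     * inside the user key can be freed together with it. */
    typedef void (*MovieCacheKeyDeleterFP)(void *userkey);

    /* Called when looking for elements to evict; returning 0 protects the
     * element because it's still going to be accessed. */
    typedef int (*MovieCacheCheckRemovableFP)(void *userkey, void *userdata);

    /* Example key owning an allocated string, with a matching deleter. */
    typedef struct ExampleKey {
        char *filepath;
        int framenr;
    } ExampleKey;

    void example_key_deleter(void *userkey)
    {
        ExampleKey *key = userkey;

        free(key->filepath);
        free(key);
    }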
The problem was that the py code of the main texture panel was not doing any check on the pinned ID, assuming it managed the textures itself - but this is not the case for the Object datablock...
All the work was actually done by Sergey; it was just missing the Lamp-specific case. Checked both in code and with tests, quite sure all cases are now correctly handled!
Near sensors only pick up "actors," but objects with character physics did not have the actor option displayed. By setting the character physics object to actor, it can be picked up by the near sensor. However, it collides with the near sensor, which sounds like bug [#31701]
The issue was caused by some stuff happening in wm_operator_finish() which used
to somehow restore changes made by the transformation invoke function.
Solved by not calling the translation operator directly from the duplication
operator (which is in fact really tricky) and using a macro instead. This macro
calls the duplication operator, which simply duplicates the strip, and then
calls the translation operator.
Patch by Philipp Oeser (lichtwerk), just did a style change (better not to define a value twice, so only affect the three color components, not the alpha; also, using the slice syntax makes things much more compact ;) ), thanks!
This commit simply adds a view transform option for the image editor. This
transform is applied on the original linear color when the float buffer is
converted into the sRGB byte buffer (a sketch of the byte conversion follows
the list of transforms below).
Currently the following transformations are supported:
- ACES ODT ToneCurve transform, which shall preserve color ranges on such
a conversion.
- OCIO Raw, Log and RRT view transforms
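As a reference for the conversion step mentioned above, here is a
minimal sketch of the plain linear-to-sRGB byte conversion that the
view transforms hook into; the helper names are made up and the real
path goes through the color management code.

    #include <math.h>

    /* Standard linear-to-sRGB encoding for a single channel. */
    float linear_to_srgb(float c)
    {
        if (c <= 0.0031308f)
            return (c < 0.0f) ? 0.0f : c * 12.92f;

        return 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

    /* Convert a linear RGBA float buffer into an sRGB byte buffer;
     * alpha is copied over without any encoding. */
    void float_buffer_to_srgb_byte(const float *rect_float,
                                   unsigned char *rect_byte,
                                   int width, int height)
    {
        int i, c;

        for (i = 0; i < width * height; i++) {
            const float *src = rect_float + 4 * i;
            unsigned char *dst = rect_byte + 4 * i;

            for (c = 0; c < 3; c++) {
                float v = linear_to_srgb(src[c]);
                dst[c] = (unsigned char)((v > 1.0f ? 1.0f : v) * 255.0f + 0.5f);
            }

            {
                float a = src[3] < 0.0f ? 0.0f : (src[3] > 1.0f ? 1.0f : src[3]);
                dst[3] = (unsigned char)(a * 255.0f + 0.5f);
            }
        }
    }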
This commit also contains the integration of the OCIO backend into Blender, so
now there's a C API and a configuration file. Most of this was taken from the
branch where Xavier Thomas and Lukas Toene were working.
NOTE:
This is just for testing our pipeline, please do not bother me with messages
that it's done wrong. It is done correctly to support our own pipeline for now,
and the real design will be created later when the current stoppers for the
project are gone.
In contrast to start_frame (which affects where the footage actually
starts to play and also affects all data associated with a clip,
such as motion tracking, reconstruction and so on), this slider only
affects the way a frame number is mapped to a filename, without
touching any kind of tracking data.
The formula is:
file_name = clip_file_name + frame_offset - (start_frame - 1)
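In code form the mapping could look roughly like this (a hypothetical
helper, not the actual movie clip code):

    /* Map a scene frame to the frame number used when building the file name
     * of the image sequence; tracking data is not affected by frame_offset. */
    int clip_file_frame_get(int scene_frame, int start_frame, int frame_offset)
    {
        return scene_frame + frame_offset - (start_frame - 1);
    }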
- Display the track's reprojection error in the dopesheet
- Make sure the track is selected when clicking on a dopesheet channel
- Attempt to make the headers a bit cleaner, without long labels
which don't actually make much sense.