There was a really annoying mistake in the original support of logarithmic color
space for the sequencer, which made adjustment layers work in linear space. It
seems this was only an issue for modifiers in the adjustment effect.
Now all modifiers are applied in the sequencer's color space (in fact, this was
fixed in svn rev50275).
To preserve compatibility of the Mango grading, added this option, which
probably won't be used by others.
Hide all input color spaces which are meaningless for Blender, and renamed
the rest of them to better match Blender's interface.
Not sure if we'll need to support logarithmic color space as input.
Renamed P3DCI to DCI-P3, which seems to be the more common name.
Also added a Film view for the DCI-P3 display.
Non-invertible color spaces can no longer be used as the input color space
for images or as the working color space for the sequencer.
There are currently two hard-coded families, rrt and display.
If a color space belongs to one of these families, it is considered
non-invertible, with two exceptions:
data color spaces are always considered invertible, and
a color space which has a to_reference transformation is also considered
invertible.
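The rules above can be sketched as a small predicate. This is a minimal
illustration only; the argument names are stand-ins, not the actual
Blender/OCIO API.

```python
def colorspace_is_invertible(is_data, family, has_to_reference):
    """Decide invertibility per the rules described in the commit."""
    if is_data:
        # data color spaces are always considered invertible
        return True
    if has_to_reference:
        # an explicit to_reference transform makes the space invertible
        return True
    # the two hard-coded non-invertible families
    return family not in ("rrt", "display")
```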
This is needed since some images (like normal maps, textures and so on)
should be viewed without any tone map applied to them. At the same time,
some images might want to be affected by tone maps, and renders always
want to be affected by tone maps.
After a long discussion with Brecht we decided the least painful and clearest
way would be to simply add a "View as Render" option to image datablocks.
If this option is enabled for an image, settings from both the Display and
Render blocks of the color management settings are applied on display.
If it is disabled, only the display transform with the default view and
no exposure/gamma/curves is applied.
Render result and compositor viewers always have "View as Render"
enabled.
There's a separate setting used when an image is saved, which controls whether
the saved image should be affected by the render part of the color management
settings. This option is enabled by default for the render result/node viewer
and disabled by default for all other images. It has no effect when saving
to float formats such as EXR.
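The display-time decision described above can be summarized in a tiny
sketch; the flag and view names here are illustrative, not the actual API.

```python
def display_chain(view_as_render, scene_view):
    """Which display-time transforms apply to an image:
    the full render chain, or the default view only."""
    if view_as_render:
        # configured view plus exposure/gamma/curves from the scene
        return {"view": scene_view, "exposure_gamma_curves": True}
    # default view only, no render adjustments
    return {"view": "Default", "exposure_gamma_curves": False}
```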
This commit hopefully finishes the color management pipeline changes,
implements some missing functionality and fixes some bugs.
The changes are mainly about getting rid of the old Color Management flag,
which became counter-intuitive in conjunction with OpenColorIO.
Color management is now always assumed to be enabled, and the non-color-managed
pipeline is emulated using a display device called None. This display has a
single view which is basically a NO-OP (raw) transformation: it applies no
tone curve and displays colors AS-IS. In most cases it behaves the same as
disabling Color Management in the shading panel, but there is at least one
known difference in behavior: the compositor and sequence editors output
images in linear space, not in sRGB as they used to before.
It would be quite tricky to make this behave in exactly the same way as it
used to, and it's not clear we really need to.
The 3D viewport is supposed to work in sRGB space; no tone maps are applied
there. This is another case where compatibility breaks compared with the old
color management pipeline, but supporting the display transformation there
would be tricky, since GLSL shaders, textures and so on would also need to
be aware of the display transform.
The interface is now aware of the display transformation, but it only uses the
default display view; no exposure, gamma or curve mapping is supported there.
This is so color widgets can apply the display transformation in both
directions. Such behavior is a bit counter-intuitive, but it's currently
the only way to make color picking work smoothly. In theory we'd need to
support a color picking color space, but that would also be tricky: in
Blender the display transform is configurable from the interface and could
be used for artistic needs, and with such a design it's not possible to
figure out an invertible color space which could be used for color picking.
In other software this is not such a big issue, since all color spaces, the
display transform and so on are strictly defined by the pipeline, and in
that case it's possible to define a color picking space which is close
enough to the display space.
The sequencer's color space can now be configured from the interface --
its settings are located in the Scene buttons, Color Management panel.
The default space is sRGB. It was made configurable because during Mango
we used the vd16 color space, which was close to the Film view used by
the grading department.
The sequencer will convert float buffers to this color space before operating
on them, hopefully giving better results. Byte buffers are not converted to
this color space; they are handled in their own color space.
Converting them to the sequencer's working space would lead to precision loss
without much visible benefit. This shouldn't be an issue, since float and
byte images are never blended together -- a byte image is converted to float
first if it needs to be blended with a float image.
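The promotion rule above can be sketched like this; a minimal stand-in for
the real buffer code, using plain lists instead of image buffers.

```python
def to_float(buf):
    """Promote a byte (0-255) buffer to float (0.0-1.0);
    float buffers pass through unchanged."""
    if buf and all(isinstance(v, int) for v in buf):
        return [v / 255.0 for v in buf]
    return list(buf)

def blend_add(a, b):
    # byte and float buffers are never blended directly:
    # a byte buffer is promoted to float first
    fa, fb = to_float(a), to_float(b)
    return [x + y for x, y in zip(fa, fb)]
```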
Byte buffers are now allowed to be color managed. This was needed so that
code in areas such as baking and rendering doesn't have hardcoded linear
to sRGB conversions, making things clearer from a code point of view.
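For reference, the hardcoded conversion being removed is the standard sRGB
transfer function (IEC 61966-2-1), which looks like this for a single channel:

```python
def linear_to_srgb(c):
    # standard sRGB transfer function: linear segment near black,
    # power curve elsewhere
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055
```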
The input color space is now assigned on image/movie clip load, using the
default roles: for float images the default space is rec709, and for byte
images the default space is sRGB.
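As a one-line sketch of that default-role assignment (the space names follow
the commit text and may differ in a custom OCIO config):

```python
def default_input_colorspace(is_float):
    """Default space assigned on image/movie clip load,
    based on the pixel storage type."""
    return "rec709" if is_float else "sRGB"
```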
Added a Non-Color color space, which is aimed at things like normal/height
maps. It's currently the same as the raw color space, just with a clearer
name for users. We'll probably also need to make it unaffected by the
display transformation.
I think that covers the main pipeline-related changes; more details are here:
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
Other changes and fixes:
- Lots of internal code cleanup in the color management module.
- Made the OpenColorIO module use guarded memory allocation. This allowed
  fixing a couple of memory leaks and will help prevent leaks in the future.
- Made sure color unpremultiply and dither are supported by all
  OpenColorIO-defined color transformations.
- Made the compositor's preview images aware of the display transformation.
  The legacy compositor still uses the old Color Management flags, but we'll
  likely disable that compositor for the release and remove the legacy code
  soon, so I don't think we need to spend time porting that code to the new
  color management system.
- Made OpenGL rendering aware of the display transform when saving the render
  result. Now it behaves in the same way as regular rendering.
TODO:
- HSV widgets use linear RGB/sRGB conversions for single channels;
  not sure how this should be ported to the new color pipeline.
- The image stamp uses a hardcoded linear RGB to sRGB conversion for
  filling rectangles. It should probably use the default display view
  instead; will check this with Brecht.
- Get rid of the None color space, which was only added for compatibility
  reasons.
- Make it clearer which color spaces can be used as input spaces.
- There are also some remaining TODOs in the code, marked OCIO_TODO,
  but I wouldn't consider them stoppers for this commit.
This replaces the per-image-editor curve mapping, which didn't behave properly
(it was possible to open the same image in two image editors and set up
different curves in those editors, but only the last-changed curve was
applied to the image).
After discussion with Brecht decided to have something which works reliable
and predictable and ended up with adding RGB curves as a part of display
transform, which is applied before OCIO processor (to match old behavior).
Setting white/black values from image editor (Ctrl/Shift + LMB) would
affect on scene settings.
This could break compatibility, but there's no reliable way to convert the
old semi-working settings into the new ones.
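The ordering described above (curves first, then the OCIO display processor)
can be sketched as follows; `curve` and `ocio_display` are hypothetical
stand-ins for the real curve mapping and OCIO processor.

```python
def apply_display_transform(pixel, curve, ocio_display):
    """Scene RGB curves are applied *before* the OCIO display
    processor, matching the legacy curve-mapping behavior."""
    curved = [curve(c) for c in pixel]
    return ocio_display(curved)
```

With a doubling curve and a +1 "processor", a pixel [1.0, 2.0] comes out as
[3.0, 5.0], showing that the curve runs first.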
This mainly behaves the same way as the legacy color transformation, but it
gives different results in over- and underexposed areas.
Not sure if that's really an issue -- this seems to behave badly in both the
current stable release and the OCIO branch.
This gives some percentage of speedup, which compensates for the slowdown
caused by converting the image buffer into display space.
Used OpenMP for this. I still feel skeptical about it, but after discussing
with Brecht we decided this approach could be used, since it seems all
platforms have their OpenMP issues solved.
Waveform and vector scopes are still single-threaded, since they're
a bit trickier to make multi-threaded and probably not so commonly
used.
This avoids recalculating the scopes on every redraw, so tools such as panning
and zooming no longer imply recalculating the scopes.
Implemented as a structure inside SpaceSeq, just like it's done for the clip
and image spaces.
Also fixed zebra display to work in display space.
Added a utility function to apply the display transformation on an image
buffer's float array, currently only used by the sequencer's scopes.
This function is multithreaded, but the scopes should be improved further,
since currently they're recalculated from scratch on every draw.
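The row-parallel structure of that utility can be sketched in Python; this
is a stand-in for the OpenMP loop over scanlines, not the actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def transform_rows(rows, transform, max_workers=4):
    """Apply a per-pixel transform to each scanline in parallel,
    analogous to an OpenMP loop over image rows."""
    def do_row(row):
        return [transform(px) for px in row]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves row order, so the result assembles in place
        return list(pool.map(do_row, rows))
```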
- Color managed RGB values are no longer displayed for byte images
  (which currently can't be color managed).
- The color rectangle is now color managed.
- The sequencer was passing non-linear floats to the information line;
  now it passes linear floats.
This tonemap was only added as a temporary option, and if it's needed again
it would be better to implement it as either a spline in OCIO or as a film
response curve (some such curves were added as presets for the RGB curves
in the Mango production SVN).
Also reverted the changes made to IMB_buffer_byte_from_float, since they're
not actually needed anymore; this makes the changes against trunk clearer.
Avoid using tricks with ibuf->profile to check whether an image buffer is
in sequencer or linear space. Assume the whole sequencer works in non-linear
float space and do the transformation to linear only where it's needed.
This removes confusion from the code and fixes wrong behavior of some
effects.