Float buffers are now always linear; the color space of a byte buffer is
defined by the rect_colorspace property.
Changed the logic of IMB_rect_from_float and IMB_float_from_rect to rely on
these assumptions (before, this logic lived in a special function used only in
some areas; now it is the default behavior).
Almost all ImBuf functions that are actually in use no longer depend on the
profile flag. The only remaining issue is IMB_float_profile_ensure, which is
used only by the Cineon/DPX exporter, itself broken for a while already. That
needs to be fixed separately.
This also fixes the clone brush when cloning a byte image on top of a float
one; before, the result would be gamma-corrected twice.
Before this change, the color space settings in image/movie clip datablocks
defined the space of the loaded image buffer based on whether it contained a
float or a byte buffer. This did not work well for formats like 16-bit PNG,
which are in fact non-linear but were represented in Blender as float buffers.
Now the image buffer loader is responsible for setting the default input
color space of image/movie clip datablocks, which can be based on the format
alone or on the properties of the particular file. This means the input color
space of an image/movie clip datablock is initialized when its first image
buffer is loaded.
This also resolves an old, confusing property of the image buffer's profile
flag, which could be non-linear even for an image buffer containing only a
float buffer -- as happened in the 16-bit PNG case mentioned above. Now a
float buffer is always linear, which should make it easier to get rid of the
image buffer's profile.
In this case the sequencer allocates an empty image buffer which used to have
no assigned color space but was nevertheless marked as non-linear float.
Assuming black is always black in the sequencer's color space, no additional
transformation is needed for display.
Solved by removing the NOLINEAR_FLOAT flag from image buffers and checking
the image buffer's float_colorspace against NULL whenever it is necessary to
know whether the float buffer is linear or not.
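A minimal sketch of the resulting check, with a hypothetical field layout (the real ImBuf struct has many more members):

```c
#include <stddef.h>

/* Sketch of the relevant ImBuf fields: with the NOLINEAR_FLOAT flag gone,
 * a non-NULL float_colorspace is the only marker of a non-linear float
 * buffer. */
typedef struct ImBuf {
	float *rect_float;
	const char *float_colorspace;  /* NULL means the float buffer is linear */
} ImBuf;

/* Hypothetical helper: a float buffer is linear unless an explicit
 * non-linear color space was assigned to it. */
static int imbuf_float_is_linear(const ImBuf *ibuf)
{
	return ibuf->float_colorspace == NULL;
}
```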
This was a really annoying mistake in the original support of logarithmic
color space for the sequencer, which made adjustment layers work in linear
space. It seems this was only an issue for modifiers on the adjustment
effect. Now all modifiers are applied in the sequencer's color space (in
fact, this was fixed in svn rev50275).
To preserve compatibility of the Mango grading, this option was added; it
probably won't be needed by anyone else.
This is needed since some images (normal maps, textures and so on) should be
viewed without any tone map applied. At the same time, some images may want
to be affected by tone maps, and renders always should be.
After a long discussion with Brecht, we decided the least painful and
clearest way would be to simply add a "View as Render" option to image
datablocks. If this option is enabled for an image, both the Display and
Render blocks of the color management settings are applied on display.
If it is disabled, only the display transform with the default view, and no
exposure/gamma/curves, is applied.
Render results and compositor viewers always have "View as Render" enabled.
There is a separate setting used when saving an image which says whether the
saved image should be affected by the render part of the color management
settings. This option is enabled by default for the render result/node viewer
and disabled by default for all other images. It has no effect when saving to
float formats such as EXR.
This commit hopefully finishes the color management pipeline changes,
implements some missing functionality and fixes some bugs.
The changes are mainly about getting rid of the old Color Management flag,
which became counter-intuitive in conjunction with OpenColorIO.
Color management is now always assumed to be enabled, and a non-color-managed
pipeline is emulated using a display device called None. This display has a
single view which is basically a no-op (raw) transformation: it applies no
tone curve and displays colors as-is. In most cases it behaves the same as
disabling Color Management in the shading panel, but there is at least one
known difference in behavior: the compositor and sequence editors now output
images in linear space, not in sRGB as before.
It would be quite tricky to make this behave exactly as it used to, and it is
not clear we really need to.
The 3D viewport is supposed to work in sRGB space; no tone maps are applied
there. This is another case where compatibility breaks in comparison with the
old color management pipeline, but supporting a display transformation there
would be tricky, since GLSL shaders, textures and so on would also need to be
aware of the display transform.
The interface is now aware of the display transformation, but it only uses
the default display view; no exposure, gamma or curve mapping is supported
there. This is so color widgets can apply the display transformation in both
directions. Such behavior is a bit counter-intuitive, but it is currently the
only way to make color picking work smoothly. In theory we need a dedicated
color picking color space, but that is also tricky: in Blender the display
transform is configurable from the interface and can be used for artistic
needs, and with such a design it is not possible to figure out an invertible
color space that could be used for color picking.
In other software this is less of an issue, since all color spaces, the
display transform and so on are strictly defined by the pipeline, and in that
case it is possible to define a color picking space close enough to the
display space.
The sequencer's color space can now be configured from the interface -- its
settings are located in the Scene buttons, Color Management panel. The
default space is sRGB. It was made configurable because during Mango we used
the vd16 color space, which was close to the Film view used by the grading
department.
The sequencer converts float buffers to this color space before operating on
them, hopefully giving better results. Byte buffers are not converted to this
color space and are handled in their own color space; converting them to the
sequencer's working space would lead to precision loss without much visible
benefit. This should not be an issue, since float and byte images are never
blended together directly -- a byte image is converted to float first if it
needs to be blended with a float image.
Byte buffers are now allowed to be color managed. This was needed so that
code in areas such as baking and rendering does not need hardcoded linear-
to-sRGB conversions, making things clearer from a code point of view.
The input color space is now assigned on image/movie clip load, using the
default roles: for float images the default space is rec709, for byte images
it is sRGB.
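As a sketch, the rule amounts to the following (hypothetical helper; the real loader resolves these names through OCIO's role configuration):

```c
#include <string.h>

/* Hypothetical sketch: pick the default input color space for a freshly
 * loaded image buffer, based only on whether it holds float or byte data.
 * The real loader looks these up via OpenColorIO default roles. */
static const char *default_input_colorspace(int is_float)
{
	return is_float ? "rec709" : "sRGB";
}
```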
Added a Non-Color color space, intended for things like normal/height maps.
It is currently the same as the raw color space, just with clearer naming for
users. We will probably also need to make it unaffected by the display
transformation.
That should cover the main pipeline-related changes; more details are
available at:
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
Other changes and fixes:
- Lots of internal code cleanup in the color management module.
- Made the OpenColorIO module use guarded memory allocation. This allowed
  fixing a couple of memory leaks and should help prevent leaks in the future.
- Made sure color unpremultiply and dithering are supported by all
  OpenColorIO-defined color transformations.
- Made the compositor's preview images aware of the display transformation.
  The legacy compositor still uses the old Color Management flags, but we
  will likely disable it for the release and remove the legacy code soon, so
  it does not seem worth spending time porting that code to the new color
  management system.
- Made OpenGL rendering aware of the display transform when saving the render
  result. It now behaves the same way as regular rendering.
TODO:
- HSV widgets use per-channel linear RGB/sRGB conversions; not sure how this
  should be ported to the new color pipeline.
- The image stamp uses a hardcoded linear RGB to sRGB conversion when filling
  rectangles. It should probably use the default display view instead; will
  check this with Brecht.
- Get rid of the None color space, which is only used for compatibility
  reasons.
- Make it clearer which color spaces can be used as input spaces.
- There are also some remaining TODOs in the code, marked OCIO_TODO, but
  these are not considered blockers for this commit.
This replaces the per-image-editor curve mapping, which did not behave
properly (it was possible to open the same image in two image editors and set
up different curves in each editor, but only the last changed curve was
applied to the image).
After discussion with Brecht, we decided to have something which works
reliably and predictably, and ended up adding RGB curves as part of the
display transform, applied before the OCIO processor (to match the old
behavior).
Setting white/black values from the image editor (Ctrl/Shift + LMB) now
affects the scene settings.
This could break compatibility, but there is no reliable way to convert the
old semi-working settings into the new ones.
Avoid tricks with ibuf->profile to check whether an image buffer is in
sequencer or linear space. Assume the whole sequencer works in non-linear
float space and transform to linear only where it is needed.
This removes confusion from the code and fixes wrong behavior of some
effects.
- Even preserves thickness but can give unsightly loops.
- Smooth gives a nicer shape but can give an unsightly feather/spline
  mismatch for 'S' shapes created by beziers.
Here is an example where Smooth works much better:
http://www.graphicall.org/ftp/ideasman42/mask_compare.png
- Make FFmpeg initialization be called from the creator, not from functions
  which require FFmpeg. Makes it easier to follow when initialization should
  happen.
- Enable the DNxHD codec. It was commented out a while ago due to some
  strange behavior on some platforms. Re-tested it on Linux and Windows,
  where it seemed to work quite nicely. Let it be tested further; if it turns
  out not to be stable enough, it is easy to comment out again.
- Print non-error messages from writeffmpeg.c only if the FFmpeg debug
  argument was passed to Blender. This reduces console pollution with
  messages which are not useful for general troubleshooting. Error messages
  are still printed to the console.
- Show the FFmpeg error message when a video stream fails to allocate. This
  makes it easier to understand from the Blender interface what exactly went
  wrong; there is no need to restart Blender with the FFmpeg debug flag and
  check the console messages.
  A custom log callback is used for this, which stores the last error message
  in a static variable. This is not thread-safe, but with the current design
  FFmpeg routines cannot be called from several threads anyway, so this seems
  like a fine solution.
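The pattern is roughly the following -- a generic sketch of capturing the last error from a log callback, not the actual writeffmpeg.c code (FFmpeg's real callback is installed with av_log_set_callback() and additionally receives a context pointer):

```c
#include <stdarg.h>
#include <stdio.h>

/* Sketch of the last-error-capture pattern: the log callback formats error
 * messages into a static buffer so the UI can report them later. Not
 * thread-safe, which is acceptable here since the FFmpeg routines are only
 * ever called from one thread. */
static char ffmpeg_last_error[1024] = {0};

static void ffmpeg_log_callback(int level, const char *format, va_list args)
{
	const int AV_LOG_ERROR_LEVEL = 16;  /* matches FFmpeg's AV_LOG_ERROR */
	if (level <= AV_LOG_ERROR_LEVEL) {
		vsnprintf(ffmpeg_last_error, sizeof(ffmpeg_last_error), format, args);
	}
}

/* Helper to invoke the callback with varargs, mimicking av_log(). */
static void log_message(int level, const char *format, ...)
{
	va_list args;
	va_start(args, format);
	ffmpeg_log_callback(level, format, args);
	va_end(args);
}
```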
It's unlikely you want to do a short -> int, int -> float etc. conversion
during swapping (if it's ever needed, we could add a non-type-checking
macro).
Double-checked that the optimized assembler output using SWAP() remains
unchanged from before.
This exposed quite a few places where redundant type conversion was going on.
Also remove curve.c's swapdata() and replace its use with swap_v3_v3().
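A minimal sketch of such a type-checked swap (not Blender's exact macro; it relies only on standard C pointer-compatibility diagnostics):

```c
/* Type-checked swap: the dummy pointer assignments produce a compile-time
 * warning/error if `a` or `b` is not of the stated type, preventing silent
 * short -> int or int -> float conversions during the swap. The optimizer
 * removes the unused pointers, so the generated code is unchanged. */
#define SWAP(type, a, b) do {          \
	type sw_ap;                        \
	type *chk_a_ = &(a); (void)chk_a_; \
	type *chk_b_ = &(b); (void)chk_b_; \
	sw_ap = (a);                       \
	(a) = (b);                         \
	(b) = sw_ap;                       \
} while (0)
```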
* Added a Brick Texture Node to Cycles.
* Based on the Blender Internal Brick Texture with some modifications.
* Tested on CPU and GPU (CUDA & OpenCL)
Documentation: http://wiki.blender.org/index.php/User:DingTo/CyclesBrickTexture
ToDo: Only works correctly on flat surfaces, like a plane. If you attach the shader to 3D objects like a cube, the mapping is not correct on the Y/Z vector.
Thanks to Lukas Toenne for fixing an issue I had with the node code! :)
Customdata is now interpolated into a temporary variable and applied at the
end of each layer interpolation function.
This makes CDDM customdata interpolation work correctly and avoids
duplicating the customdata when the source and destination overlap.
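The pattern can be sketched generically (hypothetical single float layer; the real code runs through each CustomData layer type's interp callback):

```c
#include <string.h>

/* Sketch of the aliasing-safe interpolation pattern: accumulate the weighted
 * result into a stack temporary and copy it to the destination only at the
 * end, so the function stays correct even when `dest` aliases one of the
 * `sources`. */
static void interp_float_layer(const float **sources, const float *weights,
                               int count, int size, float *dest)
{
	float temp[64];  /* assumed upper bound on layer element size */
	int i, j;

	for (j = 0; j < size; j++)
		temp[j] = 0.0f;

	for (i = 0; i < count; i++)
		for (j = 0; j < size; j++)
			temp[j] += sources[i][j] * weights[i];

	/* applied only at the end, after all sources have been read */
	memcpy(dest, temp, sizeof(float) * size);
}
```

Writing directly into `dest` inside the accumulation loop would corrupt later reads whenever the destination overlaps a source, which is exactly the case the change above fixes.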