like driving integrator seed with #frame.
Scene drivers are evaluated continuously, which would be nice to fix but is
complicated. For now the new RNA value is compared against the current one and
the update is skipped when nothing actually changed, which is a useful
optimization by itself.
(merged from tomato branch)
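A minimal sketch of the compare-before-update idea, assuming the RNA float
accessors and an RNA_property_update() call purely for illustration (the real
driver code handles more property types and differs in detail):

    #include "RNA_access.h"

    /* Sketch only: skip the update when the driven value did not change. */
    static void driver_write_value(struct PointerRNA *ptr,
                                   struct PropertyRNA *prop,
                                   float new_value)
    {
        float old_value = RNA_property_float_get(ptr, prop);

        /* Drivers are evaluated continuously, so only flush an update
         * when the value really changed. */
        if (old_value != new_value) {
            RNA_property_float_set(ptr, prop, new_value);
            RNA_property_update(NULL, ptr, prop);
        }
    }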
This replaces the per-image-editor curve mapping, which didn't behave properly
(it was possible to open the same image in two image editors and set up
different curves in each of them, but only the last changed curve was applied
to the image).
After discussion with Brecht we decided to have something which works reliably
and predictably, and ended up adding RGB curves as part of the display
transform, applied before the OCIO processor (to match the old behavior).
Setting white/black values from the image editor (Ctrl/Shift + LMB) now
affects the scene settings.
This could break compatibility, but there's no reliable way to convert the
old semi-working settings into the new ones.
This mainly behaves in the same way as the legacy color transformation, but it
will give different results in over- and under-exposed areas.
Not sure whether that is actually an issue -- those areas seem to behave
poorly in both the current stable release and the OCIO branch.
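A minimal sketch of the curves-before-OCIO ordering described above; the
helper names and the OCIO_Processor type are assumptions standing in for the
real curve-mapping and OCIO calls:

    struct CurveMapping;
    struct OCIO_Processor;  /* hypothetical opaque processor handle */

    /* Assumed helpers standing in for the real curve / OCIO calls. */
    void apply_display_curves(struct CurveMapping *curve_mapping, float pixel[4]);
    void apply_ocio_processor(struct OCIO_Processor *processor, float pixel[4]);

    /* Sketch of the ordering only: the RGB curves are part of the display
     * transform and run before the OCIO processor, matching the old
     * behavior of the per-editor curves. */
    static void display_transform_pixel(float pixel[4],
                                        struct CurveMapping *curve_mapping,
                                        struct OCIO_Processor *processor)
    {
        apply_display_curves(curve_mapping, pixel);
        apply_ocio_processor(processor, pixel);
    }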
This gives some speedup, which compensates for the slowdown caused by
converting the image buffer into display space.
Used OpenMP for this. Still feel skeptical about it, but after discussing with
Brecht we decided this approach could be used, since all platforms now seem to
have their OpenMP issues solved.
Waveform and vector scopes are still single-threaded, since they're a bit
trickier to make multi-threaded and probably not so commonly used.
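A minimal sketch of the OpenMP pattern meant here; the per-pixel transform is
a placeholder gamma, not the real curve + OCIO processing:

    #include <math.h>
    #include <stddef.h>

    /* Sketch: convert a float rect to display space scanline by scanline,
     * distributing rows across threads with OpenMP. The powf() call is a
     * placeholder for the actual display transform. */
    static void imbuf_to_display_space(float *display_rect,
                                       const float *linear_rect,
                                       int width, int height, int channels)
    {
        int y;

        #pragma omp parallel for schedule(static)
        for (y = 0; y < height; y++) {
            const float *in = linear_rect + (size_t)y * width * channels;
            float *out = display_rect + (size_t)y * width * channels;
            int i;

            for (i = 0; i < width * channels; i++) {
                out[i] = powf(in[i] > 0.0f ? in[i] : 0.0f, 1.0f / 2.2f);
            }
        }
    }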
This avoids recalculating the scopes on every redraw, so tools such as panning
and zooming no longer trigger scope recalculation. Implemented as a structure
inside SpaceSeq, just like it's done for the clip and image spaces.
Also fixed zebra display to work in display space.
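A minimal sketch of the caching idea, assuming illustrative field and builder
names rather than the actual SpaceSeq/scope API:

    struct ImBuf;

    /* Assumed builders standing in for the real scope code. */
    struct ImBuf *make_histogram_view_from_ibuf(struct ImBuf *ibuf);
    struct ImBuf *make_waveform_view_from_ibuf(struct ImBuf *ibuf);

    /* Scope previews stored in the space data; they are only rebuilt when
     * the source image changes, so panning/zooming just redraws the cache.
     * (Freeing the previous buffers is omitted in this sketch.) */
    typedef struct SequencerScopesCache {
        struct ImBuf *reference_ibuf;  /* image the scopes were built from */
        struct ImBuf *histogram_ibuf;
        struct ImBuf *waveform_ibuf;
    } SequencerScopesCache;

    static void sequencer_scopes_ensure(SequencerScopesCache *scopes,
                                        struct ImBuf *ibuf)
    {
        if (scopes->reference_ibuf == ibuf) {
            return;  /* cache is still valid, nothing to recalculate */
        }

        scopes->histogram_ibuf = make_histogram_view_from_ibuf(ibuf);
        scopes->waveform_ibuf = make_waveform_view_from_ibuf(ibuf);
        scopes->reference_ibuf = ibuf;
    }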
Added a utility function to apply the display transform to an image buffer's
float array; it is currently only used by the sequencer's scopes.
This function is multithreaded, but the scopes should be improved further,
since they are currently recalculated from scratch on every draw.
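A minimal usage sketch, assuming hypothetical names for the display-transform
utility and the scope builder:

    #include <stdlib.h>
    #include <string.h>

    struct ImBuf;

    /* Assumed prototypes: the multithreaded display-transform utility and
     * a scope builder working on a plain float rect. */
    void display_transform_float_rect_mt(float *rect, int width, int height);
    struct ImBuf *make_histogram_view_from_float_rect(const float *rect,
                                                      int width, int height);

    /* Convert a copy of the float rect to display space once, then build
     * the scope from the converted pixels. */
    static struct ImBuf *build_histogram_scope(const float *linear_rect,
                                               int width, int height)
    {
        size_t size = sizeof(float) * 4 * (size_t)width * (size_t)height;
        float *display_rect = malloc(size);
        struct ImBuf *scope;

        memcpy(display_rect, linear_rect, size);
        display_transform_float_rect_mt(display_rect, width, height);
        scope = make_histogram_view_from_float_rect(display_rect, width, height);

        free(display_rect);
        return scope;
    }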
- When render layers could not be found in save_render_result_tile(), Blender would crash.
- RE_engine_end_result() / RNA end_result() didn't set the result argument as required.
... also some style cleanup.
This means that a modifier operates on the buffer which was passed to it,
without creating a copy of the image buffer first.
All current modifiers fit this model, and if a particular modifier does need
the original buffer during its calculation it can create a copy itself.
Gives some speedup.
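A minimal sketch of the in-place model, assuming an illustrative callback
signature rather than the exact sequencer modifier API:

    #include <stddef.h>

    struct ImBuf;
    struct SequenceModifierData;

    /* Each modifier writes its result straight into the buffer it receives. */
    typedef void (*seq_modifier_apply_fn)(struct SequenceModifierData *smd,
                                          struct ImBuf *ibuf);

    static void apply_modifier_stack(struct SequenceModifierData **modifiers,
                                     seq_modifier_apply_fn *apply_funcs,
                                     size_t count,
                                     struct ImBuf *ibuf)
    {
        size_t i;

        /* No intermediate copies: each modifier mutates `ibuf` in place.
         * A modifier that really needs the untouched input is expected to
         * duplicate the buffer itself before modifying it. */
        for (i = 0; i < count; i++) {
            apply_funcs[i](modifiers[i], ibuf);
        }
    }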
The initial idea behind this input was redesigned in a more flexible way
using modifiers.
Also, since Color Balance (which was the only thing using the effect mask
input) was moved to the modifiers, this input field became redundant.
It's pretty tricky to write versioning code to prevent possible data loss in
cases where this field was used, but hopefully it won't be difficult to switch
to modifier masks.