Most of the changes are related to adding support for motion data throughout
the code. There's some code for actual camera/object motion blur raytracing
but it's unfinished (it badly slows down the raytracing kernel even when the
option is turned off), so that code is still disabled.
Motion vector export from Blender tries to avoid computing derived meshes
when the mesh does not have a deforming modifier, and it also won't store
motion vectors for every vertex if only the object or camera is moving.
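As a rough sketch of that decision (the names here are illustrative, not the
actual Blender/Cycles API), the export level per object could be chosen like
this:

    /* Illustrative sketch only; Object and its flags stand in for the
     * Blender data being inspected. */
    struct Object {
      bool has_deforming_modifier; /* mesh shape changes over time */
      bool transform_is_animated;  /* object/camera matrix changes over time */
    };

    enum MotionType { MOTION_NONE, MOTION_TRANSFORM, MOTION_DEFORM };

    MotionType motion_export_type(const Object &ob)
    {
      if (ob.has_deforming_modifier)
        return MOTION_DEFORM; /* derived mesh + per-vertex motion vectors */
      if (ob.transform_is_animated)
        return MOTION_TRANSFORM; /* one matrix per motion step is enough */
      return MOTION_NONE; /* no motion data stored at all */
    }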
=== BVH build time optimizations ===
* BVH building is now multithreaded. Not all of it is: packing and the initial
bounding/splitting are still single threaded, but the recursive splitting is,
which was the main bottleneck.
* Object splitting now uses binning rather than sorting of all elements, using
code from Intel's Embree raytracer (a simplified sketch follows after this list).
http://software.intel.com/en-us/articles/embree-photo-realistic-ray-tracing-kernels/
* Other small changes to avoid allocations, pack memory more tightly, avoid
some unnecessary operations, ...
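Below is a much simplified, single-axis sketch of the binning idea referenced
above (the real code, adapted from Embree, handles all three axes and is
further optimized): primitive centroids are dropped into a fixed number of
bins in one linear pass, and only bin boundaries are evaluated as split
planes, instead of sorting all elements for every candidate split.

    #include <algorithm>
    #include <cfloat>
    #include <vector>

    struct Bounds {
      float lo[3], hi[3];

      Bounds()
      {
        for (int i = 0; i < 3; i++) {
          lo[i] = FLT_MAX;
          hi[i] = -FLT_MAX;
        }
      }
      void grow(const Bounds &b)
      {
        for (int i = 0; i < 3; i++) {
          lo[i] = std::min(lo[i], b.lo[i]);
          hi[i] = std::max(hi[i], b.hi[i]);
        }
      }
      float half_area() const
      {
        float dx = hi[0] - lo[0], dy = hi[1] - lo[1], dz = hi[2] - lo[2];
        return dx * dy + dy * dz + dz * dx;
      }
      float center(int axis) const { return 0.5f * (lo[axis] + hi[axis]); }
    };

    /* Find the best SAH split along one axis with binning; writes the index
     * of the first right-hand bin to *best_bin (-1 if no valid split). */
    float best_binned_split(const std::vector<Bounds> &prims,
                            const Bounds &centroid_bounds, int axis,
                            int num_bins, int *best_bin)
    {
      std::vector<Bounds> bin_bounds(num_bins);
      std::vector<int> bin_count(num_bins, 0);

      /* map a primitive center to a bin index */
      float k0 = centroid_bounds.lo[axis];
      float k1 = num_bins / std::max(centroid_bounds.hi[axis] - k0, 1e-12f);

      /* one linear pass over the primitives to fill the bins */
      for (const Bounds &b : prims) {
        int i = (int)((b.center(axis) - k0) * k1);
        i = std::min(std::max(i, 0), num_bins - 1);
        bin_count[i]++;
        bin_bounds[i].grow(b);
      }

      /* sweep right to left, accumulating bounds/counts of the right side */
      std::vector<float> right_area(num_bins, 0.0f);
      std::vector<int> right_count(num_bins, 0);
      Bounds acc;
      int cnt = 0;
      for (int i = num_bins - 1; i > 0; i--) {
        acc.grow(bin_bounds[i]);
        cnt += bin_count[i];
        right_area[i] = acc.half_area();
        right_count[i] = cnt;
      }

      /* sweep left to right, evaluating the SAH cost at each bin boundary */
      float best_cost = FLT_MAX;
      *best_bin = -1;
      acc = Bounds();
      cnt = 0;
      for (int i = 1; i < num_bins; i++) {
        acc.grow(bin_bounds[i - 1]);
        cnt += bin_count[i - 1];
        if (cnt == 0 || right_count[i] == 0)
          continue; /* skip splits with an empty side */
        float cost = acc.half_area() * cnt + right_area[i] * right_count[i];
        if (cost < best_cost) {
          best_cost = cost;
          *best_bin = i;
        }
      }
      return best_cost;
    }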
These optimizations do not work yet when Spatial Splits are enabled; more work
is needed for that. Other optimizations are also still needed, in particular
for the case of many low poly objects: the packing step and node memory
allocation.
BVH raytracing time should remain about the same, but BVH build time should be
significantly reduced: tests here show speedups of about 5x to 10x on a dual
core and 5x to 25x on an 8-core machine, depending on the scene.
=== Threads ===
Centralized task scheduler for multithreading, which is basically the
CPU device threading code wrapped into something reusable.
The basic idea is that there is a single TaskScheduler that keeps a pool of
threads, one for each core. Other places in the code can then create a TaskPool
into which they drop Tasks to be executed by the scheduler, and either wait for
them to complete or cancel them early.
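A minimal self-contained sketch of this design (much simpler than the real
code, and not its exact API; early cancellation is omitted for brevity):

    #include <algorithm>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    /* One global scheduler owns the worker threads, one per core. */
    class TaskScheduler {
    public:
      static TaskScheduler &get() {
        static TaskScheduler s(std::max(1u, std::thread::hardware_concurrency()));
        return s;
      }
      void push(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(mutex); queue.push(std::move(task)); }
        cond.notify_one();
      }
      ~TaskScheduler() {
        { std::lock_guard<std::mutex> lock(mutex); stop = true; }
        cond.notify_all();
        for (std::thread &t : threads) t.join();
      }
    private:
      explicit TaskScheduler(unsigned n) {
        for (unsigned i = 0; i < n; i++) threads.emplace_back([this] { worker(); });
      }
      void worker() {
        for (;;) {
          std::function<void()> task;
          {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [this] { return stop || !queue.empty(); });
            if (stop && queue.empty()) return;
            task = std::move(queue.front());
            queue.pop();
          }
          task(); /* run outside the lock */
        }
      }
      std::vector<std::thread> threads;
      std::queue<std::function<void()>> queue;
      std::mutex mutex;
      std::condition_variable cond;
      bool stop = false;
    };

    /* A pool tracks only its own tasks, so waiting is independent of tasks
     * pushed elsewhere. Call wait() before the pool goes out of scope. */
    class TaskPool {
    public:
      void push(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(mutex); num_pending++; }
        TaskScheduler::get().push([this, task] {
          task();
          std::lock_guard<std::mutex> lock(mutex);
          if (--num_pending == 0) done.notify_all();
        });
      }
      void wait() { /* block until every task from this pool has run */
        std::unique_lock<std::mutex> lock(mutex);
        done.wait(lock, [this] { return num_pending == 0; });
      }
    private:
      std::mutex mutex;
      std::condition_variable done;
      int num_pending = 0;
    };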
=== Normal ===
Added a Normal output to the texture coordinate node. This currently
gives the object space normal, which stays the same under object animation.
In the future this might become a "generated" normal so it's also stable for
deforming objects, but for now it's already useful for non-deforming objects.
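For a rigid transform it is easy to see why this is stable: multiplying the
world space normal by the transposed rotation of the object matrix undoes the
animation exactly. A sketch, not the actual Cycles code:

    struct float3 { float x, y, z; };
    struct Matrix3 { float m[3][3]; }; /* rotation part of the object matrix */

    /* Bring a world space normal back to object space with the transposed
     * (= inverse, for a rigid transform) rotation; the result is the same
     * no matter how the object is animated. */
    float3 world_to_object_normal(const Matrix3 &rot, const float3 &n)
    {
      float3 r;
      r.x = rot.m[0][0] * n.x + rot.m[1][0] * n.y + rot.m[2][0] * n.z;
      r.y = rot.m[0][1] * n.x + rot.m[1][1] * n.y + rot.m[2][1] * n.z;
      r.z = rot.m[0][2] * n.x + rot.m[1][2] * n.y + rot.m[2][2] * n.z;
      return r;
    }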
=== Render Layers ===
Per render layer Samples control; leaving it at 0 will use the common scene
setting.
Environment pass will now render environment even if film is set to transparent.
Exclude Layers" added. Scene layers (all object that influence the render,
directly or indirectly) are shared between all render layers. However sometimes
it's useful to leave out some object influence for a particular render layer.
That's what this option allows you to do.
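A hypothetical sketch of both render layer options above (illustrative names,
not the actual Blender API):

    /* Illustrative sketch only. */
    struct RenderLayer {
      int samples;          /* 0 means: use the scene-wide sample count */
      unsigned int exclude; /* scene layer bits to leave out for this layer */
    };

    int layer_samples(const RenderLayer &rl, int scene_samples)
    {
      return (rl.samples == 0) ? scene_samples : rl.samples;
    }

    unsigned int layer_scene_layers(const RenderLayer &rl,
                                    unsigned int scene_layers)
    {
      /* objects on excluded layers stop influencing this render layer */
      return scene_layers & ~rl.exclude;
    }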
=== Filter Glossy ===
When using a value higher than 0.0, this will blur glossy reflections after
blurry bounces, to reduce noise at the cost of accuracy. 1.0 is a good
starting value to tweak.
Some light paths have a low probability of being found while contributing much
light to the pixel. As a result these light paths will be found in some pixels
and not in others, causing fireflies. An example of such a difficult path might
be a small light that is causing a small specular highlight on a sharp glossy
material, which we are seeing through a rough glossy material. With path tracing
it is difficult to find the specular highlight, but if we increase the roughness
on the material the highlight gets bigger and softer, and so easier to find.
Often this blurring will be hardly noticeable, because we are seeing it through
a blurry material anyway, but there are also cases where it will lead to a loss
of detail in lighting; still, this trade-off makes rendering more reliable for
now.
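A minimal sketch of the idea (illustrative, not the exact kernel code),
assuming the integrator tracks the smallest BSDF pdf seen along the path so
far:

    #include <algorithm>
    #include <cmath>

    /* Raise the minimum roughness of glossy closures after a blurry bounce.
     * min_ray_pdf is the smallest BSDF pdf seen along the path so far; a
     * small value means we already arrived through a blurry bounce, so extra
     * blur will hardly be visible and more of it is allowed. */
    float filtered_roughness(float roughness, float filter_glossy,
                             float min_ray_pdf)
    {
      if (filter_glossy == 0.0f)
        return roughness; /* option disabled */

      float blur_pdf = min_ray_pdf / filter_glossy;
      if (blur_pdf < 1.0f) {
        float blur_roughness = std::sqrt(1.0f - blur_pdf) * 0.5f;
        roughness = std::max(roughness, blur_roughness);
      }
      return roughness;
    }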
Also added an integrator "Clamp" option, to clamp very bright samples to a
maximum value. This will reduce accuracy, but may help to reduce noise and
speed up convergence.
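The clamp itself can be as simple as a per-channel limit applied to each
sample before it is accumulated; a sketch:

    #include <algorithm>

    struct float3 { float x, y, z; };

    /* Limit each sample per channel before accumulating it, so one firefly
     * sample cannot dominate a pixel (at the cost of losing energy). */
    float3 clamp_sample(float3 L, float clamp_value)
    {
      if (clamp_value > 0.0f) { /* 0 keeps clamping disabled */
        L.x = std::min(L.x, clamp_value);
        L.y = std::min(L.y, clamp_value);
        L.z = std::min(L.z, clamp_value);
      }
      return L;
    }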
=== Shadow Pass ===
Light emitting objects and world lighting do not contribute to the shadow pass.
Consider this more as a pass useful for some compositing tricks; unlike the
other lighting passes, this pass can't be used to exactly reconstruct the
combined pass.
existing "Equirectangular". This projection is useful to create light probes
from a chrome ball placed in a real scene. It expects as input a photograph of
the chrome ball, cropped so the ball just fits inside the image boundaries.
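A sketch of the lookup math (the axis conventions here are an assumption, not
the exact kernel code): for a ball photographed from far away, the surface
normal is proportional to the difference between the reflected direction we
want to look up and the incoming view ray, and since the ball fills the image,
the normal's sideways components map directly to texture coordinates:

    #include <cmath>

    struct float2 { float u, v; };
    struct float3 { float x, y, z; };

    /* Map a (normalized) world direction to mirror ball texture coordinates,
     * assuming the camera ray travels along +Y. */
    float2 direction_to_mirrorball(float3 dir)
    {
      /* normal of the ball that reflects the incoming ray into dir:
       * N is proportional to dir - incoming = dir - (0, 1, 0) */
      dir.y -= 1.0f;
      float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
      if (len > 0.0f) {
        dir.x /= len;
        dir.y /= len;
        dir.z /= len;
      }
      /* the ball fills the image, so remap x/z from [-1, 1] to [0, 1] */
      return float2{0.5f * (dir.x + 1.0f), 0.5f * (dir.z + 1.0f)};
    }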
Example setup with panorama camera and mixing two (poor quality) photographs
from different viewpoints to avoid stretching and hide the photographer:
http://www.pasteall.org/pic/28036
=== Panorama Camera ===
The scene can now be rendered as a full panorama, as if projected onto an
environment map, by enabling the Panorama option in the camera.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Camera#Panorama
The focal length and sensor settings are not used; the UI could still be
tweaked to communicate this, and panorama should probably become a proper
camera type like perspective or ortho.
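Per pixel this amounts to mapping the film position directly to a spherical
direction rather than projecting through a lens, which is why focal length and
sensor size play no role; a sketch with an equirectangular mapping (axis
conventions are an assumption):

    #include <cmath>

    struct float3 { float x, y, z; };

    /* Map a film position in [0,1]^2 straight to a direction on the sphere,
     * equirectangular style; no lens is involved. */
    float3 panorama_direction(float u, float v)
    {
      const float PI = 3.14159265358979f;
      float phi = PI * (1.0f - 2.0f * u); /* longitude over the full 360 */
      float theta = PI * (v - 0.5f);      /* latitude from -90 to 90 */
      return float3{std::cos(theta) * std::cos(phi),
                    std::cos(theta) * std::sin(phi),
                    std::sin(theta)};
    }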
=== Render Passes ===
Currently supported passes:
* Combined, Z, Normal, Object Index, Material Index, Emission, Environment,
Diffuse/Glossy/Transmission x Direct/Indirect/Color
Not supported yet:
* UV, Vector, Mist
Only enabled for CPU devices at the moment; GPU tweaks will be done tomorrow,
along with environment importance sampling.
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Passes
=== Brightness/Contrast ===
Contrast helps to adjust IBL (HDR images used for background lighting).
Note: in the UI we are calling it Bright instead of Brightness. This copies
what the Blender compositor does.
Note 2: the algorithm we are using produces pure black when contrast is 100,
due to a division by zero. I'm not a fan of that and would like to look at
other algorithms (what GIMP does, for example), but that will only happen
after 2.62.