* Space: volume density and step size in object or world space
* Step Size: override automatic step size
* Clipping: values below this are ignored for tighter volume bounds
The last two are currently Cycles-only.
Ref T73201
By default, Cycles now sets the step size to the voxel size for smoke and
volume objects, and to 1/10th of the bounding box for procedural volume shaders.
New settings are:
* Scene render/preview step rate: to globally adjust detail and performance
* Material step rate: multiplied with auto detected per-object step size
* World step size: distance to step for the world shader
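A minimal sketch of how these rates might combine with the automatically
detected step size (function and parameter names are illustrative assumptions,
not the actual Cycles code):

    /* Hypothetical sketch: combine the per-object base step size with the
     * material and scene rate settings described above. */
    float compute_volume_step_size(bool has_voxel_grid,
                                   float voxel_size,
                                   float bbox_size,
                                   float material_step_rate,
                                   float scene_step_rate)
    {
      /* Voxel size for smoke/volume objects, 1/10th of the bounding box
       * for procedural volume shaders. */
      const float base = has_voxel_grid ? voxel_size : bbox_size * 0.1f;

      /* Larger rates trade detail for speed, smaller rates do the opposite. */
      return base * material_step_rate * scene_step_rate;
    }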
Differential Revision: https://developer.blender.org/D1777
This patch adds a new Vertex Color node. The node also returns the alpha
of the vertex color layer as an output.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D5767
The object color property is added as an additional output in
the Object Info node.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D5554
The issue was in the optimization code path for opaque shadow rays,
which wrongly considered all primitives in a node to have the same
visibility flags.
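A hedged sketch of the kind of fix this describes (types and names are
illustrative, not the actual BVH traversal code): visibility has to be tested
per primitive, since primitives in one leaf node can carry different flags.

    struct Primitive {
      unsigned int visibility;
    };

    /* Returns true if any primitive in the leaf is visible to this ray,
     * instead of assuming the whole node shares one set of flags. */
    bool leaf_blocks_opaque_shadow(const Primitive *prims, int num_prims,
                                   unsigned int ray_visibility)
    {
      for (int i = 0; i < num_prims; i++) {
        if (prims[i].visibility & ray_visibility) {
          return true;
        }
      }
      return false;
    }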
This never really worked as it was supposed to. The main goal was to turn
noise from sampling tiny hairs into multiple layers of transparency that do
not need to be sampled stochastically. However, the implementation worked by
randomly discarding hair intersections in BVH traversal, which defeats the
purpose.
If it ever comes back, it's best implemented outside the kernel as a preprocess
that changes hair radius before BVH building. This would also make it work with
Embree, where it's not supported now. But it's not so clear anymore that with
many AA samples and GPU rendering this feature is as helpful as it once was for
CPU raytracers with few AA samples.
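A speculative sketch of the preprocess suggested above (not part of this
commit; the function name and minimum-radius parameter are illustrative):
widen very thin hair before BVH building instead of discarding intersections
during traversal. A real implementation would also need to compensate the
apparent coverage, for example through transparency.

    void widen_thin_hair(float *radius, int num_keys, float min_radius)
    {
      for (int i = 0; i < num_keys; i++) {
        if (radius[i] < min_radius) {
          radius[i] = min_radius;
        }
      }
    }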
The benefit of removing this feature is improved hair ray tracing performance,
tested on an NVIDIA Titan Xp (render time change, negative is faster):
bmw27: +0.37%
classroom: +0.26%
fishy_cat: -7.36%
koro: -12.98%
pabellon: -0.12%
Differential Revision: https://developer.blender.org/D4532
Float2 is now a new type for attributes in Cycles. Before, the choices
for attribute storage were float and float3, the latter padded to
float4. This meant that UV maps were inflated to twice the necessary
size.
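A small sketch of the memory argument, using illustrative struct definitions
rather than the actual Cycles types:

    struct packed_float2 {
      float x, y;              /* 8 bytes */
    };

    struct padded_float3 {
      float x, y, z, pad;      /* padded to float4: 16 bytes */
    };

    /* Storing a 2D UV coordinate in the padded type doubles its size. */
    static_assert(sizeof(padded_float3) == 2 * sizeof(packed_float2),
                  "float3 attribute storage is twice the size of float2");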
Reviewers: brecht, sergey
Reviewed By: brecht
Subscribers: #cycles
Tags: #cycles
Differential Revision: https://developer.blender.org/D4409
There are generic functions to retrieve float and float3 attributes:
`primitive_attribute_float` and `primitive_attribute_float3`. Inside
these functions, a prioritised if-else construction checks where
the attribute is stored and then retrieves it from that location.
Most of the time the calling function already knows where the data is
stored, so we can simplify this by splitting these functions and
removing the check logic.
This patch splits the `primitive_attribute_float?` functions into
`primitive_surface_attribute_float?` and `primitive_volume_attribute_float?`,
which leads to less branching and more optimal kernels.
The original function is still being used by OSL and `svm_node_attr`.
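A hedged sketch of the idea behind the split (declarations, parameters and
the `_sketch`/`_generic` names are illustrative, not the actual kernel code):

    struct ShaderData;  /* stand-in for the kernel's shader data */

    bool primitive_is_volume(const ShaderData *sd);
    float surface_attribute_lookup(const ShaderData *sd, int offset);
    float volume_attribute_lookup(const ShaderData *sd, int offset);

    /* Before: one generic function branches on where the attribute lives. */
    float primitive_attribute_float_generic(const ShaderData *sd, int offset)
    {
      if (primitive_is_volume(sd)) {
        return volume_attribute_lookup(sd, offset);
      }
      return surface_attribute_lookup(sd, offset);
    }

    /* After: callers that already know the primitive type use a specialized
     * variant directly, so the branch disappears from the kernel. */
    float primitive_surface_attribute_float_sketch(const ShaderData *sd, int offset)
    {
      return surface_attribute_lookup(sd, offset);
    }

    float primitive_volume_attribute_float_sketch(const ShaderData *sd, int offset)
    {
      return volume_attribute_lookup(sd, offset);
    }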
This reduces kernel compilation and render times, with the largest benefit
in production scenes.
Impact on compilation times:
job    | scene_name      | previous | new   | reduction
-------+-----------------+----------+-------+------------
t61513 | empty           | 10.63    | 10.66 | 0%
t61513 | bmw             | 17.91    | 17.65 | 1%
t61513 | fishycat        | 19.57    | 17.68 | 10%
t61513 | barbershop      | 54.10    | 24.41 | 55%
t61513 | classroom       | 17.55    | 16.29 | 7%
t61513 | koro            | 18.92    | 18.05 | 5%
t61513 | pavillion       | 17.43    | 16.52 | 5%
t61513 | splash279       | 16.48    | 14.91 | 10%
t61513 | volume_emission | 36.22    | 21.60 | 40%
Impact on render times:
job    | scene_name      | previous | new    | reduction
-------+-----------------+----------+--------+------------
61513  | empty           | 21.06    | 20.35  | 3%
61513  | bmw             | 198.44   | 190.05 | 4%
61513  | fishycat        | 394.20   | 401.25 | -2%
61513  | barbershop      | 1188.16  | 912.39 | 23%
61513  | classroom       | 341.08   | 340.38 | 0%
61513  | koro            | 472.43   | 471.80 | 0%
61513  | pavillion       | 905.77   | 899.80 | 1%
61513  | splash279       | 55.26    | 54.86  | 1%
61513  | volume_emission | 62.59    | 61.70  | 1%
There is also a positive impact when using CPU and CUDA, but it is small.
I didn't split the hair logic from the surface logic because:
* Hair and surface use the same attribute types. From the code alone it was
  not clear whether they could be split.
* Hair and surface are quick to compile and to read, so the benefit would be
  quite small.
Differential Revision: https://developer.blender.org/D4375
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading
especially when splitting files.
It's more useful to git-blame to find out who has developed the code.
See P901 for script to perform these edits.
Note that this is turned off by default and must be enabled at build time with the CMake WITH_CYCLES_EMBREE flag.
Embree must be built as a static library with ray masking turned on; the `make deps` scripts have been updated accordingly.
There, Embree is off by default too and must be enabled with the WITH_EMBREE flag.
Using Embree allows for much faster rendering of deformation motion blur while reducing the memory footprint.
TODO: GPU implementation, deduplication of data, leveraging more of Embree's features (e.g. tessellation cache).
Differential Revision: https://developer.blender.org/D3682
This allows for extra output passes that encode automatic object and material masks
for the entire scene. It is an implementation of the Cryptomatte standard as
introduced by Psyop. A good future extension would be to add a manifest to the
export and to do plenty of testing to ensure that it is fully compatible with other
renderers and compositing programs that use Cryptomatte.
Internally, it adds the ability for Cycles to have several passes of the same type
that are distinguished by their name.
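A minimal sketch of what this means in practice (illustrative types and pass
names, not the Cycles API): a pass is identified by its type together with a
name, so several passes of one type can coexist.

    #include <string>
    #include <vector>

    enum PassType { PASS_COMBINED, PASS_CRYPTOMATTE };

    struct Pass {
      PassType type;
      std::string name;
    };

    /* Two Cryptomatte passes of the same type, told apart only by name. */
    std::vector<Pass> passes = {
        {PASS_CRYPTOMATTE, "CryptoObject00"},
        {PASS_CRYPTOMATTE, "CryptoObject01"},
    };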
Differential Revision: https://developer.blender.org/D3538
This is an initial implementation of the BVH8 optimization structure
and packeted triangle intersection. The aim is to get faster ray-to-scene
intersection checks.
Scene BVH4 BVH8 (render time, mm:ss)
barbershop_interior 10:24.94 10:10.74
bmw27 02:41.25 02:38.83
classroom 08:16.49 07:56.15
fishy_cat 04:24.56 04:17.29
koro 06:03.06 06:01.45
pavillon_barcelona 09:21.26 09:02.98
victor 23:39.65 22:53.71
Memory-wise, peak usage rises by about 4.7% in complex scenes.
Note that BVH8 is disabled when using OSL; this is because the OSL
kernel does not get per-microarchitecture optimizations and hence
always assumes BVH4 is used.
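A conceptual sketch of the data layout idea (not the actual Cycles node
format): a BVH8 node stores bounds for up to eight children, so one traversal
step can test eight boxes at once, which maps naturally onto 8-wide SIMD.

    struct BoundBox {
      float lower[3];
      float upper[3];
    };

    struct BVH8NodeSketch {
      BoundBox child_bounds[8];  /* tested together in one traversal step */
      int child_index[8];        /* e.g. negative values for empty/leaf slots */
    };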
Original BVH8 patch from Anton Gavrikov.
Batched triangles intersection from Victoria Zhislina.
Extra work and tests and fixes from Maxym Dmytrychenko.
This is a physically-based, easy-to-use shader for rendering hair and fur,
with controls for melanin, roughness and randomization.
Based on the paper "A Practical and Controllable Hair and Fur Model for
Production Path Tracing".
Implemented by Leonardo E. Segovia and Lukas Stockner, part of Google
Summer of Code 2018.
This means the shader can now be used for procedural texturing. New
settings on the node are Samples, Inside, Local Only and Distance.
Original patch by Lukas with further changes by Brecht.
Differential Revision: https://developer.blender.org/D3479
This saves a little memory and copying in the kernel by storing only a 4x3
matrix instead of a 4x4 matrix. We already did this in a few places, and
those no longer need to be special exceptions.
This is in preparation for making Transform affine only, and it also gives us
a little extra type safety so we don't accidentally treat it as a regular
4x4 matrix.
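A sketch of the storage argument with illustrative types (not the actual
Cycles Transform definition): the last row of an affine matrix is always
(0, 0, 0, 1), so it doesn't need to be stored.

    struct float4_sketch {
      float x, y, z, w;
    };

    struct Transform4x3 {
      float4_sketch x, y, z;     /* 48 bytes per transform */
    };

    struct Transform4x4 {
      float4_sketch x, y, z, w;  /* 64 bytes per transform */
    };

    static_assert(sizeof(Transform4x4) - sizeof(Transform4x3) == sizeof(float4_sketch),
                  "dropping the constant last row saves one float4 per transform");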
The purpose of the previous code refactoring was to make the code more
readable, but combined with this change, benchmarks also render about 2-3%
faster on an NVIDIA Titan Xp.
It still seems to be useful in cases where the particles are distributed in
a particular order or pattern, to colorize them accordingly. This isn't
really well defined, but we might as well avoid breaking backwards
compatibility for now.