This required allocating some memory for the object transform needed
by ShaderData, and currently this is done for all platforms. Since we're
targeting fully feature-complete platforms, this is acceptable at this
point; in the future we'll do selective NO_HAIR/NO_SSS/NO_BLUR kernels.
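A minimal sketch of what such selective kernels could look like, assuming
hypothetical __NO_HAIR__/__NO_SSS__/__NO_BLUR__ defines (the real Cycles
feature defines may differ):

    /* Hypothetical sketch: feature-complete builds keep all ShaderData
     * members, selective builds compile out what the scene never uses. */
    struct ShaderData {
        float P[3], N[3];       /* always needed */
    #ifndef __NO_BLUR__
        float ob_tfm[12];       /* object transform, only for motion blur */
    #endif
    #ifndef __NO_HAIR__
        float curve_u;          /* hair intersection parameter */
    #endif
    #ifndef __NO_SSS__
        float sss_radius[3];    /* subsurface scattering radius */
    #endif
    };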
This is still experimental; in fact there are some major issues on the
NVidia platform, and it's not yet clear whether they're caused by a
compiler bug, an uninitialized variable, or something else.
Some trivial fixes like spaces around operators and a missing semicolon,
plus a fix for incorrect detection of the ShaderData SOA size. That was
harmless since there's only one closure array, but still better to fix.
This file was actually checking for features enabled on the CPU, and since
all of them were enabled there, removing the checks makes no difference.
Ideally we'd do runtime feature detection and just pass some arguments as
NULL to the kernel, or perhaps have variadic kernel entry points, which
would also be quite easy to do.
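A minimal sketch of the runtime-detection idea, with illustrative names
(none of these are the actual kernel argument structures):

    #include <cstddef>

    /* Hypothetical sketch: the host detects which features the scene uses
     * and passes NULL for the unused buffers so the kernel can skip them. */
    struct KernelArgs {
        const float *sss_data;   /* NULL when no subsurface materials exist */
        const float *hair_data;  /* NULL when the scene contains no hair */
    };

    KernelArgs make_kernel_args(bool scene_has_sss, bool scene_has_hair,
                                const float *sss_buf, const float *hair_buf)
    {
        KernelArgs args;
        args.sss_data = scene_has_sss ? sss_buf : NULL;
        args.hair_data = scene_has_hair ? hair_buf : NULL;
        return args;
    }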
It's good for testing and seems to work quite reliably here.
This is probably not totally cheap in terms of performance, but that could
be solved quite easily by selective kernel compilation once the other
things are tested and proven reliable.
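As an illustration of how selective compilation could be driven per scene,
here is a hedged sketch that assembles -D defines for the compiler (the
define names are assumptions, not Cycles' actual flags):

    #include <string>

    /* Hypothetical sketch: once the scene's feature use is known, build a
     * slimmed-down kernel by passing these as compiler options, e.g. the
     * options argument of clBuildProgram() on OpenCL. */
    std::string kernel_build_options(bool no_hair, bool no_sss, bool no_blur)
    {
        std::string opts;
        if (no_hair) opts += " -D __NO_HAIR__";
        if (no_sss)  opts += " -D __NO_SSS__";
        if (no_blur) opts += " -D __NO_BLUR__";
        return opts;
    }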
The character motion actuator currently triggers the jump method every
frame. Add edge detection so the jump method is triggered only once per
activation.
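A minimal sketch of the edge-detection idea, with hypothetical names
(this is not the actual actuator code):

    class JumpEdgeDetector {
        bool m_previousActive;
    public:
        JumpEdgeDetector() : m_previousActive(false) {}

        /* Returns true only on the inactive->active transition, so holding
         * the input does not re-trigger the jump every frame. */
        bool shouldJump(bool inputActive)
        {
            bool trigger = inputActive && !m_previousActive;
            m_previousActive = inputActive;
            return trigger;
        }
    };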
Reviewers: lordloki, sybren, moguri
Reviewed By: moguri
Differential Revision: https://developer.blender.org/D1220
No need to store them in the class; they're unlikely to change, and if they
do we're in big trouble anyway. A more appropriate approach would be to
typedef these things in kernel_types.h, but still use an inlined sizeof().
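A minimal sketch of that approach, with simplified stand-in types (the real
ShaderData layout is more involved):

    #include <cstddef>

    /* Hypothetical sketch: types are defined once in kernel_types.h and the
     * SOA size comes from an inlined sizeof() instead of a cached member. */
    typedef struct ShaderClosure {
        float weight[3];
        int type;
    } ShaderClosure;

    typedef struct ShaderData {
        float P[3];
        ShaderClosure closure[64];  /* the single closure array */
        int num_closure;
    } ShaderData;

    static inline size_t shader_data_size()
    {
        return sizeof(ShaderData);  /* nothing stored that could go stale */
    }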
This resolves a problem where items selected for multi-value editing could
include objects not visible in any view (unlocked layers, local view, etc.).
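A hedged sketch of the kind of check involved, using illustrative layer
bitmasks rather than Blender's actual API:

    #include <vector>

    /* Hypothetical sketch: keep an object for multi-value editing only if
     * its layer bits overlap with at least one visible view's layer mask. */
    static bool object_visible_in_any_view(unsigned int object_layers,
                                           const std::vector<unsigned int> &view_masks)
    {
        for (unsigned int mask : view_masks) {
            if (object_layers & mask) {
                return true;   /* visible somewhere, keep it */
            }
        }
        return false;  /* hidden in every view, exclude it */
    }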
Optimize the full rebuild case for now (though same code can be adapted to
partial redraws)
Main changes here:
* Calculate the bounding centroid for faces only once (instead of at every
intermediate node).
* Faces are not added to GSets immediately; instead we keep a face array in
which faces belonging to the same node are stored consecutively. Nodes just
keep track of a start offset and a length into that array (see the sketch
after this list).
* Since faces are not added to GSets, we can skip the GSet cleanup and
re-addition at each intermediate node, and instead add the faces only to
the final leaf nodes' GSets when those nodes are created.
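A minimal sketch of this bookkeeping, under simplifying assumptions (median
split along alternating axes, std::unordered_set standing in for GSet, all
names illustrative):

    #include <algorithm>
    #include <unordered_set>
    #include <vector>

    struct Face { float centroid[3]; };  /* centroid computed once, up front */

    struct Node {
        int start, count;                /* range into the shared face array */
        bool is_leaf;
        std::unordered_set<int> faces;   /* filled for leaves only */
    };

    /* Recursively partition face_order[start..start+count) in place; child
     * nodes only record (offset, length), with no per-node set churn. */
    static void build(std::vector<int> &face_order, const std::vector<Face> &faces,
                      std::vector<Node> &nodes, int start, int count,
                      int axis, int leaf_limit)
    {
        Node node = {start, count, false, {}};
        if (count <= leaf_limit) {
            node.is_leaf = true;
            node.faces.insert(face_order.begin() + start,
                              face_order.begin() + start + count);
            nodes.push_back(node);
            return;
        }
        int half = count / 2;
        std::nth_element(face_order.begin() + start,
                         face_order.begin() + start + half,
                         face_order.begin() + start + count,
                         [&](int a, int b) {
                             return faces[a].centroid[axis] < faces[b].centroid[axis];
                         });
        nodes.push_back(node);
        build(face_order, faces, nodes, start, half, (axis + 1) % 3, leaf_limit);
        build(face_order, faces, nodes, start + half, count - half,
              (axis + 1) % 3, leaf_limit);
    }

Because the partition happens in place, the face array is shared by the
whole tree and sets are only materialized when leaf nodes are created.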
Results:
For a 1.9 million face test model, PBVH generation time (roughly measured
by undoing) drops from 6 seconds to about 4 seconds. Still too high, but a
nice improvement.
TODO:
Thread some parts. Unfortunately, threading the GSet assignment might not
help much, since we'd need a lot of locking to avoid collisions with node
assignments, especially for unique vertices.