It works similarly to Vortex, but it's more controllable - it
doesn't send the particles off accelerating into space, but
keeps them spinning around the Z axis of the force field object.
This is safe in the branch, but Jahka, if you have any feedback,
I'd be curious to hear :)
added gameObject.replaceMesh(meshname) - needed this for an automatically generated scene where hundreds of objects would have needed logic bricks added automatically. It's quicker to run replaceMesh() on all of them from one script.
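A minimal sketch of the kind of script this enables (BGE Python, 2.4x-era API; the mesh name is a placeholder):

import GameLogic

scene = GameLogic.getCurrentScene()
for ob in scene.getObjectList():
    # swap every object's mesh from one script, instead of wiring
    # ReplaceMesh logic bricks onto hundreds of objects
    ob.replaceMesh('GeneratedMesh')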
The conversion code was reading the mtex array of a library-linked
material. It is, however, not guaranteed that direct_link_* has been
called on the material yet, so the array pointer is not always valid
and it can crash.
* #17900 - the IK Constraint was not included, regardless of which Visual-Keying method was used
* Deleting a Bone Group now corrects indices of those groups that occurred after the one that was deleted
* No more click-a-mania - a new option to delete all vertex groups from a Mesh at once (Ctrl-Shift-G menu)
Previously, when using the light cache, there could be artifacts when
voxel points that were sampled outside the volume object's geometry
got interpolated into the rest of the volume. This commit adds a
dilate-like filter pass after creating the light cache, which fills
these empty areas with the average of their surrounding voxels.
http://mke3.net/blender/devel/rendering/volumetrics/vol_lightcache_filter.jpg
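For illustration, here's a rough Python sketch of that dilate-style fill (the actual implementation is in C; the flat-array layout and 'empty' marker value are assumptions):

def dilate_fill(grid, res, empty=-1.0):
    # grid: flat list of res**3 shading values; 'empty' marks voxels
    # that were sampled outside the volume geometry
    idx = lambda x, y, z: x + y * res + z * res * res
    out = grid[:]
    neighbours = ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))
    for z in range(res):
        for y in range(res):
            for x in range(res):
                if grid[idx(x, y, z)] != empty:
                    continue
                total, count = 0.0, 0
                for dx, dy, dz in neighbours:
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if 0 <= nx < res and 0 <= ny < res and 0 <= nz < res:
                        v = grid[idx(nx, ny, nz)]
                        if v != empty:
                            total += v
                            count += 1
                if count:
                    # fill the empty voxel with the average of its
                    # non-empty neighbours, as the filter pass does
                    out[idx(x, y, z)] = total / count
    return out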
Fixed a bug that was preventing light cache from working on some
people's systems (though it went just fine on both my Windows PC
and Mac). I have no idea how the original code even worked
at all - it really shouldn't have.
But it's fixed now anyway! Thanks a bunch to Zanqdo for his patience
in helping me pinpoint this.
This is an interesting bug, since it is likely the cause of many other suspicious Python crashes in Blender.
sys.last_traceback stores references to PyObjects from the point of the error, and only frees them when sys.last_traceback is set again or on exit.
This caused many crashes in the BGE while testing, since Python would end up freeing invalid game objects: when running scripts with errors, Blender would crash every 2-5 runs (in my test just now it crashed after 4 tries).
It could also segfault Blender when, for example, you run a script that references objects, then load a new file and run another script that raises an error.
In that case the user counts of all the invalid Blender objects would be decremented, even though none of the pointers were still valid.
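A standalone sketch of the mechanism (plain Python rather than BGE code; the GameObject class is illustrative, and the final clearing step is what the fix effectively amounts to):

import sys

class GameObject:
    def __del__(self):
        print("freed")

def run():
    ob = GameObject()   # local reference, kept alive by the traceback's frame
    raise RuntimeError("script error")

try:
    run()
except RuntimeError:
    # emulate what printing an unhandled script error does:
    sys.last_type, sys.last_value, sys.last_traceback = sys.exc_info()

# 'ob' is still alive here - the stored traceback owns its frame.
# Dropping the stored references frees it immediately (CPython refcounting):
sys.last_type = sys.last_value = sys.last_traceback = None   # prints "freed"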
This fixes an issue that darkened the render from inside a
volume when sky or premul was enabled. I still need to find a good
way to get an alpha value back into the shader (for compositing etc.)
without the render being distorted by premul.
Replaced the 'Sharp' falloff with 'Soft'. This falloff type has
a variable softness and can give some quite smooth results.
It can be useful for getting smooth transitions in density when
you're using particles on a large scale:
http://mke3.net/blender/devel/rendering/volumetrics/pd_falloff_soft.jpg
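As a rough illustration (not necessarily the exact formula used in the code), a falloff of this kind can be modelled with a softness exponent on the normalized distance:

def soft_falloff(dist, radius, softness=2.0):
    # 1.0 at the particle centre, fading to 0.0 at the radius;
    # the softness exponent shapes how gradual the transition is
    r = min(dist / radius, 1.0)
    return (1.0 - r) ** softness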
Also removed the 'angular velocity' turbulence source - it
wasn't doing anything useful at the moment.
"Warning: binarysearch_bezt_index encountered invalid array" errors were being displayed in the console. Was caused by 3d-view show-keyframe for infostring stuff, when an IPO being checked had no keyframes.
- Adding a constraint using the button in the panel still didn't update the Armature Editing buttons properly.
- Minor code tidying of earlier bugfix for armatures
- 'For Transform' option for Limit constraints is now only taken into account for constraints that are enabled.
Second attempt at fixing this bug. The previous fix caused a segfault when all bones in a chain were selected. Now, segments which are selected (i.e. get swapped) will be unparented from segments that aren't (i.e. aren't swapped, so are still in the old orientation).
After some discussion with Campbell, changed the way the C struct sizeof is fetched.
Moved DataSize() to Blender.Types.CSizeof(Blendertype). Supported types return sizeof(data struct); otherwise -1.
To quickly check what types are supported:
import Blender.Types as bt
x = dir(bt)
for t in x:
    if t[0] != '_':
        s = 'bt.CSizeof(bt.' + t + ')'
        print t, "=", eval(s)
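For a single type the call looks like this (assuming ObjectType is among the names dir() reports; actual sizes vary per build):

import Blender.Types as bt
print(bt.CSizeof(bt.ObjectType))   # sizeof(struct Object), or -1 if unsupported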
This was a bit complicated to do, but is working pretty well now, and can make shading significantly faster to render.
This option pre-calculates self-shading information into a
3d voxel grid before rendering, then uses and interpolates
that data during the main rendering phase, rather than
calculating shading for each sample. It's an approximation
and isn't as accurate as getting the lighting directly,
but in many cases it looks very similar and renders much faster.
The voxel grid covers the object's 3D screen-aligned bounding box
so this may not be that useful for large volume regions like a
big range of cloud cover, since you'll need a lot of resolution.
The render time speaks for itself here:
http://mke3.net/blender/devel/rendering/volumetrics/vol_light_cache_interpolation.jpg
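Conceptually, the two phases look something like this (a pseudocode-level Python sketch, not the real render internals; shade_point stands in for the full shading calculation, and trilinear() is sketched further below):

def build_light_cache(res, shade_point, bb_min, bb_max):
    # phase 1, before rendering: do the expensive self-shading
    # calculation once per voxel of the bounding-box grid
    cache = [0.0] * res**3
    i = 0
    for z in range(res):
        for y in range(res):
            for x in range(res):
                fx = (x + 0.5) / res
                fy = (y + 0.5) / res
                fz = (z + 0.5) / res
                pos = (bb_min[0] + fx * (bb_max[0] - bb_min[0]),
                       bb_min[1] + fy * (bb_max[1] - bb_min[1]),
                       bb_min[2] + fz * (bb_max[2] - bb_min[2]))
                cache[i] = shade_point(pos)
                i += 1
    return cache

def shading_at(cache, res, vx, vy, vz):
    # phase 2, during the main render: each sample becomes a cheap
    # interpolated lookup instead of a full shading calculation
    return trilinear(cache, res, vx, vy, vz)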
The resolution is set in the volume panel - it's the resolution
of one edge of the voxel grid. Keep in mind that the higher the
resolution, the more memory is needed, as in fluid sim. The
memory requirements increase with the cube of the edge
resolution, so be careful. I might try to add a little memory
calculator there later, like fluid sim has.
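That calculator is just the cube law in code form (assuming one 4-byte float per voxel; the real cache may store more per cell):

def lightcache_mem_mb(res, bytes_per_voxel=4):
    # memory grows with the cube of the edge resolution
    return res**3 * bytes_per_voxel / (1024.0 * 1024.0)

print(lightcache_mem_mb(50))    # ~0.48 MB
print(lightcache_mem_mb(200))   # ~30.5 MB: 4x the edge, 64x the memory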
The voxels are interpolated using trilinear interpolation -
here's a comparison image I made during testing:
http://mke3.net/blender/devel/rendering/volumetrics/vol_light_cache_compare.jpg
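The interpolation itself is standard trilinear blending of the eight surrounding voxels - roughly this (illustrative Python, with positions given in voxel units):

def trilinear(grid, res, px, py, pz):
    idx = lambda x, y, z: x + y * res + z * res * res
    # clamp the base voxel to the grid, then take its +1 neighbour
    x0 = max(0, min(int(px), res - 1)); x1 = min(x0 + 1, res - 1)
    y0 = max(0, min(int(py), res - 1)); y1 = min(y0 + 1, res - 1)
    z0 = max(0, min(int(pz), res - 1)); z1 = min(z0 + 1, res - 1)
    fx, fy, fz = px - x0, py - y0, pz - z0
    # blend along x, then y, then z
    c00 = grid[idx(x0, y0, z0)] * (1 - fx) + grid[idx(x1, y0, z0)] * fx
    c10 = grid[idx(x0, y1, z0)] * (1 - fx) + grid[idx(x1, y1, z0)] * fx
    c01 = grid[idx(x0, y0, z1)] * (1 - fx) + grid[idx(x1, y0, z1)] * fx
    c11 = grid[idx(x0, y1, z1)] * (1 - fx) + grid[idx(x1, y1, z1)] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz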
There might still be a couple of little tweaks I can do to
improve the visual quality, I'll see.