After a couple of experiments with variable blur filters, I tried
a more interesting, and who knows... original approach. :)
First watch results here:
http://www.blender.org/bf/rt0001_0030.avi
http://www.blender.org/bf/hand0001_0060.avi
These are the steps in producing such results:
- In a preprocess, the speed vectors to the previous and next frame are
calculated. Speed vectors are screen-aligned and in pixel size.
- While rendering, these vectors get calculated per sample and
accumulated in the vector buffer, checking for "minimum speed"
(at the start the vector buffer is initialized to maximum speed);
see the sketch further below.
- After render:
- The entire image (all pixels) is then converted to quad polygons.
- The z value of the pixels is also assigned to the polygons.
- The vertices for the quads use averaged speed vectors (of the 4
corner faces), using a 'minimum but non-zero' speed rule.
This minimum-speed trick works very well to prevent 'tearing' when
multiple faces move in different directions within a pixel, and to
separate moving pixels cleanly from non-moving ones.
- So, now we have a sort of 'mask' of quad polygons. The previous steps
guaranteed that this mask doesn't have antialias color info, and has
speed vectors that ensure individual parts move nicely without
tearing effects. The Z allows multiple layers of moving masks.
- Then, in the temporal buffer, faces get tagged as moving or not.
- These tags then go to an anti-alias routine, which assigns alpha
values to edge faces, based on the method we used in the past to antialias
bitmaps (still in our code, check the antialias.c in imbuf!)
- Finally, the tag buffer is used to mark which z values of the original
image have to be included (to allow the blur to go behind stuff).
- OK, now we're ready for accumulating! In a loop, all faces then get
drawn (with zbuffer) with increasing influence of their speed vectors.
The resulting image is then accumulated on top of the original with a
decreasing weight.
It all sounds quite complex... but the speed is still encouraging. The
images above use 64 mblur steps, which takes about 1-3 seconds per frame.
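
To make the "minimum speed" bookkeeping above a bit more concrete, here is
a small stand-alone C sketch of the idea. It is not the actual render code;
the struct, the init value and the function names are made up for
illustration only.

#include <math.h>
#include <float.h>
#include <stdio.h>

/* One speed vector per pixel, in (sub)pixel units.
   Struct name, layout and init value are illustrative only. */
typedef struct SpeedVec { float x, y; } SpeedVec;

/* Before sampling starts, every entry is set to "maximum speed",
   so any real sample will be smaller and replace it. */
void init_speed_buffer(SpeedVec *buf, int num_pixels)
{
	int i;
	for (i = 0; i < num_pixels; i++) {
		buf[i].x = FLT_MAX;
		buf[i].y = FLT_MAX;
	}
}

/* Per sample: keep the smallest speed seen so far for this pixel.
   (At the later vertex-averaging stage a 'minimum but non-zero'
   rule is applied, so moving parts don't get glued to static ones.) */
void accumulate_min_speed(SpeedVec *pixel, float sx, float sy)
{
	float new_len = sqrtf(sx * sx + sy * sy);
	float old_len = sqrtf(pixel->x * pixel->x + pixel->y * pixel->y);

	if (new_len < old_len) {
		pixel->x = sx;
		pixel->y = sy;
	}
}

int main(void)
{
	SpeedVec px;
	init_speed_buffer(&px, 1);
	accumulate_min_speed(&px, 4.0f, 0.0f);  /* fast sample */
	accumulate_min_speed(&px, 1.0f, 0.5f);  /* slower sample wins */
	printf("kept speed: %.2f %.2f\n", px.x, px.y);
	return 0;
}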
Usage notes:
- Make sure the render-layer has passes 'Vector' and 'Z' on.
- Add the VectorBlur node in the Compositor, and connect the image, Z and
speed to its inputs.
- The node allows setting the number of steps (10 steps = 10 forward, 10
back) and a maximum speed in pixels... to prevent extremely fast-moving
things from blurring too wide.
- Improved splitting of quads, which helps to avoid some degenerate triangles.
- Also improved the choice of pins to better preserve symmetry in a few
typical cases.
- Mesh objects are split by material - many 3ds objects used more than 16
materials per mesh, and when a face loses its image texture it is tedious
to set again.
- Removed a lot of unneeded variable creation.
- append group: appends group + puts objects in scene
- link group: only links group, doesn't put objects in scene
- append particle system with group: appends group + objects in scene
- link particle system with group: only links group
+ Added a note about using the config files. I repeat it here: a user
should NEVER edit config/(platform)-config.py directly. Instead, copy
config/(platform)-config.py to user-config.py and change that.
/Nathan
PS. now I can say "I told you", and be sure I will :P
If "export NAN_USE_FFMPEG_CONFIG=true" is added to user-def.mk,
the system executes the ffmpeg-config program to set values
for NAN_FFMPEG (--prefix), NAN_FFMPEGLIBS (--libs avcodec avformat),
and NAN_FFMPEGCFLAGS (--cflags). The only one used so far is
NAN_FFMPEGLIBS, for linking on Linux (if requested to do so).
The current default is not to do this.
Since ffmpeg is always built statically (you have to force it to build a
dynamic library), the resulting binary is redistributable.
The code is made ffmpeg-version-independent using #ifdefs.
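
Those version checks typically key off the version macros exported by
ffmpeg's own headers; the fragment below only sketches that pattern. The
version number and the branch contents are placeholders, not the actual
guards used in the Blender sources.

#include <ffmpeg/avcodec.h>  /* newer ffmpeg installs ship this as <libavcodec/avcodec.h> */

void example_codec_setup(void)
{
#if LIBAVCODEC_VERSION_INT < ((51 << 16) | (0 << 8) | 0)
	/* older avcodec: use the legacy calls / struct fields here */
#else
	/* newer avcodec: use the current API here */
#endif
}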
* Large sequencer rewrite to support:
- Audio-tracks, which are not completely loaded into memory (hdaudio) but
kept on disk instead.
- A dependency tree that builds only the ImBufs that are really needed
- Cleaner sequencer code
- Per instance data in sequencer plugins (without this, the Dynamic
Noise Reduction plugin would be impossible)
- A Luma Waveform display
- A U/V scatter plot display
- Memcache limiting in sequencer
- Buttons changed according to the boosted framecount limit
* Add ffmpeg-read support in anim.c and util.c
* Makes ImBufs refcountable. You can now increase an internal refcounter
in ImBufs (using IMB_refImBuf) which is decreased by freeImBuf.
This makes it possible to simply pass ImBuf pointers around in the
sequencer, saving a few memcopies.
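
  The pattern is plain reference counting; the stand-alone sketch below
  mirrors it with a made-up struct and function names (it is not the
  actual ImBuf code):

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for a refcounted image buffer (not the real ImBuf). */
typedef struct Buf {
	int refcount;
	float *rect;
} Buf;

Buf *buf_alloc(int num_pixels)
{
	Buf *b = calloc(1, sizeof(Buf));
	b->refcount = 1;
	b->rect = calloc(num_pixels, 4 * sizeof(float));
	return b;
}

/* Counterpart of IMB_refImBuf: another user of the same buffer, no copy made. */
void buf_ref(Buf *b)
{
	b->refcount++;
}

/* Counterpart of freeImBuf: only the last user actually frees the memory. */
void buf_free(Buf *b)
{
	if (--b->refcount == 0) {
		free(b->rect);
		free(b);
	}
}

int main(void)
{
	Buf *b = buf_alloc(64 * 64);
	buf_ref(b);   /* hand the same pointer to another strip */
	buf_free(b);  /* first user done: buffer stays alive */
	buf_free(b);  /* last user done: buffer is freed */
	printf("shared one buffer without any memcpy\n");
	return 0;
}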
* Boosts the blender frame limit by changing the type of the frame number
from short to int everywhere. Without this, timelines longer than a few
minutes are impossible to handle (a signed short tops out at 32767 frames,
which is only about 21 minutes at 25 fps).
* Adds several types for ffmpeg input/output and hdaudio tracks in the
sequencer.
* Integrates a mini-webserver (around 300 lines of code) into blender.
Using the VFAPI plugin in contrib/windows, it enables blender to
directly feed its output into TMPGEnc, a commercial high-quality MPEG
encoder. Since it is a mini-webserver, you can probably easily use it
for other interfacing purposes.
* New sequencer plugins:
- color-correction-hsv & color-correction-yuv
Do color correction in HSV or YUV space; rather sophisticated but slow.
You can control setup, gain, gamma and saturation (separately for
shadows, midtones and highlights).
- gamma
a simple RGB-Gamma plugin, but very fast.
- dnr
Dynamic Noise Reduction (plugin ported from VirtualDub).
This helps mpeg encoding a lot, by ignoring noise/movement
below a given threshold between frames.
It is also a lot faster than the original VirtualDub plugin while
preserving its quality.
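
Roughly, this kind of temporal noise reduction keeps the previous frame's
value wherever a pixel changed less than the threshold, so static areas
stop flickering and the encoder spends its bits on real motion. The C
sketch below shows only that basic idea; it is not a port of the actual
dnr plugin, which blends more smoothly instead of hard-switching.

#include <stdio.h>
#include <stdlib.h>

/* Keep the previous frame's byte when the change is below 'threshold';
   otherwise accept the new value. 'prev' doubles as the running
   reference and as the filtered output for the current frame. */
void dnr_frame(unsigned char *prev, const unsigned char *cur,
               int num_bytes, int threshold)
{
	int i;
	for (i = 0; i < num_bytes; i++) {
		int diff = abs((int)cur[i] - (int)prev[i]);
		if (diff >= threshold)
			prev[i] = cur[i];  /* real change: update the reference */
		/* else: treat it as noise and keep the old value */
	}
}

int main(void)
{
	unsigned char prev[4] = {100, 100, 100, 100};
	unsigned char cur[4]  = {101,  99, 140, 100};  /* two noisy values, one real change */

	dnr_frame(prev, cur, 4, 8);
	printf("%d %d %d %d\n", prev[0], prev[1], prev[2], prev[3]);  /* 100 100 140 100 */
	return 0;
}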