before, but wasn't playing nice with the other force fields (was
overwriting the velocity rather than acting as a force). This version is
slightly different, but will work a lot better with other force fields too.
This is the first code for the Data API, also known as RNA system in the
2.5 Branch. It does not include a user interface, and only wraps some
Scene properties for testing. It is integrated with Scons and Makefiles,
and compiles a 'makesrna' program that generates an RNA.c file.
http://wiki.blender.org/index.php/BlenderDev/Blender2.5/DataAPI
http://wiki.blender.org/index.php/BlenderDev/Blender2.5/RNA
The changes are quite local, basically adding a makesrna module which
works similarly to the makesdna module. The one external change is
moving genfile.c from blenloader to the makesdna module, so that it can
be reused by the RNA code. This also meant changing the DNA makefiles.
Dependencies still seem to be handled correctly in my tests, but if
there is an issue with the DNA not being rebuilt, this commit might be
the one causing it. Also, for scons it seems the makesdna and makesrna
modules are compiled without warnings enabled; this is not a new issue,
but it should be fixed.
The RNA code supports all types as defined in the Data API design, so
in that sense it is fairly complete and I hope that aspect will not
have to change much. Some obviously missing parts are context related
code, notify() functions for updates and user defined / ID properties.
The only build system that is known to work is the MSVC project files. I've tried my best to
update the other build systems, but I count on the community to check and fix them.
This is Zdeno Miklas' video texture plugin ported to trunk.
The original plugin API is maintained (it can be found at http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html)
EXCEPT for the following:
The module name is changed to VideoTexture (instead of blendVideoTex).
A new (and only) video source is now available: VideoFFmpeg()
You must pass 1 to 5 arguments when you create it (you can use named arguments):
VideoFFmpeg(file) : play a video file
VideoFFmpeg(file, capture, rate, width, height) : start a live video capture
file:
In the first form, file is a video file name, relative to the startup directory.
It can also be a URL; FFmpeg will happily stream a video from a network source.
In the second form, file is empty or is a hint for the format of the video capture.
In Windows, file is ignored and should be empty or not specified.
In Linux, ffmpeg supports two types of device: VideoForLinux and DV1394.
The user specifies the type of device with the file parameter:
[<device_type>][:<standard>]
<device_type> : 'v4l' for VideoForLinux, 'dv1394' for DV1394; default to 'v4l'
<standard> : 'pal', 'secam' or 'ntsc', default to 'ntsc'
The driver name is constructed automatically from the device type:
v4l : /dev/video<capture>
dv1394: /dev/dv1394/<capture>
If you have a different driver name, you can specify it explicitly
instead of the device type. Examples of valid file parameters:
/dev/v4l/video0:pal
/dev/ieee1394/1:ntsc
dv1394:ntsc
v4l:pal
:secam
capture:
Defines the index number of the capture source, starting from 0. The first capture device is always 0.
The VideoTexture module knows that you want to start a live video capture when you set this parameter to a number >= 0; a value < 0 indicates video file playback. The default value is -1.
rate:
The capture frame rate, by default 25 frames/sec.
width:
height:
Width and height of the video capture in pixels, default value 0.
In Windows you must specify these values and they must match the capture device's capabilities.
For example, if you have a webcam that can capture at 160x120, 320x240 or 640x480,
you must specify one of these pairs of values, or opening the video source will fail.
In Linux, default values are provided by the VideoForLinux driver if you don't specify width and height.
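For illustration, a live capture setup could look like the sketch below. The 640x480 resolution is an assumption that must match your device, and the 'MAVideoMat' material name is taken over from the demo script further down, not a requirement:
import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()

if not hasattr(GameLogic, 'video'):
    matID = VideoTexture.materialID(obj, 'MAVideoMat')
    GameLogic.video = VideoTexture.Texture(obj, matID)
    # live capture from the first capture device at 25 fps; on Windows the file
    # argument stays empty, on Linux something like 'v4l:pal' could be passed instead
    GameLogic.capSrc = VideoTexture.VideoFFmpeg('', 0, 25, 640, 480)
    GameLogic.video.source = GameLogic.capSrc
    GameLogic.capSrc.play()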
Simple example
**************
1. Texture definition script:
import VideoTexture

contr = GameLogic.getCurrentController()
obj = contr.getOwner()

if not hasattr(GameLogic, 'video'):
    matID = VideoTexture.materialID(obj, 'MAVideoMat')
    GameLogic.video = VideoTexture.Texture(obj, matID)
    GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
    # Streaming is also possible:
    #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
    GameLogic.vidSrc.repeat = -1
    # If the video dimensions are not a power of 2, scaling must be done before
    # sending the texture to the GPU. This is done by default with gluScaleImage(),
    # but you can also use a faster, less precise scaling by setting scale to True.
    # The best approach is to convert the video offline and set the dimensions right.
    GameLogic.vidSrc.scale = True
    # FFmpeg always delivers the video image upside down, so flipping is enabled automatically.
    #GameLogic.vidSrc.flip = True

if contr.getSensors()[0].isPositive():
    GameLogic.video.source = GameLogic.vidSrc
    GameLogic.vidSrc.play()
2. Texture refresh script:
obj = GameLogic.getCurrentController().getOwner()
if hasattr(GameLogic, 'video'):
    GameLogic.video.refresh(True)
You can download this demo here:
http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
Rename PHY_GetActiveScene() to KX_GetActiveScene(): more logical name
Add KX_GetActiveEngine()
New KX_KetsjiEngine::GetClockTime(void) to return the current
render frame time: if the CPU does not keep up with the
frame rate, up to 5 consecutive logic frames are processed
between each render frame, so that the logic system stays
accurate even if the graphics system is slow. For the video
texture module it is important to stay in sync with the
render frame: there is no need to update the texture for every logic frame.
BL_Texture::swapTexture(): texture id manipulation
BL_Texture::getTex(): return material texture
Enable video support in ffmpeg for Linux.
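As an illustration of that last point (a hypothetical sketch, not the engine's actual code): a texture source can remember the render frame time of its last refresh and do nothing when the logic loop calls it again within the same render frame. get_clock_time below is only a stand-in for KX_KetsjiEngine::GetClockTime(), which is not exposed like this in Python:
# Hypothetical sketch of skipping redundant texture updates between render frames.
class OncePerRenderFrame:
    def __init__(self, get_clock_time):
        self._get_clock_time = get_clock_time  # stand-in for KX_KetsjiEngine::GetClockTime()
        self._last_time = -1.0

    def refresh(self, update_texture):
        now = self._get_clock_time()
        if now == self._last_time:
            return  # still the same render frame: the texture is already up to date
        self._last_time = now
        update_texture()  # decode and upload the new video frame only once per render frame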
Does not use any OpenGL for calculations (2.5 friendly!) and is free from screen pixel artifacts. Raytracing is used for occlusion.
Details
* Uses a 2D screenspace bucket grid that stores intersecting faces and a list of UV pixels, to optimize lookups between the brush rectangle and the UV pixels (sketched below)
* Buckets and faces are initialized when a brush first touches them.
* On initialization, all the faces used have their UV pixels converted into screenspace and copied into the buckets for painting (if the pixel is not occluded by a ray cast).
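The bucket grid can be pictured roughly like this Python sketch (the bucket size, data layout and names are illustrative assumptions, not the branch's actual C code):
# Illustrative sketch of a 2D screenspace bucket grid for brush lookups.
BUCKET_SIZE = 64  # assumed bucket size in screen pixels

class BucketGrid:
    def __init__(self, screen_w, screen_h):
        self.cols = (screen_w + BUCKET_SIZE - 1) // BUCKET_SIZE
        self.rows = (screen_h + BUCKET_SIZE - 1) // BUCKET_SIZE
        # each bucket lazily stores intersecting face indices and projected UV pixels
        self.buckets = [None] * (self.cols * self.rows)

    def bucket_index(self, x, y):
        # map a screenspace pixel to its bucket
        return (y // BUCKET_SIZE) * self.cols + (x // BUCKET_SIZE)

    def buckets_under_brush(self, cx, cy, radius):
        # yield only the buckets the brush rectangle overlaps,
        # so painting never scans the whole screen
        for by in range(max(0, (cy - radius) // BUCKET_SIZE),
                        min(self.rows, (cy + radius) // BUCKET_SIZE + 1)):
            for bx in range(max(0, (cx - radius) // BUCKET_SIZE),
                            min(self.cols, (cx + radius) // BUCKET_SIZE + 1)):
                yield by * self.cols + bx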
Still a lot to do: blend modes, float buffer, image wrapping, image texture filtering.
Add feature requests here
http://wiki.blender.org/index.php/Wahooney_re/Paint_Branch_Proposals
- Code has been changed to reflect this (i.e. deprecated functions are no longer used)
* clean up the C and C++ compiler flags mess.
- In the environment construction of BlenderLib, all the options governing compile flags have been split into *C*, *CC* and *CXX* equivalents.
C is for C-compiler-only flags, CC is for flags shared by the C and C++ compilers, and CXX is for C++-compiler-only flags.
All the platform default config files need to be double-checked and fixed wherever necessary. Either DIY, or send me a note with the needed changes.
- A start has been made on the BlenderLib parameter list; all the SConscripts need to be checked and modified to hand in flags properly.
* A theeth request: make -jN settable in the config file.
- I give you BF_NUMJOBS, which is set to 1 by default. In your user-config.py, set BF_NUMJOBS=4 to have 4 parallel jobs handled. Yay.
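For example, a minimal user-config.py along these lines should work (only BF_NUMJOBS is from this commit; the commented flag lines are hypothetical illustrations of the C/CC/CXX split, and the exact option names should be checked against the platform default config files):
# user-config.py (SCons user configuration)
BF_NUMJOBS = 4  # run 4 parallel build jobs; defaults to 1

# Hypothetical illustrations of the C / CC / CXX flag split described above;
# check the platform default config files for the real option names.
#CFLAGS = ['-std=gnu89']     # flags for the C compiler only
#CCFLAGS = ['-O2']           # flags shared by the C and C++ compilers
#CXXFLAGS = ['-fno-rtti']    # flags for the C++ compiler only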
It works similarly to Vortex, but it's more controllable: it
doesn't send the particles off accelerating into space, but
keeps them spinning around the Z axis of the force field object.
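As a hedged sketch of the idea (not the branch's actual code), the per-particle contribution could look roughly like this, with the result added as a force instead of replacing the velocity:
# Illustrative sketch of a vortex-like force that spins particles around the
# force field object's Z axis instead of overwriting their velocity.
def spin_force(particle_pos, field_pos, field_z_axis, strength):
    # vector from the force field object to the particle
    rel = [p - f for p, f in zip(particle_pos, field_pos)]
    # tangential direction = Z axis x relative position (cross product)
    tangent = [
        field_z_axis[1] * rel[2] - field_z_axis[2] * rel[1],
        field_z_axis[2] * rel[0] - field_z_axis[0] * rel[2],
        field_z_axis[0] * rel[1] - field_z_axis[1] * rel[0],
    ]
    # the result is accumulated with the other force fields' contributions,
    # rather than replacing the particle's velocity
    return [strength * c for c in tangent]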
This is safe in the branch, but Jahka if you have any feedback,
I'd be curious to hear :)
Basically the code was referencing var[-1]; it wasn't using it,
but it also did not need to be set in those cases. So I moved
the assignments so the -1 case is skipped.
Kent
Added gameObject.replaceMesh(meshname) - needed this for an automatically generated scene where hundreds of objects would have needed logic bricks added automatically. It's quicker to run replaceMesh on all of them from one script.
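A minimal usage sketch (the mesh name is made up, and fetching the objects via GameLogic.getCurrentScene().getObjectList() assumes the pre-existing scene API):
# Replace the mesh on every object in the scene from a single script,
# instead of adding ReplaceMesh logic bricks to hundreds of objects.
# 'LowPolyMesh' is an illustrative mesh name.
scene = GameLogic.getCurrentScene()
for obj in scene.getObjectList():
    obj.replaceMesh('LowPolyMesh')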
conversion was reading the mtex array in a library linked material. It
is however not guaranteed that direct_link_* was called on the material
yet, so the array pointer is not always valid and it can crash.
* #17900 - IK Constraint was not included regardless of what Visual-Keying method was used
* Deleting a Bone Group now corrects indices of those groups that occurred after the one that was deleted
* No more click-a-mania - Delete all vertex groups from a Mesh (Ctrl-Shift-G menu)
Previously, when using the light cache, there could be artifacts when
voxel points that were sampled outside the volume object's geometry got
interpolated into the rest of the volume. This commit adds a filter pass
(similar to a dilate) after creating the light cache, which fills
these empty areas with the average of their surrounding voxels.
http://mke3.net/blender/devel/rendering/volumetrics/vol_lightcache_filter.jpg
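Conceptually, the filter pass works like the sketch below (a rough illustration over a dense voxel grid; the real cache layout, the neighbourhood used and the empty-voxel marker are assumptions):
# Illustrative dilate-like filter: fill empty light-cache voxels with the
# average of their non-empty 6-connected neighbours. EMPTY marks voxels
# that were sampled outside the volume geometry.
EMPTY = -1.0

def fill_empty_voxels(cache, res_x, res_y, res_z):
    def at(x, y, z):
        return cache[x + y * res_x + z * res_x * res_y]

    filled = cache[:]
    for z in range(res_z):
        for y in range(res_y):
            for x in range(res_x):
                if at(x, y, z) != EMPTY:
                    continue
                neighbours = []
                for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if 0 <= nx < res_x and 0 <= ny < res_y and 0 <= nz < res_z:
                        v = at(nx, ny, nz)
                        if v != EMPTY:
                            neighbours.append(v)
                if neighbours:
                    filled[x + y * res_x + z * res_x * res_y] = sum(neighbours) / len(neighbours)
    return filled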
that was preventing the light cache from working on some
people's systems (but went just fine on both my Windows PC
and Mac). I have no idea how the original code even worked
at all; it really shouldn't have.
But fixed now anyway! Thanks a bunch to Zanqdo for patience
in helping me pinpoint this.