We are using OpenCV, an open source computer vision library, to support
webcam-based interaction with kids. OpenCV supports face tracking, and
with some cleanup it should be possible to use the face as a texture
for the avatar. However, transferring the image data in real time is
another story.
That said, I think face mapping onto an avatar is more suitable for
games like Second Life, in which the avatar takes a strictly human
form. Croquet's avatars are more playful, and it would be harder to map
a human face onto a bird.
We are taking the path of gesture recognition instead, letting kids
control the avatar with body language. Right now it is just simple
actions: run, jump, and flap wings. Simple things to get started.
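To give a flavor of how simple this can be, here is a toy sketch of the
kind of coarse classification we have in mind: given the vertical
position of the tracked face in each frame, call a quick upward motion
a "jump" and repeated up/down swings a "flap". The thresholds and
labels are made up for illustration:

```python
def classify_gesture(y_positions, jump_delta=40, flap_swings=3):
    """Classify a short window of face y-coordinates (pixels, origin at top).

    Returns "jump" for a large net upward move, "flap" for repeated
    up/down swings, otherwise "idle". Thresholds are illustrative.
    """
    if len(y_positions) < 2:
        return "idle"
    # Net upward movement over the window (smaller y = higher on screen).
    if y_positions[0] - min(y_positions) >= jump_delta:
        return "jump"
    # Count direction changes as a crude oscillation detector.
    swings = 0
    for a, b, c in zip(y_positions, y_positions[1:], y_positions[2:]):
        if (b - a) * (c - b) < 0:
            swings += 1
    if swings >= flap_swings:
        return "flap"
    return "idle"

print(classify_gesture([200, 180, 150, 140]))            # jump
print(classify_gesture([200, 190, 200, 190, 200, 190]))  # flap
```

A real classifier would of course smooth the track and look at more
than one point, but even something this crude is enough for run, jump,
and flap.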
There is a European IST project, COGAIN (Communication by Gaze
Interaction), that is working on webcam-based eye tracking. I think eye
tracking, combined with body gestures, is a more natural fit for a
Croquet-like environment.
OpenCV
http://sourceforge.net/projects/opencvlibrary/

COGAIN
http://www.cogain.org/

On May 1, 2006, at 5:25 AM, Mats wrote:
> Communication through avatar-based virtual reality can, because of
> the loss of nonverbal communication, be more ambiguous and limited
> than talking to someone in person or via web camera. One way to
> address this would be to map facial expressions captured by a web
> camera to an avatar, for more realistic communication.
>
> The company Neven Vision has a patent, "Wavelet-Based Facial Motion
> Capture for Avatar Animation," that describes a technique for doing
> this. Are there any non-patented alternatives that are more
> appropriate for open source projects like Croquet?
>
> Mats
>