Has anyone tried putting the vertex animations (from Minnesota's
SkeletalAnimation package) into a shader program? Could that work?

-Howard
That IS how it works. That's why they released the changes to the
OpenGL packages along with it.
There are two kinds of animations in the "SkeletalAnimations" package:

- TSkeletalAnimation, which moves bones in Squeak and uploads the bone matrices to the graphics card, where a shader program applies the matrices to the vertices they are associated with. These are used for things like "walking."

- TVertexAnimation - also known as "facial animation" or "morphing" - which moves vertices around entirely in Squeak before uploading them to the graphics card for skeletal adjustment per the TSkeletalAnimation. These are used for things like "talking" or "bulging muscles."

(At least, that's the case for the most recent version, SkeletalAnimation-pbm.69 at http://jabberwocky.croquetproject.org:8889/Jabberwocky.) I believe your excellent tutorials are only using TSkeletalAnimation, yes?

The processing of TVertexAnimation>>updateVertices:originalVerticies:withPoses:at: is expensive. I am asking about uploading the concatenated pose vertices and the concatenated pose indices just once, and then having the updateVertices method compute interpolated weights in Squeak and upload the results to the card in the same way that the boneMatrices are uploaded, for final application to the mesh vertices on the card itself.

I haven't thought through exactly what weight data would be uploaded on each update or how it would be processed on the card. I'm also worried about blowing limits on the graphics card. In addition to the unknown update data, the vertex buffer of concatenated poses and the float array of concatenated pose indices would typically be a couple thousand entries each. But my real worry is card limits on program size; I have no feel for whether we're close to that.

-H
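A minimal GLSL 1.x vertex-shader sketch of the idea being proposed here; every name in it (morphWeight0/1, poseDelta0/1, boneMatrices, the two-target blend, and the 4-bone skinning loop) is an illustrative assumption, not the package's actual shader interface:

// Sketch only: pose deltas are baked into per-vertex attributes once at
// load time; only the scalar morph weights cross the bus each frame,
// followed by the usual skeletal skinning.
uniform float morphWeight0;        // interpolated in Squeak, uploaded per frame
uniform float morphWeight1;
uniform mat4  boneMatrices[32];    // uploaded as for TSkeletalAnimation

attribute vec3 poseDelta0;         // per-vertex offset for morph target 0
attribute vec3 poseDelta1;         // per-vertex offset for morph target 1
attribute vec4 boneWeights;        // 4 bone influences per vertex
attribute vec4 boneIndices;

void main()
{
    // 1. Morph: start from the bind pose and add the weighted deltas.
    vec4 p = gl_Vertex;
    p.xyz += morphWeight0 * poseDelta0 + morphWeight1 * poseDelta1;

    // 2. Skin: blend the bone matrices over the morphed position.
    vec4 skinned = vec4(0.0);
    for (int i = 0; i < 4; ++i)
        skinned += boneWeights[i] * (boneMatrices[int(boneIndices[i])] * p);

    gl_Position = gl_ModelViewProjectionMatrix * skinned;
}

The appeal is that the per-frame upload shrinks from a whole vertex buffer to a handful of floats; the cost is one vertex attribute slot per simultaneously active morph target.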
On Nov 15, 2007, at 1:49 PM, Howard Stearns wrote:
> Has anyone tried putting the vertex animations (from Minnesota's
> SkeletalAnimation package) into a shader program? Could that work?

Yes, this approach is used by NVIDIA in their demos. E.g.: http://download.nvidia.com/developer/GPU_Gems_2/CD/Resources/Chapter_3.pdf

Josh
I haven't looked at the code seriously in a few months, but I'll describe what I can remember.

When I built the animation system for Minnesota, I worked under the assumption that animating well on older cards was more important than animating really fast on hot shader graphics cards we didn't even have in the office. Besides, I'd have to write a rendering path for the low-end shader cards anyway, so it just made sense to focus on that.

The way I currently deal with vertex animations is to handle any morphing on the CPU and then, if changes occur, update the entire vertex buffer object (so if you animate the face, it sends the entire mesh to the graphics card; I recall updating just a segment being tricky, or something I didn't have time for). While I would have used some fancy new GL features on modern shader cards to write the most appropriate rendering paths, I was targeting a lower set of cards.

The only way I know of to pass in other per-vertex data is to store it in the texture coordinates (and you only get a few units on the GeForce2 MX, the baseline card I used). From what I can tell, that's what the NVIDIA demo is doing, which has the limitation of allowing only eight morph targets on modern cards. The problem with that is I already store bone weights, indices, and normals (due to an OS X bug) in the texcoord units. If you can limit the number of morph targets to four (and keep this old rendering path for cards that don't have that many units), then it'd work - otherwise I don't know of any better solutions than what the code currently does.

Cheers,
Derek
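For comparison, a hedged sketch of the texcoord packing Derek describes, using the legacy GLSL built-ins such a rendering path would target; the unit assignments (4-7 for morph deltas) are assumptions for illustration, not the package's real layout:

// Sketch only: each spare texture-coordinate unit carries one morph
// target's per-vertex delta. Bone weights, indices, and normals already
// occupy other units in the real code, which is why only about four
// units are left over for morphing.
uniform vec4 morphWeights;   // up to four active targets, set per frame

void main()
{
    vec4 p = gl_Vertex;
    p.xyz += morphWeights.x * gl_MultiTexCoord4.xyz
           + morphWeights.y * gl_MultiTexCoord5.xyz
           + morphWeights.z * gl_MultiTexCoord6.xyz
           + morphWeights.w * gl_MultiTexCoord7.xyz;
    // ...skinning as in the earlier sketch would follow here...
    gl_Position = gl_ModelViewProjectionMatrix * p;
}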
Thanks, Derek! The insight on limits is just what I was looking for.
Indeed, I have had problems on some cards in extending your fragment shader to work with more lights.

-H
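The light-count problem is the same program-size ceiling from the other direction: on that generation of hardware a GLSL loop over lights is typically unrolled at compile time, so instruction count grows linearly with the light count and can overflow the card's fragment-program limit. A hypothetical sketch of the pattern (the uniform names are illustrative, not the actual shader's interface):

// Sketch only: simple per-pixel diffuse over N directional lights.
// NUM_LIGHTS is compiled in; older compilers unroll the loop, so each
// extra light adds a fixed block of fragment instructions.
#define NUM_LIGHTS 4
uniform vec3 lightDir[NUM_LIGHTS];    // assumed normalized, in eye space
uniform vec3 lightColor[NUM_LIGHTS];
varying vec3 normal;                  // interpolated eye-space normal

void main()
{
    vec3 n = normalize(normal);
    vec3 c = vec3(0.0);
    for (int i = 0; i < NUM_LIGHTS; ++i)
        c += max(dot(n, lightDir[i]), 0.0) * lightColor[i];
    gl_FragColor = vec4(c, 1.0);
}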