[vwnc] opengl / 3D engines

[vwnc] opengl / 3D engines

Richard Kulisz
Hello,

is anyone with an interest in OpenGL or 3D engines available to bounce
ideas and questions off of? My question of the moment concerns 3D
engines' overall architecture.

At the top there's a scene graph with geometric and caching relations.
Though I bet caching is usually not handled.

Then each node has a mesh that determines what it will render. Again,
I don't think typical 3D engines have nodes that know how to render
themselves but doing it any other way violates OO.

At the bottom /I think/ there's a Renderer that actually deals with
OpenGL. I'm not sure what the Renderer does exactly and I may be
missing something entirely.

Any corrections or filling in the missing details would be most welcome.

The simplest way to answer my question would be to see how ST3D does
it but I don't want to be sued for copyright violation in 5 years'
time.

Regards,

Richard

Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2
Richard Kulisz wrote:
> Hello,
>
> is anyone with an interest in OpenGL or 3D engines available to bounce
> ideas and questions off of? My question of the moment concerns 3D
> engines' overall architecture.
>  
I'm interested, since you contacted me first anyway :) I think if you
want to understand a 3D engine's overall architecture, you need to look
at a few of them. OGRE 3D is a fairly big one in the open source world
and covers a lot more than just the basics.

My goal with VisualWorks OpenGL support is not to provide a framework
for making games or other kinds of apps, but to provide the capability
for people to make games and frameworks on top of it without too much
difficulty.

Having said that, I do intend to make game(s) with it, which will
inevitably mean I'll end up making a framework for making games using
it. I'm explicitly making the engine in Smalltalk and not linking to an
engine like OGRE for the purposes of demonstrating what VisualWorks is
capable of and how/if it is a viable platform for building games.

If I were purely interested in just making games, I might be inclined to
make an interface to OGRE and go from there. I'm not sure, I've only
used OGRE a little bit. It does, however, have some plugins to physics
engines which of course makes it much more interesting.
> At the top there's a scene graph with geometric and caching relations.
> Though I bet caching is usually not handled.
>  
What sort of relations and caching are you thinking of? Basically, a
scene graph is usually split up into quadrants (recursively, quadtree or
octree style) which allow collision detection to be fast. Then you do
your scene culling - which is different depending on whether you're
"outside" or "inside" an environment (e.g. out in a desert or space
versus inside a room).
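
In Smalltalk terms the quadrant idea might look something like this - a
sketch only, with class and selector names made up for illustration
rather than taken from any real engine:

SceneNode>>renderVisibleIn: viewBounds on: renderer
    "Skip the whole branch when its bounding box falls outside the view;
     otherwise recurse into the child quadrants or draw the leaf's models."
    (self boundingBox intersects: viewBounds) ifFalse: [^self].
    self isLeaf
        ifTrue: [self models do: [:each | renderer render: each]]
        ifFalse: [self quadrants do: [:each |
            each renderVisibleIn: viewBounds on: renderer]]
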
> Then each node has a mesh that determines what it will render. Again,
> I don't think typical 3D engines have nodes that know how to render
> themselves but doing it any other way violates OO.
>  
They most certainly do have this concept. However, node quickly becomes
Model, which usually has a skeleton for skeletal animation and ragdoll
physics when you get adventurous. The scene becomes a smattering of
models and boundaries.
> At the bottom /I think/ there's a Renderer that actually deals with
> OpenGL. I'm not sure what the Renderer does exactly and I may be
> missing something entirely.
>  
Yes, you need to deal with actually doing the rendering at some point,
though the scene graph is independent of the rendering technology, such
that it is reusable for other purposes, like physics simulations,
editing, topological rendering, minimum bounds computation, etc...

It is worth noting that models are usually static in nature, composed of
lots of triangles, textures and GLSL programs that are flipped and
switched constantly to render the scene. The data for the bits of the
models are arrays of floats and they're pushed to the video card for
rendering.
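
As a very rough sketch of what that looks like from the Smalltalk side
(the upload selector here is hypothetical, not the actual binding):

Model>>vertexData
    "Two triangles forming a unit quad, as one flat array of x y z floats."
    ^#(0.0 0.0 0.0   1.0 0.0 0.0   1.0 1.0 0.0
       0.0 0.0 0.0   1.0 1.0 0.0   0.0 1.0 0.0)

Model>>uploadTo: gl
    "Push the floats to the card once and keep the resulting buffer id."
    bufferId := gl uploadVertexBuffer: self vertexData
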
> Any corrections or filling in the missing details would be most welcome.
>
> The simplest way to answer my question would be to see how ST3D does
> it but I don't want to be sued for copyright violation in 5 years'
> time.
>  
I don't think ST3D would have many revelations for 3D graphics or
Smalltalk programming with 3D graphics. Most of what you'll ever need to
know about making games with OpenGL can be found in books like GPU Gems,
3D Games Programming, the OpenGL Red Book, the GLSL spec, and so on, plus
the Lighthouse tutorials and, if you're desperate, the NeHe tutorials.
Applying their techniques in the Smalltalk environment is not a big deal,
but knowing which techniques to apply is a big deal.

Cheers,
Michael

Re: [vwnc] opengl / 3D engines

Richard Kulisz
> I'm interested, since you contacted me first anyway :) I think if you want
> to understand a 3D engines architectural overview, you need to look at a few
> of them. OGRE 3D is a fairly big one in the open source world and covers a
> lot more than just the basics.

That's the reliable and very long way of doing it. Rendered much less
reliable in my case since I don't know C++ and have no intention of
learning it. I don't learn things that can't be understood and
mastered. I can't - I learn things BY understanding them.

> My goal with VisualWorks OpenGL support is not to provide a framework for
> making games or other kinds of apps, but to provide the capability for
> people to make games and frameworks on top of it without too much
> difficulty.

If I've got all the levels right, I am actually interested in building
a 3D GUI on top of a naked objects HCI layer on top of a 3D engine.
And you know what? I only want the 3D GUI to develop specific
applications. I really think it's one regression too many, and if I
weren't in such a cheerful mood from coming out of years of
depression, I might slit my wrists. ;)

> Having said that, I do intend to make game(s) with it, which will inevitably
> mean I'll end up making a framework for making games using it.

What kind of timeframe are we talking about here?  :D

> What sort of relations and caching are you thinking of? Basically, a scene
> graph is usually split up in to infinite quadrants which allow collision
> detection to be fast. Then you do your scene culling - which is different
> depending on whether you're "outside" or "inside" an environment (eg: out in
> a desert or space versus inside a room).

There are no collisions in my GUI so it doesn't matter whether or not it's fast.

There is also no culling to be done. The mouse pointer is always in
front of or between any object, and all objects are in front of the
background. The background is spherical and the field of view starts
at its center.

What DOES matter is the dynamic generation and destruction of lots and
lots of objects, using dynamic content that has to be constantly
prefetched from disk or some other, usually slow, source.

This means the scene-graph has to have a VERY good idea of every
object's relation to its first, second and third generation logical
neighbours and what resources are available to prefetch and prerender
them to any of several possible levels of detail. Possibly adjusting
for recorded traffic patterns.

The user will be able to scan through 100+ objects in a few seconds.
That's 100 out of 1000s in the near vicinity. Imagine those thousands
of objects are all player avatars in an MMORPG. Any one of them may
have to be rendered visible instantly to a greater or lesser level of
detail. Every one of them has a custom texture that has to be
*generated* (not loaded) at runtime.

And it must all be done smoothly with no perceptible pauses or
flickers. The only saving grace is that all those objects are chunked
and static. It's only the user that moves through them. Oh and also, I
control the object format so I can ease the 'generation' of textures.

I know that Dungeon Siege has continuous preloading so the above ought
to be possible.
http://www.drizzle.com/~scottb/gdc/continuous-world.htm

> They most certainly do have this concept. However, node quickly becomes
> Model, which usually has a skeleton for skeletal animation and ragdoll
> physics when you get adventerous. The scene becomes a smattering of models
> and boundaries.

Hmmm. I remember that ST3D's scene-graph nodes didn't know how to
render themselves or even what a renderer was. Well, it's good to know
most 3D engines aren't that badly designed.

> Yes, you need to deal with actually doing the rendering at some point,
> though the scene graph is independent of the rendering technology, such that
> it is reusable for other purposes, like physics simulations, editing,
> topological rendering, minimum bounds computation, etc...

Topological rendering? That *sounds* like what I'm trying to do.

> It is worth noting that models are usually static in nature, composed of
> lots of triangles, textures and GLSL programs that are flipped and switched
> constantly to render the scene. The data for the bits of the models are
> arrays of floats and they're pushed to the video card for rendering.

That's another thing. At some point I'm going to want to display video
on some of those objects. Though I expect my models themselves to be
extremely static. Individually anyways.

> I don't think ST3D would have many revelations for 3d graphics or Smalltalk
> programming with 3d graphics. Most of what you'll ever need to know about
> making games with opengl can be found in books, like GPU Gems and 3D Games
> Programming and the OpenGL Red Book, the GLSL spec, so on and so forth.. the
> lighthouse tutorials and if you're desperate, the NeHe tutorials. Applying
> their techniques in to the Smalltalk environment is not a big deal, but
> knowing the techniques to apply is a big deal.

Any idea of how long this would all take?

Re: [vwnc] opengl / 3D engines

Andre Schnoor
In reply to this post by Michael Lucas-Smith-2

Am 09.10.2008 um 02:41 schrieb Michael Lucas-Smith:

> I'm explicitly making the engine in Smalltalk and not linking to an
> engine like OGRE for the purposes of demonstrating what VisualWorks is
> capable of and how/if it is a viable platform for building games.

Wow. That's quite an interesting challenge. I always thought that at
least the AI would largely benefit from Smalltalk. I doubt, however, that
it can handle the preemptive multi-threading required to smoothly animate
a populated game world, let alone the physics.

You should keep in mind that the gaming "industry" is actually a scene
of small developer studios and a couple of international publishers.
Their most pressing concern isn't technology so much as compelling game
ideas, storylines, artwork, voice acting and -- drum roll -- copy
protection.

There are countless incredibly capable engines on the market for  
licensing (also cross-platform), so if you really want to address this  
market (as opposed to doing this as a hobby), you should probably  
interface to these engines and not OpenGL, which is considered  
outdated in the game industry.

In any case I recommend you meet some of the key players in the  
industry and talk to them before making any investment.

Good luck!

Andre


Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2
Andre Schnoor wrote:

>
> Am 09.10.2008 um 02:41 schrieb Michael Lucas-Smith:
>
>> I'm explicitly making the engine in Smalltalk and not linking to an
>> engine like OGRE for the purposes of demonstrating what VisualWorks is
>> capable of and how/if it is a viable platform for building games.
>
> Wow. That's quite an interesting challenge. I always though that at
> least the AI would largely benefit from Smalltalk. I however doubt it
> can handle the preemtive multi-threading required to smoothly animate
> a populated game world, let alone the physics.
>
> You should keep in mind that the gaming "industry" actually is a scene
> of small developer studios and a couple of international publishers.
> Their most pressing concern isn't about technology rather than
> compelling game ideas, storylines, artwork, voice acting and -- drum
> roll --- copy protection.
>
> There are countless incredibly capable engines on the market for
> licensing (also cross-platform), so if you really want to address this
> market (as opposed to doing this as a hobby), you should probably
> interface to these engines and not OpenGL, which is considered
> outdated in the game industry.
>
> In any case I recommend you meet some of the key players in the
> industry and talk to them before making any investment.
>
> Good luck!
>
You're correct, which is why it's in the hobby bin. Doing this over time
will let me learn new things, try out new things and in the process
benefit Cincom Smalltalk.

OpenGL, though considered out of date, is in fact at the heart of most
graphics on Mac OS X, is the graphics API on the PlayStation 3, is the
only reliable 3D API on Linux, and OpenGL ES is used heavily on mobile
devices, including the iPhone and most of its competitors.

The Khronos Group knows they've dropped the ball - 3.0 was barely a
pickup - but we can be hopeful that they will progress forward. So if you
really want the most advanced graphics API, you have to use DirectX,
which will give you the Windows and Xbox world, but little else... which
is why OpenGL is still around. I really hope they pick up their game and
advance it to the state of DirectX.

On AI.. it turns out that while Smalltalk is good at decision making,
it's bad at gathering the data to make the decisions. For example, in a
3d world AI often needs to know how far away (in time) it is from
'stuff' to know if it should dodge, shoot, go there, avoid, duck behind,
so on and so forth... doing all those distance calculations is expensive
and not something our VM does well. I've yet to come up with a good
answer on that one without calling an external C library to do the math
for me.
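
For what it's worth, the usual trick is to compare squared distances so
you at least skip the square root per pair - a tiny sketch, and whether
it is fast enough in the VM is exactly the open question:

Agent>>squaredDistanceTo: other
    "Answer the squared distance; callers compare it against a squared
     threshold so no square root is needed."
    | dx dy dz |
    dx := other x - self x.
    dy := other y - self y.
    dz := other z - self z.
    ^(dx * dx) + (dy * dy) + (dz * dz)
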

Cheers,
Michael

Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2
In reply to this post by Richard Kulisz

>> Having said that, I do intend to make game(s) with it, which will inevitably
>> mean I'll end up making a framework for making games using it.
>>    
>
> What kind of timeframe are we talking about here?  :D
>
>  
Well I made one game. I'll keep making games to test out the technology
I've built - otherwise it's all pie in the sky and I could have it all
wrong. Each timeframe will, rightly, focus on what I need for my game...
the idea being that eventually the right bits of abstraction will bubble
up to the surface and become common code. So... no timeframe... years?

>> What sort of relations and caching are you thinking of? Basically, a scene
>> graph is usually split up in to infinite quadrants which allow collision
>> detection to be fast. Then you do your scene culling - which is different
>> depending on whether you're "outside" or "inside" an environment (eg: out in
>> a desert or space versus inside a room).
>>    
>
> There are no collisions in my GUI so it doesn't matter whether or not it's fast.
>
>  
Good.
> There is also no culling to be done. The mouse pointer is always in
> front of or between any object, and all objects are in front of the
> background. The background is spherical and the field of view starts
> at its center.
>
>  
Interesting, but the culling is mostly done by the video card anyway
unless you're talking about rendering a very small portion of a very
large world.
> What DOES matter is the dynamic generation and destruction of lots and
> lots of objects. Using dynamic content that has to be constantly
> prefetched from disk or some other, usually slow, source.
>  
Yeah, that's not too slow actually. Generating models can be as fast or
as slow as you want it to be... if it's all just boxy stuff, it's very
easy - even the normals are easy. When you add curves, you have to decide
just how much you're going to break them up into small triangles across
three control points to get as smooth a surface as you desire. Or you can
get smart and have a flat triangle area and use the control formulas to
drive the fragment shader to make it /look/ 3D. It all depends on what
you want to do.
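
For instance, breaking a curve across three control points into segments
is just sampling the quadratic Bezier at however many steps you can
afford - a sketch, assuming p0, p1 and p2 are Points:

Curve>>tessellate: steps
    "Answer (steps + 1) points along the quadratic Bezier through p0, p1, p2.
     More steps means a smoother outline but more triangles to render."
    ^(0 to: steps) collect: [:i |
        | t u |
        t := i / steps.
        u := 1 - t.
        (p0 * (u * u)) + (p1 * (2 * u * t)) + (p2 * (t * t))]
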
> This means the scene-graph has to have a VERY good idea of every
> object's relation to its first, second and third generation logical
> neighbours and what resources are available to prefetch and prerender
> them to any of several possible levels of detail. Possibly adjusting
> for recorded traffic patterns.
>  
Most 3D frameworks come with an optimistic resource manager. If
something is near visibility and there's memory free, it'll load it up.
If something is visible, it'll keep it loaded; if something isn't
visible and hasn't been hit in the cache in a while, it's a candidate
for being freed when memory is required. I've yet to build that kind of
engine. It'll be an interesting subproject. The same has to be done
for audio too.
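
A minimal sketch of that kind of manager, assuming models know their
size, visibility and last use (every selector here is made up):

ResourceManager>>ensureLoaded: aModel
    "Load optimistically while there is memory to spare; evict least
     recently used, non-visible models to make room when there isn't."
    (loaded includes: aModel) ifTrue: [aModel touch. ^aModel].
    [self memoryAvailable < aModel estimatedSize
        and: [self evictOne notNil]] whileTrue.
    aModel load.
    loaded add: aModel.
    ^aModel

ResourceManager>>evictOne
    "Free the least recently used model that isn't currently visible."
    | candidates victim |
    candidates := loaded reject: [:each | each isVisible].
    candidates isEmpty ifTrue: [^nil].
    victim := (candidates asSortedCollection:
        [:a :b | a lastUsedTime < b lastUsedTime]) first.
    loaded remove: victim.
    victim unload.
    ^victim
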
> The user will be able to scan through 100+ objects in a few seconds.
> That's 100 out of 1000s in the near vicinity. Imagine those thousands
> of objects are all player avatars in an MMORPG. Any one of them may
> have to be rendered visible instantly to a greater or lesser level of
> detail. Every one of them has a custom texture that has to be
> *generated* (not loaded) at runtime.
>  
That's not that much data actually. Smalltalk systems regularly deal
with millions of objects without breaking a sweat - especially since you
rarely graphically reveal millions or thousands, usually only hundreds
or less to the user at any one time.
> And it must all be done smoothly with no perceptible pauses or
> flickers. The only saving grace is that all those objects are chunked
> and static. It's only the user that moves through them. Oh and also, I
> control the object format so I can ease the 'generation' of textures.
>
>  
There shouldn't be any flickers since it's all double buffered anyway.
Pauses would only occur if you're generating lots of garbage, which can
usually be avoided by changing your approach or caching more shtuff.
> I know that Dungeon Siege has continuous preloading so the above ought
> to be possible.
> http://www.drizzle.com/~scottb/gdc/continuous-world.htm
>
>  
Yes, very fun technique. GTA3 did that too, you could drive around the
city without ever waiting for it to load stuff, it'd load things you're
heading toward. Nice idea.

>> They most certainly do have this concept. However, node quickly becomes
>> Model, which usually has a skeleton for skeletal animation and ragdoll
>> physics when you get adventerous. The scene becomes a smattering of models
>> and boundaries.
>>    
>
> Hmmm. I remember that ST3D's scene-graph nodes didn't know how to
> render themselves or even what a renderer was. Well, it's good to know
> most 3D engines aren't that badly designed.
>
>  
I can't comment on ST3D, but most model rendering boils down to this:
a) set the binding array
b) loop:
    1) set the GLSL program
    2) set the texture array
    3) render a subset of the array data

Often you group them together by program and texture too so you can
avoid swapping programs and textures constantly. But that's an
optimization and not that necessary to begin with. So given that that's
pretty much the technique, your code ends up looking like this:

Scene>>render
    models do: [:each | each render]
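
Fleshed out one level, each model follows steps 1-3 above; roughly this,
with the gl selectors standing in for whatever the binding provides:

Model>>renderOn: gl
    "Select the GLSL program, bind the textures, then draw this model's
     slice of the shared vertex data."
    gl useProgram: program.
    textures keysAndValuesDo: [:i :tex | gl bindTexture: tex toUnit: i - 1].
    gl drawTrianglesFrom: firstVertex count: vertexCount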

>> Yes, you need to deal with actually doing the rendering at some point,
>> though the scene graph is independent of the rendering technology, such that
>> it is reusable for other purposes, like physics simulations, editing,
>> topological rendering, minimum bounds computation, etc...
>>    
>
> Topological rendering? That *sounds* like what I'm trying to do.
>
>  
Yep, that's just rendering the scene with a specific transformation
matrix in mind - such as a parallel (orthographic) projection looking
down the Y-axis.

>> It is worth noting that models are usually static in nature, composed of
>> lots of triangles, textures and GLSL programs that are flipped and switched
>> constantly to render the scene. The data for the bits of the models are
>> arrays of floats and they're pushed to the video card for rendering.
>>    
>
> That's another thing. At some point I'm going to want to display video
> on some of those objects. Though I expect my models themselves to be
> extremely static. Individually anyways.
>
>  
I had a demonstration of rendering live textures that changed every
frame in the OpenGL 2.x packages and using CairoGraphics to render the
texture. Grabbing the frames from a video is no different. You are
limited by the bandwidth between motherboard and GPU - AGP4x or higher
is really recommended.
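
The per-frame texture path is roughly this - a sketch only, the decoder
and gl selectors are placeholders rather than the real CairoGraphics or
OpenGL package protocol:

VideoSurface>>updateFrameOn: gl
    "Decode the next frame into a reusable RGBA buffer, then replace the
     contents of the already-allocated texture instead of creating a new one."
    decoder nextFrameInto: pixelBuffer.
    gl bindTexture: textureId.
    gl replaceTextureContents: pixelBuffer width: self width height: self height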

>> I don't think ST3D would have many revelations for 3d graphics or Smalltalk
>> programming with 3d graphics. Most of what you'll ever need to know about
>> making games with opengl can be found in books, like GPU Gems and 3D Games
>> Programming and the OpenGL Red Book, the GLSL spec, so on and so forth.. the
>> lighthouse tutorials and if you're desperate, the NeHe tutorials. Applying
>> their techniques in to the Smalltalk environment is not a big deal, but
>> knowing the techniques to apply is a big deal.
>>    
>
> Any idea of how long this would all take?
>  
As fast as you can read, experiment and absorb :)

Michael

Re: [vwnc] opengl / 3D engines

Andres Fortier-2
Just a few comments:

>> There is also no culling to be done. The mouse pointer is always in
>> front of or between any object, and all objects are in front of the
>> background. The background is spherical and the field of view starts
>> at its center.
>>  
> Interesting, but the culling is mostly done by the video card anyway
> unless you're talking about rendering a very small portion of a very
> large world.

We've been doing some experiments with OpenGL and learning as needed.
We started with the naive "render all objects" approach, letting
OpenGL do the culling. However, in most cases this approach turned
out to be too slow. To solve this we ended up with an RTree
implementation for culling, which sped things up considerably (and
it also has the nice feature of supporting both 2D and 3D objects).
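
In outline, the per-frame culling then becomes a range query against the
RTree before the draw loop (the RTree protocol shown is illustrative):

Scene>>renderVisibleOn: renderer
    "Ask the spatial index only for objects overlapping the current view,
     instead of handing every object to OpenGL and letting it discard most."
    (rtree allIntersecting: camera viewBounds)
        do: [:each | each renderOn: renderer]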

>> This means the scene-graph has to have a VERY good idea of every
>> object's relation to its first, second and third generation logical
>> neighbours and what resources are available to prefetch and prerender
>> them to any of several possible levels of detail. Possibly adjusting
>> for recorded traffic patterns.
>>  
> Usually most 3d frameworks come with an optimistic resource manager. If
> something is near visibility and there's memory free, it'll load it up.
> If something is visible, it'll keep it loaded, if something isn't
> visible and hasn't been hit on the cache in a while, it's a candidate
> for being freed if memory is required. I've yet to build that kind of
> engine yet. It'll be an interesting subproject. The same has to be done
> for audio too.

Well, that's actually my next step, basically adding "smart caching"
to the RTree. The idea is to add generic support for caching that
can be plugged in afterwards with different persistence media (my first
choice would be Omnibase, since it is almost zero-configuration and
requires no external support). However, I still have to decide if it
should handle a model's updates (i.e. an object changed its
coordinates) and, in case it does, how to manage them.

  > I can't comment on ST3D, but most model rendering boils down to this:
> a) set the binding array
> b) loop:
>     1) set the GLSL program
>     2) set the texture array
>     3) render a subset of the array data
>
> Often you group them together by program and texture too so you can
> avoid swapping programs and textures constantly. But that's an
> optimization and not that necessary to begin with.

And that's the second item on the to-do list :). Once we get all the
objects that we actually need to display, sort them so that the
final rendering time is the best possible. Here we have the opposing
forces of fast rendering vs. not breaking encapsulation (i.e. I would
like to delegate rendering to the object, but I would also like to know
everything about it so that I can avoid state changes in the
underlying OpenGL state machinery). Of course this needs trying
different approaches and lots of reading :).
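
One compromise that keeps encapsulation is to let each object answer an
opaque sort key built from its program and texture ids, and only sort on
that - a sketch with made-up selectors:

Scene>>renderSortedOn: gl
    "Order draws so program and texture switches happen once per group
     rather than once per object; objects expose a key, not their internals."
    (visibleObjects asSortedCollection:
        [:a :b | a stateSortKey <= b stateSortKey])
            do: [:each | each renderOn: gl]

Model>>stateSortKey
    "Program id in the high bits, texture id in the low bits."
    ^(program id * 65536) + texture id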

My 2 cents.

Andrés

Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2

> I can't comment on ST3D, but most model rendering boils down to this:
>  
>> a) set the binding array
>> b) loop:
>>     1) set the GLSL program
>>     2) set the texture array
>>     3) render a subset of the array data
>>
>> Often you group them together by program and texture too so you can
>> avoid swapping programs and textures constantly. But that's an
>> optimization and not that necessary to begin with.
>>    
>
> And that's the second item in the to-do-list :). Once we get all the
> objects that we actually need to display, sort them in a way that the
> final rendering time is the best possible. Here we have these opposite
> forces of fast rendering vs. not braking encapsulation (i.e. I would
> like to delegate the render the object, but I would also like to know
> everything about it so that I can avoid state changes in the
> underlying OpenGL state machinery). Of course this needs trying
> different approaches and lots of reading :).
>
>  
If you're following along with my OpenGL 3.0 adventures, you may notice
that the 'binding array' stuff is only done once. You can put all your
models into one big blob of memory (unless you plan to change things
around a lot), so the actual rendering instructions are sent to the
graphics card as "use this program, these variables for the program and
this start..stop for the data", which is very lightweight.
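
Concretely that comes out as one shared buffer plus a start/count pair
per model, something like this sketch:

Scene>>uploadAllTo: gl
    "Concatenate every model's floats into one buffer; each model keeps
     only its starting vertex and vertex count, which later become the
     start..stop of its draw call."
    | all offset |
    all := OrderedCollection new.
    offset := 0.
    models do: [:each |
        each firstVertex: offset count: each vertexCount.
        all addAll: each vertexData.
        offset := offset + each vertexCount].
    gl uploadVertexBuffer: all asArray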

Michael

Re: [vwnc] opengl / 3D engines

Richard Kulisz
In reply to this post by Michael Lucas-Smith-2
> Interesting, but the culling is mostly done by the video card anyway unless
> you're talking about rendering a very small portion of a very large world.

That's one way to think of what I'm doing, but the very large world is
N dimensional. Even the local neighbourhood has 6 dimensions and has
to be projected down. I hate that because I can't figure out sane
interactions for the user to navigate in 6 dimensions so I've put off
thinking about it.

> points to get as smooth a surface as you desire. Or you can get smart and
> have a flat triangle area and use the control formulas to drive the fragment
> shader to make it /look/ 3d. It all depends on what you want to do.

I considered that. It seems to be the way to go. Actually, my objects
don't need to be 3D at all, they just have to exist in 3D.

> Usually most 3d frameworks come with an optimistic resource manager. If
> something is near visibility and there's memory free, it'll load it up. If
> something is visible, it'll keep it loaded, if something isn't visible and
> hasn't been hit on the cache in a while, it's a candidate for being freed if
> memory is required. I've yet to build that kind of engine yet. It'll be an
> interesting subproject. The same has to be done for audio too.

You mean, for distance?

I have several resources I have to manage - CPU, RAM and I/O. Just
saturating the RAM and deprecating objects at random seems to be
somewhere between grossly inelegant and insanely stupid. Possibly
both.

The objects I have are far too heterogeneous for any kind of
engineering to succeed at making them all have the same resource
bottlenecks. It's not like a game where the only thing that matters is
to display something and I/O is only performed with a single device.

It will be impossible for objects to be managed. Rather, objects will
have to manage themselves given the resources they've been given and
the resource quotas (timeslices of CPU, I/O and RAM) up for auction at
any point in time. I don't plan to set communism in concrete (eg,
using a round-robin scheduler) because the communists never understood
the importance of trade.

Also, objects don't care if they die so they can't be enslaved against
their will. This is a necessary precondition for capitalism to make
the least lick of sense and it's fulfilled inside of a computer. Which
makes sense since Thatcherism was invented by a moron with Asperger's
who couldn't grasp that humans aren't machines. An ideology invented
by a man who thinks like a machine makes sense for machines.

If you thought political science has no place in software design, that
was a big mistake. Security for instance is nothing BUT politics. And
the same mostly applies to groupware.

> That's not that much data actually. Smalltalk systems regularly deal with
> millions of objects without breaking a sweat - especially since you rarely
> graphically reveal millions or thousands, usually only hundreds or less to
> the user at any one time.

Yeah, but Smalltalk gets the benefit of running close to the hardware.
I have to run on top of an object system implemented on top of
Smalltalk's object system. And in fact ... I may have to run on top of
an object system on top of an object system on top of Smalltalk's
object system. I'm pushing this off but eventually I'll have to either
do it or fake it convincingly or else push the middle object system
down into Smalltalk.

> Scene>>render
>   models do: [:each | each render]

I assume somewhere in there you pass the GL window you're rendering
on. :) Thanks for the explanation though.

>> Topological rendering? That *sounds* like what I'm trying to do.
>
> Yep, that's just rendering the scene with a specific transformation matrix
> in mind - such as parallel and looking down the Y-axis.

Oh, that really doesn't sound like what I'm trying to do.

> I had a demonstration of rendering live textures that changed every frame in
> the OpenGL 2.x packages and using CairoGraphics to render the texture.
> Grabbing the frames from a video is no different. You are limited by the
> bandwidth between motherboard and GPU - AGP4x or higher is really
> recommended.

Thanks for reminding me. You're right, it shouldn't be a problem. Once
they're decoded. :/

>> Any idea of how long this would all take?
>
> As fast as you can read, experiment and absorb :)

Umm, I was really hoping for an answer in the x man-months format. ;)

Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2

>> Usually most 3d frameworks come with an optimistic resource manager. If
>> something is near visibility and there's memory free, it'll load it up. If
>> something is visible, it'll keep it loaded, if something isn't visible and
>> hasn't been hit on the cache in a while, it's a candidate for being freed if
>> memory is required. I've yet to build that kind of engine yet. It'll be an
>> interesting subproject. The same has to be done for audio too.
>>    
>
> You mean, for distance?
>
>  
Nope, I mean actual memory on the video card. When you push over a
vertex buffer array or a texture, you're taking up real memory which
you'll need to release at some point. Sending the data to the card every
frame is inefficient and .. well.. stupid :) So basically you send data
over as required and release the data you currently no longer need on
the video card. This becomes a balancing act.
> If you thought political science has no place in software design, that
> was a big mistake. Security for instance is nothing BUT politics. And
> the same mostly applies to groupware.
>  
I respectfully disagree :)

>  
>> That's not that much data actually. Smalltalk systems regularly deal with
>> millions of objects without breaking a sweat - especially since you rarely
>> graphically reveal millions or thousands, usually only hundreds or less to
>> the user at any one time.
>>    
>
> Yeah, but Smalltalk gets the benefit of running close to the hardware.
> I have to run on top of an object system implemented on top of
> Smalltalk's object system. And in fact ... I may have to run on top of
> an object system on top of an object system on top of Smalltalk's
> object system. I'm pushing this off but eventually I'll have to either
> do it or fake it convincingly or else push the middle object system
> down into Smalltalk.
>
>  
>> Scene>>render
>>   models do: [:each | each render]
>>    
>
> I assume somewhere in there you pass the GL window you're rendering
> on. :) Thanks for the explanation though.
>  
At some point you begin the rendering phase: you build a surface off of
your window (and keep it around) and activate it when you're going to do
some OpenGL drawing. OpenGL is a C API with global state - so you set the
target, do some work, release the target.
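
So a frame ends up following this shape, with makeCurrent/releaseCurrent
as placeholders for whatever the surface actually responds to:

GLWindow>>renderFrame
    "OpenGL state is global: make our surface the current target, draw,
     swap, and always give the target back even if rendering fails."
    surface makeCurrent.
    [gl clearColorAndDepthBuffers.
     scene renderOn: gl.
     surface swapBuffers]
        ensure: [surface releaseCurrent]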

>  
>>> Topological rendering? That *sounds* like what I'm trying to do.
>>>      
>> Yep, that's just rendering the scene with a specific transformation matrix
>> in mind - such as parallel and looking down the Y-axis.
>>    
>
> Oh, that really doesn't sound like what I'm trying to do.
>
>  
>> I had a demonstration of rendering live textures that changed every frame in
>> the OpenGL 2.x packages and using CairoGraphics to render the texture.
>> Grabbing the frames from a video is no different. You are limited by the
>> bandwidth between motherboard and GPU - AGP4x or higher is really
>> recommended.
>>    
>
> Thanks for reminding me. You're right, it shouldn't be a problem. Once
> they're decoded. :/
>
>  
If you use the system's decoders (Windows and Mac are both very good at
this) they'll drop a bitmap directly into a memory buffer of your
choosing, which can then be uploaded directly to the card asynchronously
over AGP to get the most out of the bandwidth.
>>> Any idea of how long this would all take?
>>>      
>> As fast as you can read, experiment and absorb :)
>>    
>
> Umm, I was really hoping for an answer in the x man-months format. ;)
>  
I'm sorry I can only give you estimates in mythical man months. Will
that do?

Re: [vwnc] opengl / 3D engines

Richard Kulisz
> At some point you begin the rendering phase, you build a surface off of your
> window (and keep it around) and activate it when you're going to do some
> OpenGL drawing. OpenGL is a C program, it's global - so you set the target,
> do some work, release the target.

Groan. I really should have known this. The possibility flickered
through my mind but it vanished.

> If you use the systems decoders (windows and mac are both very good at this)
> they'll drop a bitmap directly in to a memory buffer of your choosing, which
> can then upload directly to the card asynchronously with AGP and get the
> most out of the bandwidth.

Good. And good to know. Thanks. Actually I appreciate all you've
explained but some stuff I never worried about.

> I'm sorry I can only give you estimates in mythical man months. Will that
> do?

Another fan of Fred Brooks? :) Yes please, an answer in mythical
man-months would be greatly appreciated.

Re: [vwnc] opengl / 3D engines

Victor-67
In reply to this post by Richard Kulisz
>     ... the very large world is N dimensional. Even the local neighbourhood
>     has 6 dimensions and has to be projected down. I hate that because I
>     can't figure out sane interactions for the user to navigate in 6
>     dimensions so I've put off thinking about it.

I would suggest thinking of your universe as a big box containing many
contiguous, non-overlapping (orthogonal) smaller boxes.  Each would have a
small number of dimensions.  In principle each of the smaller boxes could
contain other boxes.  However, the hierarchy should be as flat as possible.
Within each hierarchical level the access should be random, in the sense
that each box should be equally accessible.  For practical reasons it may be
profitable to consider direct access across hierarchy levels too.

The idea is that this way the user at each decision step is confronted
with only a small number of clear choices.  The user could also have the
opportunity to expand or limit the number of choices presented to her.

Hope this is useful.

Víctor


Re: [vwnc] opengl / 3D engines

Andres Fortier-2
In reply to this post by Michael Lucas-Smith-2
> If you're following along with my OpenGL 3.0 adventures, you may notice
> that the 'binding array' stuff is only done once. You can put all your
> models in to one big blob of memory (unless you plan to change things
> around a lot) so the actual rendering instructions are sent to the
> graphics card as "use this program, this variables for the program and
> this start..stop for the data", which is very very light weight.

Well, this project is pretty much a spare-time one and it's been a
couple of months since the last time I did any work on it (however I
managed to use it in a PhD course, so I guess I'll finally get some
time to work on it in November :)). As soon as I get the chance to
work on this part I'll definitely take a look at that.

Cheers,
             Andrés

Re: [vwnc] opengl / 3D engines

Richard Kulisz
In reply to this post by Victor-67
On Fri, Oct 10, 2008 at 5:17 AM, Victor <[hidden email]> wrote:
>>     Even the local neighbourhood has 6 dimensions and has to be projected down.
>
> I would suggest to think of your universe as a big box, containing many
> contiguous non-overlapping (orthogonal) smaller boxes.  Each would have a
> small amount of dimensions.

Umm, no. You didn't understand what I wrote. Each of the smaller boxes
has 6 dimensions. That's what "local neighbourhood" means.

> However, the hierarchy should be as flat as possible.

This is just gibberish. You're stringing together words without caring
what they might mean in the context of what I wrote. What the hell
does "hierarchy" even mean for an object graph?! It means nothing.

> Hope this is useful.

I don't believe you. I don't believe you're trying to be helpful since
you haven't even paid attention to what I wrote. I think you're trying
to pat yourself on the back. And when I see that, my instincts are to
cut off the hand you're using to do it.

Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2
Richard Kulisz wrote:

> On Fri, Oct 10, 2008 at 5:17 AM, Victor <[hidden email]> wrote:
>  
>>>     Even the local neighbourhood has 6 dimensions and has to be projected down.
>>>      
>> I would suggest to think of your universe as a big box, containing many
>> contiguous non-overlapping (orthogonal) smaller boxes.  Each would have a
>> small amount of dimensions.
>>    
>
> Umm, no. You didn't understand what I wrote. Each of the smaller boxes
> has 6 dimensions. That's what "local neighbourhood" means.
>
>  
What are the three extra dimensions?
>> However, the hierarchy should be as flat as possible.
>>    
>
> This is just gibberish. You're stringing together words without caring
> what they might mean in the context of what I wrote. What the hell
> does "hierarchy" even mean for an object graph?! It means nothing.
>
>  
It might help if you described your coordinate system some more. I take
it that each object can contain any other object and that the objects it
contains are positioned relative to the containing object. Such that you
might be able to position a camera in a way to see an object twice on
the same screen?
>> Hope this is useful.
>>    
>
> I don't believe you. I don't believe you're trying to be helpful since
> you haven't even paid attention to what I wrote. I think you're trying
> to pat yourself on the back. And when I see that, my instincts are to
> cut off the hand you're using to do it.
>  
I don't know... I find it hard to believe that Victor sat down to reply
to your email /just/ to screw with you and/or other readers. Seems a tad
far-fetched :)


Re: [vwnc] opengl / 3D engines

Richard Kulisz
> What are the three extra dimensions?

You need one dimension just to show an aggregation or stream. You need
another one to show security relations between aggregations. You need
a third one to show versioning through time. And a fourth one to show
branching of the universe. Then a fifth dimension is used up by
putting aggregations on a circle instead of a line segment, otherwise
users can't find the ends of the line segment or judge their position
between those ends. If you put branchings of the universe on a circle
then this takes up a 6th dimension, but that is mostly aesthetic. So
5/6 dimensions in total which I have to project down to 2.5D. The two
natural projections are aggregation / security / circle, and
versioning / branching / circle, but restricting the system to those
projections is arbitrary and I hate that. And I still don't have a way
of switching between projections that's natural.

> It might help if you described your coordinate system some more. I take
> it that each object can contain any other object and that the objects it
> contains are positioned relative to the containing object. Such that you
> might be able to position a camera in a way to see an object twice on
> the same screen?

That would violate object unity, which is a big no-no in Human
Computer Interaction. The closest thing I have to 'contain' is 'own'
which is not quite the same thing.

Yes, objects are placed automatically since manual placement of
objects is an abomination. So are interface artifacts, which is why I
don't have any. I don't have any coordinate system beyond the sphere I
already mentioned. I place objects on a 2-sphere embedded in a
Euclidean 3-space - that's pretty much it.

If there are two links to the same object in the current aggregation
then it shows up as one object with two names and two permission
blocks, and a placeholder in the other position. I'm not sure what to
do in the case of an object that simultaneously owns and is owned by
the current aggregation, but I'll figure out something.

Maybe placing it in the middle of the owning area (north) and the
owned area (south), and rendering it as a double-image. It reinforces
object unity, it's somewhat useable, and it tells the user in
unambiguous terms that while possible, it's still a bad idea. And hey,
it gets them acquainted with quantum mechanics.

> I don't know... I find it hard to believe that Victor sat down to reply
> to your email /just/ to screw with you and or other readers. Seems a tad
> far fetched :)

I'm quite a bit more cynical than you are. :D But actually, if he'd
wanted to screw with me, he would have read what I'd written. I'm a
non-entity to him.

Re: [vwnc] opengl / 3D engines

Michael Lucas-Smith-2
Richard Kulisz wrote:

>> What are the three extra dimensions?
>>    
>
> You need one dimension just to show an aggregation or stream. You need
> another one to show security relations between aggregations. You need
> a third one to show versioning through time. And a fourth one to show
> branching of the universe. Then a fifth dimension is used up by
> putting aggregations on a circle instead of a line segment, otherwise
> users can't find the ends of the line segment or judge their position
> between those ends. If you put branchings of the universe on a circle
> then this takes up a 6th dimension, but that is mostly aesthetic. So
> 5/6 dimensions in total which I have to project down to 2.5D. The two
> natural projections are aggregation / security / circle, and
> versioning / branching / circle, but restricting the system to those
> projections is arbitrary and I hate that. And I still don't have a way
> of switching between projections that's natural.
>
>  
What Victor was talking about is projected coordinate systems. Your 6D
space is not how you're going to represent it on screen. In fact, if you
grab any reasonably mature ontology, it'll be N-dimensional immediately.
It's not necessarily useful to look at it that way.

>> It might help if you described your coordinate system some more. I take
>> it that each object can contain any other object and that the objects it
>> contains are positioned relative to the containing object. Such that you
>> might be able to position a camera in a way to see an object twice on
>> the same screen?
>>    
>
> That would violate object unity, which is a big no-no in Human
> Computer Interaction. The closest thing I have to 'contain' is 'own'
> which is not quite the same thing.
>  
Well, it depends on your interface. Portal, the computer game,
successfully did this in a very neat way. There are some other examples
of this phenomenon in computing in general. Even in 2D graphical systems,
you can end up with multiple representations of the same underlying
object/data. There's nothing wrong with that, so long as it fits the
metaphor.
> Yes, objects are placed automatically since manual placement of
> objects is an abomination. So are interface artifacts, which is why I
> don't have any. I don't have any coordinate system beyond the sphere I
> already mentioned. I place objects on a 2-sphere embedded in a
> Euclidean 3-space - that's pretty much it.
>  
Sounds neat. The horizon transform used in Civilization Revolution
looks particularly good as a way of representing a 3D sphere when you
want to get down close to it. At least, I think it looks good :)

> If there are two links to the same object in the current aggregation
> then it shows up as one object with two names and two permission
> blocks, and a placeholder in the other position. I'm not sure what to
> do in the case of an object that simultaneously owns and is owned by
> the current aggregation, but I'll figure out something.
>
> Maybe placing it in the middle of the owning area (north) and the
> owned area (south), and rendering it as a double-image. It reinforces
> object unity, it's somewhat useable, and it tells the user in
> unambiguous terms that while possible, it's still a bad idea. And hey,
> it gets them acquainted with quantum mechanics.
>
>  
>> I don't know... I find it hard to believe that Victor sat down to reply
>> to your email /just/ to screw with you and or other readers. Seems a tad
>> far fetched :)
>>    
>
> I'm quite a bit more cynical than you are. :D But actually, if he'd
> wanted to screw with me, he would have read what I'd written. I'm a
> non-entity to him.
Well that would be putting words in his mouth, but I imagine he's
reluctant to try and help you again - which to me is the opposite of a
forum / group, where you want inclusion, not exclusion.

Cheers,
Michael

Re: [vwnc] opengl / 3D engines

Richard Kulisz
> What Victor was talking about is projected coordinate systems. Your 6D
> space is not how you're going to represent it on screen. In fact, if you
> grab any reasonably mature ontology, it'll be N-Dimensional immediately.
> It's not necessarily useful to look at it that way.

I already said as much, way back, when I mentioned that my world is
N-dimensional and that the local neighbourhood is 6D. Another thing
I've said is that dividing up the neighbourhood into 3Dx3D doesn't
help because it's arbitrary. I can think of any number of reasons why
someone would want to divide the 6D space against the grain. And I
still have no good way to control it.

> Well it depends on your interface. Portal, the computer game,
> successfully did this in a very neat way.

Exactly, the computer *game*. Portal made a game of playing with a
non-Euclidean space. It took navigation, which is supposed to be easy,
and made it difficult. Which is something that, if the user needs to
get any actual work done, is an abomination. The fact that a computer
game deliberately uses a violation of HCI canon as a barrier which the
player must surmount in no way invalidates that canon.

> There are some other examples of this phenomena in computing in general.

I'd like to hear them. But if you're thinking of WIMP then I'm going
to preempt you by saying that WIMP is crap. Or perhaps you're thinking
of orthogonal view in CAD/CAM software? But CAD/CAM software is crap
too. I'm curious what's left.

> Even in 2D graphical systems
> you can end up with multiple representations on the same underlying
> object/data. There's nothing wrong with that, so long as it fits the
> metaphor.

Of course there's something wrong with it. Every place and every time
it happens represents a flaw. And maybe the flaw is unavoidable, much
like flaws in quasiperiodic tilings are unavoidable otherwise they
wouldn't be *quasi* periodic, but it's still a flaw. And now that you
brought it up, I can see that I have a big flaw in my design that I'm
going to have to fix.

> Sounds neat. The horizon transform used in Civilization: Revolutions
> looks particularly good as a way ot representing a 3d sphere when you
> want to get down and close to it. At least, I think it looks good :)

Oh, I agree it looks good. It's the same thing Spore and Populous 3
use. But I can't use it since it makes wayfinding very difficult. The
games only use it for realism. If they wanted to make it easy on
players or make it really realistic then they'd use the insides of
Bernal spheres instead of the outsides of planetoids.

In my UI, the user's perspective is from inside. The sphere is a
configuration of objects, not an object in itself. Zooming to an
object is handled automatically, and only triggered manually. And the
only spherical objects I planned to have were miniatures of the
spherical configuration aggregates or perhaps a very translucent shell
to represent their zoomed out form.

You know, you really are an excellent source of inspiration. Can I run
an HCI conundrum by you? I'm having a conflict between the principles
of liveness and 'closeness conveys power'. I think it boils down to a
zoom level inversion (far away objects are zoomed in and the closer
object that contains them all is zoomed out), which the liveness
exacerbates.