Igor coded with me for an hour to show me, step by step, how to produce a rotating pyramid in Pharo :)
Have fun (Igor will update the ConfigurationOfNBOpenGL to load the latest file).

Attachments: NBOpenGL-Stef.st (8K), Screen Shot 2012-02-22 at 10.08.22 PM.pdf (73K)
Thanks for that code, Stéphane. I'll work up a video tutorial on it once
Igor gets his next configuration set.

L.

On 2/22/12 2:11 PM, Stéphane Ducasse wrote:
> Igor coded with me one hour to show me step by step how to produce a rotating pyramid in Pharo :)
On Feb 22, 2012, at 11:22 PM, Lawson English wrote:
> Thanks for that code, Stéphane. I'll work up a video tutorial on it once Igor gets his next configuration* set.

Excellent. I was planning to do it, but I'm flooded by work.
What I want is that every single person with zero knowledge can do it by following the videos. So if you can do that, this is great.

Then I would like to have 3-minute videos showing live modification of OpenGL objects on the fly: you write in the browser and boom, you see it on the screen.

Here are my notes.

Loading the package and a first demo:

    MCHttpRepository
        location: 'http://www.squeaksource.com/NBOpenGL'
        user: ''
        password: ''

    (ConfigurationOfNBOpenGL project lastVersion) load.

    GLTTRenderingDemo new openInWorld.

Define a viewport subclass and open it:

    GLViewportMorph subclass: #GLStef
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'NBOpenGL-Stef'

    GLStef new openInWorld.

A first render method that just clears the screen:

    render
        | gl |
        self checkSession.
        gl := display gl.
        display clear: Color white.

Refresher on gluPerspective / glFrustum:
http://nehe.gamedev.net/article/replacement_for_gluperspective/21002/

    perspectiveFovY: fovY aspect: aspect zNear: zNear zFar: zFar
        "Replaces gluPerspective. Sets the frustum to perspective mode.
         fovY   - field of vision in degrees in the y direction
         aspect - aspect ratio of the viewport
         zNear  - the near clipping distance
         zFar   - the far clipping distance"
        | fW fH |
        fH := (fovY / 360 * Float pi) tan * zNear.  "fH = tan( fovY / 360 * pi ) * zNear"
        fW := fH * aspect.
        display gl
            frustum_left: fW negated
            right: fW
            bottom: fH negated
            top: fH
            zNear: zNear
            zFar: zFar

OpenGL cube tutorial: http://nehe.gamedev.net/tutorial/3d_shapes/10035/

    glMatrixMode(GL_PROJECTION);   // Select the projection matrix
    glLoadIdentity();              // Reset the projection matrix
    // Calculate the aspect ratio of the window
    gluPerspective(45.0f, (GLfloat)width / (GLfloat)height, 0.1f, 100.0f);
    glMatrixMode(GL_MODELVIEW);    // Select the modelview matrix
    glLoadIdentity();

The same setup in the render method:

    render
        | gl |
        self checkSession.
        gl := display gl.
        display clear: Color white.
        gl matrixMode: GL_PROJECTION; loadIdentity.
        self perspectiveFovY: 45.0
             aspect: (self width / self height) asFloat
             zNear: 0.1
             zFar: 100.0.
        gl matrixMode: GL_MODELVIEW.
        gl loadIdentity.

The NeHe drawing code to translate:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear screen and depth buffer
    glLoadIdentity();                   // Reset the current modelview matrix
    glTranslatef(-1.5f, 0.0f, -6.0f);   // Move left 1.5 units and into the screen 6.0
    glRotatef(rtri, 0.0f, 1.0f, 0.0f);  // Rotate the pyramid on the y axis
    glBegin(GL_TRIANGLES);              // Start drawing the pyramid
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 0.0f,  1.0f,  0.0f); // Top (front), red
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f); // Left (front), green
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 1.0f, -1.0f,  1.0f); // Right (front), blue
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 0.0f,  1.0f,  0.0f); // Top (right), red
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 1.0f, -1.0f,  1.0f); // Left (right), blue
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Right (right), green
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 0.0f,  1.0f,  0.0f); // Top (back), red
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Left (back), green
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f); // Right (back), blue
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 0.0f,  1.0f,  0.0f); // Top (left), red
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f); // Left (left), blue
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f); // Right (left), green
    glEnd();                            // Done drawing the pyramid

    glLoadIdentity();                   // Reset the current modelview matrix
    glTranslatef(1.5f, 0.0f, -7.0f);    // Move right 1.5 units and into the screen 7.0
    glRotatef(rquad, 1.0f, 1.0f, 1.0f); // Rotate the cube
    glBegin(GL_QUADS);                  // Draw the cube
        glColor3f(0.0f, 1.0f, 0.0f);    // Green
        glVertex3f( 1.0f,  1.0f, -1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
        glVertex3f(-1.0f,  1.0f,  1.0f); glVertex3f( 1.0f,  1.0f,  1.0f); // Top face
        glColor3f(1.0f, 0.5f, 0.0f);    // Orange
        glVertex3f( 1.0f, -1.0f,  1.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Bottom face
        glColor3f(1.0f, 0.0f, 0.0f);    // Red
        glVertex3f( 1.0f,  1.0f,  1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
        glVertex3f(-1.0f, -1.0f,  1.0f); glVertex3f( 1.0f, -1.0f,  1.0f); // Front face
        glColor3f(1.0f, 1.0f, 0.0f);    // Yellow
        glVertex3f( 1.0f, -1.0f, -1.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f,  1.0f, -1.0f); glVertex3f( 1.0f,  1.0f, -1.0f); // Back face
        glColor3f(0.0f, 0.0f, 1.0f);    // Blue
        glVertex3f(-1.0f,  1.0f,  1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f); glVertex3f(-1.0f, -1.0f,  1.0f); // Left face
        glColor3f(1.0f, 0.0f, 1.0f);    // Violet
        glVertex3f( 1.0f,  1.0f, -1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
        glVertex3f( 1.0f, -1.0f,  1.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Right face
    glEnd();                            // Done drawing the cube

    rtri  += 0.2f;   // Increase the rotation variable for the pyramid
    rquad -= 0.15f;  // Decrease the rotation variable for the cube
    return TRUE;     // Keep going
On 22 February 2012 23:22, Lawson English <[hidden email]> wrote:
> Thanks for that code, Stéphane. I'll work up a video tutorial on it once
> Igor gets his next configuration* set.

I made a new config for NativeBoost. So, in order to make things run on the latest 1.4: after loading the config of NBOpenGL, load the last version of NB:

    NBInstaller install.

because in 1.4 some methods were deprecated, so if you don't update you will be faced with clicking 'continue' dozens of times.

I'd like to thank Stéphane, who extracted the knowledge from me with pincers :) I usually forget that something which is completely trivial to me may be not so trivial to others, and needs explanations and examples.

We actually should try to translate more NeHe tutorials [1]. I hope Stéphane didn't find it hard to follow, because it is merely copy-pasting C code and rewriting the syntax.

[1]
http://nehe.gamedev.net/tutorial/lessons_01__05/22004/
http://nehe.gamedev.net/tutorial/lessons_06__10/17010/
http://nehe.gamedev.net/tutorial/lessons_11__15/28001/
...

--
Best regards,
Igor Stasenko.
My videos on Croquet OpenGL discuss the first 6. In fact, that GUI
texture thing was done using only the techniques from NeHe 6.

I'll go back and recreate the NeHe code for 2-6 using NBOpenGL as soon as I get your updated code installed.

L

On 2/23/12 12:13 PM, Igor Stasenko wrote:
> We actually should try to translate more NeHe tutorials. I hope
> Stéphane didn't find it hard to follow, because it is merely
> copy-pasting C code and rewriting the syntax.
My smorgasbord of Smalltalk OpenGL videos:
http://www.youtube.com/playlist?list=PLD60480623B5B1382&feature=view_all

My live refactoring demo using OpenGL, which usually gets non-Smalltalk programmers going "huh? how did you do that?":
http://www.youtube.com/watch?v=_QGAAOPC0kE&list=PLD60480623B5B1382&index=7&feature=plpp_video

L.

On 2/23/12 9:42 AM, Stéphane Ducasse wrote:
> What I want is that every single person with zero knowledge can do it by following the videos.
> Then I would like to have 3-minute videos showing live modification of OpenGL objects on the fly.
> You write in the browser and boom, you see it on the screen.
I hadn't seen this video before:
http://www.youtube.com/watch?v=yQHAoH8t8aM

You are doing crazy things there, which I wouldn't recommend doing at all, but for the purpose of explaining the point it is the right way :)

Yes, it is really hard to explain to people that what they are looking at is the tip of the iceberg.

--
Best regards,
Igor Stasenko.
Excellent.
Could you use large fonts :) The idea is that we should be able to see the code and follow along.

Stef

On Feb 23, 2012, at 8:38 PM, Lawson English wrote:
> My videos on Croquet OpenGL discuss the first 6.
> I'll go back and recreate the NeHe code for 2-6 using NBOpenGL as soon as I get your updated code installed.
Great to see 3D on Pharo again!

Just a remark: somewhere it should be stated that loading vertices one by one with glVertex3f(...) is deprecated in the latest OpenGL, in favor of mechanisms that load the geometry to the renderer as a whole, for example vertex buffer objects.

In short, it is OK to use in the examples, for learning the inner workings of OpenGL. Newer examples should make use of current OpenGL practices. I think the NeHe examples web page states that.

I think a framework such as Lumiere would be needed to provide higher abstractions and more efficient handling of geometry. Maybe when I finish my PhD I can continue to work on it.

Fernando

On Thu, Feb 23, 2012 at 10:13 PM, Stéphane Ducasse <[hidden email]> wrote:
> Excellent.
> Could you use large fonts :)
> The idea is that we should be able to see the code and to follow.
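[Editor's note] To make the vertex-buffer remark concrete: the modern replacement for the glBegin/glColor3f/glVertex3f style is to pack the geometry into an array once and hand it to the driver as a vertex buffer object. The GL calls themselves (glGenBuffers, glBufferData, glVertexAttribPointer) need a live context, so this C sketch only shows the CPU-side data layout one would upload; the names are illustrative, not from any of the code in this thread:

```c
#include <assert.h>
#include <stddef.h>

/* One interleaved vertex: position followed by color, replacing a
   glColor3f/glVertex3f pair from the immediate-mode NeHe listing. */
typedef struct {
    float pos[3];
    float color[3];
} Vertex;

/* Front face of the NeHe pyramid as VBO-ready data. */
static const Vertex pyramid_front[3] = {
    { { 0.0f,  1.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } },  /* top   - red   */
    { {-1.0f, -1.0f, 1.0f }, { 0.0f, 1.0f, 0.0f } },  /* left  - green */
    { { 1.0f, -1.0f, 1.0f }, { 0.0f, 0.0f, 1.0f } },  /* right - blue  */
};

/* The stride and color offset one would pass to glVertexAttribPointer. */
enum { VERTEX_STRIDE = sizeof(Vertex), COLOR_OFFSET = offsetof(Vertex, color) };
```

With this layout the whole pyramid becomes one glBufferData upload and one glDrawArrays call, instead of a color/vertex call pair per vertex per frame.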
Hi, is this ready for Linux? I tried loading NBOpenGL-X, but I got:
"This package depends on the following classes:"

Also, I remember that in OpenCroquet, Squeak could parse positional arguments of the form ogl glThis(x,y,z); glThat(x,y,z); -- doesn't it make sense to bring this back, since most of the OpenGL code examples on the internet are in this form?

On an unrelated note, I'm confused by this ConfigurationOf/Gofer/Monticello system. How do people new to Pharo find out what these URLs and special configuration-loading code are? It seems like everyone is just copying and pasting code from forums into workspaces. Couldn't these code fragments for loading configurations be loaded automatically via a URL or something?

On Fri, Feb 24, 2012 at 3:42 AM, Stéphane Ducasse <[hidden email]> wrote:
Hi, quick answer: almost, probably tomorrow.

Long answer: I have the changes to NBOpenGL-X needed to make it work on Linux in my image, but there are quite a few of them and I have to dedicate some time to uploading them. Tomorrow I'll try to commit to the SqueakSource repo. Before loading NBOpenGL-X you have to load the NBXLib package, which lives in www.squeaksource.com/NBXLib and contains the wrappers for Xlib, which is needed by glX. I also have to commit changes there. Maybe I can even try to write a configuration to load it, but I have to learn Metacello first.

On Thu, Feb 23, 2012 at 8:15 PM, chadwick <[hidden email]> wrote:
> Seems like everyone is just copying and pasting code from forums into workspaces.

Yes, I look at the code needed to load the package I want on the mailing list or in the Pharo book, paste it in a workspace, and do-it. It's a bit embarrassing, but that's what we've got for now. I hope in the future we'll have a UI that lets people browse packages and install them by clicking a button.

Lic. Javier Pimás
Ciudad de Buenos Aires
I was experimenting a little bit. What is the best way, using code, to
put a GLViewportMorph into a scroll pane? The things I have tried so far seem to either crash Pharo or not work as I expect.

L.

On 2/23/12 1:09 PM, Igor Stasenko wrote:
> I haven't seen this video before:
> http://www.youtube.com/watch?v=yQHAoH8t8aM
On 24 February 2012 01:26, Lawson English <[hidden email]> wrote:
> I was experimenting a little bit. What is the best way, using code, to put
> a GLViewportMorph into a scrollpane? The things I have tried so far seem to
> either crash Pharo or not work as I expect.

By default, GLViewportMorph copies the rendered stuff from the GL buffer directly to the Display form, to avoid extra copying. This trick is not fully compatible with Morphic, since it ignores clipping bounds etc.

If you want full integration with Morphic, first turn on the "useOwnForm" flag; then the viewport will copy rendered pixels into its own buffer, which can then be used for subsequent drawing on the Morphic canvas. See GLViewportMorph>>drawOn: aCanvas.

We should probably add a bounds check in NBGLDisplay>>updateForm: aForm bounds: aBounds, because if you go outside the form's bounds you have an imminent crash due to memory corruption. The code could also be rewritten to take the canvas clipping + origin into account, to get more correct (and safe) copying of rendered pixels while still avoiding an extra copy to an intermediate form.

Anyway, I don't like the fact that it copies the rendered stuff from video memory back to main memory, and then copies it back to video memory (this time via Morphic + the VM). But it was a cheap way to get the demo working with Morphic integration. If I were making an app which uses OpenGL for rendering, I would never copy things like that, because it is just a waste of cycles with a big impact on frame rate. What works for a demo is not really applicable in a serious application :) In a serious app, I'd rather copy the Morphic Display form into a separate texture and then render it along with the other stuff which is already in video memory. But that approach needs more time investment, which I currently don't have, and it heavily depends on what you want from your app.

Anyway, if there's someone with spare time to fix the NBGLDisplay>>updateForm:bounds: implementation to make it safer, I would appreciate that.

--
Best regards,
Igor Stasenko.
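[Editor's note] For anyone picking up Igor's request, the fix amounts to clamping the requested rectangle to the destination form before copying, and offsetting the source by whatever was clipped. A minimal C sketch of the idea; the names and the 32-bit pixel layout are assumptions, not the actual NBGLDisplay code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of the bounds check Igor asks for in
   NBGLDisplay>>updateForm:bounds:. */
typedef struct { int x, y, w, h; } Rect;

/* Clamp the requested rectangle to the destination form so an oversized
   request can never write outside the buffer. */
static Rect clamp_to_form(Rect r, int formW, int formH)
{
    if (r.x < 0) { r.w += r.x; r.x = 0; }
    if (r.y < 0) { r.h += r.y; r.y = 0; }
    if (r.x + r.w > formW) r.w = formW - r.x;
    if (r.y + r.h > formH) r.h = formH - r.y;
    if (r.w < 0) r.w = 0;
    if (r.h < 0) r.h = 0;
    return r;
}

/* Copy 32-bit pixels row by row, only inside the clamped rectangle,
   skipping the source columns/rows that were clipped away. */
static void copy_pixels(unsigned *dst, int formW, int formH,
                        const unsigned *src, int srcPitch, Rect req)
{
    Rect r = clamp_to_form(req, formW, formH);
    int dx = r.x - req.x;   /* columns clipped off the left edge */
    int dy = r.y - req.y;   /* rows clipped off the top edge */
    for (int row = 0; row < r.h; row++)
        memcpy(dst + (size_t)(r.y + row) * formW + r.x,
               src + (size_t)(dy + row) * srcPitch + dx,
               (size_t)r.w * sizeof *dst);
}
```

clamp_to_form guarantees the memcpy rows stay inside the destination buffer even when the requested bounds hang off any edge of the form, at the cost of one rectangle intersection per update.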
On 23 February 2012 23:23, Fernando Olivero <[hidden email]> wrote:
> Just a remark: somewhere it should be stated that loading vertices one by one
> with glVertex3f(...) is deprecated in the latest OpenGL, in favor of mechanisms
> that load the geometry to the renderer as a whole, for example vertex buffer objects.

I know. But as said before in this topic, it is the "tip of the iceberg". People who really know how to deal properly with a modern OpenGL implementation should know that themselves. The purpose of NBOpenGL is to give them a way to do it in Smalltalk. Not less, but not more.

--
Best regards,
Igor Stasenko.
On Feb 24, 2012, at 4:05 AM, Igor Stasenko wrote:
> Anyway, I don't like the fact that it copies the rendered stuff from
> video memory back to main memory, and then copies it back to video memory
> (this time via Morphic + the VM). But it was a cheap way to get the demo
> working with Morphic integration.
> In a serious app, I'd rather copy the Morphic Display form into a separate
> texture and then render it along with the other stuff which is already
> in video memory.

I think you'd have to ditch the idea of rendering the Display bitmap entirely if you want to write GL windows to the screen directly. Otherwise, you would _never_ get compositing correct; think of translucent morphs on top of the GL window, for example.

As such, you'd need to rewrite the entire compositing window manager using GL buffers, and only as a side effect flush the rendering results to a Display, for purely backwards-compatibility reasons.

Cheers,
Henry
On 24 February 2012 11:22, Henrik Johansen <[hidden email]> wrote:
> I think you'd have to ditch the idea of rendering the Display bitmap entirely, if you want to write GL windows to screen directly.
> Otherwise, you would _never_ get compositing correct, think translucent morphs on top of the GL window for example.

Yes, but this is for cases where I want to make OpenGL stuff embedded into the Morphic world. But if I don't want that, I don't need to care :)

> As such, you'd need to rewrite the entire compositing window manager using GL buffers, and only as a side-effect flush the rendering results to a Display there for purely backwards-compatibility reasons.

That would work if we had easy control of host windows, i.e. a way to create new windows and create GL contexts for them in the way we want. But today's VMs offer little for that. What needs to be done first is to put the image in direct control of host windows, so that we can control GL & Display the way we want. Because right now the VM does "magic" with the main window by binding the Display form to it, and the code which updates it is also there, so you simply cannot do any compositing at all, since the code you would need to modify is on the VM side.

--
Best regards,
Igor Stasenko.
On Feb 24, 2012, at 2:04 PM, Igor Stasenko wrote:
> That would work if we had easy control of host windows, i.e. a way to create
> new windows and create GL contexts for them in the way we want. But today's
> VMs offer little for that. What needs to be done first is to put the image in
> direct control of host windows, so that we can control GL & Display the way
> we want. Because right now the VM does "magic" with the main window by binding
> the Display form to it, and the code which updates it is also there, so you
> simply cannot do any compositing at all, since the code you would need to
> modify is on the VM side.

Multiple-window support is orthogonal to compositing OGL/Morphic correctly by writing a window manager (badly worded -- I meant a composition manager) for the current way the display is handled.

You'd theoretically only need two things from the VM to do so: a handle to the context of the current window, and a way to modify the Display that does not automatically trigger a "built-in" redisplay. The compositing in that window could all use VBOs/FBOs (bound to said context, of course), and display could be done directly by the composition manager (written in Smalltalk, using OGL to display), sidestepping all VM logic (well, and ensuring you redraw *after* VM logic triggers automatically, like when the window is resized). As long as the Display bitmap is updated accordingly as well, external users shouldn't even be affected.

Cheers,
Henry
On 24 February 2012 14:26, Henrik Johansen <[hidden email]> wrote:
> Multiple window support is orthogonal to compositing OGL/Morphic correctly through writing a window manager (badly worded, I meant composition manager) for the current way display is handled.

Unfortunately, if I controlled the creation of the main window (and hence the creation of the GL context for it), it would be much easier to sidestep all VM functionality and use flicker-free things like swapBuffers when an update to the window is required. So it is not quite orthogonal. If you look at the Mac VMs, the VM uses OpenGL on its own, which means it creates its own GL context and manages it behind the scenes (see sqSqueakOSXNSView.m in platforms/iOS/vm).

> You'd theoretically only need two things from the VM to do so; a handle to the context of the current window; and a way to modify the Display that does not automatically trigger a "built-in" redisplay.
> As long as the Display bitmap is updated accordingly as well, external users shouldn't even be affected.

What would be really nice is to have callbacks to the language side when a request to update is received. Then I could decide what to do in order to update the window. But right now I'd rather avoid investing too much effort in "seamless" integration of Morphic and GL, because without changing the VM interface the implementation will be cumbersome and will contain many different bells and whistles...

The central part of the OpenGL game is controlling the creation of the context and its various parameters. Without such control, you often cannot ensure that certain hardware functionality will be accessible, because depending on how you create a context, the OS may block it and provide an oldish or even software-based renderer instead.

--
Best regards,
Igor Stasenko.
> Great to see 3D on Pharo again!
>
> I think a framework such as Lumiere would be needed to provide
> higher abstractions and more efficient handling of geometry.
> Maybe when I finish my PhD I can continue to work on it.

When you visit us, I would like you to sit and pair program with Erwan, because we would like to have a small code city.

Stef
> Also, I remember in OpenCroquet Squeak could parse positional arguments of the form
> ogl glThis(x,y,z); glThat(x,y,z); -- doesn't it make sense to bring this back, since
> most of the OpenGL code examples on the internet are in this form?

I guess that you want to know, since you ask: the answer is NO! It makes no sense to do that.

> On an unrelated note, I'm confused by this ConfigurationOf/Gofer/Monticello system.
> How do people new to Pharo find out what these URLs and special configuration-loading code are?

Read the Monticello and Metacello chapters of Pharo by Example 2, and look at the MetacelloRepository.

> Seems like everyone is just copying and pasting code from forums into workspaces.
> Couldn't these code fragments for loading configurations be loaded automatically via a URL or something?