Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.


Re: Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Eliot Miranda-2


On Sun, Aug 29, 2010 at 2:45 PM, Ken G. Brown <[hidden email]> wrote:
Might it be desirable at this point to change the extension to .cogimage or some such thing to avoid confusion in the future?

IMO, no.  It's still a Squeak image.
 
Isn't this a bit like requiring a different program to open a certain file type?

IMO it's much more like a different Word version, and no one considers changing .doc to .doc1, .doc2, etc.
 

Ken G. Brown

At 1:22 PM -0700 8/29/10, Eliot Miranda apparently wrote:
>On Sun, Aug 29, 2010 at 12:56 PM, Ken G. Brown <[hidden email]> wrote:
>
>Running a fresh Squeak4.2-10382-alpha on your latest 5.8b4.
>
>Open image, do Save As new version, then quit.
>Attempt to open the saved new-version image on Squeak 4.2.4beta1U.app by dragging and dropping onto the VM; it does not open.
>Console message:
>10/08/29 1:51:57 PM     [0x0-0x2fe2fe].org.squeak.Squeak[8275]  This interpreter (vers. 6502) cannot read image file (vers. 6505).
>
>
>John's  5.8b4 is a Cog VM.  Once an image is saved on Cog it will only run on a Cog VM.
>
>cheers
>Eliot
>
>
>Ken G. Brown
>
>> >
>>>
>>>At 5:39 PM -0700 8/28/10, John M McIntosh apparently wrote:
>>>>I've stuck a version (5.8b4) of the Cocoa-based OS X Squeak Cog JIT VM in my experimental folder.
>>>>
>>>>http://homepage.mac.com/johnmci/.Public/experimental/Squeak%205.8b4.app.zip
>>>>
>>>>I spent the last two days becoming very familiar with Open/GL and rewrote the display logic to use
>>>>Open/GL.  I am still doing some further optimization, but people should test this version and let
>>>>me know what they find.
>>>>
>>>>Other Fixes.
>>>>I think the control-arrow keys should work now, someone test this and let me know.
>>>>
>>>>
>>>>--
>>>>===========================================================================
>>>>John M. McIntosh <[hidden email]>   Twitter:  squeaker68882
>>>>Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
>>>>===========================================================================
> >
>
> >Attachment converted: MacProHD0:SqueakDebug 51.log (TEXT/R*ch) (079F0B79)






Re: Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Eliot Miranda-2
In reply to this post by Bert Freudenberg


On Sun, Aug 29, 2010 at 3:35 PM, Bert Freudenberg <[hidden email]> wrote:

[the earlier exchange with Ken, quoted in full above, snipped]

I'm sure you mentioned it before, but where can I read about the image format changes?

Lazily I've yet to write this up.  But I think the only change above and beyond support for the closure bytecodes (and not using BlockContext at all) is that image float order now depends on platform, so on x86 it is little-endian. This was forced by the JIT implementation of floating-point arithmetic (it's hard to byte-swap efficiently given the lack of sophistication of the code generator), and by my not wanting to waste cycles converting to/from big-endian byte order on image load/save or image segment export.  Bit 1 of the image header flags word (which used to contain only the full-screen flag) is 1 if the image's floats are little-endian.  So the existing VMs would need to read and write this bit and either convert back to big-endian or copy the Cog VM in keeping floats in platform-specific order.  For me, throwing performance away on each floating-point op on x86 is a heinous sin, so at least internally floats should be little-endian.  The basicAt: & basicAt:put: primitives 38 & 39 need to be implemented to present floats as big-endian at the image level.

There are other changes to the image header, using unused bits and fields to store Cog-specific info and it would be convenient if the standard VMs preserve these fields.  But they don't prevent loading on a standard VM; IIRC only the float-order changes would cause errors running a Cog image on a closure-enabled Interpreter VM.

If people think this is important enough I could put together a change set for VMMaker that includes the relevant changes (to image load/save, floating-point arithmetic, image segment load/store and float at:/at:put:).

best
Eliot
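Eliot's description can be sketched roughly as follows. This is a hypothetical sketch, not the actual VMMaker code: the helper names are made up, and it assumes bits number from 0 with bit 0 being the full-screen flag, so the float-order flag has value 2.

```c
#include <stdint.h>

/* Sketch only; names are made up, not the real VMMaker code.
   Assuming bit 0 of the image header flags word is the full-screen
   flag, the float-order flag Eliot describes (bit 1) has value 2. */
static int imageFloatsAreLittleEndian(uint32_t headerFlags)
{
    return (headerFlags & 2u) != 0;
}

/* A Squeak Float's body is two 32-bit words.  Once the image loader
   has byte-swapped each word for the platform (as it already does for
   every word in the image), converting between big-endian image order
   and little-endian platform order is just swapping the two words. */
static void swapFloatWordOrder(uint32_t words[2])
{
    uint32_t tmp = words[0];
    words[0] = words[1];
    words[1] = tmp;
}
```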

If it's really so easy to make the regular VM work with that image, couldn't we just do it and not even bump the format?

But the reason for the change in the image format number is not to distinguish the format of the file.  The point is that the standard interpreter can run both BlockContext and BlockClosure images but Cog can only run BlockClosure images.  So the bump of the image format marks images that use only BlockClosures.  So for me it's definitely the right thing to do to have a different image version.  If Cog could run images containing BlockContext blocks it would be different, but it can't.

cheers,
Eliot
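The resulting compatibility rules can be sketched using only the two format numbers that appear in this thread (6502 and 6505); the function is hypothetical, not actual VM code.

```c
/* Sketch only, modeling just the two format numbers from this thread:
   6502 (the closure-enabled interpreter format) and 6505 (the format
   Cog saves).  Function and enum names are made up. */
enum { ClosureFormat = 6502, CogFormat = 6505 };

static int vmCanReadImage(int vmFormat, int imageFormat)
{
    if (vmFormat == ClosureFormat)
        /* The standard interpreter runs both BlockContext and
           BlockClosure images, but not Cog-saved ones -- hence the
           console error quoted in the thread. */
        return imageFormat == ClosureFormat;
    if (vmFormat == CogFormat)
        /* Per the thread, a fresh 6502 image opened fine on Cog,
           which then saved it as 6505; Cog just cannot run
           BlockContext blocks. */
        return imageFormat == ClosureFormat || imageFormat == CogFormat;
    return 0;
}
```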
 

- Bert -









Re: [Pharo-project] [squeak-dev] Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Tudor Girba
In reply to this post by johnmci
Ok, let me explain better.

On 4.2.x (Mac OS X 10.5.8):
- ctrl+arrow = jump between words
- ctrl+shift+arrow = select up to the next word

On 5.8 (Mac OS X 10.5.8):
- ctrl+arrow = nothing happens
- ctrl+shift+arrow = nothing happens

I only mentioned regular Mac widgets for reference.

Cheers,
Doru


On 29 Aug 2010, at 23:16, John M McIntosh wrote:

> OK, you are mixing two issues here:
>
> (a) I need to make the 5.8 behaviour the same as the 4.2.x VM  
> behaviour.
>
> (b) Once we have the same behaviour then you are welcome to propose  
> changing the smalltalk text editor code to make the cursor and word
> selection dance however you or the community would like it to...
>
>
> On 2010-08-29, at 11:55 AM, Tudor Girba wrote:
>
>> Ahh, I see.
>>
>> I expect it to jump between words. On regular Mac applications, you  
>> get this behavior by pressing alt-arrow.
>>
>> And when I press shift-ctrl-arrow, I expect it to select up to the  
>> end of the word. On regular Mac applications, you get this behavior  
>> by pressing alt-shift-arrow.
>>
>> Of course, I would not mind if instead of ctrl we would have alt :).
>>
>> Cheers,
>> Doru
>>
>
> --
> ===========================================================================
> John M. McIntosh <[hidden email]>   Twitter:  squeaker68882
> Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
> ===========================================================================

--
www.tudorgirba.com

"Reasonable is what we are accustomed with."



Re: Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Bert Freudenberg
In reply to this post by Eliot Miranda-2

On 30.08.2010, at 03:17, Eliot Miranda wrote:



[the earlier exchange about the image format, quoted in full above, snipped]

But the reason for the change in the image format number is not to distinguish the format of the file.  The point is that the standard interpreter can run both BlockContext and BlockClosure images but Cog can only run BlockClosure images.  So the bump of the image format marks images that use only BlockClosures.  So for me it's definitely the right thing to do to have a different image version.  If Cog could run images containing BlockContext blocks it would be different, but it can't.

cheers,
Eliot

Well, we had intended some more cleanups to be done when we decide to change the image format.

In any case it would be nice to be able to open cog-saved images in a regular VM - I'm thinking of the ARM systems that might not get a Cog version soon.

- Bert -





Re: [Pharo-project] [squeak-dev] Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

johnmci
In reply to this post by johnmci

On 2010-08-29, at 11:10 AM, stephane ducasse wrote:

> John
>
> What I want to understand is what does it means to use open/GL.
> This means that you use open/GL to implement the primitive?
> Now I thought that opengl was more vector graphics than bitblt so how does it fit together.
> Is it because the mac UI is opengl based too?
>
> Stef

OK, let me give you some background, then talk about Open/GL.

The Squeak drawing logic invokes:
http://isqueak.org/displayioShowDisplay

Which is to say: copy this rectangle of data from the Squeak Display oops data pointer to something that visually shows the user what is going on. Since drawing can require a few Smalltalk-based calculations, resulting in several draw events that compose a final image, we also have

http://isqueak.org/ioForceDisplayUpdate

This helps the process a bit by allowing the drawing subsystem to compose the bits until we are done, then perform the expensive step of visualization.

In general displayioShowDisplay is really fast but ioForceDisplayUpdate is slow; displayioShowDisplay may not actually show the bits, or it may show them now (or later...).

Depending on which VM source code you look at (it is version- and platform-dependent), you may find that ioForceDisplayUpdate does nothing, or that an operating system flush is done to the hardware on every displayioShowDisplay.

Where you can see this issue is if you try:

Transcript cr;show:
        [| b |
                b _ Pen new.
                Display fillWhite.
                b place:(Display boundingBox bottomLeft).
                b hilbert: 9 side: 2
        ] timeToRun.
        Display restore.

If this crawls, taking say 30 seconds, then the implementation is flushing every bit draw to the operating system.
Or, if you don't see the bits, then maybe neither displayioShowDisplay nor ioForceDisplayUpdate does any flushing, and all you see is the Display restore results.
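The fast-show / slow-flush split John describes can be sketched as cheaply accumulating a union of dirty rectangles that a single later flush consumes. All names here are made up; this is not the actual VM code.

```c
/* Sketch only (names made up): displayioShowDisplay cheaply grows a
   union of dirty rectangles; ioForceDisplayUpdate consumes it once. */
typedef struct { int x, y, w, h; } Rect;

static Rect dirty;            /* union of rectangles since the last flush */
static int  haveDirty = 0;

/* The cheap call: just merge the new rectangle into the union. */
static void showDisplay(Rect r)
{
    if (!haveDirty) { dirty = r; haveDirty = 1; return; }
    int right  = dirty.x + dirty.w > r.x + r.w ? dirty.x + dirty.w : r.x + r.w;
    int bottom = dirty.y + dirty.h > r.y + r.h ? dirty.y + dirty.h : r.y + r.h;
    if (r.x < dirty.x) dirty.x = r.x;
    if (r.y < dirty.y) dirty.y = r.y;
    dirty.w = right  - dirty.x;
    dirty.h = bottom - dirty.y;
}

/* The expensive call: hand the accumulated union to the OS in one go. */
static Rect forceDisplayUpdate(void)
{
    Rect flushed = haveDirty ? dirty : (Rect){0, 0, 0, 0};
    haveDirty = 0;
    return flushed;           /* a real VM would blit/flush here */
}
```

Under this scheme the hilbert benchmark stays fast because each Pen stroke only grows a rectangle; nothing touches the OS until the one flush.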

Now, just to make life harder for the VM implementor, the Smalltalk code might not call ioForceDisplayUpdate.  Then the VM has to do the ioForceDisplayUpdate internally in order not to leave bits dangling.  How this is done again differs from VM to VM.  Usually this shows up as a double menu selection highlight.

For the 4.2.x series of Macintosh VMs we would trigger an operating system flush if more than 20 ms (a settable value) had elapsed, and I did nothing for ioForceDisplayUpdate.

But I changed this in 5.x and for the iPhone to make ioForceDisplayUpdate the trigger, with a timer that pops if an ioForceDisplayUpdate is not done within 20 ms of the last executed displayioShowDisplay.
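The 5.x fallback just described (flush anyway if no ioForceDisplayUpdate arrives within ~20 ms of the last show) can be sketched like this; the names and the millisecond clock source are assumptions, not the actual VM code.

```c
#include <stdint.h>

/* Sketch only: if the image never calls ioForceDisplayUpdate, a
   periodic timer forces the flush ~20 ms after the last show. */
enum { FlushDeadlineMs = 20 };     /* a settable value in the real VM */

static int64_t lastShowMs   = -1;  /* time of last displayioShowDisplay */
static int     flushPending = 0;

static void noteShowDisplay(int64_t nowMs)
{
    lastShowMs   = nowMs;
    flushPending = 1;
}

static void noteForceDisplayUpdate(void)
{
    flushPending = 0;              /* the image flushed for us */
}

/* Called from a periodic timer: should the VM force the flush itself? */
static int timerShouldFlush(int64_t nowMs)
{
    if (flushPending && nowMs - lastShowMs >= FlushDeadlineMs) {
        flushPending = 0;
        return 1;
    }
    return 0;
}
```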

Now, about Open/GL: in the past, for the OS X Carbon VM 4.2.x and earlier, we used System 7.5.x-era technology to draw bits, which was QuickDraw and Quartz.

When I moved the logic to the iPhone, that was no longer supported technology; because of the interesting drawing logic on the iPhone it took six attempts and a long chat with a graphics engineer at Apple. That resulted in using Core Animation to divide the screen into 16 tiles: when a draw happens we note which tile(s) are dirty based on the rectangle intersections, then on ioForceDisplayUpdate we generate images for each of the dirty tiles from the Display oops and tell Core Animation to redraw the new tiles.

Bert said this seemed slow on OS-X.

At this point the next step in our OS X/iOS drawing path is to drop one step lower down and consider Open/GL.

I must admit I've not done any open/GL work before so it was a learning opportunity.

Although you think of Open/GL as a vector-based graphics language, it does support what is known as textures.

So instead of providing vectors, you supply bits: a chunk of data, stating it's RGB at this depth, with this pixel layout and size, etc., to make a chunk of screen glow showing the Open/GL viewport.

Now there are lots of restrictions, but the GPUs and drivers have become more friendly, so you can supply arbitrarily sized rectangles; this was at one time slow, but GPUs have become extremely fast, so slow is fast...

In fact on the Mac you can supply an arbitrarily sized rectangle taken from a much larger rectangle of data, which fits perfectly into the displayioShowDisplay logic.

The only hassle is that you need to figure out how the flush should work.  So after 3 days of intense effort I can say the algorithm is...


displayioShowDisplay

        collects the union of the rectangles that are being drawn.  Nothing more happens... It's pointless to do the glTexImage2D here because 'b hilbert: 9 side: 2' will kill you.
        Mind, we do still watch for a missing ioForceDisplayUpdate.

ioForceDisplayUpdate

        then takes the union of the rectangles, sets a GL viewport, and does the glTexImage2D based on figuring out the start point of the top/left pixel of the rectangle to draw,
        then sets up the coordinate system and finally flushes the data.

        glViewport( subRect.origin.x, subRect.origin.y, subRect.size.width, subRect.size.height );

        /* r is the full Display rectangle; this finds the top/left pixel of subRect in the texture data */
        char *subimg = ((char*)lastBitsIndex) + (unsigned int)(subRect.origin.x + (r.size.height - subRect.origin.y - subRect.size.height)*r.size.width)*4;
        glTexImage2D( GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, subRect.size.width, subRect.size.height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, subimg );

        glBegin(GL_QUADS);   // The -1 is so we flip the coordinate system, as OS X and Squeak have different views of where (0,0) is...
        glTexCoord2f(0.0f, 0.0f);
        glVertex2f(-1.0f, 1.0f);
        glTexCoord2f(0.0f, subRect.size.height);
        glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(subRect.size.width, subRect.size.height);
        glVertex2f(1.0f, -1.0f);
        glTexCoord2f(subRect.size.width, 0.0f);
        glVertex2f(1.0f, 1.0f);
        glEnd();

        glFlush();  // Ask the hardware to draw this stuff; don't use the more aggressive glFinish()
        reset the union of draw rectangles.

        Hints:
        (a) Oddly, some people feel they should re-configure the Open/GL graphics context on *every* draw cycle. Why?
        (b) People don't read the Apple guidebooks on best practices for doing glTexImage2D; this I know based on Google searches using certain Apple Open/GL extension keywords.

Notes:

This assumes the Squeak display is 32 bits deep; other depths are an exercise for the reader.
Actually, if the row width is a multiple of 32 bytes then on the Mac things are *much* faster and no copy is made.  This is enforced in 5.7b5 by ensuring the window width is divisible by 8.

When the graphics context is set up as the window is built, there are a bunch of commands to execute; one important one is
glPixelStorei( GL_UNPACK_ROW_LENGTH, self.frame.size.width );
and when the window is resized there are a few things to do to indicate how the context has changed.
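The alignment note above is simple arithmetic: at 32 bits per pixel a row occupies width * 4 bytes, so rounding the window width up to a multiple of 8 makes every row a multiple of 32 bytes. A sketch, with a hypothetical helper name:

```c
/* Sketch only (helper name made up): at 32 bits per pixel a row is
   width * 4 bytes, so a width that is a multiple of 8 yields rows
   that are a multiple of 32 bytes -- the alignment that lets the Mac
   driver use the texture data without copying. */
static int alignedWindowWidth(int requestedWidth)
{
    return (requestedWidth + 7) & ~7;   /* round up to a multiple of 8 */
}
```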

Some platforms, like the iPhone, might need to do this instead, because they don't support the GL extension GL_TEXTURE_RECTANGLE_ARB:

 glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, subRect.size.width, subRect.size.height, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL );

 for( int y = 0; y < subRect.size.height; y++ )
 {
         /* stride is the full Display row width (r.size.width), not the subrectangle's */
         char *row = ((char*)lastBitsIndex) + ((y + subRect.origin.y)*r.size.width + subRect.origin.x) * 4;
         glTexSubImage2D( GL_TEXTURE_2D, 0, 0, y, subRect.size.width, 1, GL_RGBA, GL_UNSIGNED_BYTE, row );
 }



--
===========================================================================
John M. McIntosh <[hidden email]>   Twitter:  squeaker68882
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================








Re: [Pharo-project] [squeak-dev] Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Tudor Girba
Hi,

Interesting read, but I am still not sure I understand the implications, so let me ask you another question: is there a way to make use of OpenGL by generating vector graphics from within Pharo?

I am particularly interested in whether there are ways to improve visualization tools like Mondrian to make use of the hardware (and thus maybe have reasonable speed when drawing complex and maybe anti-aliased graphs).

Cheers,
Doru


On 2 Sep 2010, at 01:19, John M McIntosh wrote:

> [John's explanation of the Open/GL display path, quoted in full above, snipped]

--
www.tudorgirba.com

"One cannot do more than one can do."





Re: [Pharo-project] [squeak-dev] Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

johnmci
I think if you hunt in the archives you'll find people have attempted to replace Morphic with Open/GL calls via FFI.

Oh like http://www.squeaksource.com/AlienOpenGL

On 2010-09-02, at 8:48 AM, Tudor Girba wrote:

> Hi,
>
> Interesting read, but I am still not sure to understand the implications, so let me ask you another question: is there a way to make use of OpenGL by generating vector graphics from within Pharo?
>
> I am particularly interested if there are ways to improve visualization tools like Mondrian to make it use the hardware (and thus maybe have reasonable speed when drawing complex and maybe aliased graphs).
>
> Cheers,
> Doru

--
===========================================================================
John M. McIntosh <[hidden email]>   Twitter:  squeaker68882
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================








Re: [Pharo-project] Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Andreas.Raab
On 9/2/2010 11:30 AM, John M McIntosh wrote:
> I think if you hunt in the archives you'll find people have attempted to replace Morphic with Open/GL calls via FFI
>
> Oh like http://www.squeaksource.com/AlienOpenGL

Or http://www.squeaksource.com/CroquetGL (much more complete w/
extensions etc). Something like this might just work:

        (OpenGL new)
                glClearColor: 1.0 with: 0.0 with: 0.0 with: 1.0;
                swapBuffers.

Cheers,
   - Andreas

>
> On 2010-09-02, at 8:48 AM, Tudor Girba wrote:
>
>> Hi,
>>
>> Interesting read, but I am still not sure to understand the implications, so let me ask you another question: is there a way to make use of OpenGL by generating vector graphics from within Pharo?
>>
>> I am particularly interested if there are ways to improve visualization tools like Mondrian to make it use the hardware (and thus maybe have reasonable speed when drawing complex and maybe aliased graphs).
>>
>> Cheers,
>> Doru
>
> --
> ===========================================================================
> John M. McIntosh<[hidden email]>    Twitter:  squeaker68882
> Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
> ===========================================================================



Re: [Pharo-project] Experimental Cocoa OS-X based Squeak Cog JIT VM 5.8b4.

Tudor Girba
Thanks, it looks interesting.

Cheers,
Doru


On 2 Sep 2010, at 20:54, Andreas Raab wrote:

> [Andreas' reply, quoted in full above, snipped]

--
www.tudorgirba.com

"Value is always contextual."



