Hey all...
I'm still having problems with my external shared memory thingie. I can invoke the unix shm* calls and obtain a reference to a shared memory buffer and attach it to squeak's heap. I can write into it and read back. I can even attach the same shared memory to two different instances of squeak and write to the buffer in one and read it with the other.

What I still cannot figure out is how to take that external buffer and direct the OpenGL>>glReadPixels call to it so that I can draw using squeak and have it become a bitmap for a different application. I've tried to figure out how forms and bitmaps work. I've looked over a reasonably large bit of source code and read various bits of the wiki, example code, and whatnot. I still can't get glReadPixels to change the contents of my shared buffer, however. OTOH, I've become very good at getting Squeak to crash, sometimes silently and sometimes with the typical Mac OS X crash report dialog.

Anyone know how to simply take an arbitrary raw pointer and successfully direct glReadPixels to it? I've looked at the source for the Tweak-OpenGL code, but can't wrap my brain around how it is [correctly] using a bitmap as the target buffer for glReadPixels... or rather, how to properly create a bitmap from the raw pointer so that I can use the example code as a starting point.

http://www.croquetconsortium.org/index.php/Procedural_Texturing

thanks in advance.

Lawson
Lawson English wrote:
> I'm still having problems with my external shared memory thingie. I can
> invoke the unix shm* calls and obtain a reference to a shared memory
> buffer and attach it to squeak's heap. I can write into it and read
> back. I can even attach the same shared memory to two different
> instances of squeak and write to the buffer in one and read it with the
> other. What I still cannot figure out is how to take that external
> buffer and direct the OpenGL>>glReadPixels call to it so that I can draw
> using squeak and have it become a bitmap for a different application.

Post your code. Using glReadPixels is straightforward, so there must be something simple in your code that goes wrong. Simply speaking the following should work fine:

  "Allocate or obtain shared memory buffer"
  xHandle := ExternalHandle allocate: rect width * rect height * 4.

  "Convert it to ExternalData"
  xData := ExternalData fromHandle: xHandle type: ExternalType void asPointerType.

  "Call glReadPixels"
  ogl
    glReadPixels: rect left
    with: ogl extent y - rect bottom
    with: rect width
    with: rect height
    with: ogl imagePixelFormat32
    with: ogl imagePixelType32
    with: xData.

This should be all there is to it.

Cheers,
  - Andreas
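A quick way to sanity-check a sketch like the one above, assuming ExternalAddress>>byteAt: and byteAt:put: behave as they do later in this thread, is to plant a known value in the buffer before the read and confirm that glReadPixels overwrites it:

  "Sanity check (sketch): plant a sentinel byte, read pixels, then see whether GL overwrote it.
   Assumes xHandle, xData, ogl and rect are set up exactly as in the snippet above."
  xHandle byteAt: 1 put: 255.              "sentinel value at the start of the buffer"
  ogl
    glReadPixels: rect left
    with: ogl extent y - rect bottom
    with: rect width
    with: rect height
    with: ogl imagePixelFormat32
    with: ogl imagePixelType32
    with: xData.
  xHandle byteAt: 1.                       "print this: if it is still 255 after clearing to a
                                            non-black color, the read never reached the buffer"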
Andreas Raab wrote:
> Lawson English wrote: >> I'm still having problems with my external shared memory thingie. I >> can evoke the unix shm* calls and obtain a reference to a shared >> memory buffer and attach it to squeak's heap. I can write into it and >> read back. I can even attach the same shared memory to two different >> instances of squeak and write to the buffer in one and read it with >> the other. What I still cannot figure out is how to take that >> external buffer and direct the OpenGL>>glReadPixels call to it so >> that I can draw using squeak and have it become a bitmap for a >> different application. > > Post your code. Using glReadPixels is straightforward, so there must > be something simple in your code that goes wrong. Simply speaking the > following should work fine: > > "Allocate or obtain shared memory buffer" > xHandle := ExternalHandle allocate: rect width * rect height * 4. > Confused. ExternalHandle is...? Lawson |
Lawson English wrote:
> Andreas Raab wrote:
>> Lawson English wrote:
>>> I'm still having problems with my external shared memory thingie. I
>>> can invoke the unix shm* calls and obtain a reference to a shared
>>> memory buffer and attach it to squeak's heap. I can write into it and
>>> read back. I can even attach the same shared memory to two different
>>> instances of squeak and write to the buffer in one and read it with
>>> the other. What I still cannot figure out is how to take that
>>> external buffer and direct the OpenGL>>glReadPixels call to it so
>>> that I can draw using squeak and have it become a bitmap for a
>>> different application.
>>
>> Post your code. Using glReadPixels is straightforward, so there must
>> be something simple in your code that goes wrong. Simply speaking the
>> following should work fine:
>>
>> "Allocate or obtain shared memory buffer"
>> xHandle := ExternalHandle allocate: rect width * rect height * 4.
>
> Confused. ExternalHandle is...?

Sorry, typo. Must say ExternalAddress.

Cheers,
  - Andreas
Hi Andreas. Even if I use a normally allocated bit of memory, I can't get it to work:

  xHandle := ExternalAddress allocate: 100 * 100 * 4.    "create external buffer handle"
  xData := ExternalData fromHandle: xHandle type: ExternalType void asPointerType.    "convert for use with glReadPixels"

  ogl := OpenGL newIn: (0@0 extent: 100@100).            "create ogl context"

  ogl glClearColor(0.2, 0.9, 0.7, 0.5).                  "set clear color"
  ogl glClear(16r4000).                                   "clear backbuffer"
  ogl swapBuffers.                                        "draw to screen"

  ogl                                                     "read pixels into my buffer"
    glReadPixels: 0
    with: -100
    with: 100
    with: 100
    with: ogl imagePixelFormat32
    with: ogl imagePixelType32
    with: xData.

  xHandle byteAt: 1000          => 0     "print byte value in my buffer???"
  xHandle byteAt: 1000 put: 10           "manually set byte value in my buffer"
  xHandle byteAt: 1000          => 10    "print byte value in my buffer???????"


Andreas Raab wrote:
> Lawson English wrote:
>> I'm still having problems with my external shared memory thingie. I
>> can invoke the unix shm* calls and obtain a reference to a shared
>> memory buffer and attach it to squeak's heap. I can write into it and
>> read back. I can even attach the same shared memory to two different
>> instances of squeak and write to the buffer in one and read it with
>> the other. What I still cannot figure out is how to take that
>> external buffer and direct the OpenGL>>glReadPixels call to it so
>> that I can draw using squeak and have it become a bitmap for a
>> different application.
>
> Post your code. Using glReadPixels is straightforward, so there must
> be something simple in your code that goes wrong. Simply speaking the
> following should work fine:
>
> "Allocate or obtain shared memory buffer"
> xHandle := ExternalHandle allocate: rect width * rect height * 4.
>
> "Convert it to ExternalData"
> xData := ExternalData fromHandle: xHandle type: ExternalType void asPointerType.
>
> "Call glReadPixels"
> ogl
>   glReadPixels: rect left
>   with: ogl extent y - rect bottom
>   with: rect width
>   with: rect height
>   with: ogl imagePixelFormat32
>   with: ogl imagePixelType32
>   with: xData.
>
> This should be all there is to it.
>
> Cheers,
>   - Andreas
Lawson, I have not been following this thread closely so I'm probably wrong but should you be using "xData byteAt: ..."? xHandle doesn't make sense to me.


On 11/02/2010 15:45, Lawson English wrote:
> Hi Andreas. Even if I use a normally allocated bit of memory, I can't
> get it to work:
>
>   xHandle := ExternalAddress allocate: 100 * 100 * 4.    "create external buffer handle"
>   xData := ExternalData fromHandle: xHandle type: ExternalType void asPointerType.    "convert for use with glReadPixels"
>
>   ogl := OpenGL newIn: (0@0 extent: 100@100).            "create ogl context"
>
>   ogl glClearColor(0.2, 0.9, 0.7, 0.5).                  "set clear color"
>   ogl glClear(16r4000).                                   "clear backbuffer"
>   ogl swapBuffers.                                        "draw to screen"
>
>   ogl                                                     "read pixels into my buffer"
>     glReadPixels: 0
>     with: -100
>     with: 100
>     with: 100
>     with: ogl imagePixelFormat32
>     with: ogl imagePixelType32
>     with: xData.
>
>   xHandle byteAt: 1000          => 0     "print byte value in my buffer???"
>   xHandle byteAt: 1000 put: 10           "manually set byte value in my buffer"
>   xHandle byteAt: 1000          => 10    "print byte value in my buffer???????"
>
> Andreas Raab wrote:
>> Lawson English wrote:
>>> I'm still having problems with my external shared memory thingie. I
>>> can invoke the unix shm* calls and obtain a reference to a shared
>>> memory buffer and attach it to squeak's heap. I can write into it and
>>> read back. I can even attach the same shared memory to two different
>>> instances of squeak and write to the buffer in one and read it with
>>> the other. What I still cannot figure out is how to take that
>>> external buffer and direct the OpenGL>>glReadPixels call to it so
>>> that I can draw using squeak and have it become a bitmap for a
>>> different application.
>>
>> Post your code. Using glReadPixels is straightforward, so there must
>> be something simple in your code that goes wrong. Simply speaking the
>> following should work fine:
>>
>> "Allocate or obtain shared memory buffer"
>> xHandle := ExternalHandle allocate: rect width * rect height * 4.
>>
>> "Convert it to ExternalData"
>> xData := ExternalData fromHandle: xHandle type: ExternalType void asPointerType.
>>
>> "Call glReadPixels"
>> ogl
>>   glReadPixels: rect left
>>   with: ogl extent y - rect bottom
>>   with: rect width
>>   with: rect height
>>   with: ogl imagePixelFormat32
>>   with: ogl imagePixelType32
>>   with: xData.
>>
>> This should be all there is to it.
>>
>> Cheers,
>>   - Andreas
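For what it's worth, the two refer to the same memory — a minimal check, assuming ExternalData keeps the address it was created from as its handle (via ExternalObject>>getHandle):

  "xData is only a typed wrapper around the ExternalAddress it was built from,
   so peeking through xHandle inspects the very buffer glReadPixels writes into."
  xData getHandle == xHandle.    "=> true (assuming the fromHandle:type: construction above)"
  xHandle byteAt: 1000.          "reads from that same buffer either way"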
Lawson English wrote:
> ogl                     "read pixels into my buffer"
>   glReadPixels: 0
>   with: -100
>   with: 100
>   with: 100
>   with: ogl imagePixelFormat32
>   with: ogl imagePixelType32
>   with: xData.

Reading from -100? That would be offscreen. Try replacing the -100 with 0 and you'll get a lot further.

Cheers,
  - Andreas

> xHandle byteAt: 1000          => 0     "print byte value in my buffer???"
> xHandle byteAt: 1000 put: 10           "manually set byte value in my buffer"
> xHandle byteAt: 1000          => 10    "print byte value in my buffer???????"
>
> Andreas Raab wrote:
>> Lawson English wrote:
>>> I'm still having problems with my external shared memory thingie. I
>>> can invoke the unix shm* calls and obtain a reference to a shared
>>> memory buffer and attach it to squeak's heap. I can write into it and
>>> read back. I can even attach the same shared memory to two different
>>> instances of squeak and write to the buffer in one and read it with
>>> the other. What I still cannot figure out is how to take that
>>> external buffer and direct the OpenGL>>glReadPixels call to it so
>>> that I can draw using squeak and have it become a bitmap for a
>>> different application.
>>
>> Post your code. Using glReadPixels is straightforward, so there must
>> be something simple in your code that goes wrong. Simply speaking the
>> following should work fine:
>>
>> "Allocate or obtain shared memory buffer"
>> xHandle := ExternalHandle allocate: rect width * rect height * 4.
>>
>> "Convert it to ExternalData"
>> xData := ExternalData fromHandle: xHandle type: ExternalType void asPointerType.
>>
>> "Call glReadPixels"
>> ogl
>>   glReadPixels: rect left
>>   with: ogl extent y - rect bottom
>>   with: rect width
>>   with: rect height
>>   with: ogl imagePixelFormat32
>>   with: ogl imagePixelType32
>>   with: xData.
>>
>> This should be all there is to it.
>>
>> Cheers,
>>   - Andreas
Andreas Raab wrote:
> Lawson English wrote:
>> ogl                     "read pixels into my buffer"
>>   glReadPixels: 0
>>   with: -100
>>   with: 100
>>   with: 100
>>   with: ogl imagePixelFormat32
>>   with: ogl imagePixelType32
>>   with: xData.
>
> Reading from -100? That would be offscreen. Try replacing the -100
> with 0 and you'll get a lot further.
>
> Cheers,
>   - Andreas

>>> with: ogl extent y - rect bottom

Somehow I was thinking that ogl extent y - rect bottom = -100. I'm pretty sure my original test used 0, but no matter. Once I substituted my code with your code, it worked. Not only that, but when I reserved shared memory, it worked as well.

So, the acid test: I fired up a second image in a second instance of cobalt and connected the buffer to that image:

  in cobalt.image:        anAddress byteAt: 1000  => 128
  in cobalt.image copy:   anAddress byteAt: 1000  => 128

So, I can now draw using squeak opengl calls and they are now accessible in another process. Closer.

Thanks very much.

Lawson
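For the record, the arithmetic behind that y coordinate — a small worked example, assuming Squeak's usual Rectangle protocol where bottom answers the corner y — comes out to 0 for this viewport, not -100:

  "Worked example for the 100x100 case above: rect is assumed to be 0@0 extent: 100@100
   and ogl extent to be 100@100, matching the earlier snippet."
  rect := 0@0 extent: 100@100.
  rect bottom.                   "=> 100, the corner y"
  100 - rect bottom.             "=> 0; ogl extent y - rect bottom flips from Squeak's
                                  top-left origin to OpenGL's bottom-left origin"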
Lawson English wrote:
> So, I can now draw using squeak opengl calls and they are now
> accessible in another process.

Benchmark:

  Time millisecondsToRun: [1000 timesRepeat: [
    ogl                    "read pixels into my buffer"
      glReadPixels: 0
      with: 0
      with: 1000
      with: 1000
      with: ogl imagePixelFormat32
      with: ogl imagePixelType32
      with: xData.
  ]]   => 5930

:-)

Lawson
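To put that figure in perspective, here is a back-of-the-envelope reading of the benchmark, assuming a 1000x1000 viewport at 4 bytes per pixel as above:

  "Rough throughput implied by the 5930 ms result."
  1000 * 1000 * 4.                  "=> 4000000 bytes, i.e. about 4 MB read back per frame"
  5930 / 1000.0.                    "=> 5.93 ms per glReadPixels call"
  1000 / 5.93.                      "=> roughly 169 readbacks per second"
  (1000 / 5.93) * 4000000 / 1e6.    "=> roughly 675 MB of pixel data per second"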
Very cool, nice work!
If it turns out that you need more CPU cycles, you might try using PBOs to asynchronously read the pixels... when you call glReadPixels(), the function does not return until the pixels have been completely read, and while you're waiting for the pixels to cross the PCI-E bus, Squeak isn't doing any processing. If you're reading back 30fps at 5ms per frame (according to your measurements below), you're sitting idle about 15% of the time.

Search for "Asynchronous glReadPixels:" in the PBO extension spec (http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt). Google will turn up many more tutorials.

It probably won't be your top priority right now, but I wanted to ensure that you're aware of the option.

Cheers,
Josh


On Feb 11, 2010, at 1:54 PM, Lawson English wrote:

> Lawson English wrote:
>> So, I can now draw using squeak opengl calls and they are now
>> accessible in another process.
>
> Benchmark:
>
>   Time millisecondsToRun: [1000 timesRepeat: [
>     ogl                    "read pixels into my buffer"
>       glReadPixels: 0
>       with: 0
>       with: 1000
>       with: 1000
>       with: ogl imagePixelFormat32
>       with: ogl imagePixelType32
>       with: xData.
>   ]]   => 5930
>
> :-)
>
> Lawson
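In wrapper-call terms, the PBO readback Josh describes would look roughly like the sketch below. It is only a sketch: it assumes the Croquet OpenGL wrapper exposes the buffer-object entry points (glGenBuffers, glBindBuffer, glBufferData, glMapBuffer, glUnmapBuffer) in the same keyword form it uses for glReadPixels: — worth checking in your image, as they may only exist with an ARB suffix or not at all. The hex constants are the standard GL enum values, written the way the thread already writes 16r4000 for GL_COLOR_BUFFER_BIT.

  "Asynchronous readback through a pixel buffer object -- sketch only.
   Assumed constants: 16r88EB = GL_PIXEL_PACK_BUFFER, 16r88E1 = GL_STREAM_READ,
   16r88B8 = GL_READ_ONLY."
  pboIds := WordArray new: 1.
  ogl glGenBuffers: 1 with: pboIds.                  "create one buffer object"
  pbo := pboIds at: 1.

  ogl glBindBuffer: 16r88EB with: pbo.               "bind it as the pixel-pack target"
  ogl glBufferData: 16r88EB with: 1000 * 1000 * 4 with: nil with: 16r88E1.   "allocate storage"

  "With a pack buffer bound, the last glReadPixels argument is a byte offset into
   the PBO (0 here), and the call returns without waiting for the transfer."
  ogl
    glReadPixels: 0 with: 0 with: 1000 with: 1000
    with: ogl imagePixelFormat32
    with: ogl imagePixelType32
    with: 0.

  "... do other Squeak-side work while the DMA transfer completes ..."

  "Later: map the PBO and copy its contents into the shared-memory buffer (xHandle)."
  mapped := ogl glMapBuffer: 16r88EB with: 16r88B8.  "answers a pointer to the pixels"
  "copy 1000 * 1000 * 4 bytes from mapped into xHandle here"
  ogl glUnmapBuffer: 16r88EB.
  ogl glBindBuffer: 16r88EB with: 0.                 "unbind so ordinary reads work again"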
Josh Gargus wrote:
> Very cool, nice work!
>
> If it turns out that you need more CPU cycles, you might try using PBOs
> to asynchronously read the pixels... when you call glReadPixels(), the
> function does not return until the pixels have been completely read, and
> while you're waiting for the pixels to cross the PCI-E bus, Squeak isn't
> doing any processing. If you're reading back 30fps at 5ms per frame
> (according to your measurements below), you're sitting idle about 15% of
> the time.
>
> Search for "Asynchronous glReadPixels:" in the PBO extension spec
> (http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt).
> Google will turn up many more tutorials.
>
> It probably won't be your top priority right now, but I wanted to ensure
> that you're aware of the option.
>
> Cheers,
> Josh

Thanks. I was aware of PBOs and the like but didn't want to get into them until I got something simple working. Am I misunderstanding 1000x1000x1000x4/6?

I get more like 160fps x 1 megapixel raw blitting speed, which sounds fast until one realizes that not much else is going on, period. BUT it is fast enough to justify things like testing the rendering of cobalt in one instance of squeak while displaying/manipulating in another, if you have a multi-core system. What happens to parallel processing speeds in squeak when shared memory is used instead of streaming over sockets a la teatime?


> On Feb 11, 2010, at 1:54 PM, Lawson English wrote:
>
>> Lawson English wrote:
>>> So, I can now draw using squeak opengl calls and they are now
>>> accessible in another process.
>>
>> Benchmark:
>>
>>   Time millisecondsToRun: [1000 timesRepeat: [
>>     ogl                    "read pixels into my buffer"
>>       glReadPixels: 0
>>       with: 0
>>       with: 1000
>>       with: 1000
>>       with: ogl imagePixelFormat32
>>       with: ogl imagePixelType32
>>       with: xData.
>>   ]]   => 5930
>>
>> :-)
>>
>> Lawson
Lawson English wrote:
> Josh Gargus wrote:
>> Very cool, nice work!
>>
>> If it turns out that you need more CPU cycles, you might try using
>> PBOs to asynchronously read the pixels... when you call
>> glReadPixels(), the function does not return until the pixels have
>> been completely read, and while you're waiting for the pixels to
>> cross the PCI-E bus, Squeak isn't doing any processing. If you're
>> reading back 30fps at 5ms per frame (according to your measurements
>> below), you're sitting idle about 15% of the time.
>>
>> Search for "Asynchronous glReadPixels:" in the PBO extension spec
>> (http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt).
>> Google will turn up many more tutorials.
>>
>> It probably won't be your top priority right now, but I wanted to
>> ensure that you're aware of the option.
>>
>> Cheers,
>> Josh
>
> Thanks. I was aware of PBOs and the like but didn't want to get into
> them until I got something simple working. Am I misunderstanding
> 1000x1000x1000x4/6?
>
> I get more like 160fps x 1 megapixel raw blitting speed, which sounds
> fast until one realizes that not much else is going on, period.

doh 160 MB/sec ~ 30fps, like you said. LOL
On Feb 11, 2010, at 10:33 PM, Lawson English wrote:

> Lawson English wrote:
>> Josh Gargus wrote:
>>> Very cool, nice work!
>>>
>>> If it turns out that you need more CPU cycles, you might try using
>>> PBOs to asynchronously read the pixels... when you call
>>> glReadPixels(), the function does not return until the pixels have
>>> been completely read, and while you're waiting for the pixels to
>>> cross the PCI-E bus, Squeak isn't doing any processing. If you're
>>> reading back 30fps at 5ms per frame (according to your measurements
>>> below), you're sitting idle about 15% of the time.
>>>
>>> Search for "Asynchronous glReadPixels:" in the PBO extension spec
>>> (http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt).
>>> Google will turn up many more tutorials.
>>>
>>> It probably won't be your top priority right now, but I wanted to
>>> ensure that you're aware of the option.
>>>
>>> Cheers,
>>> Josh
>>
>> Thanks. I was aware of PBOs and the like but didn't want to get into
>> them until I got something simple working.

Always the right way to proceed.

>> Am I misunderstanding 1000x1000x1000x4/6?

I'm not sure what you mean by this. Maybe what I write below will clear up your question.

>> I get more like 160fps x 1 megapixel raw blitting speed, which sounds
>> fast until one realizes that not much else is going on, period.
>
> doh 160 MB/sec ~ 30fps, like you said. LOL

Ha ha, only by pure coincidence. I picked a "typical" frame rate out of the air (maybe your app would require 60fps, or maybe only 15fps); to be more clear, I should have written "If you're reading back, say, 30fps...". By your measurements it takes about 5ms to read back a frame, so if you read back 30 every second it will take about 150ms just waiting on glReadPixels. That's 15% of your time (30% at 60fps, and 7.5% at 15fps).

Cheers,
Josh