On Feb 1, 2008 7:21 AM, Igor Stasenko <[hidden email]> wrote:
For the holo-projector example, you need "architecture". For example, consider this ASCII-art layered architecture for a GUI:

         Application
              |
         ToolBuilder
           /       \
    2-D Widgets   3-D Widgets
         |             |
       Canvas    OpenGL or something
         |
    BitBlt, Cairo, etc.

Of course, there's a lot more to it. I believe (and I'm putting words in Juan's mouth here) that Morphic 3 is primarily a 2-D GUI.

In terms of hardware support, the Canvas class (currently used by Morphic for drawing everything) needs to be rethought. I've got a preliminary brain dump here: http://gulik.pbwiki.com/Canvas. Morphic 2 (i.e. the one in Squeak now) isn't very smart about how it draws stuff; it's very slow. BitBlt is capable of a lot more. Also, the underlying layers of the architecture (BitBlt particularly) aren't smart about rendering. The X Windows implementation of Squeak, for example, only uses (AFAIK) a single bit-mapped "window". The X Window System can do a lot more, such as vector graphics and multiple windows.

I suspect that the VNC implementation doesn't cache bitmaps on the client, although this is pure speculation.

I would change Canvas by:

- Allowing a canvas to have movable sub-canvases. These would map 1:1 to "windows" (i.e. drawable areas without borders or title bars) in the X Window System, or cached bitmaps in VNC, or display lists/textures in OpenGL. These could be moved around the screen cheaply by changing only the location of the sub-canvas.

- Making canvases implementable as bitmaps or as vector graphics/display lists; the application doesn't need to know which implementation is actually used.

- Introducing a "needsRedraw" system of some sort. A Canvas implementation may or may not cache its contents (as a bitmap or as vector graphics/a display list). Various implementations may discard the cached contents at times, or perhaps not cache content at all.

- Using micrometers rather than pixels as the unit of measurement, and providing a "pixelPitch" method that returns the size of a pixel. For example, my screen has a pixel pitch of 282 micrometers; a 600 dpi printer would have a pixel pitch of around 42 micrometers. You could use a SmallInteger to store micrometer values.

- Introducing, somehow, an event system closely coupled to a Canvas (because some events have coordinates relative to a canvas).

- Somehow supporting remotely cached bitmaps. I haven't thought about this yet.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
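A minimal Smalltalk sketch of two of these proposals (the movable sub-canvases and the micrometer units) might look as follows. All selectors here are hypothetical illustrations, not part of the existing Squeak Canvas protocol:

    Canvas >> newSubCanvasAt: originPoint extent: extentPoint
        "Answer a movable sub-canvas. A backend might map it to an X
        window, a cached bitmap in VNC, or a display list/texture in
        OpenGL. Hypothetical selector."
        ^ self subclassResponsibility

    Canvas >> pixelPitch
        "Answer the size of one device pixel in micrometers. A typical
        screen would answer about 282; a 600 dpi printer about 42
        (25400 / 600, rounded)."
        ^ self subclassResponsibility

    Canvas >> micrometersToPixels: aDistance
        "Convert a device-independent distance in micrometers (a
        SmallInteger) into pixels on this particular device."
        ^ (aDistance / self pixelPitch) rounded

Moving a sub-canvas would then only update its origin; its cached contents would never need to be re-sent or redrawn.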
On 31/01/2008, Michael van der Gulik <[hidden email]> wrote:
> On Feb 1, 2008 7:21 AM, Igor Stasenko <[hidden email]> wrote:
> > On 31/01/2008, Bert Freudenberg <[hidden email]> wrote:
> > > I am beginning to understand your point :) Yes, having that power
> > > in the base system would be cool. I still think it can be
> > > implemented on latest-gen OpenGL hardware (which can do the
> > > non-linear transform and adaptively tessellate curves to pixel
> > > resolution), but that would then be just an optimization.
> >
> > What I'm against is binding the rendering subsystem to specific
> > hardware. There should be a layer which offers rendering services
> > to the application, and a number of layers to deliver the graphics
> > to the device(s). Ideally, it should be able to render using any
> > device: a screen, a printer, or a remote (networked) canvas.
> > There can also be different options for what the rendering medium
> > is: it's wrong to assume that the rendering surface is planar (it
> > could be a 3-D holo-projector, for instance).
> > What is hard is to design such a system to be fast and optimal and
> > still generic enough to render anywhere.
>
> For the holo-projector example, you need "architecture". For example,
> consider this ASCII-art layered architecture for a GUI:
>
>          Application
>               |
>          ToolBuilder
>            /       \
>     2-D Widgets   3-D Widgets
>          |             |
>        Canvas    OpenGL or something
>          |
>     BitBlt, Cairo, etc.

I simply can't accept this. A GUI-building architecture should be trivial, not branching like a tree:

    Application
         |
    ToolBuilder
         |
      Widgets
         |
      Canvas
         |
    Device/Surface

Why keep 2-D and 3-D apart? What I like about OpenGL is that it can handle both 2-D and 3-D drawing primitives, so there is no need to use another library to make your content 3-D aware.

As for Morphic 3, if we are talking about coordinate systems: a canvas should accept drawing commands in any coordinate system (be it 1-D, 2-D, 3-D, logarithmic or whatever). Then, in a uniform way, it should translate those drawing commands into the ones understood by the device on which the drawing is performed. No separation is needed! That's why I coded my small GLCanvas: to show that there is no need for a special context to draw 3-D in an application. You can use the same interface for drawing different things, be they 2-D or 3-D. Moreover, you don't even need to create a separate drawing context (such as an OS window) to draw 3-D widgets. If we follow the separated design instead, we will end up with crappy applications and crappy architecture.
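A short Smalltalk sketch of the uniform interface being argued for here, loosely inspired by the GLCanvas experiment mentioned above. The selectors are invented for illustration and make no claim to match the actual GLCanvas code:

    | canvas |
    canvas := GLCanvas on: Display.    "hypothetical constructor"

    "A 2-D primitive: a flat rectangle, as any 2-D widget would draw it."
    canvas fillRectangle: (10 @ 10 corner: 110 @ 40) color: Color gray.

    "A 3-D primitive issued through the very same canvas; no separate
    drawing context and no second library involved. Hypothetical selector."
    canvas drawMesh: aButtonMesh atX: 10 y: 60 z: 0.

The point of such a design is that the canvas, not the application, translates both kinds of command into whatever the underlying device understands.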
> Of course, there's a lot more to it. I believe (and I'm putting words
> in Juan's mouth here) that Morphic 3 is primarily a 2-D GUI.
>
> In terms of hardware support, the Canvas class (currently used by
> Morphic for drawing everything) needs to be rethought. I've got a
> preliminary brain dump here: http://gulik.pbwiki.com/Canvas. Morphic 2
> (i.e. the one in Squeak now) isn't very smart about how it draws
> stuff; it's very slow. BitBlt is capable of a lot more. Also, the
> underlying layers of the architecture (BitBlt particularly) aren't
> smart about rendering. The X Windows implementation of Squeak, for
> example, only uses (AFAIK) a single bit-mapped "window". The X Window
> System can do a lot more, such as vector graphics and multiple
> windows.
>
> I suspect that the VNC implementation doesn't cache bitmaps on the
> client, although this is pure speculation.

Well, there is a CachingCanvas in the current Morphic system. Too bad it isn't used; it feels to me that developers simply miss the point of using CachingCanvas instead of manually creating Forms to persist intermediate drawing results. So they draw on Forms (binding themselves to pixel-blitting operations), and then use blitting again to draw those cached Forms onto the display surface. A smarter canvas interface wouldn't hurt :)

> I would change Canvas by:
>
> - Allowing a canvas to have movable sub-canvases. These would map 1:1
>   to "windows" (i.e. drawable areas without borders or title bars) in
>   the X Window System, or cached bitmaps in VNC, or display
>   lists/textures in OpenGL. These could be moved around the screen
>   cheaply by changing only the location of the sub-canvas.
>
> - Making canvases implementable as bitmaps or as vector
>   graphics/display lists; the application doesn't need to know which
>   implementation is actually used.

Exactly. The application should not assume that the output surface is planar and pixel-based. This should be handled at the lower levels (canvas/device) and never show up at the application level.

> - Introducing a "needsRedraw" system of some sort. A Canvas
>   implementation may or may not cache its contents (as a bitmap or as
>   vector graphics/a display list). Various implementations may
>   discard the cached contents at times, or perhaps not cache content
>   at all.

Yes, cached content can be sent to the device multiple times. But if we are talking about a generic architecture, you can't have any sort of redraw, because you can't redraw an already printed page on a printer; you can only print a new one :) And I'm strongly for keeping this straight: once a drawing command is sent, there is no way back. You should not manipulate device state in that manner, because many devices simply can't return to a previous state, or doing so would take so many resources and so much time that it would be a performance killer.

For example, there is a LensMorph in Squeak which grabs pixels from the screen and then transforms them to create a fancy effect. This should not be allowed. If you need 'post-draw' effects, you should cache the intermediate drawing results somewhere and then issue commands to draw the final result, not manipulate pixels directly, because that simply breaks the drawing chain on the device. Besides, manual pixel manipulation can be much less efficient than the capabilities of the device (such as a video card).

> - Using micrometers rather than pixels as the unit of measurement,
>   and providing a "pixelPitch" method that returns the size of a
>   pixel. For example, my screen has a pixel pitch of 282 micrometers;
>   a 600 dpi printer would have a pixel pitch of around 42
>   micrometers. You could use a SmallInteger to store micrometer
>   values.
>
> - Introducing, somehow, an event system closely coupled to a Canvas
>   (because some events have coordinates relative to a canvas).
>
> - Somehow supporting remotely cached bitmaps. I haven't thought about
>   this yet.

Simply issue a 'create cached canvas' command, then issue drawing commands to it, and from then on use the cached canvas handle to manipulate its contents. Again: there is no need to tie yourself to any sort of bitmap. Compare the bandwidth needed to send a Rect(0,0,1000,1000) command with the bandwidth needed to send a 1000*1000 bitmap.

> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/

--
Best regards,
Igor Stasenko AKA sig.
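A sketch of the command-based caching described at the end of this message, again with hypothetical selectors: the client creates a cached canvas once, then refers to it by its handle:

    | cache |
    cache := remoteCanvas createCachedCanvasExtent: 1000 @ 1000.
    cache fillRectangle: (0 @ 0 corner: 1000 @ 1000) color: Color blue.

    "Every later frame sends one short command instead of re-sending a
    1000x1000 bitmap across the network."
    remoteCanvas drawCachedCanvas: cache at: 0 @ 0.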
On Feb 2, 2008 2:03 AM, Igor Stasenko <[hidden email]> wrote:
Because 2-D widgets are composed of lines and pixels placed on a 2-D canvas. A button is a 2-D rectangle with a border and some black text.

3-D widgets on the other hand might be rendered... well... in 3-D. For example, you could make a button that is a very nice curved 3-D object that casts a slight shadow on the area of the window just below it, with actual 3-D embossed text, and has just enough subtle specular reflection added to it that you could swear you could see your own face in it. This is the sort of CPU-wasting stuff that would make Steve Jobs want to lick his screen.

2-D widgets and 3-D widgets in this example would need different implementations.
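As a sketch of that difference, here is how the two kinds of button might draw themselves. The 2-D version uses the existing Squeak Canvas protocol; the 3-D class and its selectors are entirely invented:

    FlatButton >> drawOn: aCanvas
        "2-D: a bordered rectangle plus some black text."
        aCanvas
            frameAndFillRectangle: self bounds
            fillColor: Color lightGray
            borderWidth: 1
            borderColor: Color black.
        aCanvas
            drawString: self label
            at: self bounds topLeft + (4 @ 4)
            font: nil
            color: Color black

    EmbossedButton >> drawOn: a3DCanvas
        "3-D: a curved mesh with a shadow, embossed text and specular
        highlights. All of these selectors are hypothetical."
        a3DCanvas drawMesh: self curvedFaceMesh.
        a3DCanvas castShadowBelow: self bounds.
        a3DCanvas drawEmbossedString: self label depth: 2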
I don't think I explained this one very well. I was referring to the Canvas either setting a flag in itself saying "I'm dirty and need to be redrawn", which would be polled by the application, or sending an event to the application saying "please redraw yourself on me!". This would happen, for example, when a canvas is uncovered (e.g. by a window move/resize) and the content was not cached. Of course, printers and fully cached implementations would never need to do this.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
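A small sketch of the two notification styles described here; needsRedraw and whenExposedDo: are hypothetical selectors:

    "Style 1: the application polls a dirty flag on the canvas."
    canvas needsRedraw ifTrue: [window fullDrawOn: canvas].

    "Style 2: the canvas pushes an event when its uncached contents are
    lost, e.g. after being uncovered by a window move or resize."
    canvas whenExposedDo: [:c | window fullDrawOn: c].

A printer canvas, or one that fully caches its contents, would simply never set the flag or trigger the event.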
On 02/02/2008, Michael van der Gulik <[hidden email]> wrote:
> On Feb 2, 2008 2:03 AM, Igor Stasenko <[hidden email]> wrote:
> >
> > Why keep 2-D and 3-D apart? What I like about OpenGL is that it can
> > handle both 2-D and 3-D drawing primitives, so there is no need to
> > use another library to make your content 3-D aware.
>
> Because 2-D widgets are composed of lines and pixels placed on a 2-D
> canvas. A button is a 2-D rectangle with a border and some black
> text.
>
> 3-D widgets on the other hand might be rendered... well... in 3-D.
> For example, you could make a button that is a very nice curved 3-D
> object that casts a slight shadow on the area of the window just
> below it, with actual 3-D embossed text, and has just enough subtle
> specular reflection added to it that you could swear you could see
> your own face in it. This is the sort of CPU-wasting stuff that would
> make Steve Jobs want to lick his screen.
>
> 2-D widgets and 3-D widgets in this example would need different
> implementations.

Yes, this would require issuing different commands to the canvas, but it does not require separate rendering pipelines.

--
Best regards,
Igor Stasenko AKA sig.