Hi all.
I'm about to start work on a project called "Subcanvas", which will be a
refactoring of the Canvas class. The intention is that this forms a platform
for other GUI projects, including a possible future version of Morphic. It
will be a general 2-D drawing API and event handling system.

Is anybody interested in this? Does anybody have any comments on any of the
following? Are there any annoyances with the current Canvas class that
people want to rant about?

Current "design sketches" are at http://gulik.pbwiki.com/Canvas.

This API will be the main graphics / event-handling API for my SecureSqueak
project (http://gulik.pbwiki.com/SecureSqueak). It is likely that Morphic
will be ported to it at some stage. The code will be written using
Namespaces and my own Package system, and it will be the first real trial of
my Namespaces architecture.

Features of it are:

* Some canvases can have child canvases, each with a z-index. These could be
used, e.g., to implement movable windows, sprites, clipped scrollable areas,
or flyweight graphics. This will use the underlying graphics system's
capabilities.

* The Canvas will be a general abstraction for underlying 2-D vector-based
or raster-based drawing APIs - e.g. Forms/BitBlt, OpenGL, VNC.

* An event handling system will also be part of this package. Mouse events
will have a canvas (or sub-canvas) coordinate; keyboard events will be sent
to the canvas that has the "keyboard focus".

* Canvases must be "secure", as this will be part of SecureSqueak.
Specifically:
  * Canvas methods must be locked down so that users cannot gain
unauthorised access to anything or cause destructive behaviour.
  * Drawing operations will be clipped. Having access to a canvas only
allows the user to draw in that particular area.
  * Stalled event handlers or drawOn: methods will not affect the operation
of other Canvases on the screen.

* The coordinate system will use micrometers; 0@0 will be at the bottom left
corner of the canvas. Each Canvas will provide a "pixelPitch" method to
return the number of micrometers in each pixel (if that Canvas has pixels
:-) ) so that pixel-based operations are possible.

I don't know how to handle fonts - I don't know what the pros/cons of having
a font API built into the canvas are, or whether it is better to have the
font drawing done externally by each application.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
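To make the child-canvas and pixelPitch ideas above a little more concrete, here is a minimal client-side sketch. Every class and selector name in it (SubCanvas, #onDisplay, #newChildAt:extent:zIndex:, and so on) is hypothetical - none of this is existing Subcanvas code - but the units follow the micrometer convention described above:

    | screen window |
    screen := SubCanvas onDisplay.          "hypothetical constructor for the display canvas"
    window := screen
        newChildAt: 10000@10000             "child origin, in micrometers"
        extent: 200000@150000
        zIndex: 2.                          "drawn above children with lower z-indices"
    "Drawing on the child is clipped to its own area; it cannot touch the rest of 'screen'."
    window fillRectangle: (0@0 extent: window extent) color: Color lightGray.
    screen pixelPitch                       "micrometers per pixel, or nil for a purely vector backend"

Whether #pixelPitch answering nil for non-raster backends is the right design is exactly the kind of question raised further down the thread.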
2008/7/4 Michael van der Gulik <[hidden email]>:
> Hi all.
>
> I'm about to start work on a project called "Subcanvas", which will be a
> refactoring of the Canvas class. The intention is that this forms a platform
> for other GUI projects, including a possible future version of Morphic. It
> will be a general 2-D drawing API and event handling system.
>
> Is anybody interested in this? Does anybody have any comments on any of the
> following? Are there any annoyances with the current Canvas class that
> people want to rant about?
>

I have a big interest, but lack the time to get deeply into it :(

> Current "design sketches" are at http://gulik.pbwiki.com/Canvas.
>
> This API will be the main graphics / event-handling API for my SecureSqueak
> project (http://gulik.pbwiki.com/SecureSqueak). It is likely that Morphic
> will be ported to it at some stage. The code will be written using
> Namespaces and my own Package system, and it will be the first real trial of
> my Namespaces architecture.
>
> Features of it are:
> * Some canvases can have child canvases, each with a z-index. These could be
> used, e.g., to implement movable windows, sprites, clipped scrollable areas,
> or flyweight graphics. This will use the underlying graphics system's
> capabilities.
>

mmm.. i like this idea in general, but please, lets make it more general: no
z-index (or any early binding to a coordinate system). Simply a child canvas
concept.

> * The Canvas will be a general abstraction for underlying 2-D vector-based
> or raster-based drawing APIs - e.g. Forms/BitBlt, OpenGL, VNC.
>

+1

> * An event handling system will also be part of this package. Mouse events
> will have a canvas (or sub-canvas) coordinate; keyboard events will be sent
> to the canvas that has the "keyboard focus".
>

Please don't. An event subsystem should not be connected directly with
canvases. It should be a separate layer for applications. Any coordinate
translations should come this way: Event -> morph(widget) -> canvas. But
never Event->canvas.

Suppose you are moving a scrollbar knob. For this you would need two
different coordinates as a result:
- one to update the hand position
- a second to update the knob position

Or, suppose you are dragging something in 3D space. You may move the mouse
to the left or right, but the movements will be translated in a different
way (dragging object(s) closer to / farther from the eye).

It is up to the morphs/UI how to deal with events and then how to update
themselves on screen as a reaction to such an event.

> * Canvases must be "secure", as this will be part of SecureSqueak.
> Specifically:
>   * Canvas methods must be locked down so that users cannot gain
> unauthorised access to anything or cause destructive behaviour.
>   * Drawing operations will be clipped. Having access to a canvas only
> allows the user to draw in that particular area.
>   * Stalled event handlers or drawOn: methods will not affect the operation
> of other Canvases on the screen.
>
> * The coordinate system will use micrometers; 0@0 will be at the bottom left
> corner of the canvas. Each Canvas will provide a "pixelPitch" method to
> return the number of micrometers in each pixel (if that Canvas has pixels
> :-) ) so that pixel-based operations are possible.
>

Surely, but only if pixel-based operations are possible at all :)
I would suggest you make it as abstract as you can with respect to the media
type of the canvas, or the principles by which it performs drawing - be it
pixel-based, vector-based or curve-based.

> I don't know how to handle fonts - I don't know what the pros/cons of having
> a font API built into the canvas are, or whether it is better to have the
> font drawing done externally by each application.
>

Lets discuss that a bit before you start implementing it.
Recently, we discussed a lot of ideas with Gary about canvases/events. I
think you should be aware of what conclusions we had, at least.
Gary, can you refresh my memory about the ordinates & events ideas we
discussed? :)

--
Best regards,
Igor Stasenko AKA sig.
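To picture the Event -> morph(widget) -> canvas routing described above, here is a rough sketch with hypothetical class, selector and instance-variable names (this is not existing Morphic or Subcanvas code): the widget interprets the event in its own terms, and the canvas only comes into play when the widget asks to be redrawn.

    ScrollbarKnob >> handleMouseMove: anEvent
        "The widget decides what the motion means for the knob..."
        | delta |
        delta := anEvent position - dragStartPosition.
        self moveKnobBy: 0 @ delta y.
        "...and only afterwards asks for a redraw, which is where the canvas gets involved."
        self changed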
In reply to this post by Michael van der Gulik-2
You might want to talk to Vassili about what he's doing for Brazil in Newspeak. His latest posting says that he's got it working on Morphic and mostly on Windows so far.
-david
On Thu, Jul 3, 2008 at 6:08 PM, Michael van der Gulik <[hidden email]> wrote:
> Hi all.
In reply to this post by Igor Stasenko
On Fri, Jul 4, 2008 at 12:10 PM, Igor Stasenko <[hidden email]> wrote:

2008/7/4 Michael van der Gulik <[hidden email]>:

A parent canvas could have multiple children. When the Canvas architecture
wants to render these, it needs to know the distance each child is from the
shared parent. You also need to know the distance between child and parent
if you want to add reflection, shadows and lighting in the OpenGL version
:-D.

Every Canvas has its own coordinate system; they can be positioned anywhere
on the screen, but still have (0@0) in their bottom-left corner. This means
that mouse-based events with a position are relevant only for a particular
Canvas.

What I was considering doing was making the Canvas the source of events.
Every Canvas has a model which must implement event handling methods and a
#drawOn:bounds: method. A Canvas can ask the model to redraw itself when the
Canvas becomes dirty (e.g. when sub-canvases move and the canvas has no
cached state).

I've implemented a scroll bar using this kind of system. The scroll bar just
needs to remember where the original mouseDown event was. I don't understand
what your point was here.

As with dragging things in 3-D space, I'll need to invent some way of making
mouse capture secure.

Do you still think this is a bad design?

This is why I posted here :-).

IRC logs would be good, if they can be found.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
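As an illustration of the Canvas-owns-a-model idea sketched above, the model protocol could look roughly like this. All class, selector and instance-variable names here are hypothetical (this is not actual Subcanvas code); only #drawOn:bounds: is taken from the description above.

    ScrollBarModel >> drawOn: aCanvas bounds: aRectangle
        "Called back by the canvas whenever it wants the model drawn."
        aCanvas fillRectangle: aRectangle color: Color lightGray.
        aCanvas fillRectangle: self knobRectangle color: Color gray

    ScrollBarModel >> mouseDown: anEvent
        "Remember where the drag started, as the scroll bar example describes."
        dragOrigin := anEvent position

    ScrollBarModel >> mouseMove: anEvent
        self moveKnobBy: anEvent position - dragOrigin.
        canvas modelChanged        "hypothetical: ask the canvas to re-send #drawOn:bounds:"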
2008/7/4 Michael van der Gulik <[hidden email]>:
>
>
> On Fri, Jul 4, 2008 at 12:10 PM, Igor Stasenko <[hidden email]> wrote:
>>
>> 2008/7/4 Michael van der Gulik <[hidden email]>:
>>
>> > Features of it are:
>> > * Some canvases can have child canvases, each with a z-index. These could
>> > be used, e.g., to implement movable windows, sprites, clipped scrollable
>> > areas, or flyweight graphics. This will use the underlying graphics
>> > system's capabilities.
>> >
>> mmm.. i like this idea in general, but please, lets make it more general:
>> no z-index (or any early binding to a coordinate system). Simply a child
>> canvas concept.
>
> A parent canvas could have multiple children. When the Canvas architecture
> wants to render these, it needs to know the distance each child is from the
> shared parent. You also need to know the distance between child and parent
> if you want to add reflection, shadows and lighting in the OpenGL version
> :-D.
>

Lets keep GL things aside. It's up to the developer how to render
reflections and what distance(s) come into play with his techniques. And the
scheme you are proposing better fits a layers concept, not child-parent
relations. Maybe you should introduce layers then?

>> > * An event handling system will also be part of this package. Mouse
>> > events will have a canvas (or sub-canvas) coordinate; keyboard events
>> > will be sent to the canvas that has the "keyboard focus".
>> >
>> Please don't. An event subsystem should not be connected directly with
>> canvases. It should be a separate layer for applications. Any coordinate
>> translations should come this way: Event -> morph(widget) -> canvas. But
>> never Event->canvas.
>>
>> Suppose you are moving a scrollbar knob. For this you would need two
>> different coordinates as a result:
>> - one to update the hand position
>> - a second to update the knob position
>>
>> Or, suppose you are dragging something in 3D space. You may move the mouse
>> to the left or right, but the movements will be translated in a different
>> way (dragging object(s) closer to / farther from the eye).
>>
>> It is up to the morphs/UI how to deal with events and then how to update
>> themselves on screen as a reaction to such an event.
>
> Every Canvas has its own coordinate system; they can be positioned anywhere
> on the screen, but still have (0@0) in their bottom-left corner. This means
> that mouse-based events with a position are relevant only for a particular
> Canvas.
>

have a mouse?

> What I was considering doing was making the Canvas the source of events.
> Every Canvas has a model which must implement event handling methods and a
> #drawOn:bounds: method. A Canvas can ask the model to redraw itself when the
> Canvas becomes dirty (e.g. when sub-canvases move and the canvas has no
> cached state).
>

A dirty/clean state is not a basic canvas capability. Needless to say, for
some devices (including GL) it is sometimes easier and faster to redraw
everything from scratch rather than care about dirty areas. Some devices
(like printers) have nothing to do with a dirty/clean approach.
Don't let premature optimizations influence the basic model! :)

> I've implemented a scroll bar using this kind of system. The scroll bar just
> needs to remember where the original mouseDown event was. I don't understand
> what your point was here.
>

The point is that you may never know what portions of the screen need to be
updated as a reaction to a mouseDown (or any other) event. I can write
simple code which updates the opposite point of the screen to where the
mouse is located. Or i can write code which writes a character $A to a file
each time you click the mouse. I don't see how and why a canvas should take
part in event handling.

> As with dragging things in 3-D space, I'll need to invent some way of making
> mouse capture secure.
>

Right, also, don't forget about relative mouse pointer motion. A good
illustration of capturing relative mouse movement is a 3D first-person
shooter game :) It is not interesting where the mouse cursor is; it is only
interested in the amount of mouse movement along its two axes. And in fact
the mouse, as a device, generates relative events; it knows nothing about
the screen size, or where the mouse cursor is allowed to be. So, binding the
mouse to a screen space is wrong by its nature. An event should carry a
relative movement, and then the World (or top-level handler) can translate
such events to absolute coordinates in its own space (if it cares).

> Do you still think this is a bad design?
>
>> > I don't know how to handle fonts - I don't know what the pros/cons of
>> > having a font API built into the canvas are, or whether it is better to
>> > have the font drawing done externally by each application.
>> >
>> Lets discuss that a bit before you start implementing it.
>> Recently, we discussed a lot of ideas with Gary about canvases/events.
>> I think you should be aware of what conclusions we had, at least.
>> Gary, can you refresh my memory about the ordinates & events ideas we
>> discussed? :)
>
> This is why I posted here :-).
>
> IRC logs would be good, if they can be found.
>
> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/
>

--
Best regards,
Igor Stasenko AKA sig.
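A tiny sketch of the relative-motion point above (hypothetical selectors and instance variables; not existing Squeak code): the device reports only a delta, and it is the top-level handler that turns the delta into an absolute position in its own space.

    handlePointerDelta: aPoint
        "A method on whatever the top-level handler ends up being.
         aPoint is the raw movement reported by the device, not a screen position."
        cursorPosition := ((cursorPosition + aPoint) max: 0@0) min: self extent.
        self dispatchPointerMovedTo: cursorPosition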
On Fri, Jul 4, 2008 at 2:16 PM, Igor Stasenko <[hidden email]> wrote:
On reflection, yes, it is a premature optimisation. Hmm...

> Right, also, don't forget about relative mouse pointer motion. A good
> illustration of capturing relative mouse movement is a 3D first-person
> shooter game :) It is not interesting where the mouse cursor is; it is only
> interested in the amount of mouse movement along its two axes. And in fact
> the mouse, as a device, generates relative events; it knows nothing about
> the screen size, or where the mouse cursor is allowed to be. So, binding the
> mouse to a screen space is wrong by its nature. An event should carry a
> relative movement, and then the World (or top-level handler) can translate
> such events to absolute coordinates in its own space (if it cares).

IIRC, mouse pointer events normally contain absolute coordinates in most
windowing systems, including Squeak.

Mouse velocity could be passed as extra information in the event. Mouse
pointer capture could be done using some key combination, although this is a
"nice-to-have" feature that I won't implement in the first release.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
2008/7/4 Michael van der Gulik <[hidden email]>:
>
>
> On Fri, Jul 4, 2008 at 2:16 PM, Igor Stasenko <[hidden email]> wrote:
>>
>> 2008/7/4 Michael van der Gulik <[hidden email]>:
>>
>> > What I was considering doing was making the Canvas the source of events.
>> > Every Canvas has a model which must implement event handling methods and
>> > a #drawOn:bounds: method. A Canvas can ask the model to redraw itself
>> > when the Canvas becomes dirty (e.g. when sub-canvases move and the canvas
>> > has no cached state).
>> >
>> A dirty/clean state is not a basic canvas capability. Needless to say, for
>> some devices (including GL) it is sometimes easier and faster to redraw
>> everything from scratch rather than care about dirty areas. Some devices
>> (like printers) have nothing to do with a dirty/clean approach.
>> Don't let premature optimizations influence the basic model! :)
>
> On reflection, yes, it is a premature optimisation. Hmm...
>
>> Right, also, don't forget about relative mouse pointer motion. A good
>> illustration of capturing relative mouse movement is a 3D first-person
>> shooter game :) It is not interesting where the mouse cursor is; it is only
>> interested in the amount of mouse movement along its two axes. And in fact
>> the mouse, as a device, generates relative events; it knows nothing about
>> the screen size, or where the mouse cursor is allowed to be. So, binding
>> the mouse to a screen space is wrong by its nature. An event should carry a
>> relative movement, and then the World (or top-level handler) can translate
>> such events to absolute coordinates in its own space (if it cares).
>
> IIRC, mouse pointer events normally contain absolute coordinates in most
> windowing systems, including Squeak.
>
> Mouse velocity could be passed as extra information in the event. Mouse
> pointer capture could be done using some key combination, although this is a
> "nice-to-have" feature that I won't implement in the first release.
>

I don't like to centralize the event system around a single device such as
the mouse. It needs an abstraction. Think about different devices, such as a
stylus pen, or a multi-touch sensor screen. Any device can generate a
'click' event, or some gestures which are then transformed into events. The
event system should be flexible enough to be able to work with a wide range
of input devices, not only a mouse.

> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/
>

--
Best regards,
Igor Stasenko AKA sig.
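One way to picture that abstraction (a sketch only; InputEvent, PointerDownEvent and the category name are hypothetical, not existing Squeak classes) is a small device-neutral event hierarchy, so that a widget handles a pointer event without ever knowing which device produced it:

    InputEvent subclass: #PointerDownEvent
        instanceVariableNames: 'position pressure'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Subcanvas-Events-Sketch'

A mouse, a stylus and a touch screen would all produce the same PointerDownEvent; gesture recognisers could sit on top and emit higher-level events in the same way.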
In reply to this post by Michael van der Gulik-2
Hi Gulik,
I know you know this, but I think that choosing a coordinate system is the
application programmer's problem, not the framework designer's.

You favor real metric units, i.e. micrometers. However, I prefer zoomable
interfaces. The actual size of objects on the screen can be adjusted by the
user. That's what resolution independence is for!

Anyway, what is the pixel size for a projector?

Cheers,
Juan Vuletich

Ps. I tried to refrain from speaking about the wonders of the Image
Processing approach to anti-aliasing and non-linear coordinate systems and
transformations!
2008/7/4 Juan Vuletich <[hidden email]>:
> Hi Gulik,
>
> I know you know this, but I think that choosing a coordinate system is the
> application programmer's problem, not the framework designer's.
>
> You favor real metric units, i.e. micrometers. However, I prefer zoomable
> interfaces. The actual size of objects on the screen can be adjusted by the
> user. That's what resolution independence is for!
>
> Anyway, what is the pixel size for a projector?
>
> Cheers,
> Juan Vuletich
>
> Ps. I tried to refrain from speaking about the wonders of the Image
> Processing approach to anti-aliasing and non-linear coordinate systems and
> transformations!
>

Yeah, but first you need a decent plugin to make use of it :)

--
Best regards,
Igor Stasenko AKA sig.
Igor Stasenko wrote:
> 2008/7/4 Juan Vuletich <[hidden email]>:
>
>> Hi Gulik,
>>
>> I know you know this, but I think that choosing a coordinate system is the
>> application programmer's problem, not the framework designer's.
>>
>> You favor real metric units, i.e. micrometers. However, I prefer zoomable
>> interfaces. The actual size of objects on the screen can be adjusted by the
>> user. That's what resolution independence is for!
>>
>> Anyway, what is the pixel size for a projector?
>>
>> Cheers,
>> Juan Vuletich
>>
>> Ps. I tried to refrain from speaking about the wonders of the Image
>> Processing approach to anti-aliasing and non-linear coordinate systems and
>> transformations!
>>
> Yeah, but first you need a decent plugin to make use of it :)
>

We're talking ideas here.
In reply to this post by Juan Vuletich-4
On Fri, 04 Jul 2008 09:53:52 -0300
Juan Vuletich <[hidden email]> wrote:

> Hi Gulik,
>
> I know you know this, but I think that choosing a coordinate system is the
> application programmer's problem, not the framework designer's.
>
> You favor real metric units, i.e. micrometers. However, I prefer zoomable
> interfaces. The actual size of objects on the screen can be adjusted by the
> user. That's what resolution independence is for!

Well, using micrometers does give you resolution independence. One of the
steps of installing this system would be to calibrate your monitor: hold a
ruler up to it and make sure that 10cm on screen really is 10cm. Colour
correction might also come later.

This gives the UI designer some idea of what coordinates really are.
Currently, a distance of "1000" could mean anything depending on the DPI of
the screen or paper. This also applies to Morphic3, especially if you're
doing fancy things with the coordinate systems.

Zooming in or out of the whole screen might be a future feature.

> Anyway, what is the pixel size for a projector?

Really big :-). We can make exceptions for projectors, although they could
be calibrated too if you really wanted to :-).

> Cheers,
> Juan Vuletich
>
> Ps. I tried to refrain from speaking about the wonders of the Image
> Processing approach to anti-aliasing and non-linear coordinate systems and
> transformations!

Well done :-). I only need a simple 2-D graphics API. Hopefully my API will
be flexible enough to integrate some of your fanciness at a later stage.

Gulik.

--
Michael van der Gulik <[hidden email]>
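A small sketch of what that calibration buys (hypothetical selector names; only the #pixelPitch idea comes from the proposal itself): once the display is calibrated, a length given in micrometers can be mapped onto device pixels, or left alone for a backend that has none.

    SubCanvas >> pixelsFor: micrometers
        "Convert a length in micrometers to device pixels. A vector backend
         answers nil for #pixelPitch and keeps device-independent units."
        ^ self pixelPitch
            ifNil: [micrometers]
            ifNotNil: [:pitch | (micrometers / pitch) rounded]

So a ruler drawn 100000 micrometers long should measure 10cm on a calibrated monitor, whatever its DPI.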
In reply to this post by Igor Stasenko
Sorry for the delay...
I'm not sure we did discuss events wrt ordinates. No reason that the same
approach couldn't be taken (not just the look of something being determined
by an N-dimensional ordinate, but also its interactive behaviour).

Aside from that, a solid abstract model of event sources (tying into the
"real" world, along with interaction method discovery/capabilities) would be
a good foundation.

Regards, Gary.

> Lets discuss that a bit before you start implementing it.
> Recently, we discussed a lot of ideas with Gary about canvases/events.
> I think you should be aware of what conclusions we had, at least.
> Gary, can you refresh my memory about the ordinates & events ideas we
> discussed? :)
>
> --
> Best regards,
> Igor Stasenko AKA sig.
In reply to this post by Michael van der Gulik-2
I'm working on squeakGtk and I've added support for the Cairo library; maybe
you could use it? The Cairo support is not yet finished, but it is usable.

Cheers,
Gwenael