Hello lists,
I am making a cross-post because there is interest from both camps in getting
this done :)
Michael published a changeset with new event sensor code for Pharo, mainly
focused on supporting multiple event consumers (frameworks). Recently we
discussed it with Andreas and had the idea of integrating this with the
Announcements framework (and Announcements would become a core package).
I like this idea very much, but I think we should discuss the implementation
details before starting on it.

Here is my vision of how things should look:

- role 1. Event source.
The VM is an event source; in most cases it is the only source, but not
always. I would like to be able to plug in a different event source - imagine
a remote socket connection, or previously recorded events that are then
replayed. So an event source should be a separate, abstract entity, of which
the VM is one concrete kind. There is also sometimes a need to inject
fabricated events into the event queue, to simulate user input or to produce
synthetic events. (A rough sketch follows below.)

- role 2. Event listener / event sensor (or InputSensor).
This is a mediator between an event source and the higher level frameworks
(Morphic etc.). Its role is to listen for events from an event source and
then dispatch them to the concrete consumer(s), if any.

- role 3. Events.
Events should be full featured objects with a good protocol. A high level
framework should not deal with raw data, as it currently does with the raw
event buffers coming from the VM. This means the changes will affect Morphic.
Morphic has classes for user input events (keyboard/mouse) and deciphers raw
VM events into its own representation. I think we should move the
'deciphering' part into the EventSource (sub)classes and make sure
EventSensor (and its consumers) deal with full featured event objects,
leaving event consumers free to decide what to do with them.

- role 4. Event consumers.
Note that there can be multiple different consumers, not just one as Morphic
currently is. We should make sure that integration with any other framework
will be painless.

- be ready to support multiple host windows.
This part is quite simple to do in EventSensor, but not so simple in Morphic.
One thing to do would be to refactor all code which uses the Sensor global
directly and replace it with the appropriate thing(s). But this is out of
scope for the event handling framework; at the initial stage we could keep
things compatible with the old ways (1 Sensor, 1 Display, 1 Morphic World).

I started prototyping classes for events, where each event kind has its own
event subclass and decodes the raw VM event buffer directly, so that event
consumers don't have to poke at raw events. But since there was the good idea
of using Announcements for this, it may need some refactoring.

Michael, Andreas, I'd like to hear your comments and remarks, and anyone else
is welcome as well.

--
Best regards,
Igor Stasenko AKA sig.
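To make role 1 above a bit more concrete, here is a minimal sketch of what
such a pluggable source could look like. All the names in it (EventSource,
VMEventSource, the 'listeners' instance variable, primGetNextEventInto:) are
placeholders for illustration only, assuming an abstract EventSource class
with a VM-backed subclass; this is not part of the published changeset:

EventSource>>addListener: aListener
	"Register a listener (an EventSensor, for instance) that wants the
	events produced by this source"
	listeners add: aListener
-----
EventSource>>dispatch: anEvent
	"Hand a fully decoded event object to every registered listener"
	listeners do: [:each | each handleEvent: anEvent]
-----
EventSource>>injectEvent: anEvent
	"Inject a fabricated event, e.g. to simulate user input or to replay
	a previously recorded session"
	self dispatch: anEvent
-----
VMEventSource>>pump
	"Poll the VM for raw event buffers and dispatch them as decoded event
	objects; a zero event type means the queue is empty"
	| buffer |
	buffer := Array new: 8.
	[self primGetNextEventInto: buffer.
	 (buffer at: 1) ~= 0] whileTrue:
		[self dispatch: (KernelEvent fromBuffer: buffer)]

A recorded-session or socket source would then be just another subclass that
feeds dispatch: from somewhere other than the VM primitive.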
Igor Stasenko wrote:
> Recently we discussed it with Andreas and had the idea of integrating
> this with the Announcements framework (and Announcements would become
> a core package).

Actually, Jecel proposed the combination.

> Here is my vision of how things should look:

[... big snip ...]

> Michael, Andreas, I'd like to hear your comments and remarks, and
> anyone else is welcome as well.

It's too much to do all of that in one go. What I would propose is to start
simple by having an event source which maps raw event buffers to some kind of
(non-morphic) event objects, and have InputSensor be a client of that. I
believe that is a straightforward extension of the work that has already been
done.

Cheers,
  - Andreas
2009/3/21 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>>
>> Recently we discussed it with Andreas and had the idea of integrating
>> this with the Announcements framework (and Announcements would become
>> a core package).
>
> Actually, Jecel proposed the combination.
>
>> Here is my vision of how things should look:
>
> [... big snip ...]
>
>> Michael, Andreas, I'd like to hear your comments and remarks, and
>> anyone else is welcome as well.
>
> It's too much to do all of that in one go. What I would propose is to
> start simple by having an event source which maps raw event buffers to
> some kind of (non-morphic) event objects, and have InputSensor be a
> client of that. I believe that is a straightforward extension of the
> work that has already been done.
>

Wait, I am proposing nearly the same: have an event source which produces
(non-morphic) event objects, with InputSensor as its client.
I just want to know where Announcements take part in this, or whether we
should postpone that to a next step?

> Cheers,
>  - Andreas
>

--
Best regards,
Igor Stasenko AKA sig.
Igor Stasenko wrote:
> Wait, I am proposing nearly the same: have an event source which produces
> (non-morphic) event objects, with InputSensor as its client.
> I just want to know where Announcements take part in this, or whether we
> should postpone that to a next step?

What I did while exploring an alternative UI framework was to use the rewrite
and add an Announcer as a second listener. "Interested parties" could then
subscribe to event announcements. The raw input events are first converted to
first class event objects before being submitted to the announcer.

As discussed earlier, this allows for having a completely separate UI running
without any overlap with Morphic. Tweak always had the problem of still being
tied into the Morphic event processing; the combination of the sensor rewrite
and announcers avoids this.

I meant to make this stuff available a long time ago, partly as an effort to
avoid duplicating work with Alain's Miro framework, but I kept getting
distracted by other things.
Will put it a bit higher on my list :-)

Michael
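A rough sketch of that wiring, just to illustrate the idea (the announcer
instance variable, the accessor on Sensor and the KernelMouseEvent class are
assumptions here, not the actual changeset):

EventSensor>>processEvent: buffer
	"Convert the raw VM event buffer into a first class event object and
	announce it, in addition to the usual processing"
	| event |
	event := KernelEvent fromBuffer: buffer.
	announcer announce: event
-----
"An interested party then subscribes to just the announcements it cares
about, without touching Morphic at all:"
Sensor announcer
	on: KernelMouseEvent
	do: [:evt | Transcript show: 'mouse event: ', evt printString; cr]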
2009/3/21 Michael Rueger <[hidden email]>:
> Igor Stasenko wrote:
>
>> Wait, I am proposing nearly the same: have an event source which produces
>> (non-morphic) event objects, with InputSensor as its client.
>> I just want to know where Announcements take part in this, or whether we
>> should postpone that to a next step?
>
> What I did while exploring an alternative UI framework was to use the
> rewrite and add an Announcer as a second listener. "Interested parties"
> could then subscribe to event announcements. The raw input events are first
> converted to first class event objects before being submitted to the
> announcer.

Right, but here we're talking about doing that conversion much earlier (at
the event source object), so that the event sensor already deals with first
class event objects.
I want to know if such a scheme (which I described in my first post) is
plausible.

> As discussed earlier, this allows for having a completely separate UI
> running without any overlap with Morphic. Tweak always had the problem of
> still being tied into the Morphic event processing; the combination of the
> sensor rewrite and announcers avoids this.
>

Right, that's why we need a separate set of classes (I called them
KernelXXXEvent) which represent the events coming from the VM and are not
tied to Morphic.

> I meant to make this stuff available a long time ago, partly as an effort
> to avoid duplicating work with Alain's Miro framework, but I kept getting
> distracted by other things.
> Will put it a bit higher on my list :-)
>

Let me know if you need some help. At least I can send you a prototype
implementation of the KernelXXXEvent classes.
I also started writing it, but then other things drew my attention :)

> Michael
>

--
Best regards,
Igor Stasenko AKA sig.
On 21-Mar-09, at 3:09 AM, Igor Stasenko wrote:

> Right, but here we're talking about doing that conversion much earlier
> (at the event source object), so that the event sensor already deals
> with first class event objects.
> I want to know if such a scheme (which I described in my first post) is
> plausible.

For the iPhone VM I return a complex event type, which then points to
Smalltalk objects that are the representation of the touch events. For
location and acceleration data I return the actual Objective-C objects.
This data is then processed by EventSensor.

If you choose to push the responsibility for creating event objects onto the
VM, then you need to be cognizant of the fact that whatever is proposed has
to change very little over time; otherwise you end up with the issue of image
versus VM compatibility, and the fact that VM version changes proceed at a
slow rate.

--
===========================================================================
John M. McIntosh <[hidden email]>
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================
2009/3/21 John M McIntosh <[hidden email]>:
>
> On 21-Mar-09, at 3:09 AM, Igor Stasenko wrote:
>
>> Right, but here we're talking about doing that conversion much earlier
>> (at the event source object), so that the event sensor already deals
>> with first class event objects.
>> I want to know if such a scheme (which I described in my first post) is
>> plausible.
>
> For the iPhone VM I return a complex event type, which then points to
> Smalltalk objects that are the representation of the touch events. For
> location and acceleration data I return the actual Objective-C objects.
> This data is then processed by EventSensor.
>
> If you choose to push the responsibility for creating event objects onto
> the VM, then you need to be cognizant of the fact that whatever is
> proposed has to change very little over time; otherwise you end up with
> the issue of image versus VM compatibility, and the fact that VM version
> changes proceed at a slow rate.
>

Nope. I don't want the VM to deal with real event objects.
The VM will still use the old event buffers to deliver events to the image.
But once the image receives one, it should convert it to an event instance as
close to the source as possible.
This is the role of the EventSource class: represent the VM as an event
source which produces instances of the KernelXXXEvent classes, hiding the
details of converting raw event buffers from the eyes of the higher layers
which then go on to handle the event (EventSensor/Morphic etc.).

>
> --
> ===========================================================================
> John M. McIntosh <[hidden email]>
> Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
> ===========================================================================
>

--
Best regards,
Igor Stasenko AKA sig.
2009/3/21 Igor Stasenko <[hidden email]>:
> 2009/3/21 John M McIntosh <[hidden email]>:
>>
>> On 21-Mar-09, at 3:09 AM, Igor Stasenko wrote:
>>
>>> Right, but here we're talking about doing that conversion much earlier
>>> (at the event source object), so that the event sensor already deals
>>> with first class event objects.
>>> I want to know if such a scheme (which I described in my first post) is
>>> plausible.
>>
>> For the iPhone VM I return a complex event type, which then points to
>> Smalltalk objects that are the representation of the touch events. For
>> location and acceleration data I return the actual Objective-C objects.
>> This data is then processed by EventSensor.
>>
>> If you choose to push the responsibility for creating event objects onto
>> the VM, then you need to be cognizant of the fact that whatever is
>> proposed has to change very little over time; otherwise you end up with
>> the issue of image versus VM compatibility, and the fact that VM version
>> changes proceed at a slow rate.
>>
> Nope. I don't want the VM to deal with real event objects.
> The VM will still use the old event buffers to deliver events to the image.
> But once the image receives one, it should convert it to an event instance
> as close to the source as possible.
> This is the role of the EventSource class: represent the VM as an event
> source which produces instances of the KernelXXXEvent classes, hiding the
> details of converting raw event buffers from the eyes of the higher layers
> which then go on to handle the event (EventSensor/Morphic etc.).
>

To give an example of what I am talking about, here are bits of the prototype
implementation:

KernelEvent class>>initialize
	"Initialize the array of event types. Note, the order of the array
	elements is important and should be the same as the event type
	returned by the VM in the event buffer"
	EventTypes := {
		KernelMouseEvent.
		KernelKeyboardEvent.
		KernelDragDropEvent.
		KernelMenuEvent.
		KernelWindowEvent. }
-----
KernelEvent class>>fromBuffer: eventBuffer
	"Decode a raw VM event into an instance of a KernelEvent subclass"
	| type |
	type := EventTypes
		at: (eventBuffer at: 1)
		ifAbsent: [ ^ KernelUnknownEvent new from: eventBuffer ].
	^ type new from: eventBuffer
-----
KernelEvent>>from: buffer
	"Initialize an event instance from a raw event buffer. Note, all
	subclasses should call super to initialize the fields correctly"
	eventType := buffer at: 1.
	timeStamp := buffer at: 2.
	timeStamp = 0 ifTrue: [timeStamp := Time millisecondClockValue].
	windowIndex := buffer at: 8.
-----
KernelMouseEvent>>from: buffer
	super from: buffer.
	position := Point x: (buffer at: 3) y: (buffer at: 4).
	buttons := buffer at: 5.
	modifiers := buffer at: 6.

As you can see, there is nothing complicated. It simply frees the underlying
event handling layers from deciphering event buffers themselves; instead,
they deal with first class event objects with a harmonized protocol.

>
> --
> Best regards,
> Igor Stasenko AKA sig.
>

--
Best regards,
Igor Stasenko AKA sig.
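For completeness, the "harmonized protocol" part could then look roughly like
this on the consumer side (the accessors, the isMouse testing method and the
consumer snippet are only illustrations of the intended protocol, not code
from the prototype):

KernelEvent>>isMouse
	^ false
-----
KernelMouseEvent>>isMouse
	^ true
-----
KernelMouseEvent>>position
	^ position
-----
KernelMouseEvent>>buttons
	^ buttons
-----
"A consumer never sees buffer offsets, only the event protocol:"
SomeConsumer>>handleEvent: anEvent
	anEvent isMouse ifTrue: [
		Transcript
			show: 'mouse at ', anEvent position printString,
				' buttons: ', anEvent buttons printString;
			cr]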
On Saturday 21 Mar 2009 9:51:10 pm Igor Stasenko wrote:
> This is the role of the EventSource class: represent the VM as an event
> source which produces instances of the KernelXXXEvent classes, hiding the
> details of converting raw event buffers from the eyes of the higher layers
> which then go on to handle the event (EventSensor/Morphic etc.).

A plain translation of events is not sufficient. The traditional keyboard
(ASCII + modifiers) event type breaks down for handhelds and newer devices
(e.g. the Nokia N810 does not have an ALT key!). It does not accommodate
gestures (e.g. tilting or rocking). The Squeak VM needs a flexible filter
layer to translate such gestures to characters (e.g. Chinese or Indic) and
commands. Plugins, perhaps?

Subbu
2009/3/22 K. K. Subramaniam <[hidden email]>:
> On Saturday 21 Mar 2009 9:51:10 pm Igor Stasenko wrote:
>> This is the role of the EventSource class: represent the VM as an event
>> source which produces instances of the KernelXXXEvent classes, hiding the
>> details of converting raw event buffers from the eyes of the higher layers
>> which then go on to handle the event (EventSensor/Morphic etc.).
> A plain translation of events is not sufficient. The traditional keyboard
> (ASCII + modifiers) event type breaks down for handhelds and newer devices
> (e.g. the Nokia N810 does not have an ALT key!). It does not accommodate
> gestures (e.g. tilting or rocking). The Squeak VM needs a flexible filter
> layer to translate such gestures to characters (e.g. Chinese or Indic) and
> commands. Plugins, perhaps?
>

As soon as someone comes up with an implementation for new input devices, new
events can be added easily.

Concerning accommodation, I don't agree. At the lowest level, any device
provides very simple signals to the operating system. It is the system's
responsibility to interpret them into something more complex.
For instance, look at the mouse: the OS receives only relative axis movement
and button states. It is the OS that then translates the movement and adds a
mouse cursor position on top of it, while the mouse doesn't need to know
anything about the screen or cursor :)
Same for gestures: you can emit very simple/basic events and then combine
them to make something more complex/different.

> Subbu
>

--
Best regards,
Igor Stasenko AKA sig.
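As an illustration of that composition idea, a small filter sitting between
the source and the consumers could turn basic events into a higher level one.
Everything here (GestureFilter, DoubleTapEvent, isButtonDown, the 'lastTap'
instance variable, forward:, the 300 ms / 5 pixel thresholds) is made up for
the example:

GestureFilter>>handleEvent: anEvent
	"Combine two quick button-down events at roughly the same position
	into a single DoubleTapEvent; pass everything else through unchanged"
	(anEvent isMouse and: [anEvent isButtonDown])
		ifFalse: [^ self forward: anEvent].
	(lastTap notNil
		and: [anEvent timeStamp - lastTap timeStamp < 300
			and: [(anEvent position dist: lastTap position) < 5]])
		ifTrue: [
			self forward: (DoubleTapEvent at: anEvent position).
			lastTap := nil]
		ifFalse: [
			lastTap := anEvent.
			self forward: anEvent]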
Igor Stasenko wrote on Sun, 22 Mar 2009 20:42:00 +0200
> Same for gestures: you can emit very simple/basic events and then combine
> them to make something more complex/different.

It would be great if we could have a whole ecosystem of "filters" that
receive some types of announcements and generate new ones. This could even be
efficient if there were some way to check whether anybody is subscribed to a
given event (the filters generating these announcements could then suspend
their activity, unsubscribing from their own inputs in turn, until someone
interested showed up).

Some application might just want to get text, independently of whether it was
typed on the keyboard, recognized from speech or drawn with a pen. A game
might want to see raw keyboard events to deal with keyDown and keyUp
separately, though it would be better for a "game filter" to handle this so
the application could use the keyboard, multi-touch gestures or a fancy
joystick equally well.

About the general idea, I agree that the interface between the VM and the
image should be changed as little as possible (not at all would be ideal, but
that is unlikely with the iPhone and such) and that the current in-image APIs
should remain available for full compatibility - only new applications would
take full advantage of the announcements.

-- Jecel
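A sketch of how one of these lazily activated filters might look on top of
Announcements. The classes and methods here (TextInputFilter, TextEntered,
the subscriberCount bookkeeping, the character accessor, and the assumption
that the 'source' instance variable is itself announcer-like) are invented
for the example; only the on:do:/announce: announcer protocol is taken as
given:

TextInputFilter>>when: anAnnouncementClass do: aBlock
	"Let a client subscribe to our output; wire ourselves to the input
	source only once somebody actually cares"
	announcer on: anAnnouncementClass do: aBlock.
	subscriberCount := subscriberCount + 1.
	subscriberCount = 1 ifTrue: [self activate]
-----
TextInputFilter>>activate
	"Start consuming raw keyboard events from the source"
	source on: KernelKeyboardEvent do: [:evt | self handleKey: evt]
-----
TextInputFilter>>handleKey: aKeyboardEvent
	"Translate a raw key event into a text announcement; a speech or
	handwriting filter could feed the same TextEntered announcement"
	announcer announce: (TextEntered new text: aKeyboardEvent character asString)

The mirror image (unsubscribing from the source again when the last client
goes away) would let the whole chain shut itself down when nobody is
interested, which is the efficiency Jecel mentions.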