Hello list.
I've been looking at discarding Morphic and writing an application to use Display and EventSensor directly. I have some questions.

I'm seeing an inputSemaphore in the EventSensor class, but it doesn't appear to be used. Is it used anywhere? Considering that it doesn't appear to be used, how stable is the VM side of this code?

How many platforms support this inputSemaphore? Does it work on Windows, Mac and Linux?

Also, am I seeing this right: Morphic polls Sensor? The loop is in Project>>spawnNewProcess, which eventually calls HandMorph>>processEvents, which reads from Sensor.

Looking at the implementation of EventSensor and InputSensor, what is the actual history there? In terms of primitives, EventSensor (the "new" class) seems to be more consistent with chapter 29 of the blue book, while InputSensor (the "old" class) uses undocumented primitives. How did it end up this way?

Thanks,
Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
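P.S. For context, the kind of read loop I have in mind is roughly this - only a sketch, assuming EventSensor>>nextEvent answers the raw 8-slot event buffer (or nil when the queue is empty); #handleRawEvent: is just a placeholder, not an existing method:

  readLoop
      | evtBuf |
      [true] whileTrue: [
          [(evtBuf := Sensor nextEvent) isNil]
              whileFalse: [self handleRawEvent: evtBuf].
          (Delay forMilliseconds: 20) wait]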
2009/2/12 Michael van der Gulik <[hidden email]>:
> Hello list.
>
> I've been looking at discarding Morphic and writing an application to use Display and EventSensor directly. I have some questions.
>
> I'm seeing an inputSemaphore in the EventSensor class, but it doesn't appear to be used. Is it used anywhere? Considering that it doesn't appear to be used, how stable is the VM side of this code?
>
> How many platforms support this inputSemaphore? Does it work on Windows, Mac and Linux?
>
> Also, am I seeing this right: Morphic polls Sensor? The loop is in Project>>spawnNewProcess, which eventually calls HandMorph>>processEvents, which reads from Sensor.
>
> Looking at the implementation of EventSensor and InputSensor, what is the actual history there? In terms of primitives, EventSensor (the "new" class) seems to be more consistent with chapter 29 of the blue book, while InputSensor (the "old" class) uses undocumented primitives. How did it end up this way?

The inputSemaphore is not signaled by the VM (win32), but is instead used to check how the image polls its events.

Btw, there is a refactoring of this stuff in Pharo; I'm not sure if it is already incorporated in the image. It was written with the idea in mind that there can be multiple event listeners at once, not just the hand morph.

> Thanks,
> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/

--
Best regards,
Igor Stasenko AKA sig.
In reply to this post by Michael van der Gulik-2
Michael van der Gulik wrote:
> I'm seeing an inputSemaphore in the EventSensor class, but it doesn't appear to be used. Is it used anywhere? Considering that it doesn't appear to be used, how stable is the VM side of this code?

The input semaphore is used for two purposes: first, it tells the VM to use the event-based primitive set (as opposed to the state-based set), and second, it is signaled by the VM when a new event is recorded. I don't think the latter information is currently being used, but the former most definitely is.

> How many platforms support this inputSemaphore? Does it work on Windows, Mac and Linux?

Yes. As far as I recall it is supported everywhere.

> Also, am I seeing this right: Morphic polls Sensor? The loop is in Project>>spawnNewProcess, which eventually calls HandMorph>>processEvents, which reads from Sensor.

Correct. This is badly broken for many reasons. In Tweak, the hand holds an event queue into which events are pushed from the outside. This is a much safer and more robust approach.

> Looking at the implementation of EventSensor and InputSensor, what is the actual history there? In terms of primitives, EventSensor (the "new" class) seems to be more consistent with chapter 29 of the blue book, while InputSensor (the "old" class) uses undocumented primitives. How did it end up this way?

InputSensor was first. It used state-based primitives (e.g., primMouseButtons) and was used for a very long time. However, because of its design, it resulted in events being lost (for example, when a mouse down-up transition happens between two polls of the input sensor state), so I added the event-based primitives. I have never cared much for the blue book, so if this is more consistent with chapter 29 it's purely by accident (or perhaps because that's the more sensible way of doing things ;-)

Cheers,
  - Andreas
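P.S. In case it helps, the registration and fetch cycle looks roughly like this. This is a sketch from memory rather than the actual image code - it assumes the usual primitive numbers (93 to register the semaphore, 94 to copy the next event into an 8-slot buffer), and #queueEvent: is only an illustrative selector:

  startUp
      "Register a semaphore with the VM; this also switches the VM
      over to the event-based primitive set."
      inputSemaphore := Semaphore new.
      self primSetInputSemaphore:
          (Smalltalk registerExternalObject: inputSemaphore)

  fetchMoreEvents
      "Drain the VM-side queue; a type of 0 (EventTypeNone) means empty."
      | evtBuf |
      evtBuf := Array new: 8.
      [self primGetNextEvent: evtBuf.
       (evtBuf at: 1) = 0] whileFalse: [self queueEvent: evtBuf copy]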
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
> The inputSemaphore is not signaled by the VM (win32), but is instead used to check how the image polls its events.

Oh. You are right. How pathetic is that! I should fix that ASAP.

> Btw, there is a refactoring of this stuff in Pharo; I'm not sure if it is already incorporated in the image. It was written with the idea in mind that there can be multiple event listeners at once, not just the hand morph.

I'm not sure if having multiple listeners is useful at this level - all EventSensor does is pull the events from the VM, package them up and pass them into the event queue for any downstream consumers. If there were multiple listeners in a morphic environment, they should probably be registering with the hand (which has such a listener mechanism already), not with Sensor. What's the use case for this refactoring?

Cheers,
  - Andreas
Andreas Raab wrote:
> I'm not sure if having multiple listeners is useful at this level - all EventSensor does is pull the events from the VM, package them up and pass them into the event queue for any downstream consumers. If there were multiple listeners in a morphic environment, they should probably be registering with the hand (which has such a listener mechanism already), not with Sensor. What's the use case for this refactoring?

The refactoring allows for listeners *outside* the morphic environment to listen to input and window events, supporting alternative UI frameworks like e.g. Miro.

Michael
Michael Rueger wrote:
> Andreas Raab wrote:
>> I'm not sure if having multiple listeners is useful at this level - all EventSensor does is pull the events from the VM, package them up and pass them into the event queue for any downstream consumers. If there were multiple listeners in a morphic environment, they should probably be registering with the hand (which has such a listener mechanism already), not with Sensor. What's the use case for this refactoring?
>
> The refactoring allows for listeners *outside* the morphic environment to listen to input and window events, supporting alternative UI frameworks like e.g. Miro.

Miro must be an unusual framework then (where is it?). Generally speaking, Squeak UI frameworks all have the need for some sort of top-level desktop (representing Display) which is the natural (and generally only) receiver for events from Sensor. That is true for MVC, Morphic, Tweak etc. The only situation in which I could imagine not having a desktop representative is when it comes to native windows (in which case there is a native desktop that Squeak doesn't control), and here one obviously wants to dispatch events to all the different windows. Although even here I would probably opt for a dispatch table in Sensor mapping window IDs to event queues instead of a straightforward listener.

Cheers,
  - Andreas
2009/2/12 Andreas Raab <[hidden email]>:
> Michael Rueger wrote:
>>
>> Andreas Raab wrote:
>>>
>>> I'm not sure if having multiple listeners is useful at this level - all EventSensor does is pull the events from the VM, package them up and pass them into the event queue for any downstream consumers. If there were multiple listeners in a morphic environment, they should probably be registering with the hand (which has such a listener mechanism already), not with Sensor. What's the use case for this refactoring?
>>
>> The refactoring allows for listeners *outside* the morphic environment to listen to input and window events, supporting alternative UI frameworks like e.g. Miro.
>
> Miro must be an unusual framework then (where is it?). Generally speaking, Squeak UI frameworks all have the need for some sort of top-level desktop (representing Display) which is the natural (and generally only) receiver for events from Sensor. That is true for MVC, Morphic, Tweak etc. The only situation in which I could imagine not having a desktop representative is when it comes to native windows (in which case there is a native desktop that Squeak doesn't control), and here one obviously wants to dispatch events to all the different windows. Although even here I would probably opt for a dispatch table in Sensor mapping window IDs to event queues instead of a straightforward listener.
>

When you have multiple host windows using the HostWindowPlugin, we need to dispatch events to a host window first, and then handle them with whatever framework is bound to that window.

For this, the role of Sensor would primarily be to poll events from the VM and convert them from the raw byte array. Next, it should dispatch each event based on its window id, and then the window should push it further to whatever listeners it has.

The event handling logic could be different from what I described, but anyway, I would love to see that there are no classes in the system, other than EventSensor, which need to deal with the raw event buffer. This should help a lot with code clarity and reducing complexity.

> Cheers,
> - Andreas

--
Best regards,
Igor Stasenko AKA sig.
Igor Stasenko wrote:
> When you have multiple host windows using the HostWindowPlugin, we need to dispatch events to a host window first, and then handle them with whatever framework is bound to that window.
> For this, the role of Sensor would primarily be to poll events from the VM and convert them from the raw byte array. Next, it should dispatch each event based on its window id, and then the window should push it further to whatever listeners it has.

Right. That's what I meant by having a dispatch table that maps window IDs to event queues. In which case EventSensor would push the event into the queue that corresponds to the index, and whatever wants to handle events for a window just pulls it out of the queue and processes it.

> The event handling logic could be different from what I described, but anyway, I would love to see that there are no classes in the system, other than EventSensor, which need to deal with the raw event buffer. This should help a lot with code clarity and reducing complexity.

I thought that too, but it turns out in practice it doesn't work unless you have one and only one UI framework. There is always custom information that is useful to pass along with the event, plus there are differences in how different frameworks want to model event hierarchies etc. I find the raw event buffer the most useful entity to pass along because you get all the information the VM had at that point and you can derive whatever state is relevant in your application. In fact, we've had many difficulties with the intermediate layers (Morphic in particular) making the original data inaccessible and had several problems because of it (for example button-swizzling etc). I think the end-to-end principle also holds in UI frameworks ;-)

Cheers,
  - Andreas
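P.S. To sketch what I mean (nothing of this exists in the image; the selector names, the windowQueues Dictionary and the assumption that the host window id sits in the last slot of the buffer are all made up for illustration):

  registerQueue: aSharedQueue forWindow: windowId
      windowQueues at: windowId put: aSharedQueue

  dispatchEvent: evtBuf
      "Push the raw buffer into the queue registered for its window;
      drop it if nobody is listening to that window."
      | queue |
      queue := windowQueues at: (evtBuf at: 8) ifAbsent: [^self].
      queue nextPut: evtBuf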
2009/2/12 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>>
>> When you have multiple host windows using the HostWindowPlugin, we need to dispatch events to a host window first, and then handle them with whatever framework is bound to that window.
>> For this, the role of Sensor would primarily be to poll events from the VM and convert them from the raw byte array. Next, it should dispatch each event based on its window id, and then the window should push it further to whatever listeners it has.
>
> Right. That's what I meant by having a dispatch table that maps window IDs to event queues. In which case EventSensor would push the event into the queue that corresponds to the index, and whatever wants to handle events for a window just pulls it out of the queue and processes it.
>
>> The event handling logic could be different from what I described, but anyway, I would love to see that there are no classes in the system, other than EventSensor, which need to deal with the raw event buffer. This should help a lot with code clarity and reducing complexity.
>
> I thought that too, but it turns out in practice it doesn't work unless you have one and only one UI framework. There is always custom information that is useful to pass along with the event, plus there are differences in how different frameworks want to model event hierarchies etc. I find the raw event buffer the most useful entity to pass along because you get all the information the VM had at that point and you can derive whatever state is relevant in your application. In fact, we've had many difficulties with the intermediate layers (Morphic in particular) making the original data inaccessible and had several problems because of it (for example button-swizzling etc). I think the end-to-end principle also holds in UI frameworks ;-)
>

Well, a mouse event is a mouse event, you can't treat it differently. :)

This is why I'm thinking that even wrapping the event buffer in a specialized event subclass could be helpful. I.e.

  event := KernelMouseEvent from: eventBuffer.

so that you can then write:

  event isMouseEvent

instead of:

  (eventBuffer at: 1) = EventTypeMouse

The idea is to promote the most basic event types from Morphic-Events to Kernel-Events. Then another framework could reuse them as a base, without poking into the raw event buffer.

It is also useful in that, if we need to alter the event handling logic in a future VM, we will have far fewer places in the image to visit to adopt those changes.

A custom framework could always adapt these events for its own needs, by encapsulating them with its own event types/whatever.

I just think that events deserve to be encapsulated into well-defined objects at the kernel level. The current state looks as if you had socket primitives but no Socket class that lets you work with them in a meaningful way, so each user of sockets would need to deal with the primitives directly and write its own data handling logic :)

> Cheers,
> - Andreas

--
Best regards,
Igor Stasenko AKA sig.
Igor Stasenko wrote:
> Well, a mouse event is a mouse event, you can't treat it differently. :)

Oh, but you can! Just look at #swapMouseButtons. More interesting along these lines is that there are synthesized events (mouse enter/leave for example) that all UI frameworks have but that aren't part of the "kernel" event types. So in order to make this fit "your framework" you'd have to munge the kernel types. That's why I'm saying that unless you are insisting on a single framework there will always be differences in how the event structures are modeled, and a one-size-fits-all approach isn't very helpful.

> This is why I'm thinking that even wrapping the event buffer in a specialized event subclass could be helpful. I.e.
>
>   event := KernelMouseEvent from: eventBuffer.
>
> so that you can then write:
>
>   event isMouseEvent
>
> instead of:
>
>   (eventBuffer at: 1) = EventTypeMouse
>
> The idea is to promote the most basic event types from Morphic-Events to Kernel-Events. Then another framework could reuse them as a base, without poking into the raw event buffer.

Understood. Yes, this could be helpful for some purposes. The reason why I don't like it is that since a UI framework will require its own events to begin with (see above), why clutter the kernel with artificial distinctions about event types that the kernel really doesn't care about? What good would it do Morphic or MVC or Tweak if you were to add these events? Given the tiny interface between the sensor and its clients[*], it seems like a complication with very little benefit to me.

[*] Never mind all the abuses of Sensor.

> It is also useful in that, if we need to alter the event handling logic in a future VM, we will have far fewer places in the image to visit to adopt those changes.

That's a much better argument and one of the few I'd be willing to accept from what has been said so far. The other one is actually documentation - it can be quite difficult to wade through tons of constants trying to figure out which bit corresponds to what field.

> A custom framework could always adapt these events for its own needs, by encapsulating them with its own event types/whatever.
>
> I just think that events deserve to be encapsulated into well-defined objects at the kernel level. The current state looks as if you had socket primitives but no Socket class that lets you work with them in a meaningful way, so each user of sockets would need to deal with the primitives directly and write its own data handling logic :)

This on the other hand is not a very good argument since there is only a single user of the socket primitives (Socket) and there aren't multiple competing types of Sockets. Following this logic you might as well create Morphic events right in Sensor.

Cheers,
  - Andreas
On Fri, Feb 13, 2009 at 9:30 AM, Andreas Raab <[hidden email]> wrote:
If I had written EventSensor, this is how I would have done it, because it makes code a lot more readable. I would make this event wrapper class support only the actual events coming from the VM; it is still the responsibility of the UI framework to synthesize the extra events.

The reason I asked my original question is that I'm working on Subcanvas (http://gulik.pbwiki.com/Canvas), which is a graphics and events API; it is a high-tech replacement for the Canvas class.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
In reply to this post by Andreas.Raab
On Thu, Feb 12, 2009 at 10:20 PM, Andreas Raab <[hidden email]> wrote:
Another question: using the new event-based primitives, does the VM, underlying window system or OS also queue events on the platform side? I.e., if the EventSensor class doesn't pick up a new event fast enough and queue it, will that event be lost?

Does this behaviour differ between platforms? Are there any platforms where this could be a problem?

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/
In reply to this post by Andreas.Raab
2009/2/12 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>>
>> Well, a mouse event is a mouse event, you can't treat it differently. :)
>
> Oh, but you can! Just look at #swapMouseButtons. More interesting along these lines is that there are synthesized events (mouse enter/leave for example) that all UI frameworks have but that aren't part of the "kernel" event types. So in order to make this fit "your framework" you'd have to munge the kernel types. That's why I'm saying that unless you are insisting on a single framework there will always be differences in how the event structures are modeled, and a one-size-fits-all approach isn't very helpful.

A framework is free to interpret events as it likes to; I don't see a conflict here. Imagine two host windows, each running its own UI framework. One swaps buttons, the other doesn't. Both frameworks receive events from a single source (the event sensor), and it's not the event sensor's job to interpret events. Its job is to represent them with nice objects which have a nice protocol, so these frameworks work with events instead of trying to decipher the raw events originating from the VM.

>> This is why I'm thinking that even wrapping the event buffer in a specialized event subclass could be helpful. I.e.
>>
>>   event := KernelMouseEvent from: eventBuffer.
>>
>> so that you can then write:
>>
>>   event isMouseEvent
>>
>> instead of:
>>
>>   (eventBuffer at: 1) = EventTypeMouse
>>
>> The idea is to promote the most basic event types from Morphic-Events to Kernel-Events. Then another framework could reuse them as a base, without poking into the raw event buffer.
>
> Understood. Yes, this could be helpful for some purposes. The reason why I don't like it is that since a UI framework will require its own events to begin with (see above), why clutter the kernel with artificial distinctions about event types that the kernel really doesn't care about? What good would it do Morphic or MVC or Tweak if you were to add these events? Given the tiny interface between the sensor and its clients[*], it seems like a complication with very little benefit to me.
>

Let me describe a little how I see it. An event sensor has little logic in itself: it simply stands as an interface which connects the low-level VM with the language side. Once the event sensor receives an event in a raw buffer, it should encode it into an instance of the appropriate event class.

  /* from sq.h */
  #define EventTypeNone 0
  #define EventTypeMouse 1
  #define EventTypeKeyboard 2
  #define EventTypeDragDropFiles 3
  #define EventTypeMenu 4
  #define EventTypeWindow 5

  EventTypeClasses := #(NotAnEvent MouseEvent KeyboardEvent DragDropEvent MenuEvent WindowEvent).

  event := (EventTypeClasses at: (eventBuffer at: 1)) from: eventBuffer.

Now, an event consumer could do:

  event handleWith: handler.

and the handler receives a nice #handleMouseEvent:, #handleMenuEvent: etc. No need to test the event type anymore, anywhere.

See how easily this thing can be extended:

  #define EventTypeMyUberEvent 6

then change EventTypeClasses, then add a #handleAppropriateEventType: in your framework. Done.

> [*] Never mind all the abuses of Sensor.
>
>> It is also useful in that, if we need to alter the event handling logic in a future VM, we will have far fewer places in the image to visit to adopt those changes.
>
> That's a much better argument and one of the few I'd be willing to accept from what has been said so far. The other one is actually documentation - it can be quite difficult to wade through tons of constants trying to figure out which bit corresponds to what field.
>
>> A custom framework could always adapt these events for its own needs, by encapsulating them with its own event types/whatever.
>>
>> I just think that events deserve to be encapsulated into well-defined objects at the kernel level. The current state looks as if you had socket primitives but no Socket class that lets you work with them in a meaningful way, so each user of sockets would need to deal with the primitives directly and write its own data handling logic :)
>
> This on the other hand is not a very good argument since there is only a single user of the socket primitives (Socket) and there aren't multiple competing types of Sockets. Following this logic you might as well create Morphic events right in Sensor.
>

I think it is good practice to introduce such abstractions early, to hide unnecessary implementation complexity details from the end user.

> Cheers,
> - Andreas

--
Best regards,
Igor Stasenko AKA sig.
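P.S. To make the dispatch above concrete (none of these classes exist yet - this is only a sketch of the proposal, with #typeTable and #setBuffer: as placeholders; note also that with the zero-based VM type codes the lookup needs a + 1 against a 1-based Smalltalk Array):

  KernelEvent class >> from: eventBuffer
      "Pick the concrete subclass from the type code in slot 1;
      + 1 maps the zero-based VM codes onto the 1-based class table."
      | eventClass |
      eventClass := self typeTable at: (eventBuffer at: 1) + 1.
      ^eventClass new setBuffer: eventBuffer

  KernelMouseEvent >> handleWith: aHandler
      ^aHandler handleMouseEvent: self

  KernelKeyboardEvent >> handleWith: aHandler
      ^aHandler handleKeyboardEvent: self

Each concrete event class answers #handleWith: by sending the matching message back to the handler, so a framework only implements the #handleXXXEvent: methods it cares about.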
In reply to this post by Michael van der Gulik-2
2009/2/12 Michael van der Gulik <[hidden email]>:
> On Thu, Feb 12, 2009 at 10:20 PM, Andreas Raab <[hidden email]> wrote:
>>
>> Michael van der Gulik wrote:
>>
>>> Looking at the implementation of EventSensor and InputSensor, what is the actual history there? In terms of primitives, EventSensor (the "new" class) seems to be more consistent with chapter 29 of the blue book, while InputSensor (the "old" class) uses undocumented primitives. How did it end up this way?
>>
>> InputSensor was first. It used state-based primitives (e.g., primMouseButtons) and was used for a very long time. However, because of its design, it resulted in events being lost (for example, when a mouse down-up transition happens between two polls of the input sensor state), so I added the event-based primitives. I have never cared much for the blue book, so if this is more consistent with chapter 29 it's purely by accident (or perhaps because that's the more sensible way of doing things ;-)
>
> Another question: using the new event-based primitives, does the VM, underlying window system or OS also queue events on the platform side? I.e., if the EventSensor class doesn't pick up a new event fast enough and queue it, will that event be lost?

The VM keeps its own event queue, so events are not lost before they are polled by the event sensor.

> Does this behaviour differ between platforms? Are there any platforms where this could be a problem?

Since I dealt mostly with the win32 VM, I can assure you that it is using its own event queue. I can't see how other VMs can avoid that; otherwise the same image on different platforms would behave very differently.

> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/

--
Best regards,
Igor Stasenko AKA sig.
In reply to this post by Andreas.Raab
2009/2/12 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>>
>> The inputSemaphore is not signaled by the VM (win32), but is instead used to check how the image polls its events.
>
> Oh. You are right. How pathetic is that! I should fix that ASAP.
>

Btw, Andreas: currently, most images poll events by themselves and never wait on the input semaphore.

With the new rewrite, there are 2 InputSensor classes with different #waitForInput method behavior:

  1. waitForInput
         inputSemaphore wait.

  2. waitForInput
         self class eventPollDelay wait.

Now, if you fix the signaling of the input semaphore, we need some way to determine which method of event polling is most appropriate - to install the best event polling mechanism. Any ideas how this can be determined on the image side?

Maybe you can patch

  primSetInputSemaphore: semaIndex
      "Set the input semaphore the VM should use for asynchronously
      signaling the availability of events. Primitive. Optional."
      <primitive: 93>

to return true if the VM guarantees that it will signal the input semaphore when a new event occurs. Any other return value would mean that the image should keep using the old manual event polling mechanism and not rely on the input semaphore.

>> Btw, there is a refactoring of this stuff in Pharo; I'm not sure if it is already incorporated in the image. It was written with the idea in mind that there can be multiple event listeners at once, not just the hand morph.
>
> I'm not sure if having multiple listeners is useful at this level - all EventSensor does is pull the events from the VM, package them up and pass them into the event queue for any downstream consumers. If there were multiple listeners in a morphic environment, they should probably be registering with the hand (which has such a listener mechanism already), not with Sensor. What's the use case for this refactoring?
>
> Cheers,
> - Andreas

--
Best regards,
Igor Stasenko AKA sig.
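P.S. For illustration, the image-side check could then be as simple as this - purely hypothetical, since it assumes the primitive is changed as proposed above; the two class names are placeholders for the sensor variants, and inputSemaphoreIndex is assumed to hold the registered external-object index:

  chooseSensorClass
      "Answer the sensor class to install, depending on whether the VM
      promises to signal the input semaphore."
      | vmSignals |
      vmSignals := (self primSetInputSemaphore: inputSemaphoreIndex) == true.
      ^vmSignals
          ifTrue: [SemaphoreWaitingSensor]
          ifFalse: [PollingSensor]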
Sounds like a reasonable way to migrate...
Regards,

Gary

----- Original Message -----
From: "Igor Stasenko" <[hidden email]>
To: "The general-purpose Squeak developers list" <[hidden email]>
Sent: Friday, February 13, 2009 11:57 PM
Subject: Re: [squeak-dev] Re: EventSensor questions

> 2009/2/12 Andreas Raab <[hidden email]>:
>> Igor Stasenko wrote:
>>>
>>> The inputSemaphore is not signaled by the VM (win32), but is instead used to check how the image polls its events.
>>
>> Oh. You are right. How pathetic is that! I should fix that ASAP.
>>
>
> Btw, Andreas: currently, most images poll events by themselves and never wait on the input semaphore.
>
> With the new rewrite, there are 2 InputSensor classes with different #waitForInput method behavior:
>
>   1. waitForInput
>          inputSemaphore wait.
>
>   2. waitForInput
>          self class eventPollDelay wait.
>
> Now, if you fix the signaling of the input semaphore, we need some way to determine which method of event polling is most appropriate - to install the best event polling mechanism. Any ideas how this can be determined on the image side?
>
> Maybe you can patch
>
>   primSetInputSemaphore: semaIndex
>       "Set the input semaphore the VM should use for asynchronously
>       signaling the availability of events. Primitive. Optional."
>       <primitive: 93>
>
> to return true if the VM guarantees that it will signal the input semaphore when a new event occurs. Any other return value would mean that the image should keep using the old manual event polling mechanism and not rely on the input semaphore.
>
>>> Btw, there is a refactoring of this stuff in Pharo; I'm not sure if it is already incorporated in the image. It was written with the idea in mind that there can be multiple event listeners at once, not just the hand morph.
>>
>> I'm not sure if having multiple listeners is useful at this level - all EventSensor does is pull the events from the VM, package them up and pass them into the event queue for any downstream consumers. If there were multiple listeners in a morphic environment, they should probably be registering with the hand (which has such a listener mechanism already), not with Sensor. What's the use case for this refactoring?
>>
>> Cheers,
>> - Andreas
>
> --
> Best regards,
> Igor Stasenko AKA sig.
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
> Btw, Andreas: currently, most images poll events by themselves and never wait on the input semaphore.
>
> With the new rewrite, there are 2 InputSensor classes with different #waitForInput method behavior:

Seriously? Even more InputSensor classes? (*rolling his eyes*)

> Now, if you fix the signaling of the input semaphore, we need some way to determine which method of event polling is most appropriate - to install the best event polling mechanism. Any ideas how this can be determined on the image side?

Easy. Run the poller until the first event comes in. Check whether the semaphore is signaled. Switch to non-polling if so. Alternatively, use #waitTimeoutMSecs: - it means you're still occasionally polling, but that is probably acceptable.

> Any other return value would mean that the image should keep using the old manual event polling mechanism and not rely on the input semaphore.

How much backwards compatibility do you need? Wouldn't it be easier to fix it and then switch?

Cheers,
  - Andreas
2009/2/14 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>>
>> Btw, Andreas: currently, most images poll events by themselves and never wait on the input semaphore.
>>
>> With the new rewrite, there are 2 InputSensor classes with different #waitForInput method behavior:
>
> Seriously? Even more InputSensor classes? (*rolling his eyes*)
>

Yes: the base class uses the inputSemaphore, and the subclass uses polling. The difference is in only a few methods.

>> Now, if you fix the signaling of the input semaphore, we need some way to determine which method of event polling is most appropriate - to install the best event polling mechanism. Any ideas how this can be determined on the image side?
>
> Easy. Run the poller until the first event comes in. Check whether the semaphore is signaled. Switch to non-polling if so. Alternatively, use #waitTimeoutMSecs: - it means you're still occasionally polling, but that is probably acceptable.
>

Sure, it is easy. I just feel uncomfortable when we need to introduce more complex logic in places where it's not necessary. That's why I made two subclasses, by factoring out the #waitForInput logic, to avoid putting too many ifs into a single method. Again, when in the future we no longer need to care about backward compatibility, you'll need only a minimal change - simply stop using the subclass, instead of analyzing complex code to remove obsolete cruft.

>> Any other return value would mean that the image should keep using the old manual event polling mechanism and not rely on the input semaphore.
>
> How much backwards compatibility do you need? Wouldn't it be easier to fix it and then switch?
>

I want it to be backward compatible, but I prefer a clear way to determine the VM's behavior instead of empirically guessing it (as with checking whether the semaphore is signaled).

> Cheers,
> - Andreas

--
Best regards,
Igor Stasenko AKA sig.
Igor Stasenko wrote:
> 2009/2/14 Andreas Raab <[hidden email]>:
>> Igor Stasenko wrote:
>>> Btw, Andreas: currently, most images poll events by themselves and never wait on the input semaphore.
>>>
>>> With the new rewrite, there are 2 InputSensor classes with different #waitForInput method behavior:
>> Seriously? Even more InputSensor classes? (*rolling his eyes*)
>>
> Yes: the base class uses the inputSemaphore, and the subclass uses polling. The difference is in only a few methods.

Yeah. I just don't like the whole subsystem all that much, and having more InputSensor subclasses looks to me as if it may be making things worse. BTW, where is that code?

>>> Now, if you fix the signaling of the input semaphore, we need some way to determine which method of event polling is most appropriate - to install the best event polling mechanism. Any ideas how this can be determined on the image side?
>> Easy. Run the poller until the first event comes in. Check whether the semaphore is signaled. Switch to non-polling if so. Alternatively, use #waitTimeoutMSecs: - it means you're still occasionally polling, but that is probably acceptable.
>>
> Sure, it is easy. I just feel uncomfortable when we need to introduce more complex logic in places where it's not necessary. That's why I made two subclasses, by factoring out the #waitForInput logic, to avoid putting too many ifs into a single method. Again, when in the future we no longer need to care about backward compatibility, you'll need only a minimal change - simply stop using the subclass, instead of analyzing complex code to remove obsolete cruft.

Yes. Although it seems that you don't get to avoid whatever complexity is required to decide which version to use. This is partly why I'm thinking it may actually be simpler to have this in the same place, along the lines of:

  processEvents
      self waitForInputPolling.
      haveSema := inputSemaphore isSignaled.
      [true] whileTrue: [
          haveSema
              ifTrue: [inputSemaphore wait]
              ifFalse: [self waitForInputPolling].
          self processNextEvent].

Really not much complexity here.

>>> Any other return value would mean that the image should keep using the old manual event polling mechanism and not rely on the input semaphore.
>> How much backwards compatibility do you need? Wouldn't it be easier to fix it and then switch?
>>
> I want it to be backward compatible, but I prefer a clear way to determine the VM's behavior instead of empirically guessing it (as with checking whether the semaphore is signaled).

What could be clearer than actually checking? ;-) Seriously, I find this the most straightforward way to figure out whether we can rely on the VM to signal the semaphore or not, if you really want to have that level of backwards compatibility. Alternatively, just require VMs that signal the damn semaphore! ;-)

Cheers,
  - Andreas
2009/2/15 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>>
>> 2009/2/14 Andreas Raab <[hidden email]>:
>>>
>>> Igor Stasenko wrote:
>>>>
>>>> Btw, Andreas: currently, most images poll events by themselves and never wait on the input semaphore.
>>>>
>>>> With the new rewrite, there are 2 InputSensor classes with different #waitForInput method behavior:
>>>
>>> Seriously? Even more InputSensor classes? (*rolling his eyes*)
>>>
>> Yes: the base class uses the inputSemaphore, and the subclass uses polling. The difference is in only a few methods.
>
> Yeah. I just don't like the whole subsystem all that much, and having more InputSensor subclasses looks to me as if it may be making things worse. BTW, where is that code?
>

AFAIK it wasn't published anywhere; try searching the Pharo list archives, it should be in the attachments. The refactoring was targeted at supporting multiple event handlers instead of just a single Sensor.

Currently I am developing a new model, based on this but a little different:
- initially there will be a single event polling loop running, governed by an EventListener class
- the EventListener installs the polling loop and fetches events from the VM
- once received, the event buffer is converted to an instance of the corresponding KernelXXXEvent and then passed to the event handler (kept as an inst var)

The EventListener is intentionally made as simple as possible, to minimise the probability of changing it unless something changes in the VM. Its name speaks for itself: it listens for events from some source, and in our case the source is the VM. You are free to set the listener's handler to anything you like, just make sure it responds to the #handleEvent: message.

Now, the most fun begins with the implementation of the handler. First, I want to implement a handler which behaves similarly to what the current Sensor does. Then, when the multiple-host-windows machinery is ready, I will replace it with a dispatching handler, which dispatches events based on window id.

So, instead of the chain:

  VM -> listener -> Sensor -> hand

we will have something like:

  VM -> listener -> window -> Sensor -> hand

Of course, such a replacement is not possible until we get rid of the Display/World/Sensor globals in the most critical places, to make at least most functionality work for multiple host windows.

>>>> Now, if you fix the signaling of the input semaphore, we need some way to determine which method of event polling is most appropriate - to install the best event polling mechanism. Any ideas how this can be determined on the image side?
>>>
>>> Easy. Run the poller until the first event comes in. Check whether the semaphore is signaled. Switch to non-polling if so. Alternatively, use #waitTimeoutMSecs: - it means you're still occasionally polling, but that is probably acceptable.
>>>
>> Sure, it is easy. I just feel uncomfortable when we need to introduce more complex logic in places where it's not necessary. That's why I made two subclasses, by factoring out the #waitForInput logic, to avoid putting too many ifs into a single method. Again, when in the future we no longer need to care about backward compatibility, you'll need only a minimal change - simply stop using the subclass, instead of analyzing complex code to remove obsolete cruft.
>
> Yes. Although it seems that you don't get to avoid whatever complexity is required to decide which version to use. This is partly why I'm thinking it may actually be simpler to have this in the same place, along the lines of:
>
>   processEvents
>       self waitForInputPolling.
>       haveSema := inputSemaphore isSignaled.
>       [true] whileTrue: [
>           haveSema
>               ifTrue: [inputSemaphore wait]
>               ifFalse: [self waitForInputPolling].
>           self processNextEvent].
>
> Really not much complexity here.

I agree.. you convinced me to put everything into a single method :) Let's stop wasting our breath on this subtle detail :)

>>>> Any other return value would mean that the image should keep using the old manual event polling mechanism and not rely on the input semaphore.
>>>
>>> How much backwards compatibility do you need? Wouldn't it be easier to fix it and then switch?
>>>
>> I want it to be backward compatible, but I prefer a clear way to determine the VM's behavior instead of empirically guessing it (as with checking whether the semaphore is signaled).
>
> What could be clearer than actually checking? ;-) Seriously, I find this the most straightforward way to figure out whether we can rely on the VM to signal the semaphore or not, if you really want to have that level of backwards compatibility. Alternatively, just require VMs that signal the damn semaphore! ;-)
>
> Cheers,
> - Andreas

--
Best regards,
Igor Stasenko AKA sig.
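P.S. To give an impression of the shape, the listener's loop would look roughly like this - a rough sketch, not the actual code; #waitForInput, #nextEventBuffer, running and handler are provisional names:

  EventListener >> runLoop
      "Fetch raw buffers from the VM, wrap them as kernel events and
      hand them to whatever object was installed as the handler."
      | evtBuf |
      [running] whileTrue: [
          self waitForInput.
          [(evtBuf := self nextEventBuffer) isNil] whileFalse: [
              handler handleEvent: (KernelEvent from: evtBuf)]]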