I think Juan's point may have been missed, but shows up in his change notes:
- Automatic hourglass mouse pointer when Morphic is busy
- Removed about 80 calls like 'Cursor wait showWhile: []', as they are
  no longer needed!

$0.02
-KenD
--
Ken [dot] Dickey [at] whidbey [dot] com
On 25 June 2013 16:41, Ken Dickey <[hidden email]> wrote:
> I think Juan's point may have been missed, but shows up in his change notes:
>
> - Automatic hourglass mouse pointer when Morphic is busy
> - Removed about 80 calls like 'Cursor wait showWhile: []', as they are
>   no longer needed!

I hadn't missed this particular point. But then, the #showWhile: calls
usually show _kinds_ of busy-ness: reading, writing, etc.

frank
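For context, the idiom under discussion looks like this in a workspace: #showWhile: installs a cursor for the duration of a block and restores the previous one afterwards, and the different cursor constants are how the "kinds of busy-ness" get shown. The block bodies below are hypothetical placeholders, not real selectors:

```smalltalk
"Show a kind-specific cursor while a block runs; the cursor
reverts automatically when the block returns."
Cursor read showWhile: [self loadFromDisk].     "reading"
Cursor write showWhile: [self saveToDisk].      "writing"
Cursor wait showWhile: [self recompileAll].     "generic busy"
```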
That's another point. I remember that when using ST80 the cursor blinked rapidly between the read/write/hourglass/normal states, and that particular feedback did not add much value.

2013/6/25 Frank Shearar <[hidden email]>
That's a somewhat self-contradicting statement -- because:
"the cursor was blinking fast between these states read/write/hour glass/normal"

is chock full of information about the running program. First and foremost, one would know the system is not locked up. Indicating reading vs. writing vs. processing is very useful with applications that involve a lot of I/O, such as database or network applications. By relating input actions to cursor status, the system can impart a lot of information about what it's doing.

By contrast, the Cuis approach of indicating busy on a timer confirms to the waiting user something they already know -- that they're waiting. It tells them that the built-in timer to switch the cursor is working, but nothing else about what the system is or isn't doing.

This should not be taken as a criticism of Cuis itself. There's a lot to like about Cuis.

On Tue, Jun 25, 2013 at 10:55 AM, Nicolas Cellier <[hidden email]> wrote:
> That's another point, I remember when using ST80, the cursor was blinking
> fast between these states read/write/hour glass/normal, and this particular
> feedback did not add much value.
Ah yes, maybe it's some sort of useful warning: "beware of epileptic seizure" or something like that...

2013/6/25 Chris Muller <[hidden email]>
> That's a somewhat self contradicting statement -- because:
I can see how that could be annoying, and I've definitely measured a
performance difference in Magma apps when cursor flipping is on.

At the same time, the amount of information conveyed by a sensitive cursor is unequivocally useful, at times. I think we'd need a user preference to adjust the "sensitivity" of the Cursor to application signals. It could be specified in milliseconds; 0 for maximum sensitivity (all signals), or 1000 to wait 1 second before changing the cursor.

On Tue, Jun 25, 2013 at 2:58 PM, Nicolas Cellier <[hidden email]> wrote:
> Ah yes, maybe it's some sort of useful warning: "beware of epileptic crisis"
> or something like that...
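The millisecond "sensitivity" preference proposed above could be sketched roughly as a delayed cursor switch. This is a hypothetical workspace sketch, not existing Squeak code; `runLongOperation` is a placeholder, while `Delay`, `fork`, `ensure:` and the `Cursor` protocol are standard:

```smalltalk
"Hypothetical sketch: only install the busy cursor if the operation
outlasts a configurable threshold. sensitivityMs = 0 would mean change
immediately; 1000 would mean wait a second before showing busy."
| sensitivityMs done |
sensitivityMs := 300.
done := false.
"Arm a background process that installs the busy cursor
 only after the threshold has elapsed."
[(Delay forMilliseconds: sensitivityMs) wait.
 done ifFalse: [Cursor wait show]] fork.
[self runLongOperation]
	ensure: [done := true. Cursor normal show]
```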
On 06/25/2013 03:04 PM, Chris Muller wrote:
> By relating input actions to cursor status, the system can impart a
> lot of information about what it's doing.

I think one important point worth stressing is that there's only one Cursor - it's a global ambient resource.

Concurrency is much more common these days than it was in the time of ST80, so we should be moving toward being able to report on multiple ongoing activities at once. Cursor doesn't cut it here at all.

Seen from this perspective, the only remaining use for a real busy cursor is exactly the "beachball" case, similar to what Juan has implemented for Cuis. Travis and Vassili's remarks [1] are dead on.

Cheers,
  Tony

[1] http://www.cincomsmalltalk.com/userblogs/travis/blogView?showComments=true&printTitle=Cursor_consider_showWhile:_[Harmful]&entry=3432339015
On 26-06-2013, at 9:31 AM, Tony Garnock-Jones <[hidden email]> wrote:
> I think one important point worth stressing is that there's only one Cursor - it's a global ambient resource.
>
> Concurrency is much more common these days than it was in the time of ST80, so we should be moving toward being able to report on multiple ongoing activities at once. Cursor doesn't cut it here at all.

I agree here. Feedback ought to be via some affordance within the application doing the work; hence my suggestions a while back about perhaps making buttons act as an 'in-place' progress bar. Apple does a moderately OK but not really adequate job of application feedback in a number of places, with the Safari URL/progress bar and the Finder whirly thing - though they ought to be much more prominent.

From the depths of the application/system, we should use something akin to exceptions (probably starting with actual Exceptions would be smart) that can be raised without having to know any details about how the information will be used. The UI or other front end would handle the notification as it chooses (including optionally whacking the cursor) and carry on, calmly.

Another thing to consider is whether to, and how to, handle having a progress UI widget with a 'cancel/quit' button to abort too-long-running operations. I can see how using the bones of the Exception resume mechanism might provide some help here, though there is potential for a lot of tidy-up work when cancelling a partially complete job. There are obvious tie-ins with handling errors.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Any Sufficiently Advanced Incompetence Is Indistinguishable From Malice
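The exception-based feedback idea above maps naturally onto Smalltalk's resumable Notification. A minimal sketch of the pattern follows; `WorkProgress`, `fraction:`, `processStep:` and `showFraction:` are hypothetical names, while `Notification`, `signal`, `on:do:` and `resume` are the standard protocol:

```smalltalk
"Deep code signals progress without knowing who listens;
a resumable Notification carries the information upward."
Notification subclass: #WorkProgress
	instanceVariableNames: 'fraction'
	classVariableNames: ''
	category: 'Sketch-Progress'.

"In the worker: raise a notification at each step and carry on.
If nobody handles it, signalling a Notification is a no-op."
doWork
	1 to: 100 do: [:i |
		(WorkProgress new fraction: i / 100) signal.
		self processStep: i]

"In the UI: handle the notification as it chooses, then resume
the worker calmly. Cancellation could refuse to resume instead."
[model doWork]
	on: WorkProgress
	do: [:note | statusBar showFraction: note fraction. note resume]
```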
Hi,
WorldState>>activeHand and friends still exist. I never made use of that, but I assumed every hand is / has a separate cursor?

Cheers,
Herbert

Am 26.06.2013 18:31, schrieb Tony Garnock-Jones:
> I think one important point worth stressing is that there's only one
> Cursor - it's a global ambient resource.
On 06/26/2013 01:04 PM, Herbert König wrote:
> WorldState>>activeHand and friends still exist. Never made use of that
> but I assumed every hand is / has a separate cursor?

:-) Swedish has a wonderful word: "drygt", which means "just ever so slightly more than". So we could say that there's *drygt* one Cursor! Perhaps 1.013.

Tony
On 26.06.2013, at 10:04, Herbert König <[hidden email]> wrote:
> Hi,
>
> WorldState>>activeHand and friends still exist. Never made use of that but I assumed every hand is / has a separate cursor?

Indeed. The primary hand is mapped to the system pointer ("hardware cursor") and every other hand is drawn by Morphic.

My multi-touch implementation for the iPad maps each finger to a separate hand. This lets you drag around multiple morphs at the same time without any special coding effort; it just works.

Also, a long time ago (around Squeak 2.8) I published a neat Linux hack where every mouse you plugged into the system would show up as a separate hand.

The original use case was within Nebraska though, where every remote participant got their own hand (see RemoteHandMorph).

There is also the event recorder and playback, which uses a secondary hand.

- Bert -
I think it was Telemorphic, predating Nebraska.

Cheers,
Bob

On 6/26/13 1:33 PM, Bert Freudenberg wrote:
> The original use case was within Nebraska though, where every remote participant got their own hand (see RemoteHandMorph).
On 06/26/2013 01:33 PM, Bert Freudenberg wrote:
> My multi-touch implementation for the iPad maps each finger to a separate hand.

Perfect. Could your implementation be adapted to e.g. CogDroid, do you think? (I have a new Android tablet specifically for such experimentation.)

Tony
On 26.06.2013, at 10:45, Tony Garnock-Jones <[hidden email]> wrote:
> Perfect. Could your implementation be adapted to e.g. CogDroid, do you think? (I have a new Android tablet specifically for such experimentation.)

Certainly, you just need to somehow get the events from the VM into the image. If CogDroid could use the same event format then the image code should just work. The current format is based on Apple's, so I'm not sure how simple the mapping would be.

If it's not easy, then now might be a good time to create a generic format. All platforms (except Mac OS?) now have support for multi-touch, so making it general would be nice.

- Bert -
On 06/26/2013 02:04 PM, Bert Freudenberg wrote:
> The current format is based on Apple's, so I'm not sure how simple the mapping would be. If it's not easy, then now might be a good time to create a generic format.

I haven't looked too far into the Android NDK's event formats, but from what I can gather they are closely based on the underlying Linux /dev/input events.

If it were possible on Unix machines to read from /dev/input/* without blocking when there is no pending data, we'd be more-or-less at the point where we could avoid VM modifications entirely for Android multi-touch.

Tony
On 26.06.2013, at 11:38, Tony Garnock-Jones <[hidden email]> wrote:
> If it were possible on Unix machines to read from /dev/input/* without blocking when there is no pending data, we'd be more-or-less at the point where we could avoid VM modifications entirely for Android multitouch.

But it's the VM's job to translate platform-dependent things into generic resources for the image's consumption. It presents a virtual environment for the image to exist in, which should be as independent from the actual machine it's running on as possible.

- Bert -
Am 26.06.2013 um 19:33 schrieb Bert Freudenberg <[hidden email]>:
> The primary hand is mapped to the system pointer ("hardware cursor") and every other hand is drawn by Morphic.
>
> My multi-touch implementation for the iPad maps each finger to a separate hand. This lets you drag around multiple morphs at the same time without any special coding effort, it just works.

In 2009, we had a modified SqueakVM that mapped each USB HID mouse to a separate Hand, and also had separate hands drawn for RFB (aka VNC) connections. We then had a Squeak session with more than seven (7) different hands.

Best,
-Tobias
Mac OS X supports multi-touch (i.e. MacBook touch pads and the Apple Magic Mouse).
On 26.06.2013, at 13:53, Darius Clarke <[hidden email]> wrote:
> Mac OS X supports multi-touch (i.e. MacBook touch pads and the Apple Magic Mouse)

Nah, you cannot independently access the finger positions. We're really talking about touchscreen support.

- Bert -
Isn't this what you're talking about?
Touch Events Represent Fingers on the TrackpadInstead of handling a gesture, you could choose to track and handle the “raw” touches that make up the gesture. But why might you make such a choice? One obvious reason is that OS X does not recognize the particular gesture you are interested in—that is, something other than magnify (pinch in and out), rotate, or swipe. Or you want your view to respond to a system-supported gesture, but you want more information about the gesture than the AppKit framework currently provides; for example, you would like to have anchor points for a zooming operation. Unless you have reasons such as these, you should prefer gestures to raw touch events. The following sections discuss the multi-touch sequence that delimits a touch event in an abstract sense, point out important touch-event attributes, and show you how to handle touch events. A Multi-Touch SequenceWhen a user touches a trackpad with one or more fingers and moves those fingers over the trackpad, the hardware generates low-level events that represent each of those fingers on the trackpad. The stream of events, as with all type of events, is continuous. However, there is a logical unit of touches that together, represent a multi-touch sequence. A multi-touch sequence begins when the user puts one or more fingers on the trackpad. The finger can move in various directions over the trackpad, and additional fingers may touch the trackpad. The multi-touch sequence doesn’t end until all of those fingers are lifted from the trackpad. Within a multi-touch sequence, a finger on the trackpad typically goes through distinct phases:
The AppKit framework uses objects of the On Wed, Jun 26, 2013 at 2:08 PM, Bert Freudenberg <[hidden email]> wrote:
|