[Q] RecursionLock tricky question


[Q] RecursionLock tricky question

Ladislav Lenart
Hello.

Suppose the following situation:

lock := RecursionLock new.
"some stuff"
lock critical: [
     "some stuff"
     lock uncritical: [...].
     "some other stuff"
].

lock is aRecursionLock.
Is it possible to implement #uncritical: in such a way that:
  * It evaluates its block argument OUTSIDE of the critical
    section no matter what the call chain is.
  * The evaluation continues as usual afterwards.
  * During evaluation of the block the critical section can
    be entered anew.

Please advise.

My motivation: I implement some coordination logic accessed
from multiple (UI) processes. I need the above scheme to prevent
a deadlock. Now, I ensure that the "problematic" method is
always invoked outside the critical section by restructuring
the source code and the entire call chain (i.e. nested critical:)
which is ugly and error-prone.


Thanks in advance,

Ladislav Lenart



_______________________________________________
vwnc mailing list
[hidden email]
http://lists.cs.uiuc.edu/mailman/listinfo/vwnc

Re: [Q] RecursionLock tricky question

andre
On 28.12.2011, at 11:44, Ladislav Lenart wrote:

> Is it possible to implement #uncritical: in such a way that:
>  * It evaluates its block argument OUTSIDE of the critical
>    section no matter what the call chain is.
>  * The evaluation continues as usual afterwards.
>  * During evaluation of the block the critical section can
>    be entered anew.
>
> Please advise.
>
> My motivation: I implement some coordination logic accessed
> from multiple (UI) processes. I need the above scheme to prevent
> a deadlock. Now, I ensure that the "problematic" method is
> always invoked outside the critical section by restructuring
> the source code and the entire call chain (i.e. nested critical:)
> which is ugly and error prone.


I strongly doubt this would be possible without ugly hacks that would very likely introduce even more deadlock issues. One thing that instantly comes to mind: there is no easy way for the #uncritical: method to know the current level of recursion, let alone to safely restore it.

A safe way to avoid deadlocks is to obtain locks strictly in "outside to inside" order. This ensures your threads do not attempt crossover locking. Another solution is to execute critical sections on a dedicated service thread. The code run there can be entirely lock-free.
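The dedicated-service-thread idea can be sketched in Python (a hypothetical analogue of the Smalltalk setup, not VW code): funnel every piece of "critical" work through one worker, so the work itself runs lock-free.

```python
import queue
import threading

def run_serially(jobs_to_do):
    """Run every job on one dedicated worker thread; the jobs need no lock."""
    jobs = queue.Queue()
    results = []

    def worker():
        while True:
            job = jobs.get()
            if job is None:          # sentinel: shut the worker down
                break
            results.append(job())    # executed one at a time, in order

    t = threading.Thread(target=worker)
    t.start()
    for job in jobs_to_do:
        jobs.put(job)
    jobs.put(None)
    t.join()
    return results
```

The trade-off is latency: callers must hand work over to the queue instead of running it inline.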

On a side note, congratulations on using RecursionLocks in UI code at all! Not many Smalltalkers seem to have this on their radar, because it is so tempting to rely on Smalltalk's inherent robustness. The unlikelihood of hard crashes, however, does not mean your code is correct. In contrast to C++, for example, Smalltalk will not blow your app to pieces if you mess up the UI code. Instead you will see a lot of "soft" errors, that is, unwanted behavior, visual glitches and side effects.

Andre



Re: [Q] RecursionLock tricky question

Terry Raymond
In reply to this post by Ladislav Lenart
I am not aware of any way to get into a deadlock with only one lock, so I
assume you really have more than one. That being the case, the way I
usually avoid deadlocks with multiple locks is to ensure that all processes
acquire the locks in the same order.
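Terry's rule can be illustrated with a small Python sketch (names are hypothetical): every process takes lock_a before lock_b, so the crossover wait that causes a deadlock can never form.

```python
import threading

lock_a = threading.Lock()   # the agreed-upon "first" lock
lock_b = threading.Lock()   # the agreed-upon "second" lock

def worker_one(log):
    with lock_a:            # always lock_a first...
        with lock_b:        # ...then lock_b
            log.append("one")

def worker_two(log):
    with lock_a:            # same order; taking lock_b first here would
        with lock_b:        # allow a crossover deadlock with worker_one
            log.append("two")

def run_both():
    log = []
    threads = [threading.Thread(target=worker_one, args=(log,)),
               threading.Thread(target=worker_two, args=(log,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(log)
```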

Terry

===========================================================
Terry Raymond
Crafted Smalltalk
80 Lazywood Ln.
Tiverton, RI  02878
(401) 624-4517      [hidden email]
===========================================================


Re: [Q] RecursionLock tricky question

Paul Baumann
In reply to this post by Ladislav Lenart
Ladislav,

I think this is what you are asking for:

RecursionLock>>uncritical: aBlock
        | activeProcess answer |
        activeProcess := Processor activeProcess.
        activeProcess ~~ owner ifTrue: [^aBlock value].
        owner := nil.
        semaphore signal.
        answer := aBlock value.
        semaphore wait.
        owner := activeProcess.
        ^answer

It would behave similarly to:

lock := RecursionLock new.
"some stuff"
lock critical: ["some stuff"].
"uncritical..."
lock critical: ["some other stuff"].

The difference is that #uncritical: allows easier (and repeated) execution from inside a single critical block. The way I've shown it doesn't prevent other processes from using the lock for other activities before the original locking process has finished "some other stuff". Perhaps that is OK for your needs. However, if you find that you really do need to prevent other work from starting before the current work has finished, then you can look into either managing the links of the Semaphore, or creating a different kind of RecursionLock that uses two semaphores, where one acts as a work queue and the other acts as a lock that can be unlocked while one of the work items is being done.
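For readers who want to experiment outside VisualWorks, here is a hedged Python sketch of the same idea: a toy recursion lock whose uncritical temporarily releases the underlying semaphore and restores ownership afterwards. The class name and structure are assumptions, not VW's actual implementation, and a try/finally stands in for the #ensure: question raised later in the thread.

```python
import threading

class ToyRecursionLock:
    """Toy analogue of a recursion lock: a binary semaphore plus an owner slot."""

    def __init__(self):
        self._sem = threading.Semaphore(1)
        self._owner = None

    def critical(self, block):
        me = threading.current_thread()
        if self._owner is me:              # recursive entry: we already own it
            return block()
        self._sem.acquire()
        self._owner = me
        try:
            return block()
        finally:
            self._owner = None
            self._sem.release()

    def uncritical(self, block):
        me = threading.current_thread()
        if self._owner is not me:          # not inside the section: just run it
            return block()
        self._owner = None
        self._sem.release()                # step outside: others may enter now
        try:
            return block()
        finally:
            self._sem.acquire()            # re-enter before continuing
            self._owner = me
```

As in Paul's version, another thread may take the lock while the uncritical block runs; ownership is only restored once the semaphore is re-acquired.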

Most code uses a single UI process, and background processes use either #uiEventFor: or #uiEventNowFor: to conditionally queue UI activities. You are using a RecursionLock to conditionally queue work that includes UI activities from multiple UI processes. If your approach were pure and clean (with only one process doing all the actions for each window) then I don't see how you'd get deadlocks. To get UI-specific deadlocks you'd likely have at least one process performing UI activities when it is not the windowProcess. You might add some assertion checks to your code to see if that is happening. If you find it is happening for a valid reason then you may have to use #uiEventFor: or #uiEventNowFor: to avoid the problem.

Here is a pattern I use frequently: ratchet-fork a background process to fetch data from the UI process, then use #uiEventFor: from the forked process to schedule a refresh with the fetched data. It might give some ideas on how you can address the problem.

view_fetchInBackground
        "For maximum performance, this starts getting data from GS while VW is busy opening the window. -plb"

        | ratchetSem seq |
        seq := OrderedCollection new.
        ratchetSem := Semaphore new.
        seq add: 1.
        [
                seq add: 3.
                ratchetSem signal.
                seq add: 4.
                self view_choices_refresh.
                seq add: 6.
        ] fork.
        seq add: 2.
        ratchetSem wait.
        seq add: 5.
        "seq inspect."

view_choices_refresh
        "For maximum performance, this starts getting data from GS while VW is busy opening the window. -plb"

        | answer |
        answer := self gsQuery
                spec: #current
                level: 2
                object: nil
                execute: #(...).
        [
                self view_choices: answer.
        ] uiEventFor: builder window.
        ^answer
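The ratchet in the methods above (signal immediately after forking, wait in the caller) can be sketched in Python; fetch and apply_result are hypothetical stand-ins for the GS query and the #uiEventFor: step.

```python
import threading

def fetch_in_background(fetch, apply_result):
    """Ratchet-fork: start a worker, but return only once it is really running."""
    started = threading.Semaphore(0)

    def worker():
        started.release()          # the ratchet: tell the caller we are live
        result = fetch()           # the slow part, off the calling thread
        apply_result(result)       # stands in for scheduling via #uiEventFor:

    t = threading.Thread(target=worker)
    t.start()
    started.acquire()              # caller blocks only until the worker starts
    return t
```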


You might also experiment with control objects like a Promise in place of a queue. The overhead (a new process) and the deferred (non-ratchet) start of a Promise may not suit you, but it could give you ideas about what you could queue to meet your needs.

Paul Baumann






This message may contain confidential information and is intended for specific recipients unless explicitly noted otherwise. If you have reason to believe you are not an intended recipient of this message, please delete it and notify the sender. This message may not represent the opinion of IntercontinentalExchange, Inc. (ICE), its subsidiaries or affiliates, and does not constitute a contract or guarantee. Unencrypted electronic mail is not secure and the recipient of this message is expected to provide safeguards from viruses and pursue alternate means of communication where privacy or a binding message is desired.



Re: [Q] RecursionLock tricky question

Ladislav Lenart
In reply to this post by Ladislav Lenart
Grrr, sorry for the messy formatting. Second try...


Hello!

Thank you all for such quick and very useful responses!

You are right that there are in fact several (two) locks involved.
And thanks to you all I now enter the critical sections always in
the same order. One less deadlock to worry about :-) However this
does NOT solve my original problem. Please keep reading.

-- Little side note --
I think a deadlock between two processes using only one lock is
quite possible (typed from memory, I don't have VW at hand):

     | lock p1 p2 |
     lock := RecursionLock new.
     p1 := [
         [
             "Heavy stuff that takes long time to complete."
         ] ensure: [
             lock critical: [
                 "clean shared data".
                 p2 resume.
             ].
         ].
     ] fork.
     p2 := [
         lock critical: [
             p1 terminate.
             Processor activeProcess suspend.
         ].
     ] fork.

What am I missing?
-- End of the side note --
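The side note's scenario does deadlock. A Python analogue shows both parties blocked (hypothetical mapping: an Event's wait stands in for the suspend, and daemon threads keep the stuck pair from hanging the process):

```python
import threading

def one_lock_deadlock():
    """Both threads end up stuck on one lock plus a resume dependency."""
    lock = threading.Lock()
    b_holds_lock = threading.Event()
    resume_b = threading.Event()

    def cleanup_a():                 # analogue of p1's ensure: block
        with lock:                   # blocked: b holds the lock...
            resume_b.set()           # ...and b waits for exactly this

    def suspender_b():               # analogue of p2
        with lock:
            b_holds_lock.set()
            resume_b.wait()          # "suspend" while still holding the lock

    tb = threading.Thread(target=suspender_b, daemon=True)
    tb.start()
    b_holds_lock.wait(timeout=1)     # be sure b really has the lock first
    ta = threading.Thread(target=cleanup_a, daemon=True)
    ta.start()
    ta.join(timeout=0.3)
    tb.join(timeout=0.3)
    return ta.is_alive() and tb.is_alive()   # True: a genuine deadlock
```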


> I think this is what you are asking for:
> ...

Wow! That is exactly what I need. One question, though: if I use
#ensure: instead of #value, will it make any difference?

My exact use-case: We implement an application that deals with
specific user-edited text documents. Each document is presented in a
separate window running in its own WindowManager (UI) process. All
modifications of a particular document are evaluated in its UI
process. We decided to run slow tasks in the background. These are:
save document, load document and export document. All of them
can take a very long time to complete (tens of seconds). Each task
consists of three phases:
* initialization evaluated as part of the initial UI event,
* independent execution evaluated in the background process,
* (optional) result application initiated at the end of a background
   process via #uiEventFor:.

The tricky part is that these tasks compete for resources, files in
this case. We implemented a TaskManager to detect and resolve the
conflicts. Suppose the following real-life scenario:
* User edits a document.
* User clicks on a Save button.
* A new task is initiated.
* A fully independent copy of a document is created in the very same
   UI event (this is fast).
* The slow task to save the document copy to a disk file is started
   in the background.
* User continues to edit her document in the meantime (UI runs on a
   higher priority).
* User clicks on a Save button again. It is possible that the
   previously started task is still running. What we want is to abort
   the old task and start the new one but ONLY AFTER the old task HAS
   REALLY TERMINATED (executed all its unwind blocks).

I implement it like this now (TaskManager>>addTask: aTask):
* In critical section:
   * Update my internals:
   * Add aTask to a set of currently managed tasks.
   * Update a resource map. It maps resources to COLLECTIONS of
     tasks, to account for the fact that, for a limited period of
     time, several tasks can exist for one resource (though only one
     of them will be actively running at any given time). We have to
     update internals BEFORE we abort the conflicting tasks to
     prevent race conditions.
   * Find conflicting tasks.
   * If there aren't any, start aTask's background process and leave.
* OUTSIDE the critical section:
   * Abort all conflicting tasks (basically send them #terminate).
   * WAIT for them to really terminate (execute their unwinds).
   * The reason this has to be evaluated outside the critical
     section is that each task updates the SHARED structures when it
     terminates (to clean up after itself).
   * Recursively:
     * In the critical section:
       * If aTask has been terminated in the meantime, leave. (A
         new conflicting task could have been started from another UI
         process while this process was waiting).
       * Find conflicting tasks.
       * If there aren't any, start aTask's background process and leave.
     * Outside the critical section:
       * Abort the tasks.
       * Wait for them to terminate.
       * Recurse.
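The recipe above can be sketched in Python (all names hypothetical; wait_terminated stands in for waiting on the real unwind blocks). The essential point is that abort-and-wait happens outside the lock, so a dying task can re-enter it from its own cleanup:

```python
import threading

class Task:
    """Minimal stand-in: terminating a task removes it from the shared set."""
    def __init__(self, manager, resource):
        self.manager = manager
        self.resource = resource
        self.started = False

    def start(self):
        self.started = True

    def abort(self):
        pass                          # real code would send #terminate here

    def wait_terminated(self):
        # The task's "unwind block" needs the manager's lock to clean up,
        # which is why the caller must NOT be holding it.
        with self.manager.lock:
            self.manager.tasks.discard(self)

class TaskManager:
    def __init__(self):
        self.lock = threading.RLock()
        self.tasks = set()

    def conflicting_with(self, task):
        return [t for t in self.tasks
                if t is not task and t.resource == task.resource]

    def add_task(self, task):
        with self.lock:
            self.tasks.add(task)      # register BEFORE aborting rivals
        while True:
            with self.lock:
                if task not in self.tasks:
                    return            # a newer task terminated us meanwhile
                rivals = self.conflicting_with(task)
                if not rivals:
                    task.start()
                    return
            # OUTSIDE the critical section: abort and wait, then recheck.
            for r in rivals:
                r.abort()
            for r in rivals:
                r.wait_terminated()
```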

The above scheme works, to the best of my knowledge. The problem that
motivated my question is that #addTask: MUST be called OUTSIDE the
critical section to prevent a deadlock. Otherwise the process that
called #addTask: would wait endlessly for the conflicting tasks to
terminate and they would in-turn wait endlessly on the lock to enter
the same critical section to update shared structures of the TaskManager
(clean up after themselves). With #uncritical: I can always enforce
the correct setting which is awesome.

I hope it is clearer now why I need to use locks between UI
processes. The background tasks, though initiated from different UI
processes, can still compete for the same resources. In other words,
TaskManager holds state shared among several UI processes and
associated background tasks and I don't see a way to implement it
without one.


Thank you very much!

Ladislav Lenart







Re: [Q] RecursionLock tricky question

Paul Baumann
Ladislav,

> -- Little side note --
> I think a deadlock between two processes using only one lock is
> quite possible (typed from memory, I don't have VW at hand):
>
>      | lock p1 p2 |
>      lock := RecursionLock new.
>      p1 := [
>          [
>              "Heavy stuff that takes long time to complete."
>          ] ensure: [
>              lock critical: [
>                  "clean shared data".
>                  p1 resume.
>              ].
>          ].
>      ] fork.
>      p2 := [
>          lock critical: [
>              p1 terminate.
>              Processor activeProcess suspend.
>          ].
>      ] fork.
>
> What am I missing?
> -- End of the side note --

The lock is not held during "heavy stuff"; p2 would terminate p1 if control is given to p2 during "heavy stuff". The code asks p2 to suspend while holding the lock that the ensure: block wants before p2 can be resumed. Also, if p1 is terminated before the #ensure: send (which is unlikely but possible in theory), then the ensure: block would never run to resume p2. Semaphores are already designed to regulate processing. It isn't clear why you'd want p2 to terminate p1. If you want to time-limit execution, one approach can be seen in #evaluate:withinMilliseconds:orDo:.

Here is better code for what I think that code is trying to do:

RecursionLock>>queueWork: twoArgWorkBlock cleanupWith: twoArgCleanupBlock context: context
        | ratchetSem forkedProcess status |
        ratchetSem := Semaphore new.
        status := Association new.
        forkedProcess := [
                ratchetSem signal.
                status key: #waitingForLock.
                [
                        self critical: [
                                status key: #acquiredLock.
                                [status key: #started; value: (twoArgWorkBlock value: status value: context); key: #finished]
                                        ensure: [twoArgCleanupBlock value: status value: context]
                        ].
                ]
                        on: Error
                        do: [:ex | status key: #error; value: ex. ex pass ].
        ] fork.
        status key: #scheduled; value: forkedProcess.
        ratchetSem wait.
        ^status

Here is an example of how it might be used:

        | lock |
        lock := RecursionLock new.
        ^#(one two three four) collect: [:ea |
                lock
                        queueWork: [:status :context | "heavy stuff" context halt ]
                        cleanupWith: [:status :context | "clean shared data" ]
                        context: ea
        ].

I wrote it with 'status' and 'context' passed through the blocks so it is possible for your application code to define simple and reusable blocks (like from a message send) that need not have external references. 'status' is an association with a symbol key and value. Play with it to see if it does what you want. You'll likely want to customize to your needs.

> > I think this is what you are asking for:
> > ...

> Wow! That is exactly what I need. One question though. What if I use
> #ensure: instead of #value, will it make any difference?

There is already an #ensure: in the outer #critical: block. That will keep the signal count correct. I'm not sure what you'd want to ensure in that example; you wouldn't want to ensure a wait will happen even if the original process is terminated. Note too that the example used #wait instead of #waitIfCurtailedSignal because execution is already within the outer #ensure: that will end with a #signal. My concern was that #waitIfCurtailedSignal would leave an extra signal in the error situation that it is intended to avoid.
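For readers without the earlier message at hand, an #uncritical: along the lines discussed might look roughly like this. This is a sketch only, not the code originally posted; it assumes the lock is implemented with 'owner' and 'semaphore' instance variables, which real implementations may name or structure differently:

RecursionLock>>uncritical: aBlock
        "SKETCH ONLY. Leave the critical section owned by the active
        process, evaluate aBlock outside it, then re-enter before
        returning. Nested critical: sends inside aBlock will acquire
        the lock anew because ownership has been given up."
        | savedOwner |
        Processor activeProcess == owner ifFalse: [^aBlock value].
        savedOwner := owner.
        owner := nil.
        semaphore signal.
        ^aBlock ensure:
                [semaphore wait.
                owner := savedOwner]

Because a recursion lock holds its semaphore only once no matter how deeply critical: is nested, releasing it once here is enough; the ensure: re-acquires it before control returns to the enclosing critical section.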

Paul Baumann


This message may contain confidential information and is intended for specific recipients unless explicitly noted otherwise. If you have reason to believe you are not an intended recipient of this message, please delete it and notify the sender. This message may not represent the opinion of IntercontinentalExchange, Inc. (ICE), its subsidiaries or affiliates, and does not constitute a contract or guarantee. Unencrypted electronic mail is not secure and the recipient of this message is expected to provide safeguards from viruses and pursue alternate means of communication where privacy or a binding message is desired.



Re: [Q] RecursionLock tricky question

Reinout Heeck-2
In reply to this post by Ladislav Lenart

What is the motivation to use multiple windowmanagers?

If I understand your description correctly it seems your problem of creating a thread-safe #addTask: completely collapses if #addTask: would be called from only one thread (windowmanager).

If that is true, using a single windowmanager in your app would seem to remove a lot of complexity/brittleness.




HTH,

Reinout

--

Soops b.v. Reinout Heeck, Sr. Software Engineer

Soops - Specialists in Object Technology

Tel : +31 (0) 20 6222844
Fax : +31 (0) 20 6360827
Web: www.soops.nl


* Please consider the environment before printing this e-mail *


Dit e-mailbericht is alleen bestemd voor de geadresseerde(n). Gebruik door anderen is niet toegestaan. Indien u niet de geadresseerde(n) bent wordt u verzocht de verzender hiervan op de hoogte te stellen en het bericht te verwijderen. Door de elektronische verzending kunnen aan de inhoud van dit bericht geen rechten worden ontleend.

Soops B.V. is gevestigd te Amsterdam, Nederland, en is geregistreerd bij de Kamer van Koophandel onder nummer 33240368. Soops B.V. levert volgens de Fenit voorwaarden, gedeponeerd te Den Haag op 8 december 1994 onder nummer 1994/189.


This e-mail message is intended to be exclusively for the addressee. If you are not the intended recipient you are kindly requested not to make any use whatsoever of the contents and to notify the sender immediately by returning this e-mail message. No rights can be derived from this message.

Soops B.V. is a private limited liability company and has its seat at Amsterdam, The Netherlands and is registered with the Trade Registry of the Chamber of Commerce and Industry under number 33240368. Soops B.V. delivers according to the General Terms and Conditions of Business of Fenit, registered at The Hague, The Netherlands on December 8th, 1994, under number 1994/189.



Re: [Q] RecursionLock tricky question

Ladislav Lenart
Hello.

On 2.1.2012 16:41, Reinout Heeck wrote:
> What is the motivation to use multiple windowmanagers?

Our application is essentially a special-purpose word processor. It
deals with user-edited text documents. User can open several of them,
each in a separate window.


> If I understand your description correctly it seems your problem
> of creating a thread-safe #addTask: completely collapses if
> #addTask: would be called from only one thread (windowmanager).
>
> If that is true using a single windowmanger in your app would
>seem to remove a lot of complexity/brittleness.

I am not sure about this. Each task (save a document, load a document,
export a document) is a background activity on its own (i.e. a process
is associated with it).

Besides these, we deliberately postpone some frequent & CPU consuming UI
updates until the user is idle for a while. For example we have a quite
intensive computation that displays details about what is under the
current text cursor position. The texts in a document can have special
emphases - stickers. We present them in a list. Each list item has
associated few actions the user can click on. The list has to be updated
after every cursor change. We postpone it for ~200ms. If the user does
not move the cursor in this time, we issue the computation with #uiEventFor:.
However, if the user moves the cursor within these 200ms, we just postpone
the update for another 200ms. This greatly improves the perceived
responsiveness of the application. Moreover, without these asynchronous UI
updates in place, the application was too slow on modest hardware. This in itself
opened a whole can of worms we had to solve. But let's go back to the
background tasks...
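(The 200ms postponement described above can be sketched roughly like this. All names are hypothetical: 'version' is an instance variable, #recomputeStickerDetails stands in for the real computation, and the receiver of #uiEventFor: depends on the application.)

cursorPositionChanged
        "Restart the ~200 ms idle timer: every cursor move bumps 'version',
        which invalidates any timer process forked for an earlier position."
        | myVersion |
        version := (version ifNil: [0]) + 1.
        myVersion := version.
        [(Delay forMilliseconds: 200) wait.
        myVersion = version
                ifTrue: [self uiEventFor: [self recomputeStickerDetails]]] fork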

Sometimes we need to wait for a particular condition to happen in
another process, usually a UI process associated with a particular
document. A real-life example:
  * User closes the last window of a particular document (note that
    all windows of ONE document are running in the same UI thread
    because they all modify the same document).
  * Before the window closes, it switches to a 'subsume' mode (with
    a Store-like progress overlay on top of the window).
  * The UI thread essentially blocks (except for periodic redisplays)
    until all tasks on that document are finished.
  * Only then is the window closed for good.

The waiting is done in a call like #waitForTask: which essentially
suspends the active process (the caller) until the task terminates.
But I want to guarantee that the caller is resumed only AFTER the
task's process has executed all its unwind blocks (e.g. closed all open
files). To ensure this, the TOP-MOST unwind block of the task's process
removes the task from the manager's structures in a thread-safe manner.
It then attempts to resume all waiting callers by retesting their
conditions and sending #resume to them if their conditions are met
(e.g. the task is not present in the shared structures of the task
manager anymore). Since this is the last action a task process ever
does we can safely declare it dead and resume the caller of #waitForTask:
afterwards.

Even in the solution with only one UI process there are several
processes running concurrently: the UI process and the task processes.
It is true that the task processes run on a lower priority so the
explicit synchronization is PERHAPS not required (but there might
be synchronization issues among the task processes themselves,
because they all modify the same shared structures). But I don't
like this approach. I try as hard as possible to look at different
priorities only as suggestions about how often a process is
preempted. The fact that a process with a lower priority is never
scheduled when there is a process running on a higher priority is
just an implementation detail to me and I don't want to abuse it.

But this is most probably more a personal preference than an objective
engineering necessity. I guess several years of programming concurrent
applications in Erlang completely changed my view on this subject.


-- Erlang side note --

My long-term solution is to rewrite the TaskManager as an Agent, an
object that encapsulates:
  * its private state (no one else has access to),
  * a mailbox (SharedQueue) which buffers messages sent from other
    processes / agents,
  * a process that pops one message from its mailbox and performs
    the corresponding action in an infinite loop. The process blocks
    whenever the mailbox is empty.

There are two types of messages:
  * #cast: sends a message to an agent; the sender continues running.
  * #call: blocks the sender until the receiver reacts to the message
    and sends a reply back.

When all concurrent activities are implemented as these agents, there
aren't any explicit locks involved. At least in the application code.
The locks are still used to implement the Agent, but once the generic
Agent abstraction is working, no one can see them. And the most important
part of this approach (at least for me) is: THE PROCESS IS EXPLICITLY
VISIBLE IN THE CODE AND I AM ABLE TO REASON ABOUT IT. In other words,
it is always clear who does what. To manipulate data owned by an agent,
the only way is to send it a message and let the RECEIVER perform the
work on its own data.
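A minimal sketch of such an Agent (my guess at the shape, not the actual framework; #cast: and #call: take one-argument blocks that receive the private state):

Agent>>initialize
        state := Dictionary new.        "the private state; never handed out"
        mailbox := SharedQueue new.
        process := [[(mailbox next) value: state] repeat]
                forkAt: Processor userBackgroundPriority

Agent>>cast: aOneArgBlock
        "Asynchronous send: queue the action and return immediately."
        mailbox nextPut: aOneArgBlock

Agent>>call: aOneArgBlock
        "Synchronous send: queue the action, block until the agent's
        process has run it, then answer its result."
        | done result |
        done := Semaphore new.
        mailbox nextPut: [:st | result := aOneArgBlock value: st. done signal].
        done wait.
        ^result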

As you have probably realized by this time, an Agent is nothing more
than a simplified version of an Erlang process. It is simplified,
because I don't have (=don't know how to implement) the rest of Erlang
back-bone features in Smalltalk:
  * pattern matching,
  * selective receive,
  * process links (supervision).
The most visible limitation of my agent is that it can only process
messages in the order they were sent (which can be quite random when
several processes send messages to only one). But since I use agents
only for synchronization purposes and not as basic building blocks,
I can live without these features in Smalltalk easily.

If I remember correctly, Andre suggested this approach to me as one of
the solutions in the very first reply in this thread. And I can only
confirm from my own experience that he is indeed right.

As a matter of fact I implemented the small Agent framework described
above and rewrote other portions of our application to use it several
months ago, namely SpellChecker and the UI updater (mentioned above),
and everything is easiER and simplER, but perhaps just for me :-)


Anyway thank you all once again for your very helpful replies,

Ladislav Lenart

PS: #uncritical: is a really neat trick! :-)


Re: [Q] RecursionLock tricky question

Reinout Heeck-2
On 1/3/2012 12:36 PM, Ladislav Lenart wrote:
Hello.

On 2.1.2012 16:41, Reinout Heeck wrote:
What is the motivation to use multiple windowmanagers?

Our application is essentially a special-purpose word processor. It
deals with user-edited text documents. User can open several of them,
each in a separate window.

That is /not/ how the multi threaded UI was intended. The idea is that an end-user application has only one window manager.
(The multi threaded UI was primarily introduced to aid development: we can now browse code while debugging the UI thread.)
A single window manager is capable of managing many open/active windows regardless of whether these windows are manipulating one 'document' or multiple.

More generally, I find that multithreaded code becomes a lot more tractable when there is a distinguished UI thread that interacts with the model. The other threads then are 'the exceptions' and need extra effort to synchronize with the UI thread. When there are multiple UI threads *all* threads will need such synchronization effort coded in, which seems to be what you are running into.




If I understand your description correctly it seems your problem
of creating a thread-safe #addTask: completely collapses if
#addTask: would be called from only one thread (windowmanager).

If that is true using a single windowmanger in your app would
seem to remove a lot of complexity/brittleness.

I am not sure about this. Each task (save a document, load a document,
export a document) is a background activity on its own (i.e. a process
is associated with it).

Assume you subclassed WindowManager for your application and you make sure only one instance is used.

Then this WM could be extended to track which documents are open and which tasks are pending/in progress per document.
It could even manage a single background thread per document (much like a windowmanager is structured to handle a queue of tasks for a UI).
If the background tasks operate on private data (copy of the document model) they can run on their own thread/manager without further locking efforts.

This way your original question is solved as follows:
--initial operation (copy the data) is run in The window manager thread.
--bulk operation is run in a document background manager thread
--completion operation (just a block?) is run in The window manager thread.

The above can be implemented using SharedQueues only, so all kinds of 'tricky' semaphore and/or recursion lock handling is abstracted away :-)))

When the user hits the save button several times in quick succession, you want to curtail already running 'save' tasks. This can be done by raising your own PleaseStop exception in the background thread from the windowmanager thread (using #interruptWith:). The document background manager processing loop can catch that exception and do nothing when caught (because you only want to make sure unwind blocks are run).
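That curtailing scheme might be sketched like this (hypothetical PleaseStop exception class and worker loop; 'taskQueue' and 'workerProcess' are assumed names):

"In the document's background manager process:"
runLoop
        [| task |
        task := taskQueue next.
        [task value]
                on: PleaseStop
                do: [:ex | ex return]] repeat

"From the windowmanager thread, to curtail the task in progress.
Unwinding from the handler still runs the task's ensure: blocks:"
curtailCurrentTask
        workerProcess interruptWith: [PleaseStop raise]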


Besides these, we deliberately postpone some frequent & CPU consuming UI
updates until the user is idle for a while. For example we have a quite
intensive computation that displays details about what is under the
current text cursor position.

In the above model you have one WindowManager per user, since we are subclassing it we might as well specialize it to have some Model in there that signals its dependents whenever the UI queue has been empty for 200ms.

Your editors now simply need to register as dependents of this Model. Whether these editors compute the new lists in-thread or in a separate thread is something they can choose on a case-by-case basis; it is decoupled from the above considerations :-)


Sometimes we need to wait for a particular condition to happen in
another process, usually a UI process associated with a particular
document. A real-life example:
 * User closes the last window of a particular document (note that
   all windows of ONE document are running in the same UI thread
   because they all modify the same document).
 * Before the window closes, it switches to a 'subsume' mode (with
   a Store-like progress overlay on top of the window).
 * The UI thread essentially blocks (except for periodic redisplays)
   until all tasks on that document are finished.
 * Only then is the window closed for good.

The WindowManager as supplied by Cincom lacks the possibility to 'be deaf' to user interaction while still processing damage events, so that is something you will need to implement in your own WM subclass. While you are doing that you might as well do that on a per-document basis :-)

So when you close a window:
-tell The window manager to be deaf to user actions for a particular document
-show all related windows as subsumed.
-when the background tasks finish the document's background task manager asks The window manager to close all related windows.





The waiting is done in a call like #waitForTask: which essentially
suspends the active process (the caller) until the task terminates.

The above model does not suspend processes, making the model simpler.





But I want to guarantee that the caller is resumed only AFTER the
tasks's process executed all its unwind blocks (e.g. closed all open
files). To ensure this, the TOP-MOST unwind block of the task's process
removes the task from the manager's structures in a thread-safe manner.
It then attempts to resume all waiting callers by retesting their
conditions and sending #resume to them if their conditions are met
(e.g. the task is not present in the shared structures of the task
manager anymore). Since this is the last action a task process ever
does we can safely declare it dead and resume the caller of #waitForTask:
afterwards.

Pffffff :-)
Looks like 'don't go there'; who wants 'conditions' when you don't need them...



Even in the solution with only one UI process there are several
processes running concurrently: the UI process and a task processes.
It is true that the task processes run on a lower priority so the
explicit synchronization is PERHAPS not required (but there might
be synchronization issues among the task processes themselves,
because they all modify the same shared structures). But I don't
like this approach. I try as hard as possible to look at different
priorities only as suggestions about how often a process is
preempted.
Agreed, make synchronizing explicit (but prefer queues above semaphores).



-- Erlang side note --

Snipped,
above looks a *lot* like your agents with one exception: the distinguished UI thread which needs no synchronizing code.


PS: #uncritical: is a really neat trick! :-)



/me shudders




R
-






Re: [Q] RecursionLock tricky question

Ladislav Lenart
Thank you very much for your insights and your bullet-proof
design concept. Pity I didn't think of such a beautiful
solution myself roughly two years ago when I had the opportunity.
At least now I know it and I've already created an issue ticket
to solve this properly. I can only hope that I will have time
to address this soon, because as always, time is what's
missing...

I still have one question about #interruptWith:, which remains
a big mystery to me. From what I understood from your post, it
can be used to synchronously terminate a process. Why is #terminate
implemented differently then and not simply like this?

     Process>>terminate

         self interruptWith: [TerminateException raise]


Once again thank you for your help and time,

Ladislav Lenart




_______________________________________________
vwnc mailing list
[hidden email]
http://lists.cs.uiuc.edu/mailman/listinfo/vwnc
Reply | Threaded
Open this post in threaded view
|

Re: [Q] RecursionLock tricky question

Reinout Heeck-2


I still have one question about #interruptWith:, which remains
a big mystery to me. From what I understood from your post, it
can be used to synchronously terminate a process. Why, then, is
#terminate implemented differently and not simply like this?

    Process>>terminate

        self interruptWith: [TerminateException raise]

After a quick look at Process, it seems to me that the above matches the semantics of #terminateUnsafely when sent from another process (even though it is implemented differently).

Comparing the comments of #terminate and #terminateUnsafely, however, it seems that my suggestion opens a can of worms regarding timing:


terminate
    "Terminate the receiver process, by sending the Process terminateSignal. Allow all unwind blocks to run, even if they are currently in progress."

terminateUnsafely
    "Terminate the receiver process, by sending the Process terminateSignal. Unwind blocks will usually be run, but if the process is in the middle of an unwind block when the terminate signal is received, then that unwind block will not be completed, but subsequent unwind blocks will be run. This is the semantics used when you close the debugger while debugging a process. In this circumstance it is often appropriate, because the unwind block may not be able to complete, either because of an error, or because it is hung, and this is the reason you are in the debugger. In most other circumstances, you would want to use #terminate, which allows unwind blocks in progress to complete. In very rare circumstances you might want #terminateUnsafelyNow, which terminates the process without attempting to run any of its unwind blocks."



Perhaps a better idea would be not to use exceptions at all, but instead have the background task periodically check a flag that can be set from the UI process.
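A minimal sketch of that flag-based scheme (hypothetical names; a single shared boolean needs no further locking in this pattern, since only the UI process ever writes it):

```smalltalk
shouldStop := false.

"Background worker: polls the flag between units of work."
worker := [
	[shouldStop] whileFalse: [self doOneUnitOfWork].
	self cleanUp	"reached normally; no unwind block is ever cut short"
	] forkAt: Processor userBackgroundPriority.

"UI process: request termination cooperatively."
shouldStop := true.
```

The worker only ever stops at a point it chose itself, so the timing questions around #interruptWith: disappear, at the price of a bounded delay before the stop takes effect.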

Perhaps #terminate is just right for the task and my suggestion is not...

Can anybody here educate me on this?


--

Soops b.v. Reinout Heeck, Sr. Software Engineer

Soops - Specialists in Object Technology

Tel : +31 (0) 20 6222844
Fax : +31 (0) 20 6360827
Web: www.soops.nl



