#fork and deterministic resumption of the resulting process


#fork and deterministic resumption of the resulting process

Andreas.Raab
Hi -

In my never-ending quest for questionable behavior in multi-threaded
situations just today I ran into a pattern which is dangerously common
in our code. It basically goes like this:

MyClass>>startWorkerProcess
        "worker is an instance variable"
        worker := [self runWorkerProcess] fork.

MyClass>>runWorkerProcess
        "Run the worker process"
        [Processor activeProcess == worker] whileTrue:[
                "...do the work..."
        ].

MyClass>>stopWorkerProcess
        "Stop the worker process"
        worker := nil. "let it terminate itself"

Those of you who can immediately tell what the problem is should get a
medal for an outstanding knack of analyzing concurrency problems ;-)

For the rest of us, the problem is that #fork in the above is not
deterministic: there is no guarantee that the "worker" variable will
have been assigned by the time we enter the worker loop. It *would* be
deterministic if the priority were below or above the current process'
priority, but when it is the same, scheduling can be affected by
environmental effects (external signals, delay processing, etc.),
leading to some very obscure runtime problems (in the above, the
process would simply never start).
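For readers who find such races easier to see in a mainstream language, the unlucky interleaving can be forced deterministically in a Python sketch (the Event-based forcing and all names here are illustrative additions, not part of the Squeak code):

```python
import threading

class Racy:
    """Python analogy of the worker pattern above (illustrative only)."""

    def __init__(self):
        self.worker = None                    # like the `worker` inst var
        self.checked = threading.Event()
        self.entered_loop = None

    def start_worker(self):
        # Like `worker := [self runWorkerProcess] fork`: the child can run
        # before the assignment below has happened.
        t = threading.Thread(target=self.run)
        t.start()
        self.checked.wait()                   # force the unlucky interleaving
        self.worker = t                       # the assignment arrives too late

    def run(self):
        # Mirrors `[Processor activeProcess == worker] whileTrue: [...]`
        self.entered_loop = threading.current_thread() is self.worker
        self.checked.set()

r = Racy()
r.start_worker()
print(r.entered_loop)   # False: the loop body is never entered
```

Without the forced wait, the same program usually observes True, which is exactly the "lucky most of the time" behavior at issue here.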

To fix this problem I have changed BlockContext>>fork and
BlockContext>>forkAt: to read, e.g.,

BlockContext>>fork
   "Create and schedule a Process running the code in the receiver."
   ^self forkAt: Processor activePriority

BlockContext>>forkAt: priority
   "Create and schedule a Process running the code in the receiver
   at the given priority. Answer the newly created process."
   | forkedProcess helperProcess |
   forkedProcess := self newProcess.
   forkedProcess priority: priority.
   priority = Processor activePriority ifTrue:[
     helperProcess := [forkedProcess resume] newProcess.
     helperProcess priority: priority-1.
     helperProcess resume.
   ] ifFalse:[
     forkedProcess resume
   ].
   ^forkedProcess

This will make sure that #fork has (for the purpose of resumption) the
same semantics as forking at a lower priority.

What do people think about this?

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

Randal L. Schwartz
>>>>> "Andreas" == Andreas Raab <[hidden email]> writes:

Andreas> What do people think about this?

Looks good to me.  And no, I didn't spot it until I read your explanation, but
this does seem like something that could trip up a beginner.

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<[hidden email]> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!


Re: #fork and deterministic resumption of the resulting process

Mathieu SUEN
In reply to this post by Andreas.Raab
Hi,

On Feb 4, 2008, at 10:04 PM, Andreas Raab wrote:

> Hi -
>
> In my never-ending quest for questionable behavior in multi-threaded  
> situations just today I ran into a pattern which is dangerously  
> common in our code. It basically goes like this:
>
> MyClass>>startWorkerProcess
> "worker is an instance variable"
> worker:= [self runWorkerProcess] fork.
>

Why don't you make the assignment atomic?


        Mth






Re: #fork and deterministic resumption of the resulting process

timrowledge
In reply to this post by Andreas.Raab

On 4-Feb-08, at 1:04 PM, Andreas Raab wrote:
>
> What do people think about this?
I'm not too terribly keen on the idea of a process I fork getting set
to a lower priority than the one I requested. Then again, I don't
all that often feel the need to fork processes anyway, so perhaps I'm
not really entitled to a vote.

I think I'd categorise this example as a bug, plain and simple. Don't  
do that. It's not a nice idiom at all.

To ameliorate the situation, we could *not* return the process from
the #fork method, thus making it pointless to write foo := [blah]
fork. I'm sure that would upset some people who are too attached to
pretending to be in unix-land.

I'd like to hope that something like
self critical:[foo:= [blah] fork]
might be acceptable as a replacement idiom. There are times when  
people simply have to accept that the simple looking way to do  
something is just plain wrong.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
He who hesitates is probably right.




Re: #fork and deterministic resumption of the resulting process

timrowledge

On 4-Feb-08, at 1:49 PM, tim Rowledge wrote:

>
> On 4-Feb-08, at 1:04 PM, Andreas Raab wrote:
>>
>> What do people think about this?
> I'm not too terribly keen on the idea of a process I fork getting
> set to a lower priority than the one I requested.
Oh and of course the first thing that will come along to spoil the  
party is some smart-arse doing
[foo] forkAt: Processor activePriority+1
;-)

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Useful random insult:- Life by Norman Rockwell, but screenplay by  
Stephen King.




Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2
In reply to this post by Andreas.Raab


On Feb 5, 2008 10:04 AM, Andreas Raab <[hidden email]> wrote:
To fix this problem I have changed BlockContext>>fork and
BlockContext>>forkAt: to read, e.g.,

BlockContext>>fork
  "Create and schedule a Process running the code in the receiver."
  ^self forkAt: Processor activePriority

BlockContext>>forkAt: priority
  "Create and schedule a Process running the code in the receiver
  at the given priority. Answer the newly created process."
  | forkedProcess helperProcess |
  forkedProcess := self newProcess.
  forkedProcess priority: priority.
  priority = Processor activePriority ifTrue:[
    helperProcess := [forkedProcess resume] newProcess.
    helperProcess priority: priority-1.
    helperProcess resume.
  ] ifFalse:[
    forkedProcess resume
  ].
  ^forkedProcess

This will make sure that #fork has (for the purpose of resumption) the
same semantics as forking at a lower priority has.

What do people think about this?


I'm thinking that the above is an ugly hack. When we eventually write an interpreter that is truly multitasking, your original bug will re-appear.

What you wanted to do, rather than redefining #fork, is:

MyClass>>startWorkerProcess
       "keepRunning is an instance variable"
       keepRunning := true.
       " Always run worker processes as a lower priority than the controller process. "
       [self runWorkerProcess] forkAt: ProcessScheduler somethingeratherPriority.
       " sorry; I don't have Squeak handy right now, but you get the idea. "

MyClass>>runWorkerProcess
       "Run the worker process"
       [keepRunning] whileTrue: [
               "...do the work..."
       ].

MyClass>>stopWorkerProcess
       "Stop the worker process"
       keepRunning := false. "let it terminate itself"

Or better, make an abstraction:

process := WorkerTask doing: [ someObject doSomeWork ].
process start.
process stop.
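A minimal sketch of such an abstraction in Python (the `WorkerTask` name and the `doing:`/start/stop API follow Gulik's hypothetical example; nothing here is an existing Squeak class):

```python
import threading

class WorkerTask:
    """Wraps the keep-running flag and the thread handle together,
    so callers never race on either one directly."""

    def __init__(self, work):
        self._work = work                      # the block from `doing: [...]`
        self._keep_running = threading.Event()
        self._thread = None

    @classmethod
    def doing(cls, work):
        return cls(work)

    def start(self):
        self._keep_running.set()               # flag is set before the fork
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()                   # handle assigned before start

    def stop(self):
        self._keep_running.clear()             # "let it terminate itself"
        self._thread.join()

    def _loop(self):
        while self._keep_running.is_set():
            self._work()

ticks = []
task = WorkerTask.doing(lambda: ticks.append(1))
task.start()
task.stop()     # returns once the worker has exited its loop
```

The point of the abstraction is that the loop condition and the handle are private, so neither of the races discussed in this thread can leak into client code.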

Gulik.







--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2
In reply to this post by timrowledge


On Feb 5, 2008 10:49 AM, tim Rowledge <[hidden email]> wrote:


To ameliorate the situation, we could *not* return the process from
the #fork method, thus making it pointless to write foo := [blah]
fork. I'm sure that would upset some people who are too attached to
pretending to be in unix-land.


It would upset me. All my code would break and I might cry.
 


I'd like to hope that something like
self critical:[foo:= [blah] fork]


I've never heard of a #critical: method outside the Semaphore class. I'm assuming that self is a Semaphore? In that case, the forked/child process is not constrained by the critical: block and will escape to begin its loop with no guarantee that foo has been assigned.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

keith1y
In reply to this post by Andreas.Raab
Andreas Raab wrote:

> MyClass>>startWorkerProcess
>     "worker is an instance variable"
>     worker:= [self runWorkerProcess] fork.
In this situation, the idiom I use is something like this:

MyClass>>startWorkerProcess
    "worker is an instance variable"
    worker := [self runWorkerProcess] newProcess.
    worker resume.
 
:-)

Keith




RE: #fork and deterministic resumption of the resulting process

Terry Raymond-2
In reply to this post by Andreas.Raab
Being aware of the problem Andreas illustrated, I frequently
write:

workerProcess := [self runWorkerProcess] newProcess.
workerProcess resume.
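Python's threading API happens to mirror this split exactly: constructing a `Thread` does not run it, so the assignment is always complete before the analogue of #resume. A sketch (an analogy only; all names are illustrative):

```python
import threading

done = threading.Event()
state = {"worker": None, "ok": None}

def run_worker():
    # By construction, state["worker"] was assigned before start() was
    # called, so this identity check can never lose the race.
    state["ok"] = threading.current_thread() is state["worker"]
    done.set()

# The analogue of `newProcess` followed by `resume`:
state["worker"] = threading.Thread(target=run_worker)  # created suspended
state["worker"].start()                                # now let it run
done.wait()
print(state["ok"])   # True, deterministically
```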

I think changing the semantics of #fork is a way to introduce a bug
into existing code, i.e. code that is aware of the way #fork works
and relies on it working that way.

However, since its operation is not too obvious, you could always
write another method, maybe #forkDeferred, to accomplish what
Andreas is proposing.

Terry
 
===========================================================
Terry Raymond
Crafted Smalltalk
80 Lazywood Ln.
Tiverton, RI  02878
(401) 624-4517      [hidden email]
<http://www.craftedsmalltalk.com>
===========================================================




Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
Terry Raymond wrote:
> I think changing the semantics of fork is a way to introduce
> a bug to existing code, i.e. one that is aware of the way
> fork works and relies on it to work that way.

I don't think you (as well as some of the other posters) really
understand what these changes do. To illustrate:

p1 := p2 := p3 := nil.
p1 := [p1Valid := p1 notNil] forkAt: Processor activePriority-1.
p2 := [p2Valid := p2 notNil] forkAt: Processor activePriority.
p3 := [p3Valid := p3 notNil] forkAt: Processor activePriority+1.

Both the first and the last cases are currently deterministic and will
stay that way: p1 will be consistently non-nil when this code is run; p3
will be consistently nil when run. This is currently the case and
remains the same after applying my changes.

In the second case however, p2 may or may not be nil, depending on
whether external interrupts occur. Since that is rare, in 99.99+% of all
cases p2 will be non-nil.

What I am proposing is simply to make p2 non-nil in 100% of the cases.
There is no change to those parts of the existing semantics that are
actually well-defined. The only change is that it takes a rare
non-deterministic occurrence and makes the overall behavior consistent
in this case.

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2


On Feb 5, 2008 12:51 PM, Andreas Raab <[hidden email]> wrote:

What I am proposing is simply to make p2 non-nil in 100% of the cases.
There is no change to those parts of the existing semantics that are
actually well-defined. The only change is that it takes a rare
non-deterministic occurrence and makes the overall behavior consistent
in this case.


You're relying on the current implementation of the scheduler. If the implementation of the scheduler changes (such as would happen when Squeak is made capable of using multiple cores or multiple CPUs), then your bug will re-appear and your "fix" will no longer fix the problem 100% of the time.

Process>>fork should fork the process at the same priority as the calling process. Process>>forkAt: should fork the process at the given priority. If they use priorities other than these, then I consider it a bug.

Gulik.


--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Mathieu SUEN
In reply to this post by Andreas.Raab

On Feb 5, 2008, at 12:51 AM, Andreas Raab wrote:

> I don't think you (as well as some of the other posters) really  
> understand what these changes do. To illustrate:
>
> p1 := p2 := p3 := nil.
> p1 := [p1Valid := p1 notNil] forkAt: Processor activePriority-1.
> p2 := [p2Valid := p2 notNil] forkAt: Processor activePriority.
> p3 := [p3Valid := p3 notNil] forkAt: Processor activePriority+1.

If something can happen, it will happen.
Reading the code above now, I expect it not to be deterministic for p2;
I admit that the first time, I expected p2 to get a value. So now I
would prefer a critical section protecting p2.

I would even write critical sections for p1 and p3, since in most
schedulers this code is non-deterministic, especially for p3: a
scheduler may decline to yield the current thread for performance
reasons (though not in Squeak).

So I vote for the critical section.



        Mth






Re: #fork and deterministic resumption of the resulting process

Joshua Gargus-2
On Feb 4, 2008, at 5:19 PM, Mathieu Suen wrote:

> If something can happen, it will happen.
> Reading the code above now, I expect it not to be deterministic for p2;
> I admit that the first time, I expected p2 to get a value. So now I
> would prefer a critical section protecting p2.
>
> I would even write critical sections for p1 and p3, since in most
> schedulers this code is non-deterministic, especially for p3: a
> scheduler may decline to yield the current thread for performance
> reasons (though not in Squeak).
>
> So I vote for the critical section.

I'm not sure what you mean by critical section.  I'm assuming that you  
mean to use explicit synchronization.  For example (using TMutex from  
Croquet, sorry):

mutex := TMutex new.
p2 := nil.
mutex critical: [
    p2 := [mutex critical: [p2Valid := p2 notNil]] forkAt: Processor activePriority.
]

In other words, the programmer should be smart enough to do the  
appropriate synchronization.  Is this what you mean?  That doesn't  
seem unreasonable to me (and I say that as the guy who wrote the  
broken code that started Andreas on this topic :-)  ).  I'm a little  
uncomfortable with the notion of not giving processes the priority  
explicitly requested by the programmer.

Josh





Re: #fork and deterministic resumption of the resulting process

Mathieu SUEN

On Feb 5, 2008, at 2:47 AM, Joshua Gargus wrote:

> I'm not sure what you mean by critical section.  I'm assuming that  
> you mean to use explicit synchronization.  For example (using TMutex  
> from Croquet, sorry):
>
> mutex := TMutex new.
> p2 := nil.
> mutex critical: [
>     p2 := [mutex critical: [p2Valid := p2 notNil]] forkAt: Processor activePriority.
> ]

Yes.

> In other words, the programmer should be smart enough to do the  
> appropriate synchronization.  Is this what you mean?  That doesn't  
> seem unreasonable to me (and I say that as the guy who wrote the  
> broken code that started Andreas on this topic :-)  ).  I'm a little  
> uncomfortable with the notion of not giving processes the priority  
> explicitly requested by the programmer.

Btw, priority should only give a flavor of how often a process should
run; it is not a way to make sure that one process will run before
another. IMHO.


        Mth






Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Andreas.Raab
On 04/02/2008, Andreas Raab <[hidden email]> wrote:
> Hi -
>
> In my never-ending quest for questionable behavior in multi-threaded
> situations just today I ran into a pattern which is dangerously common
> in our code. It basically goes like this:
>

Hmm, IMO, you are trying to kill two birds with one stone here.

Why not write something like the following:

 MyClass>>startWorkerProcess
         "worker is an instance variable"
        running := true.
        worker := [self runWorkerProcess] fork.

 MyClass>>runWorkerProcess
         "Run the worker process"
         [running] whileTrue:[
                 "...do the work..."
         ].

 MyClass>>stopWorkerProcess
         "Stop the worker process"
        running := false. "let it terminate itself"

Yes, you will need an additional inst var, 'running', but I think
it's reasonable: controlling a process via scheduler operations,
where you need its handle, and controlling when it should terminate
gracefully (by setting the running flag to false), are different
things.
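Igor's flag-based variant maps directly onto a condition flag in other runtimes as well; a Python sketch (names illustrative) using `threading.Event` for the `running` flag:

```python
import threading

class FlagWorker:
    """The loop condition is a flag the controller owns,
    not the identity of the process handle."""

    def __init__(self):
        self.running = threading.Event()
        self.worker = None
        self.iterations = 0

    def start_worker(self):
        self.running.set()     # `running := true.` *before* the fork
        self.worker = threading.Thread(target=self.run)
        self.worker.start()

    def run(self):
        while self.running.is_set():   # `[running] whileTrue: [...]`
            self.iterations += 1       # "...do the work..."

    def stop_worker(self):
        self.running.clear()   # "let it terminate itself"
        self.worker.join()

w = FlagWorker()
w.start_worker()
w.stop_worker()   # the worker exits its loop on its own
```

Even if the assignment to `worker` loses the race, the loop still runs, because its condition no longer depends on the handle.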

--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
In reply to this post by Michael van der Gulik-2
Michael van der Gulik wrote:

>     What I am proposing is simply to make p2 non-nil in 100% of the cases.
>     There is no change to those parts of the existing semantics that are
>     actually well-defined. The only change is that it takes a rare
>     non-deterministic occurrence and makes the overall behavior consistent
>     in this case.
>
> You're relying on the current implementation of the scheduler. If the
> implementation of the scheduler changes (such as would happen when
> Squeak is made capable of using multi-cored or multiple CPUs) then your
> bug will re-appear and your "fix" will no longer fix the problem 100% of
> the time.

Indeed, I'm trying to fix it in the context of the *current*
implementation of the scheduler. If you change the scheduler there will
be a variety of new issues of which this is by far the smallest.

Cheers,
   - Andreas



Re: #fork and deterministic resumption of the resulting process

Paolo Bonzini-2
In reply to this post by Andreas.Raab

> This will make sure that #fork has (for the purpose of resumption) the
> same semantics as forking at a lower priority has.
>
> What do people think about this?

Curiously, I prefer to have the *opposite* behavior in #fork, i.e.
always resume the forked process if the priority is equal.  Note that
this wouldn't be an ultimate fix, because it could lead to an equally
wrong idiom,

     MyClass>>startWorkerProcess
         "worker is an instance variable"
         [self runWorkerProcess] fork.

     MyClass>>runWorkerProcess
         "Run the worker process"
         worker := Processor activeProcess.
         [Processor activeProcess == worker] whileTrue:[
             "...do the work..."
         ].

     MyClass>>stopWorkerProcess
         "Stop the worker process"
         worker := nil. "let it terminate itself"

I'm with Terry on the correct idiom to use, i.e.

     workerProcess := [self runWorkerProcess] newProcess.
     workerProcess resume.

if you really do not want to use a separate instance variable.  Another
possibility is to signal a Notification:

        | running |
        [running := true.
        [running] whileTrue: [ ... ]]
           on: StopRunning
           do: [ :ex | running := false. ex resume ].

Paolo


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
Paolo Bonzini wrote:
> I'm with Terry on the correct idiom to use, i.e.
>
>     workerProcess := [self runWorkerProcess] newProcess.
>     workerProcess resume.

Sigh. One of the problems with examples is that they are ... well
examples. They are not the actual code. The above solution is simply not
applicable in our context (if it were, I would agree with it as the
better solution).

[BTW, I'm gonna drop out of this thread since it's clear that there is
too much opposition for such a change to get into Squeak. Which is fine
by me - I'll wait until you get bitten in some really cruel and unusual
ways, and at that point you might be ready to understand why this fix is
valuable. Personally, I think that changes which take out an unusual
case of non-determinism like this one are always worth it - if behavior
is deterministic you can test it and fix it. If it's not, you might get
lucky a hundred times in a row, and in the one critical situation it
will bite you].

Cheers,
   - Andreas



Re: #fork and deterministic resumption of the resulting process

Bert Freudenberg
On Feb 5, 2008, at 10:29, Andreas Raab wrote:

> Paolo Bonzini wrote:
>> I'm with Terry on the correct idiom to use, i.e.
>>     workerProcess := [self runWorkerProcess] newProcess.
>>     workerProcess resume.
>
> Sigh. One of the problems with examples is that they are ... well  
> examples. They are not the actual code. The above solution is  
> simply not applicable in our context (if it were, I would agree  
> with it as the better solution).
>
> [BTW, I'm gonna drop out of this thread since it's clear that there  
> is too much opposition for such a change to get into Squeak. Which  
> is fine by me - I'll wait until you will get bitten in some really  
> cruel and unusual ways and at that point you might be ready to  
> understand why this fix is valuable. Personally, I think that  
> changes that take out an unusual case of non-determinism like here  
> are always worth it - if behavior is deterministic you can test it  
> and fix it. If it's not you might get lucky a hundred times in a  
> row. And in the one critical situation it will bite you].

Well, you should give us a bit more than a few hours ;) Until now  
most posters did not even understand the proposal.

I for one would appreciate getting your fix in. It does not change  
the current semantics, and makes one very common idiom (var := [...]  
fork) safer to use. There may be better idioms, granted. However, for  
now Squeak's scheduling policy is beautifully deterministic, and I  
like keeping simple things simple.

- Bert -
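
The change Bert is endorsing is truncated at the top of the thread; a
sketch of how the #forkAt: fix can complete, reconstructed from the
declared forkedProcess and helperProcess temporaries and from Andreas's
stated goal of "same semantics as forking at a lower priority" (the
exact body is an assumption, not a quote of Andreas's code):

```smalltalk
BlockContext>>forkAt: priority
	"Create and schedule a Process running the code in the receiver
	 at the given priority. Answer the newly created process.
	 When priority equals the active priority, resume the new process
	 from a helper running one level lower, so it cannot start before
	 the code following the #fork send -- e.g. the assignment in
	 worker := [...] fork -- has completed."
	| forkedProcess helperProcess |
	forkedProcess := self newProcess.
	forkedProcess priority: priority.
	priority = Processor activePriority
		ifTrue: [
			helperProcess := [forkedProcess resume] newProcess.
			helperProcess priority: priority - 1.
			helperProcess resume]
		ifFalse: [forkedProcess resume].
	^forkedProcess
```

The helper only runs once every process at the current priority blocks
or yields, which is exactly the behavior a fork at a lower priority
already has.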



Re: #fork and deterministic resumption of the resulting process

Mathieu SUEN

On Feb 5, 2008, at 11:02 AM, Bert Freudenberg wrote:

> On Feb 5, 2008, at 10:29 , Andreas Raab wrote:
>
>> Paolo Bonzini wrote:
>>> I'm with Terry on the correct idiom to use, i.e.
>>>    workerProcess := [self runWorkerProcess] newProcess.
>>>    workerProcess resume.
>>
>> Sigh. One of the problems with examples is that they are ... well  
>> examples. They are not the actual code. The above solution is  
>> simply not applicable in our context (if it were, I would agree  
>> with it as the better solution).
>>
>> [BTW, I'm gonna drop out of this thread since it's clear that there  
>> is too much opposition for such a change to get into Squeak. Which  
>> is fine by me - I'll wait until you will get bitten in some really  
>> cruel and unusual ways and at that point you might be ready to  
>> understand why this fix is valuable. Personally, I think that  
>> changes that take out an unusual case of non-determinism like here  
>> are always worth it - if behavior is deterministic you can test it  
>> and fix it. If it's not you might get lucky a hundred times in a  
>> row. And in the one critical situation it will bite you].
>
> Well, you should give us a bit more than a few hours ;) Until now  
> most posters did not even understand the proposal.
Why?

>
>
> I for one would appreciate getting your fix in. It does not change  
> the current semantics, and makes one very common idiom (var := [...]  
> fork) safer to use. There may be better idioms, granted. However,  
> for now Squeak's scheduling policy is beautifully deterministic, and  
> I like keeping simple things simple.
>
> - Bert -
>
>

        Mth




