#fork and deterministic resumption of the resulting process


Re: #fork and deterministic resumption of the resulting process

Paolo Bonzini-2
Andreas Raab wrote:

> Paolo Bonzini wrote:
>> I'm with Terry on the correct idiom to use, i.e.
>>
>>     workerProcess := [self runWorkerProcess] newProcess.
>>     workerProcess resume.
>
> Sigh. One of the problems with examples is that they are ... well
> examples. They are not the actual code. The above solution is simply not
> applicable in our context (if it were, I would agree with it as the
> better solution).

Can you explain better?

My problem with your solution is that, in principle, the priority-1
process that you create could never be scheduled if you have two
processes already running at the "right" priority.  So you trade one
"almost never happens" situation for another.

Paolo


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Andreas.Raab
On 05/02/2008, Andreas Raab <[hidden email]> wrote:

> Paolo Bonzini wrote:
> > I'm with Terry on the correct idiom to use, i.e.
> >
> >     workerProcess := [self runWorkerProcess] newProcess.
> >     workerProcess resume.
>
> Sigh. One of the problems with examples is that they are ... well
> examples. They are not the actual code. The above solution is simply not
> applicable in our context (if it were, I would agree with it as the
> better solution).
>

Andreas, you are looking at the problem from the wrong angle, I think.
Even in C you have separate thread creation and resumption:
(this code is from the socket plugin's Win32 platform code)
-----
  asyncLookupHandle =
    CreateThread(NULL,                    /* No security descriptor */
                 0,                       /* default stack size     */
                 (LPTHREAD_START_ROUTINE) &sqGetHostByName, /* what to do */
                 (LPVOID) PLUGIN_IPARAM,       /* parameter for thread   */
                 CREATE_SUSPENDED,        /* creation parameter --
create suspended so we can check the return value */
                 &id);                    /* return value for thread id */
  if(!asyncLookupHandle)
    printLastError(TEXT("CreateThread() failed"));
  /* lookups run with normal priority */
  if(!SetThreadPriority(asyncLookupHandle, THREAD_PRIORITY_NORMAL))
    printLastError(TEXT("SetThreadPriority() failed"));
  if(!ResumeThread(asyncLookupHandle))
    printLastError(TEXT("ResumeThread() failed"));
-----
See: first you obtain the handle, do checks, set its priority, etc.,
and only then issue ResumeThread() to let it run.

So why should things look different in Smalltalk?
The point is that once you enable a process to run, you can't
guarantee that the current process will continue running and will not
be preempted by the forked one.
And changing the scheduler to favor one execution thread over another
will not solve the problem, because your solution may fit your
purposes but not fit others'.

Simply put: don't assume that any code in the process which created a
new process by issuing #fork gets a chance to execute before the
forked process actually starts running.
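
The Smalltalk analogue of the C pattern above would be something like
this (a sketch; runWorkerProcess is the example method from earlier in
the thread, and the priority chosen is only illustrative):

-----
  "Create the process suspended, configure it, and only then let it
   run - mirroring CREATE_SUSPENDED / SetThreadPriority /
   ResumeThread in the C code above."
  worker := [self runWorkerProcess] newProcess.
  worker priority: Processor userBackgroundPriority.
  worker resume.
-----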


> [BTW, I'm gonna drop out of this thread since it's clear that there is
> too much opposition for such a change to get into Squeak. Which is fine
> by me - I'll wait until you will get bitten in some really cruel and
> unusual ways and at that point you might be ready to understand why this
> fix is valuable. Personally, I think that changes that take out an
> unusual case of non-determinism like here are always worth it - if
> behavior is deterministic you can test it and fix it. If it's not you
> might get lucky a hundred times in a row. And in the one critical
> situation it will bite you].
>
> Cheers,
>    - Andreas
>
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Randal L. Schwartz
>>>>> "Igor" == Igor Stasenko <[hidden email]> writes:

Igor> Simply don't assume that any bit of code having chance to be executed
Igor> in process which created new process by issuing #fork before forked
Igor> process is actually started execution.

Yeah, my preliminary conclusion has been swayed in the face of further
evidence.  I don't see any way to fix even the common breakage around this.

  a := [something] fork

can never be guaranteed to put something into a before something
starts running.  It would violate what fork is doing.  If you want
a lower priority, use #forkAt:, or simply create the process suspended
until it is ready.
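
A sketch of the suspended-creation idiom, for the record: because
#newProcess answers a suspended process, the assignment to a is
guaranteed to complete before the block can run.

-----
  a := [something] newProcess.
  "a is already assigned here; the block has not started yet"
  a resume.
-----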

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<[hidden email]> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!


Re: #fork and deterministic resumption of the resulting process

Bert Freudenberg
In reply to this post by Igor Stasenko
On Feb 5, 2008, at 16:43 , Igor Stasenko wrote:
> Even in C you have [...]
>
> So, why in smalltalk things should look different?


Err, because we use Smalltalk precisely because we do not want to  
deal with all the low-level stuff?

- Bert -


Re: #fork and deterministic resumption of the resulting process

Paolo Bonzini-2
>> Even in C you have [...]
>>
>> So, why in smalltalk things should look different?
>
> Err, because we use Smalltalk precisely because we do not want to deal
> with all the low-level stuff?

Race conditions are not low-level stuff.

Paolo


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Bert Freudenberg
On 05/02/2008, Bert Freudenberg <[hidden email]> wrote:
> On Feb 5, 2008, at 16:43 , Igor Stasenko wrote:
> > Even in C you have [...]
> >
> > So, why in smalltalk things should look different?
>
>
> Err, because we use Smalltalk precisely because we do not want to
> deal with all the low-level stuff?
>
That's true of course, but this problem lies in the algorithmic plane
(the order of operations), so the implementation language doesn't
really matter.

> - Bert -
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Joshua Gargus-2
In reply to this post by Igor Stasenko

On Feb 4, 2008, at 6:21 PM, Igor Stasenko wrote:

> On 04/02/2008, Andreas Raab <[hidden email]> wrote:
>> Hi -
>>
>> In my never-ending quest for questionable behavior in multi-threaded
>> situations just today I ran into a pattern which is dangerously  
>> common
>> in our code. It basically goes like this:
>>
>
> Hmm, IMO, you wanting to kill two rabbits in one shot..
>
> Why not write like following:
>
> MyClass>>startWorkerProcess
>         "worker is an instance variable"
>        running := true.
>        worker := [self runWorkerProcess] fork.
>
> MyClass>>runWorkerProcess
>         "Run the worker process"
>         [running] whileTrue:[
>                 "...do the work..."
>         ].
>
> MyClass>>stopWorkerProcess
>         "Stop the worker process"
>        running := false. "let it terminate itself"

This doesn't work either (and is actually the reason that the  
original, broken pattern described by Andreas was used).  Consider the  
following:

inst := MyClass new.
inst startWorkerProcess; stopWorkerProcess; startWorkerProcess

If the first worker process happens not to notice that 'running' was  
set to false for a moment, then you will have two processes running.  
By comparing against the worker process, you avoid this problem.
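
One way to sketch "comparing against the worker process" (hypothetical
code, and note it assumes the process was created with #newProcess so
that 'worker' is assigned before #resume - otherwise the comparison
itself races):

-----
MyClass>>runWorkerProcess
        "Run the worker; a stale worker stops itself as soon as it is
         no longer the registered one, even if it missed the brief
         running := false window"
        [running and: [worker == Processor activeProcess]] whileTrue: [
                "...do the work..."
        ]
-----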

Josh



>
>
> Yes, you will need an additional inst var - 'running', but i think
> it's reasonable: controlling a process in context of scheduler
> operations, where you need it's handle, and controlling when it's
> should terminate graciously (by setting running flag to false), is
> different things.
>
> --
> Best regards,
> Igor Stasenko AKA sig.
>



Re: #fork and deterministic resumption of the resulting process

Martin Beck-3
In reply to this post by Randal L. Schwartz
Randal L. Schwartz wrote:

>   a := [something] fork
>
> can never be guaranteed to put something into a before something
> starts running.  it would violate what fork is doing.  If you want
> a lower priority, forkAt: or simply create it suspended until ready.
>

Correct, and as Andreas said, his solution relies on the current
implementation of the Squeak scheduler. In theory there is no
guarantee that a new process with lower priority runs after my process
just because it has lower priority. This completely depends on the
scheduler in use. So perhaps it would be better to fix the particular
places in the code to use #newProcess and #resume.

Perhaps [ doSomething ] fork is simply too beautiful for this use case...

However, here is yet another argument for preferring share-nothing
concurrency:
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf

:)

Regards,
Martin


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Joshua Gargus-2
On 05/02/2008, Joshua Gargus <[hidden email]> wrote:

>
> On Feb 4, 2008, at 6:21 PM, Igor Stasenko wrote:
>
> > On 04/02/2008, Andreas Raab <[hidden email]> wrote:
> >> Hi -
> >>
> >> In my never-ending quest for questionable behavior in multi-threaded
> >> situations just today I ran into a pattern which is dangerously
> >> common
> >> in our code. It basically goes like this:
> >>
> >
> > Hmm, IMO, you wanting to kill two rabbits in one shot..
> >
> > Why not write like following:
> >
> > MyClass>>startWorkerProcess
> >         "worker is an instance variable"
> >        running := true.
> >        worker := [self runWorkerProcess] fork.
> >
> > MyClass>>runWorkerProcess
> >         "Run the worker process"
> >         [running] whileTrue:[
> >                 "...do the work..."
> >         ].
> >
> > MyClass>>stopWorkerProcess
> >         "Stop the worker process"
> >        running := false. "let it terminate itself"
>
> This doesn't work either (and is actually the reason that the
> original, broken pattern described by Andreas was used).  Consider the
> following:
>
> inst := MyClass new.
> inst startWorkerProcess; stopWorkerProcess; startWorkerProcess
>
> If the first worker process happens not to notice that 'running' was
> set to false for a moment, then you will have two processes running.
> By comparing against the worker process, you avoid this problem.
>

It really depends on the implementation and your needs.
If you want to make sure that the worker process is no longer running
after stopWorkerProcess has been issued, you can modify
stopWorkerProcess to look like:

MyClass>>stopWorkerProcess
         "Stop the worker process"
       running := false. "let it terminate itself"
       self waitForProcessToFinish

Or you can place the wait at the beginning of the startWorkerProcess
method. There is a wide range of options for controlling the
execution, but it's wrong to assume that you can avoid such details
when dealing with concurrency.
You _should_ know how things work and how they fit your tasks;
otherwise you will break everything.
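
waitForProcessToFinish is not spelled out in the thread; one
hypothetical way to sketch it is with a Semaphore that the worker
signals when it leaves its loop ('done' is an assumed instance
variable, initialized in startWorkerProcess):

-----
MyClass>>runWorkerProcess
        "Run the worker; signal 'done' when the loop exits"
        [running] whileTrue: [
                "...do the work..."
        ].
        done signal

MyClass>>waitForProcessToFinish
        "Block until the worker signals completion"
        done wait
-----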


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Joshua Gargus-2
In reply to this post by Joshua Gargus-2

On Feb 4, 2008, at 5:47 PM, Joshua Gargus wrote:
>
>  I'm a little uncomfortable with the notion of not giving processes  
> the priority explicitly requested by the programmer.

I obviously didn't read Andreas's proposal closely enough.  With more  
than a glance, it is clear that a lower-priority helper process is  
used only for a moment to start up the real process at the priority  
requested by the user.

The only problem that I can see with the proposal is that under "heavy
load" there may always be a runnable process of higher priority, so
that the helper process is starved.  Would this be a problem for
anyone in practice?  You could always work around it with:

desiredPriority := Processor activePriority.
[worker := [self runWorkerProcess] forkAt: desiredPriority]
        forkAt: desiredPriority + 1

Josh


Re: #fork and deterministic resumption of the resulting process

Joshua Gargus-2
In reply to this post by Igor Stasenko

On Feb 5, 2008, at 8:49 AM, Igor Stasenko wrote:

> On 05/02/2008, Joshua Gargus <[hidden email]> wrote:
>>
>> On Feb 4, 2008, at 6:21 PM, Igor Stasenko wrote:
>>
>>> On 04/02/2008, Andreas Raab <[hidden email]> wrote:
>>>> Hi -
>>>>
>>>> In my never-ending quest for questionable behavior in multi-
>>>> threaded
>>>> situations just today I ran into a pattern which is dangerously
>>>> common
>>>> in our code. It basically goes like this:
>>>>
>>>
>>> Hmm, IMO, you wanting to kill two rabbits in one shot..
>>>
>>> Why not write like following:
>>>
>>> MyClass>>startWorkerProcess
>>>        "worker is an instance variable"
>>>       running := true.
>>>       worker := [self runWorkerProcess] fork.
>>>
>>> MyClass>>runWorkerProcess
>>>        "Run the worker process"
>>>        [running] whileTrue:[
>>>                "...do the work..."
>>>        ].
>>>
>>> MyClass>>stopWorkerProcess
>>>        "Stop the worker process"
>>>       running := false. "let it terminate itself"
>>
>> This doesn't work either (and is actually the reason that the
>> original, broken pattern described by Andreas was used).  Consider  
>> the
>> following:
>>
>> inst := MyClass new.
>> inst startWorkerProcess; stopWorkerProcess; startWorkerProcess
>>
>> If the first worker process happens not to notice that 'running' was
>> set to false for a moment, then you will have two processes running.
>> By comparing against the worker process, you avoid this problem.
>>
>
> It's really depends on implementation and your needs.
> If you want to make sure that after issuing stopWorkerProcess a worker
> process don't running, you can modify a stopWorkerProcess to look
> like:
>
> MyClass>>stopWorkerProcess
>         "Stop the worker process"
>       running := false. "let it terminate itself"
>       self waitForProcessToFinish
>
> Or, you can place the wait at the beginning of startWorkerProcess  
> method.
> There are vague set of options how you can control the execution. But
> it's false to assume that you can avoid such details when dealing with
> it.
> You _should_ know, how things rolling and how they fit your tasks,
> otherwise you will break everything.
>
>
> --
> Best regards,
> Igor Stasenko AKA sig.
>



Re: #fork and deterministic resumption of the resulting process

Joshua Gargus-2
In reply to this post by Igor Stasenko
Oops, sorry about the last message; I hit "reply" too soon by accident.

On Feb 5, 2008, at 8:49 AM, Igor Stasenko wrote:

> On 05/02/2008, Joshua Gargus <[hidden email]> wrote:
>>
>> On Feb 4, 2008, at 6:21 PM, Igor Stasenko wrote:
>>
>>> On 04/02/2008, Andreas Raab <[hidden email]> wrote:
>>>> Hi -
>>>>
>>>> In my never-ending quest for questionable behavior in multi-
>>>> threaded
>>>> situations just today I ran into a pattern which is dangerously
>>>> common
>>>> in our code. It basically goes like this:
>>>>
>>>
>>> Hmm, IMO, you wanting to kill two rabbits in one shot..
>>>
>>> Why not write like following:
>>>
>>> MyClass>>startWorkerProcess
>>>        "worker is an instance variable"
>>>       running := true.
>>>       worker := [self runWorkerProcess] fork.
>>>
>>> MyClass>>runWorkerProcess
>>>        "Run the worker process"
>>>        [running] whileTrue:[
>>>                "...do the work..."
>>>        ].
>>>
>>> MyClass>>stopWorkerProcess
>>>        "Stop the worker process"
>>>       running := false. "let it terminate itself"
>>
>> This doesn't work either (and is actually the reason that the
>> original, broken pattern described by Andreas was used).  Consider  
>> the
>> following:
>>
>> inst := MyClass new.
>> inst startWorkerProcess; stopWorkerProcess; startWorkerProcess
>>
>> If the first worker process happens not to notice that 'running' was
>> set to false for a moment, then you will have two processes running.
>> By comparing against the worker process, you avoid this problem.
>>
>
> It's really depends on implementation and your needs.

Of course.

>
> If you want to make sure that after issuing stopWorkerProcess a worker
> process don't running, you can modify a stopWorkerProcess to look
> like:
>
> MyClass>>stopWorkerProcess
>         "Stop the worker process"
>       running := false. "let it terminate itself"
>       self waitForProcessToFinish

(implementation-specific comment, not really on-topic for the thread)
This is undesirable because you block the "controller" process while  
it could be doing other useful work.

>
>
> Or, you can place the wait at the beginning of startWorkerProcess  
> method.
> There are vague set of options how you can control the execution. But
> it's false to assume that you can avoid such details when dealing with
> it.

I don't think that anybody is making that assumption.

>
> You _should_ know, how things rolling and how they fit your tasks,
> otherwise you will break everything.

Sounds perfect, except for these little things called "bugs".  :-)

Josh


>
>
>
> --
> Best regards,
> Igor Stasenko AKA sig.
>



Re: #fork and deterministic resumption of the resulting process

timrowledge
In reply to this post by Joshua Gargus-2

On 5-Feb-08, at 9:10 AM, Joshua Gargus wrote:

>
> On Feb 4, 2008, at 5:47 PM, Joshua Gargus wrote:
>>
>> I'm a little uncomfortable with the notion of not giving processes  
>> the priority explicitly requested by the programmer.
>
> I obviously didn't read Andreas's proposal closely enough.  With  
> more than a glance, it is clear that a lower-priority helper process  
> is used only for a moment to start up the real process at the  
> priority requested by the user.

Dang! You're right! Reminder to self - avoid commenting after a quick  
look at densely written code.

I agree that there is surely still a moderately likely problem here,
in that the helper process, being at a lower priority, is not
guaranteed to run anytime soon. If there are several processes at the
current priority then they *all* have to get suspended before the
helper can run and complete the fork. Your suggestion might solve
that. I think.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
To iterate is human; to recurse, divine.




Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Joshua Gargus-2
On 05/02/2008, Joshua Gargus <[hidden email]> wrote:
[skip]

> >
> > If you want to make sure that after issuing stopWorkerProcess a worker
> > process don't running, you can modify a stopWorkerProcess to look
> > like:
> >
> > MyClass>>stopWorkerProcess
> >         "Stop the worker process"
> >       running := false. "let it terminate itself"
> >       self waitForProcessToFinish
>
> (implementation-specific comment, not really on-topic for the thread)
> This is undesirable because you block the "controller" process while
> it could be doing other useful work.
>
Again, this may or may not suit your needs. If your worker process
operates on shared state which must be accessed exclusively, the best
way is to wait until it finishes before running another process,
rather than enclosing every bit of code in semaphores and critical
sections.


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
In reply to this post by Bert Freudenberg
Bert Freudenberg wrote:

>> [BTW, I'm gonna drop out of this thread since it's clear that there is
>> too much opposition for such a change to get into Squeak. Which is
>> fine by me - I'll wait until you will get bitten in some really cruel
>> and unusual ways and at that point you might be ready to understand
>> why this fix is valuable. Personally, I think that changes that take
>> out an unusual case of non-determinism like here are always worth it -
>> if behavior is deterministic you can test it and fix it. If it's not
>> you might get lucky a hundred times in a row. And in the one critical
>> situation it will bite you].
>
> Well, you should give us a bit more than a few hours ;) Until now most
> posters did not even understand the proposal.

That's part of the reason why I won't pursue these changes here. To me
these changes are just as important as the ones that I posted for Delay
and Semaphore. However, unless one understands the kinds of problems
that are caused by the current code it is pointless to argue that fixing
them is important - I'm sure that unless people had been bitten by Delay
and Semaphore we would have the same kinds of debates with all sorts of
well-meant advice on how you "ought" to write your code ;-)

[The obvious problem with this advice is that these fixes are not
necessarily only to fix *my* code but that of *other* people. I only got
started down this path after I saw similar patterns with three different
sets of author initials on them. In other words, the problem is far more
than any individual's shortcoming, and fixing it in general means that it
will be fixed for any new people working on our projects]

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

Paolo Bonzini-2

> That's part of the reason why I won't pursue these changes here. To me
> these changes are just as important as the ones that I posted for Delay
> and Semaphore. However, unless one understands the kinds of problems
> that are caused by the current code it is pointless to argue that fixing
> them is important - I'm sure that unless people had been bitten by Delay
> and Semaphore we would have the same kinds of debates with all sorts of
> well-meant advise on how you "ought" to write your code ;-)

It's not that I don't think it's important.  I think the *bugs* are
important to fix, but that the root cause just *cannot* be fixed.  It's
just that:

1) the many people who made the same mistake maybe were just
cutting'n'pasting buggy code;

2) especially, the fix is not 100% safe unless I'm mistaken.

Paolo


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
Paolo Bonzini wrote:

>> That's part of the reason why I won't pursue these changes here. To me
>> these changes are just as important as the ones that I posted for
>> Delay and Semaphore. However, unless one understands the kinds of
>> problems that are caused by the current code it is pointless to argue
>> that fixing them is important - I'm sure that unless people had been
>> bitten by Delay and Semaphore we would have the same kinds of debates
>> with all sorts of well-meant advise on how you "ought" to write your
>> code ;-)
>
> It's not that I don't think it's important.  I think the *bugs* are
> important to fix, but that the root cause just *cannot* be fixed.

This completely depends on your definition of "root cause" and "cannot".
For me, it's the fact that fork will behave in 99.99% of the cases in
one way and in 0.01% in a different way. That kind of non-determinism is
probably the root cause for many lingering bugs in our system and it
*can* be eliminated.

> It's just that:
>
> 1) the many people who made the same mistake maybe were just
> cutting'n'pasting buggy code;

That is of course a possibility but unless you think the majority of
people recognized the bug in the code snippet I posted, I fail to see
how this makes a difference.

> 2) especially, the fix is not 100% safe unless I'm mistaken.

What do you mean by "100% safe"? It is 100% deterministic (which is what
I care about); I'm not sure what you mean when you use the term "safe" here.

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

Paolo Bonzini-2

>> 2) especially, the fix is not 100% safe unless I'm mistaken.
>
> What do you mean by "100% safe"? It is 100% deterministic (which is what
> I care about); I'm not sure what you mean when you use the term "safe"
> here.

It is not.  Whether the low-priority process actually starts depends on
external factors.  If you have two priority-40 processes, they might
prevent the priority-39 process from starting and resuming the forked
process.  Of course, unless I'm mistaken.

Paolo


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
Paolo Bonzini wrote:
>>> 2) especially, the fix is not 100% safe unless I'm mistaken.
>>
>> What do you mean by "100% safe"? It is 100% deterministic (which is
>> what I care about); I'm not sure what you mean when you use the term
>> "safe" here.
>
> It is not.

Err, it is not what? Deterministic? Or safe? The point about it being
deterministic did not relate to when exactly the process would resume
(no real-time guarantee) but rather that it would resume
deterministically in relation to its parent process (in this case, only
after the parent process got suspended).

> Whether the low-priority process actually starts depends on
> external factors.  If you have two priority-40 processes, they might
> prevent the priority-39 process to start and resume the forked process.

Correct. And it is an interesting question to discuss how the system
*should* behave if it's exceeding its capabilities (i.e., running at
100% CPU). But I'll leave that discussion for a different day.

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

John Brant-2
Andreas Raab wrote:

> Paolo Bonzini wrote:
>>>> 2) especially, the fix is not 100% safe unless I'm mistaken.
>>>
>>> What do you mean by "100% safe"? It is 100% deterministic (which is
>>> what I care about); I'm not sure what you mean when you use the term
>>> "safe" here.
>>
>> It is not.
>
> Err, it is not what? Deterministic? Or safe? The point about it being
> deterministic did not relate to when exactly the process would resume
> (no real-time guarantee) but rather that it would resume
> deterministically in relation to its parent process (in this case, only
> after the parent process got suspended).

I don't have the image in front of me, but what about when the process
is running at the bottom priority?

What I'm guessing is happening in your case, is that some higher
priority process is interrupting the "parent" process after it has made
the forked process runnable, but before it has assigned the variable.
When the "parent" process is interrupted, it is inserted at the end of
runnable processes at that priority. Therefore, when the higher priority
process yields control, instead of yielding control back to the "parent"
process, it yields control to the forked process.
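
In other words, the suspected interleaving sketched as annotated code:

-----
  "parent process, at priority p"
  worker := [self runWorkerProcess] fork.
        "1. #fork makes the new process runnable
         2. a higher-priority process interrupts the parent HERE,
            before the assignment, pushing the parent to the BACK of
            the priority-p run queue
         3. when the interrupter blocks, the forked process - now
            ahead of the parent in the queue - runs first, while
            'worker' is still nil"
-----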

If my guess is correct and you want deterministic behavior, you could
change the process scheduler code so that when a higher priority process
interrupts a lower priority process, the lower priority process is
inserted at the beginning of the runnable processes at that priority
instead of the end.

Of course, this change would break the simple time sliced scheduler hack:
        [[(Delay forMilliseconds: timeslice) wait] repeat]
                forkAt: Processor highestPriority
Also, some processes that worked before, might starve now.


John Brant
