#fork and deterministic resumption of the resulting process


Re: #fork and deterministic resumption of the resulting process

Blake-5
On Wed, 06 Feb 2008 14:38:43 -0800, Michael van der Gulik  
<[hidden email]> wrote:

> On 2/7/08, Igor Stasenko <[hidden email]> wrote:
>> On 06/02/2008, nicolas cellier <[hidden email]> wrote:
>> > Michael van der Gulik wrote:
>> > >
>> > > When that day comes, my code will run faster and your code will  
>> break.
>> > >
>> >
>> > I see, sustainable development. You're writing code for our children.
>> >
>>
>> Oh, come on, how can you be so blind?
>
> Sig: I think that people are now just saying rubbish to provoke a
> reaction. I wouldn't bother replying.

As much as I am a fan of a solid code base and reusable code, actual  
history shows very little survives that long, and that which does survive  
is often a problem.

Programming for a not-yet-extant paradigm, environment, etc., may not be  
the wisest use of resources.

Not trollin', just sayin'.

        ===Blake===


Re: #fork and deterministic resumption of the resulting process

Yoshiki Ohshima-2
In reply to this post by Michael van der Gulik-2
  Michael,

> I think that people are now just saying rubbish to provoke a
> reaction. I wouldn't bother replying.

  Wow, does this "people" include yourself?  What was this email
about?

http://lists.squeakfoundation.org/pipermail/squeak-dev/2008-February/125110.html

  You apologized for making a provoking comment there.  Why again?

  And yes, as Andreas wrote, I don't think you understand the problem
Andreas was trying to solve.  And yes, it was started under the
assumption that the current VM behaves in the way it does.  And yes,
Bert has a point: we have control over the behavior, and
determinism under the assumption has a real practical merit.  Please
re-visit the problem domain this discussion was in.

-- Yoshiki

  BTW, you apologized for making a direct comment but didn't mention
that claiming Andreas' code was buggy and incorrect was wrong.  Is
that still so?


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2
On 2/7/08, Yoshiki Ohshima <[hidden email]> wrote:

>   Michael,
>
> > I think that people are now just saying rubbish to provoke a
> > reaction. I wouldn't bother replying.
>
>   Wow, does this "people" include yourself?  What was this email
> about?
>
> http://lists.squeakfoundation.org/pipermail/squeak-dev/2008-February/125110.html
>
>   You apologized for making a provoking comment there.  Why again?


It was not intended to be a provoking comment.

>
>   And yes, as Andreas wrote, I don't think you understand the problem


I understand the problem and his fix.

>   BTW, you apologized for making a direct comment but didn't mention
> that claiming Andreas' code was buggy and incorrect was wrong.  Is
> that still so?


Andreas is a great programmer, but the example he posted had a bug in
it and the proposed fix was incorrect.

I'll continue this discussion after I've released a stable,
multi-threaded VM in which Andreas's example breaks.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2
On 2/7/08, Michael van der Gulik <[hidden email]> wrote:

> On 2/7/08, Yoshiki Ohshima <[hidden email]> wrote:
> >   Michael,
> >
> > > I think that people are now just saying rubbish to provoke a
> > > reaction. I wouldn't bother replying.
> >
> >   Wow, does this "people" include yourself?  What was this email
> > about?
> >
> >
> http://lists.squeakfoundation.org/pipermail/squeak-dev/2008-February/125110.html
> >
> >   You apologized for making a provoking comment there.  Why again?
>
>
> It was not intended to be a provoking comment.

*ahem* "a provocative comment".

Gulik.


--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

stephane ducasse
In reply to this post by Andreas.Raab
hi andreas

I know this feeling far too well, often because of your statements
about what I'm doing :)
It is important to get feedback on really difficult points, so do
not leave the discussion.

I'm bad at concurrent programming, so normally I read these threads to
try to understand and fill the gaps in my knowledge :)
Stef

On Feb 6, 2008, at 12:47 AM, Andreas Raab wrote:

> Michael van der Gulik wrote:
>> Andreas's original code was buggy and his proposed fix was incorrect.
>
> If I needed any more proof that people don't get what I'm trying
> to fix, this would do it ;-) Seriously, you *really* don't get what  
> I'm trying to address with the fix.
>
> Anyway, I don't care. Croquet has that fix and it's your choice to  
> ignore it and deal with the consequences. And with that I'm out of  
> this thread for real and apologize for the pointless waste of  
> bandwidth.
>
> Cheers,
>  - Andreas
>
>



Re: #fork and deterministic resumption of the resulting process

Nicolas Cellier-3
In reply to this post by Michael van der Gulik-2
Michael van der Gulik wrote:

> Andreas is a great programmer, but the example he posted had a bug in
> it and the proposed fix was incorrect.
>

The patch is correct in its Squeak context.

A feature taken for granted by many programmers, as indicated by the
existing code base, worked 99.9% of the time but was not guaranteed 100%.

If this feature seems natural and its implementation does not break
anything, why not change the API and make it conform to common expectations?

Andreas either added a new contract, or repaired a broken contract.

Your concerns are about
- portability to other dialects
- compatibility with future parallel threading

The first might not be major; other Smalltalks can be patched too (if
needed).

The second cause is noble, but maybe not wise: you are putting too many
constraints on code for an eventuality that might not happen any time
soon. How can you foresee an API that still isn't defined? And if you
can, can we?
A French speaker would call that "tirer des plans sur la comète"
(making plans for the comet, i.e. building castles in the air).


> I'll continue this discussion after I've released a stable,
> multi-threaded VM in which Andreas's example breaks.
>
> Gulik.
>

With different API contracts and constraints, a lot of code would break.

Either there will be a major rewrite of the libraries, or you will have
to provide backward compatibility for the older API.

What matters is that code is bad today, because programmer expectations
do not meet the Kernel implementation. And Andreas solved this nicely, I
think.

Maybe we are all wrong because you are able to provide a clever
concurrent, parallel, multithreaded implementation 99% compatible with
the existing code base, even the core BlockContext and Process classes;
maybe you are a few years ahead. But until this is proven, we'll assume
Andreas is right.

Sorry, I don't like howling with the wolves, but your repeated
assertions deserved a response.

Friendly.

Nicolas



Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
On 07/02/2008, nicolas cellier <[hidden email]> wrote:

> Michael van der Gulik wrote:
>
> > Andreas is a great programmer, but the example he posted had a bug in
> > it and the proposed fix was incorrect.
> >
>
> The patch is correct in its Squeak context.
>
> A feature taken for granted by many programmers, as indicated by the
> existing code base, worked 99.9% of the time but was not guaranteed 100%.
>
> If this feature seems natural and its implementation does not break
> anything, why not change the API and make it conform to common expectations?
>
> Andreas either added a new contract, or repaired a broken contract.
>
Please correct me if I'm wrong: Andreas is proposing that the given
code will work:

workerProcess := [ ... ] fork.

where, inside the block, you access the workerProcess variable and
expect it to be initialized.

Now, if that's true, then some of us might assume that we can do
something like this:

MyClass>>forkAndReturnForkedProcess

^ [ .... ] fork.

and in another method write:

MyAnotherClass>>forkProcess
^ forkedProcess := myClass forkAndReturnForkedProcess.

and expect that this should work as well?

And then a third developer comes and writes another method which
relies on such behavior, and so on..
So, where can this 'determinism' chain be stopped? At what point do you
no longer have any guarantee whether the forked process accesses
initialized or uninitialized state?
How can I estimate where to stop with this kind of determinism?
Wouldn't it be better to have 'determinism' in the following sense: if
you fork a process, you should make sure that you have everything
properly initialized before doing the fork?

> Your concerns are about
> - portability to other dialect
> - compatibility with future parallel thread
>
> The first might not be major, other Smalltalk can be patched too (if
> needed).
>
> The second cause is noble, but maybe not wise, you are putting too much
> constraints on code for an eventual event that might not happen any time
> soon. How can you foresee an API that still isn't defined? And if you
> can, can we?
> A french would call that "tirer des plans sur la comète".
>
>
> > I'll continue this discussion after I've released a stable,
> > multi-threaded VM in which Andreas's example breaks.
> >
> > Gulik.
> >
>
> With different API contracts and constraints a lot of code would break.
>
> Either, there will be a major rewrite of libraries, or you will have to
> provide a backward compatibility for elder API.
>
Hmm, why should we pursue a major rewrite each time?
Why not call it #deterministicFork (or whatever) and leave the
original method untouched?

> What matters is that code is bad today, because programmer expectations
> do not meet Kernel implementation. And Andreas solved this nicely I think.
>
I don't agree with that. I never expected fork to work the way
Andreas proposes.

> Maybe we are all wrong because you are able to provide a clever
> concurrent parallel multithread implementation 99% compatible with
> existing code base, even core BlockContext and Process class, maybe you
> are a few years ahead, but until this is proven, we'll assume Andreas is
> right.
>

I don't think this is a case where we need to break compatibility.


> Sorry, I don't like crying with the wolves, but your repeated assertions
> deserved a response.
>
> Friendly.
>
> Nicolas
>

And finally, do you really think that hiding concurrency issues from
the eyes of the developer helps him write good thread-safe code?
I think it does exactly the opposite: it makes him feel safe and
comfortable.
But then, when the problem finally knocks at his door, he will not
know what to do, because the API gives only a 99% guarantee that
something will work, leaving the 1% to whom???


--
Best regards,
Igor Stasenko AKA sig.
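
The alternative Igor describes, making sure everything is initialized
before the fork, already has a standard spelling in Squeak: create the
process suspended with #newProcess, finish the initialization, then
#resume it. A minimal sketch (the names workerProcess and doWorkUsing:
are placeholders, not from the original emails):

```smalltalk
"Racy with plain #fork: the child may run and read workerProcess
before the assignment below has completed.
(doWorkUsing: is a hypothetical placeholder selector.)"
workerProcess := [ self doWorkUsing: workerProcess ] fork.

"Deterministic with no scheduler changes: a process created with
#newProcess stays suspended, so the child cannot run until after
the assignment, when #resume is sent."
workerProcess := [ self doWorkUsing: workerProcess ] newProcess.
workerProcess resume.
```

This is the explicit #newProcess / #resume pair that several posts
below suggest as the portable alternative to patching #fork.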



Re: #fork and deterministic resumption of the resulting process

Nicolas Cellier-3
Igor Stasenko wrote:

> Please, correct me if i'm wrong:
> Andreas proposing that given code will work
>
> workerProcess := [ ... ] fork.
>
> while in block, you accessing a workerProcess variable and expect it
> to be initialized.
>
> Now, if that's true, then some of us might assume that he can do
> something like this:
>
> MyClass>>forkAndReturnForkedProcess
>
> ^ [ .... ] fork.
>
> and in another method write:
>
> MyAnotherClass>>forkProcess
> ^ forkedProcess := myClass forkAndReturnForkedProcess.
>
> and he should expect that this should work as well?
>

As I understand it, the code above will work, because forkedProcess
won't have a chance to start until the active process blocks on a
Semaphore wait.

> And then, third developer cames and writes another method , which
> relies on such behavior and so on..
> So, where this 'determinism' chain can be stopped, where you don't
> have any guarantees if forked process accessing
> initialized/uninitialized state?
> How i can estimate, where to stop with such kind of determinism?
> Isn't it would be better to have 'determinism' in following: if you
> forked process, you should make sure that you have everything properly
> initialized before doing fork?
>

The simpler, the better. That's why I like Andreas' solution.

>> With different API contracts and constraints a lot of code would break.
>>
>> Either, there will be a major rewrite of libraries, or you will have to
>> provide a backward compatibility for elder API.
>>
>
> Hmm, why we should pursue with major rewrite each time?
> Why don't call it #determinatedFork (or whatever) and don't touch
> original method?
>

A #randomFork API has no value per se.
A #deterministicFork can simplify coding.

Of course, in a multi-processor VM, randomFork would be the default, and
deterministicFork might still exist but be more costly (no parallelism).

>> What matters is that code is bad today, because programmer expectations
>> do not meet Kernel implementation. And Andreas solved this nicely I think.
>>
> I don't agree with that. I never expected that fork should work as
> Andreas proposing.
>

Then Andreas' patch won't change anything for you.
It's an easy way to make some existing code work.

I perfectly understand your point.
You say we'd better change the existing code.

Because Andreas' implementation relies on correct sequencing based only
on Process priority, it cannot be extended to the multi-processor case.
So at the very least, this deterministicFork would not work there
without a rewrite.

It would be easier to explicitly mark the existing code relying on this
feature, and rewrite only those parts later...

With Andreas' change, we choose the other alternative: be lazy now, and
delay the hard work of reviewing every #fork until we have such a VM
available.

>> Maybe we are all wrong because you are able to provide a clever
>> concurrent parallel multithread implementation 99% compatible with
>> existing code base, even core BlockContext and Process class, maybe you
>> are a few years ahead, but until this is proven, we'll assume Andreas is
>> right.
>>
>
> I don't think it's the case, where we need to break compatibility.
>

I do not believe that you will be able to provide image compatibility,
simply because a lot of code has been written with the single-processor
model in mind.

But maybe I'm wrong; you have looked at this part of the code longer
than I have.

>
> And finally, do you really think, that hiding concurrency issues from
> the eyes of developer helps him to write good thread-safe code?
> I think, it's doing exactly opposite: makes him think safe and comfortable.

I do not like this argument. It's like saying that coding is complex,
and that we must keep it complex to prevent the vulgus pecum from
putting its nose in. Not the Smalltalk philosophy as I understood it.

If you say that using the proposed explicit #resume solution when
needed would be as simple, more portable, and more future-proof, that's
an argument I'm more inclined to hear.

> But then, where problem fill finally knock to his door, he will not
> know what to do, because API having only 99% guarantees of something
> that it will work, leaving 1% to whom???
>
Now that it can be 100% with 5 lines of code, I say we should afford
that facility.

It all depends on the horizon of availability of a multi-processor VM
with all atomicity problems resolved. I tend not to believe in one
anytime soon, and I tend not to believe it would run an image without
rewriting some parts. That's maybe where we most disagree, because of
the hard work you have put into it.

Cheers

Nicolas



Re: #fork and deterministic resumption of the resulting process

John Brant-2
In reply to this post by Nicolas Cellier-3
nicolas cellier wrote:
> Michael van der Gulik wrote:
>
>> Andreas is a great programmer, but the example he posted had a bug in
>> it and the proposed fix was incorrect.
>>
>
> The patch is correct in its Squeak context.

Unless I've missed some correction to the patch, the patch isn't
correct. For example, you'll get an "Invalid priority" walkback if you
try to evaluate the following using the patch:
        [[Transcript show: 'works'; cr; flush] fork]
                forkAt: Processor lowestPriority

Furthermore, it allows lower-priority processes to run before a process
forked at a higher priority. For example, here's an example that should
open an inspector on #(1 2 3), but with the patch, it opens an inspector
on #(1 3 2). The process forked at the lowest priority is run before the
one forked at a higher priority.

[| queue |
queue := SharedQueue new.
[queue nextPut: 3] forkAt: Processor lowestPriority.
queue nextPut: 1.
[queue nextPut: 2] fork.
(Delay forSeconds: 1) wait. "Hack to let the blocks finish"
(Array
        with: queue next
        with: queue next
        with: queue next) inspect] forkAt: Processor lowestPriority + 1


To me, the existing behavior is deterministic. When you fork a process
at the same priority as the running parent process, then the new forked
process is added to the list of runnable processes at that priority. The
parent process continues to run until either a higher priority process
becomes runnable or it yields control. If a higher priority process
becomes runnable, then the parent process is suspended and added to the
end of the runnable processes at its priority. Since it is added at the
end of the runnable processes list, the forked process will resume
before the parent resumes.
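
John's description of the existing behavior can be checked with a tiny
workspace sketch (a sketch only; it assumes no higher-priority process
preempts the parent in between, which is exactly the rare case Andreas'
patch is aimed at):

```smalltalk
"A child forked at the same priority is appended to the runnable
list and only runs once the parent blocks or yields, so 1 is
added to the log before 2."
| log |
log := OrderedCollection new.
[ log add: 2 ] fork.
log add: 1.
Processor yield.   "give the forked child a chance to run"
log inspect        "OrderedCollection (1 2) in the common case"
```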


John Brant


Re: #fork and deterministic resumption of the resulting process

Nicolas Cellier-3
John Brant wrote:

> nicolas cellier wrote:
>> Michael van der Gulik wrote:
>>
>>> Andreas is a great programmer, but the example he posted had a bug in
>>> it and the proposed fix was incorrect.
>>>
>>
>> The patch is correct in its Squeak context.
>
> Unless I've missed some correction to the patch, the patch isn't
> correct. For example, you'll get an "Invalid priority" walkback if you
> try to evaluate the following using the patch:
>     [[Transcript show: 'works'; cr; flush] fork]
>         forkAt: Processor lowestPriority
>

Yes it's a flaw. Easy to correct however.

> Furthermore, it allows lower-priority processes to run before a process
> forked at a higher priority. For example, here's an example that should
> open an inspector on #(1 2 3), but with the patch, it opens an inspector
> on #(1 3 2). The process forked at the lowest priority is run before the
> one forked at a higher priority.
>
> [| queue |
> queue := SharedQueue new.
> [queue nextPut: 3] forkAt: Processor lowestPriority.
> queue nextPut: 1.
> [queue nextPut: 2] fork.
> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
> (Array
>     with: queue next
>     with: queue next
>     with: queue next) inspect] forkAt: Processor lowestPriority + 1
>
>

OK, I like it.
Nothing in Andreas' implementation would guarantee any
child-determinism; that's a good point.

Your example is worse because its results are counter-intuitive.
That's a definitive argument.

So I take my vote back, and say only idiots don't change their minds.
Sorry Andreas, you'd better use an explicit #newProcess and an explicit
#resume as suggested.

Nicolas

> To me, the existing behavior is deterministic. When you fork a process
> at the same priority as the running parent process, then the new forked
> process is added to the list of runnable processes at that priority. The
> parent process continues to run until either a higher priority process
> becomes runnable or it yields control. If a higher priority process
> becomes runnable, then the parent process is suspended and added to the
> end of the runnable processes at its priority. Since it is added at the
> end of the runnable processes list, the forked process will resume
> before the parent resumes.
>
>
> John Brant
>
>



Re: #fork and deterministic resumption of the resulting process

Yoshiki Ohshima-2
In reply to this post by John Brant-2
  (Hmm, while I'm writing this, I saw an email from nicolas.  Well,
what a synchronicity; he also uses the phrase "counter intuitive".  But
the argument is different somehow... So let me post this anyway...)

> Unless I've missed some correction to the patch, the patch isn't
> correct. For example, you'll get an "Invalid priority" walkback if you
> try to evaluate the following using the patch:
> [[Transcript show: 'works'; cr; flush] fork]
> forkAt: Processor lowestPriority

  Yes, the #lowestPriority case would be a problem.  Perhaps
#lowestPriority would be renamed to #reallyLowestPriority, and the new
#lowestPriority would return #reallyLowestPriority + 1?

> Furthermore, it allows lower-priority processes to run before a process
> forked at a higher priority. For example, here's an example that should
> open an inspector on #(1 2 3), but with the patch, it opens an inspector
> on #(1 3 2). The process forked at the lowest priority is run before the
> one forked at a higher priority.

  It depends on the interpretation of "should" above.  One of the
points sig and Michael made was that in the future we may have a
parallel-processing VM and a different scheduler.  In that case, a
process with higher priority would mean that it gets scheduled more
often than a lower one, but not necessarily that it prevents a lower
one from running, right?  If we take that variation of scheduler into
account, "nextPut: 3" may be scheduled before "nextPut: 2" and it is
not "incorrect".  (From one point of view, a program that relies on a
particular implementation of scheduling is wrong.)

> [| queue |
> queue := SharedQueue new.
> [queue nextPut: 3] forkAt: Processor lowestPriority.
> queue nextPut: 1.
> [queue nextPut: 2] fork.
> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
> (Array
> with: queue next
> with: queue next
> with: queue next) inspect] forkAt: Processor lowestPriority + 1
>
>
> To me, the existing behavior is deterministic.

  I think when Andreas said "99.99+%", he had actually counted it.
With the presence of higher-priority network activity and other
interrupts, he reported that the original implementation wasn't
deterministic, and I rather take his word for it.  (I'm not going to
try something 10,000 times by myself, though.)

  The result #(1 3 2) may look "counter-intuitive", but the idea was
"rather be counter-intuitive than non-deterministic".

  There was a little project a few years back to provide a 2D
distributed conferencing system with video, audio, slides, and text
chat in Squeak.  Back then, they didn't have the reliable Semaphore
patch (nor Andreas on the team).  Sure enough, the system never became
fully stable and eventually the project died.  (They rewrote a clone
system in Flex and it works really well, actually.)

  In the future, somebody might try to implement another kind of
distributed interactive system.  Whoever that person might be, he
would certainly want to have this in his image...

-- Yoshiki


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2


On Feb 8, 2008 1:14 PM, Yoshiki Ohshima <[hidden email]> wrote:
> (Hmm, while I'm writing this, I saw an email from nicolas.  Well,
> what a synchronicity; he also uses the phrase "counter intuitive".  But
> the argument is different somehow... So let me post this anyway...)

> Unless I've missed some correction to the patch, the patch isn't
> correct. For example, you'll get an "Invalid priority" walkback if you
> try to evaluate the following using the patch:
>       [[Transcript show: 'works'; cr; flush] fork]
>               forkAt: Processor lowestPriority

> Yes, the #lowestPriority case would be a problem.  The
> #lowestPriority would be renamed to #reallyLowestPriority and new
> #lowestPriority would return #reallyLowestPriority+1?


Would you seriously consider changing this in the squeak.org image?

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Mathieu SUEN
In reply to this post by John Brant-2

On Feb 8, 2008, at 12:19 AM, John Brant wrote:

> [| queue |
> queue := SharedQueue new.
> [queue nextPut: 3] forkAt: Processor lowestPriority.
> queue nextPut: 1.
> [queue nextPut: 2] fork.
> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
> (Array
> with: queue next
> with: queue next
> with: queue next) inspect] forkAt: Processor lowestPriority + 1

I am confused: why does the first fork run before the second fork with
the Andreas patch?

        Mth






Re: #fork and deterministic resumption of the resulting process

Yoshiki Ohshima-2
In reply to this post by Michael van der Gulik-2
>      Yes, the #lowestPriority case would be a problem.  The
>     #lowestPriority would be renamed to #reallyLowestPriority and new
>     #lowestPriority would return #reallyLowestPriority+1?
>
> Would you seriously consider changing this in the squeak.org image?

  I don't know the right answer to this question.  I understand that
most people don't need it and therefore don't want to have it.  OTOH, I
don't mind having it, and it would prevent a few people in the
future from stumbling on the problem...

-- Yoshiki


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2


On Feb 8, 2008 1:32 PM, Yoshiki Ohshima <[hidden email]> wrote:
>      Yes, the #lowestPriority case would be a problem.  The
>     #lowestPriority would be renamed to #reallyLowestPriority and new
>     #lowestPriority would return #reallyLowestPriority+1?
>
> Would you seriously consider changing this in the squeak.org image?

> I don't know the right answer to this question.  I understand that
> most people don't need it and therefore don't want to have it.  OTOH, I
> don't mind having it, and it would prevent a few people in the
> future from stumbling on the problem...


In that case, the lowest-priority idle process needs to be of a lower priority than #reallyLowestPriority, so I propose also adding #actualReallyLowestPriority to be the lowest priority, then #reallyLowestPriority to be #actualReallyLowestPriority+1, and #lowestPriority to be #actualReallyLowestPriority+2.

Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Nicolas Cellier-3
In reply to this post by Yoshiki Ohshima-2
Yoshiki Ohshima wrote:

>   (Hmm, while I'm writing this, I saw an email from nicolas.  Well,
> what a sychronicity; he also uses the phrase "counter intuitive".  But
> the argument is different somehow... So let me post this anyway...)
>
>> Unless I've missed some correction to the patch, the patch isn't
>> correct. For example, you'll get an "Invalid priority" walkback if you
>> try to evaluate the following using the patch:
>> [[Transcript show: 'works'; cr; flush] fork]
>> forkAt: Processor lowestPriority
>
>   Yes, the #lowestPriority case would be a problem.  The
> #lowestPriority would be renamed to #reallyLowestPriority and new
> #lowestPriority would return #reallyLowestPriority+1?
>

Hmm, I'd rather have thought to do a #randomFork in this case.


>> Furthermore, it allows lower-priority processes to run before a process
>> forked at a higher priority. For example, here's an example that should
>> open an inspector on #(1 2 3), but with the patch, it opens an inspector
>> on #(1 3 2). The process forked at the lowest priority is run before the
>> one forked at a higher priority.
>
>   It depends on the interpretation of "should" above.  So, one of
> points sig and Michael made was that in the future we may have
> parallel-processing VM and a different scheduler.  In that case, a
> process with higher priority would mean that it gets scheduled more
> often than lower one, but not necessarily it prevents a lower one from
> running, right?  If we take that variation of scheduler into account,
> "nextPut: 3" may be scheduled before nextPut: 3 and it is not
> "incorrect".  (From one point of view, a program that relies on a
> particular implementation of scheduling is wrong.)
>

Yes, of course; no assumption based on priority is possible in that
case.

Some expectations are made on priority in the current image.
In particular, highestPriority means running unpreemptively. But
unpreemptive also means exclusive on a single processor: not portable
to multi-processor.

>> [| queue |
>> queue := SharedQueue new.
>> [queue nextPut: 3] forkAt: Processor lowestPriority.
>> queue nextPut: 1.
>> [queue nextPut: 2] fork.
>> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
>> (Array
>> with: queue next
>> with: queue next
>> with: queue next) inspect] forkAt: Processor lowestPriority + 1
>>
>>
>> To me, the existing behavior is deterministic.
>
>   I think when Andreas said "99.99+%", he had actually counted it.
> With the presence of higher-priority network activity and other
> interrupts, he reported that original implementation wasn't
> deterministic, and I rather take his words.  (I'm not going to try
> something 10,000 times by myself, though.)
>
>   The result #(1 3 2) may look "counter-intuitive", but the idea was
> "rather to be counter-intuitive than non-deterministic".
>

Yes, Andreas' contract was only between parent and child, and is
therefore not broken by this example.

The current implementation is such that 3 is always the last one.
Andreas' implementation is such that 1 is always the first one.

The order of the other two is not deterministic (the 0.01% cases).

This example is just a trick, showing how expectations based on
priority can be confusing ("preuve par l'absurde", a proof by
contradiction).

It's putting into code what Igor was trying to say: it might be better
not to expect anything and to rely, for example, on a good and simple
#newProcess / #resume pair, which perfectly fits Andreas' need.

That's why I'm not at all sure his patch is needed now.
It's a clever, brilliant patch, but it enforces expectations that are
arguable.
I better understand the counter-arguments now.
Only my opinion...

Nicolas

>   There was a little project to provide 2D distributed conferencing
> system with video, audio, slides, and text chat in Squeak a few years
> back.  Back then, they didn't have the reliable Semaphore patch (nor
> Andreas in the team).  Sure enough, the system never become fully
> stable and eventually the project died.  (They rewrote a clone system
> in Flex and it works really well, actually.)
>
>   In the future, somebody might try to implement another kind of
> distributed interactive system.  Whoever that person might be, he
> would certainly want to have this in his image...
>
> -- Yoshiki
>
>



Re: #fork and deterministic resumption of the resulting process

John Brant-2
In reply to this post by Mathieu SUEN
Mathieu Suen wrote:

>
> On Feb 8, 2008, at 12:19 AM, John Brant wrote:
>
>> [| queue |
>> queue := SharedQueue new.
>> [queue nextPut: 3] forkAt: Processor lowestPriority.
>> queue nextPut: 1.
>> [queue nextPut: 2] fork.
>> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
>> (Array
>>     with: queue next
>>     with: queue next
>>     with: queue next) inspect] forkAt: Processor lowestPriority + 1
>
> I am confused: why does the first fork run before the second fork
> with the Andreas patch?

Without the patch, we get the following steps:
(I've labeled the blocks based on what they output to the shared queue
1, 2, or 3)
        A) the 1 block is evaluated at priority lowestPriority + 1
        B) the 3 block is forked at priority lowestPriority
        C) the 1 block continues to run since it is at a higher priority than
the 3 block and it outputs 1 on the queue
        D) the 2 block is forked at the same priority as the 1 block and placed
on the runnable process list for lowestPriority + 1
        E) the 1 block continues to run until it is either interrupted by a
higher priority process or it hits the Delay>>wait
        F) the 2 block is evaluated since it has a higher priority than the 3
block; this outputs 2 on the queue and terminates
        G) if the 1 block was interrupted before getting to Delay>>wait, it
resumes and runs to the Delay>>wait
        H) the 3 block is evaluated since 2 has terminated and 1 is waiting on
the Delay -- it outputs 3 to the queue and terminates
        I) finally 1 resumes after the delay, and opens an inspector on the result

With the patch we get (first three steps are the same):
        A) the 1 block is evaluated at priority lowestPriority + 1
        B) the 3 block is forked at priority lowestPriority
        C) the 1 block continues to run since it is at a higher priority than
the 3 block and it outputs 1 on the queue
        D) a new process for the 2 block is created, but not marked as
runnable; another process (2's helper) is forked at priority
lowestPriority that will mark 2 as runnable when it runs
        E) the 1 block continues to run until it hits the Delay>>wait
        F) at this point we have the 3 block and 2's helper process waiting at
priority lowestPriority, and 3 is first in the list, so the 3 block
evaluates putting 3 on the queue and then terminates
        G) 2's helper process runs and resumes the 2 block
        H) the 2 block runs and outputs 2 on the queue, then it terminates
        I) 2's helper terminates
        J) finally 1 resumes after the delay, and opens an inspector on the result

BTW, we could get the correct #(1 2 3) if a higher priority process
interrupted step F before the 3 was placed on the queue.
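The two traces above can be reproduced with a toy model of the single-active-process scheduler. The sketch below is in Python, not Squeak, and everything in it (the `ToyScheduler` class, the `"block"` request standing in for `Delay>>wait`, the helper generator) is our own illustrative stand-in for the behavior described in the steps, not Andreas's actual code:

```python
import heapq
from itertools import count

class ToyScheduler:
    """One active process at a time; always run the highest-priority
    runnable process, FIFO within a priority."""
    def __init__(self):
        self.ready = []      # heap of (-priority, arrival_seq, generator)
        self.seq = count()   # arrival order breaks ties within a priority

    def schedule(self, prio, gen):
        heapq.heappush(self.ready, (-prio, next(self.seq), gen))

    def run(self):
        while self.ready:
            _, _, gen = heapq.heappop(self.ready)
            for request in gen:          # run the process until it yields
                if request == "block":   # stand-in for Delay>>wait; we never
                    break                # resume it -- the order is decided

def demo(patched):
    sched, out = ToyScheduler(), []
    LOW, HIGH = 0, 1                     # lowestPriority, lowestPriority + 1

    def block3():
        out.append(3); yield
    def block2():
        out.append(2); yield
    def helper(gen, prio):
        sched.schedule(prio, gen); yield  # only *resumes* the real process

    def outer():
        sched.schedule(LOW, block3())     # [...nextPut: 3] forkAt: lowest
        out.append(1)                     # queue nextPut: 1
        if patched:                       # patched fork: helper at lowest prio
            sched.schedule(LOW, helper(block2(), HIGH))
        else:                             # plain fork at outer's priority
            sched.schedule(HIGH, block2())
        yield "block"                     # (Delay forSeconds: 1) wait

    sched.schedule(HIGH, outer())
    sched.run()
    return out

print(demo(patched=False))   # [1, 2, 3]
print(demo(patched=True))    # [1, 3, 2]
```

With the plain fork, block 2 lands directly on the runnable list at the outer priority and beats block 3; with the helper, block 3 and the helper queue up FIFO at the lowest priority, so 3 runs first.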


John Brant

Re: #fork and deterministic resumption of the resulting process

Nicolas Cellier-3
In reply to this post by Mathieu SUEN
Mathieu Suen a écrit :

>
> On Feb 8, 2008, at 12:19 AM, John Brant wrote:
>
>> [| queue |
>> queue := SharedQueue new.
>> [queue nextPut: 3] forkAt: Processor lowestPriority.
>> queue nextPut: 1.
>> [queue nextPut: 2] fork.
>> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
>> (Array
>>     with: queue next
>>     with: queue next
>>     with: queue next) inspect] forkAt: Processor lowestPriority + 1
>
> I am confused: why does the first fork run before the second fork with
> Andreas's patch?
>
>     Mth
>

Because of the helperProcess trick.
Draw a queue of processes (a FIFO) for each priority.

(nextPut: 3) is added to the FIFO of priority 0.
(nextPut: 2) is at priority 1, but it is not added immediately.
The helperProcess that launches it is added at priority 0, therefore after
(nextPut: 3).

So 1 is added to the list, then 3 (unless interrupted by a higher priority
- the 0.01% case), then the helperProcess starts (nextPut: 2), and 2 is
finally added...
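That per-priority FIFO picture can be sketched as plain data, e.g. in Python (the two priority levels and the job names below are our own illustrative stand-ins, not Squeak objects):

```python
from collections import deque

queues = {1: deque(), 0: deque()}       # runnable processes per priority
output = []

# The outer block runs at priority 1 and, with the patch, does:
queues[0].append("nextPut: 3")          # forkAt: lowestPriority
output.append(1)                        # queue nextPut: 1 (runs inline)
queues[0].append("helper(nextPut: 2)")  # patched fork adds a helper at prio 0

# Outer hits Delay>>wait; drain the highest non-empty priority, FIFO:
while any(queues.values()):
    prio = max(p for p, q in queues.items() if q)
    job = queues[prio].popleft()
    if job == "nextPut: 3":
        output.append(3)
    elif job == "helper(nextPut: 2)":
        queues[1].append("nextPut: 2")  # helper only resumes the real process
    elif job == "nextPut: 2":
        output.append(2)

print(output)   # [1, 3, 2]
```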

Nicolas


Re: #fork and deterministic resumption of the resulting process

John Brant-2
In reply to this post by Yoshiki Ohshima-2
Yoshiki Ohshima wrote:

>> Furthermore, it allows lower priority processes to run before a process
>> forked at a higher priority. For example, here's an example that should
>> open an inspector on #(1 2 3), but with the patch, it opens an inspector
>> on #(1 3 2). The process forked at the lowest priority is run before the
>> one forked at a higher priority.
>
>   It depends on the interpretation of "should" above.  So, one of the
> points sig and Michael made was that in the future we may have a
> parallel-processing VM and a different scheduler.  In that case, a
> process with higher priority would mean that it gets scheduled more
> often than a lower one, but not necessarily that it prevents a lower one
> from running, right?  If we take that variation of scheduler into
> account, "nextPut: 3" may be scheduled before "nextPut: 2" and it is not
> "incorrect".  (From one point of view, a program that relies on a
> particular implementation of scheduling is wrong.)

If we consider a parallel-processing VM, then Andreas' patch won't come
close to working, since it won't guarantee that the forked process isn't
started too early. Therefore, I wasn't considering a parallel processing
scheduler. However, I think my statement is true for the current
schedulers in VW, VA, Dolphin, and Squeak. All of them have a variation
of a scheduler that has only a single active process and implements this
rule:
        active process' priority >= all runnable processes' priorities


John Brant

Re: #fork and deterministic resumption of the resulting process

Mathieu SUEN
In reply to this post by John Brant-2

On Feb 8, 2008, at 2:20 AM, John Brant wrote:

> Mathieu Suen wrote:
>> On Feb 8, 2008, at 12:19 AM, John Brant wrote:
>>> [| queue |
>>> queue := SharedQueue new.
>>> [queue nextPut: 3] forkAt: Processor lowestPriority.
>>> queue nextPut: 1.
>>> [queue nextPut: 2] fork.
>>> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
>>> (Array
>>>    with: queue next
>>>    with: queue next
>>>    with: queue next) inspect] forkAt: Processor lowestPriority + 1
>> I am confused: why does the first fork run before the second fork
>> with Andreas's patch?
>
> [John's step-by-step explanation snipped; quoted in full above]


Thanks. I was stuck on the idea that #forkAt: also has the patch, so I
was reasoning as if #forkAt: forked a helper thread too.

> John Brant
>

        Mth




