#fork and deterministic resumption of the resulting process


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
Hi John -

If you consider these points as real concerns, both issues are easily
addressed. I'm just not going to waste time on something that (given the
way this thread is going) may just be a rhetorical response ;-)

Cheers,
   - Andreas

John Brant wrote:

> nicolas cellier wrote:
>> Michael van der Gulik a écrit :
>>
>>> Andreas is a great programmer, but the example he posted had a bug in
>>> it and the proposed fix was incorrect.
>>>
>>
>> The patch is correct in its Squeak context.
>
> Unless I've missed some correction to the patch, the patch isn't
> correct. For example, you'll get an "Invalid priority" walkback if you
> try to evaluate the following using the patch:
>     [[Transcript show: 'works'; cr; flush] fork]
>         forkAt: Processor lowestPriority
>
> Furthermore, it allows lower priority processes to run before a process
> forked at a higher priority. For example, the following should open an
> inspector on #(1 2 3), but with the patch it opens an inspector on
> #(1 3 2). The process forked at the lowest priority is run before the
> one forked at a higher priority.
>
> [| queue |
> queue := SharedQueue new.
> [queue nextPut: 3] forkAt: Processor lowestPriority.
> queue nextPut: 1.
> [queue nextPut: 2] fork.
> (Delay forSeconds: 1) wait. "Hack to let the blocks finish"
> (Array
>     with: queue next
>     with: queue next
>     with: queue next) inspect] forkAt: Processor lowestPriority + 1
>
>
> To me, the existing behavior is deterministic. When you fork a process
> at the same priority as the running parent process, then the new forked
> process is added to the list of runnable processes at that priority. The
> parent process continues to run until either a higher priority process
> becomes runnable or it yields control. If a higher priority process
> becomes runnable, then the parent process is suspended and added to the
> end of the runnable processes at its priority. Since it is added at the
> end of the runnable processes list, the forked process will resume
> before the parent resumes.
>
>
> John Brant
>
>
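A minimal workspace sketch of the semantics John describes, assuming the existing (pre-patch) scheduler: a block forked at the parent's priority is appended to the runnable list, so the parent keeps running until it yields or a higher-priority process preempts it.

```smalltalk
| order |
order := OrderedCollection new.
[ order add: #child ] fork.    "appended to the runnable list at this priority"
order add: #parent.            "the parent continues to run first"
Processor yield.               "hand control to the forked child"
order                          "under these semantics: #parent, then #child"
```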



Re: #fork and deterministic resumption of the resulting process

Yoshiki Ohshima-2
In reply to this post by Michael van der Gulik-2
At Fri, 8 Feb 2008 14:11:10 +1300,
Michael van der Gulik wrote:

>
>
> On Feb 8, 2008 1:32 PM, Yoshiki Ohshima <[hidden email]> wrote:
>
>     >      Yes, the #lowestPriority case would be a problem.  The
>     >     #lowestPriority would be renamed to #reallyLowestPriority and new
>     >     #lowestPriority would return #reallyLowestPriority+1?
>     >
>     > Would you seriously consider changing this in the squeak.org image?
>    
>      I don't know the right answer to this question.  I understand that
>     most people don't need it and therefore don't want to have it.  OTOH, I
>     don't mind having it, and it would prevent a few people in the
>     future from stumbling on the problem...
>
> In that case, the lowest priority idle thread needs to be of a lower priority than #reallyLowestPriority, so I propose
> also adding #actualReallyLowestPriority to be the lowest priority, then #reallyLowestPriority to be
> #actualReallyLowestPriority+1 and #lowestPriority to be #actualReallyLowestPriority+2.

  Ah, so by saying *this*, you're referring to the method name, not
the idea.  Sorry for the misunderstanding.

  No, we don't have to have a #reallyLowestPriority method, sure.

-- Yoshiki


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Michael van der Gulik-2
On 08/02/2008, Michael van der Gulik <[hidden email]> wrote:

>
>
>
> On Feb 8, 2008 1:32 PM, Yoshiki Ohshima <[hidden email]> wrote:
> >
> > >      Yes, the #lowestPriority case would be a problem.  The
> > >     #lowestPriority would be renamed to #reallyLowestPriority and new
> > >     #lowestPriority would return #reallyLowestPriority+1?
> > >
> > > Would you seriously consider changing this in the squeak.org image?
> >
> >  I don't know the right answer to this question.  I understand that
> > most people don't need it and therefore don't want to have it.  OTOH, I
> > don't mind having it, and it would prevent a few people in the
> > future from stumbling on the problem...
> >
>
>
> In that case, the lowest priority idle thread needs to be of a lower
> priority than #reallyLowestPriority, so I propose also adding
> #actualReallyLowestPriority to be the lowest priority, then
> #reallyLowestPriority to be #actualReallyLowestPriority+1 and
> #lowestPriority to be #actualReallyLowestPriority+2.
>
>

I'd like to run this code:

| block |
block := [:counter |
    counter > 0 ifTrue: [
        [ block value: counter - 1 ] forkAt: Processor activePriority - 1 ] ].
block value: 1000.

As a result, it should spawn 1000 processes, each one priority level
lower than its parent.

Just want to note that if we use relative priority values, then we
should make sure the example above works, and get rid of absolute
priority values.
Processor lowestPriority should then mean the lowest priority at which
any process is currently running.
And so you could schedule a new process at an even lower priority, if you like.
The same goes for #highestPriority: it should be the highest priority
currently used by any running process.
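A sketch of the relative reading proposed here; #runnableProcessesDo: is hypothetical (no such selector exists in the stock image), and the point is only that the bounds would be derived from the currently running processes rather than from fixed constants:

```smalltalk
"Hypothetical ProcessorScheduler extensions under this proposal."
lowestPriority
    "Answer the lowest priority among currently runnable processes."
    | min |
    min := self activePriority.
    self runnableProcessesDo: [:p | min := min min: p priority].
    ^ min

highestPriority
    "Answer the highest priority among currently runnable processes."
    | max |
    max := self activePriority.
    self runnableProcessesDo: [:p | max := max max: p priority].
    ^ max
```

With these definitions, forkAt: Processor lowestPriority - 1 would always be meaningful, and the 1000-process example above would simply keep extending the priority range downward.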


> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/
>
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Yoshiki Ohshima-2
On 08/02/2008, Yoshiki Ohshima <[hidden email]> wrote:

I'd like to emphasize this (please increase the font size when
reading the following phrase):

...  a program that relies on a particular implementation of
scheduling is wrong.

> -- Yoshiki
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
Igor Stasenko wrote:
> I'd like to emphasize this (please increase the font size when
> reading the following phrase):
>
> ...  a program that relies on a particular implementation of
> scheduling is wrong.

Sorry but that's complete nonsense. If you really believe this then I'll
challenge you to write a program that you believe is "correct" for the
following problem: Create a program that will write the numbers 1
through five from five concurrent threads into an array in the order 1,
2, 3, 4, 5. Something that today you could do for example via:

result := Array new: 5.
1 to: 5 do:[:i|
   [result at: i put: i] forkAt: Processor activePriority+1.
].

*Regardless* of your implementation I will define a scheduling mechanism
that breaks your program, e.g., make it behave "wrong". In other words
if you believe what you write above you won't *ever* be able to write
correct multi-threaded code.

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

Yoshiki Ohshima-2
In reply to this post by John Brant-2
  John,

> If we consider a parallel-processing VM, then Andreas' patch won't come
> close to working, since it won't guarantee that the forked process isn't
> started too early. Therefore, I wasn't considering a parallel processing
> scheduler.

  Neither does a program that assumes it gets #(1 2 3) from your example
code.  So these are on par, roughly speaking.

  And the patch makes the behavior easier to reason about under the
current implementation.  So there is some value.

-- Yoshiki


Re: #fork and deterministic resumption of the resulting process

Yoshiki Ohshima-2
In reply to this post by Igor Stasenko
> I'd like to emphasize this (please increase the font size when
> reading the following phrase):
>
> ...  a program that relies on a particular implementation of
> scheduling is wrong.

  There is still some mismatch.

  Sure, at one level, you can say that if you want to ensure some
ordering in a concurrent program, use a proper concurrency control
mechanism; don't rely on the implementation detail.

  But at another level, you can say that if you write your own
scheduler, you can use your full knowledge of it.  There, the scheduler
is just another module of your system.  If using that knowledge makes
the product rock-solid, there is nothing wrong with it.

  And with Andreas' patch, a normal programmer doesn't have to assume
any scheduling ordering.  His patch doesn't prevent people from using
a proper concurrency mechanism.

  In general, I agree that your statement above is a good principle,
but in the current Squeak context, there is another path, too.

-- Yoshiki


Re: #fork and deterministic resumption of the resulting process

Bert Freudenberg
In reply to this post by Igor Stasenko
On Feb 7, 2008, at 22:44 , Igor Stasenko wrote:

> And finally, do you really think, that hiding concurrency issues from
> the eyes of developer helps him to write good thread-safe code?
> I think, it's doing exactly opposite: makes him think safe and  
> comfortable.

By that reasoning - how about making the image crash on each
doesNotUnderstand error? That will teach them! The all-purpose
beginners' language C does that, too, and actually goes out of its way
to be neither safe nor comfortable, so surely this must be good.

- Bert -




Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Andreas.Raab
On 08/02/2008, Andreas Raab <[hidden email]> wrote:

> Igor Stasenko wrote:
> > I'd like to emphasize this (please increase the font size when
> > reading the following phrase):
> >
> > ...  a program that relies on a particular implementation of
> > scheduling is wrong.
>
> Sorry but that's complete nonsense. If you really believe this then I'll
> challenge you to write a program that you believe is "correct" for the
> following problem: Create a program that will write the numbers 1
> through five from five concurrent threads into an array in the order 1,
> 2, 3, 4, 5. Something that today you could do for example via:
>
> result := Array new: 5.
> 1 to: 5 do:[:i|
>    [result at: i put: i] forkAt: Processor activePriority+1.
> ].
>
I'll write only the block code, if you don't mind:

[ i == 1
    ifTrue: [ result at: i put: i ]
    ifFalse: [
        [ (result at: i-1) isNil ] whileTrue: [].
        result at: i put: i ] ]

.. this will work under the assumption that each block keeps its own,
separate copy of i in its own context. Otherwise code needs to be added
to ensure this.

> *Regardless* of your implementation I will define a scheduling mechanism
> that breaks your program, e.g., make it behave "wrong". In other words
> if you believe what you write above you won't *ever* be able to write
> correct multi-threaded code.
>

Unless you define a scheduling mechanism which performs program
actions in random order (not in the sequence the developer defined).

Now answer me, please: is this kind of task (where an action may be
performed only after the previous one is complete) worth using
concurrent threads of execution for?
This is clearly a case of using the wrong tool to solve the problem.
So it doesn't really matter whether my code works or not; it's simply wrong.


> Cheers,
>    - Andreas
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Bert Freudenberg
On 08/02/2008, Bert Freudenberg <[hidden email]> wrote:

> On Feb 7, 2008, at 22:44 , Igor Stasenko wrote:
>
> > And finally, do you really think, that hiding concurrency issues from
> > the eyes of developer helps him to write good thread-safe code?
> > I think, it's doing exactly opposite: makes him think safe and
> > comfortable.
>
> By that reasoning - how about making the image crash on each
> doesNotUnderstand error? That will teach them! The all purpose
> beginners language C does that, too, and actually goes out if its way
> to not be safe nor comfortable, so surely this must be good.
>
You are right.
But I just wanted to show that to develop a concurrent program, a
developer should 'switch' his mind and look everywhere in his code
where he needs to be very careful.
And if you let him keep thinking that he's writing plain
non-concurrent code, then he will never get correct results.
He should learn from the very start that developing a concurrent
program is very different from non-concurrent programming and requires
different approaches.

> - Bert -
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
> I'll write only a block code, if you don't mind:

Sure.

> [ i == 1 ifTrue:[ result at: i put: i] ifFalse: [ [ (result at: i-1)
> isNil ] whileTrue: [].
> result at: i put: i ] ]
>
> .. this will work with assumption that each block keeps own, separate
> copy of i in it's own context. Otherwise there is need to add code to
> ensure this.

This doesn't even come close. You can easily see this simply by assuming
that a theoretical scheduler schedules later threads with a slightly
higher priority and runs them like todays scheduler. So it would be
roughly equivalent to the following:

1 to: 5 do:[:i|
   [ i == 1 ifTrue:[ result at: i put: i] ifFalse: [ [ (result at: i-1)
isNil ] whileTrue: [].
   result at: i put: i ] ] forkAt: Processor activePriority - 10 + i.
].

This simply won't do. Try again please. (BTW, for clarification, I
expect that when the "program" completes that the result array is filled
in).

>> *Regardless* of your implementation I will define a scheduling mechanism
>> that breaks your program, e.g., make it behave "wrong". In other words
>> if you believe what you write above you won't *ever* be able to write
>> correct multi-threaded code.
>
> Unless you define a scheduling mechanism, which performs program
> actions in random order (not in sequence, which developer defined).
>
> Now answer me, please, is such kind of tasks (where you need to
> perform action only after previous is complete) worth using concurrent
> threads of execution?

It's not. But that's not my point. You claimed you can write "correct"
code without assuming anything about scheduling. I'm simply proving you
wrong. I'm willing to use any other non-trivial example you can come up
with (with "trivial" being defined as non-interacting processes). I am
only using the simplest set of interacting processes I could possibly
think of - a sequence.

Cheers,
   - Andreas


Re: #fork and deterministic resumption of the resulting process

Paolo Bonzini-2
In reply to this post by John Brant-2
>>> Andreas is a great programmer, but the example he posted had a bug in
>>> it and the proposed fix was incorrect.
>>
>> The patch is correct in its Squeak context.
>
> Unless I've missed some correction to the patch, the patch isn't
> correct.

Here is code for my issue instead.

| queue stop s |
queue := SharedQueue new.
stop := false.
s := Semaphore new.
[ s signal.
   [ stop ] whileFalse: [ queue nextPut: true. Processor yield ] ] fork.
s wait.
[ (Delay forMilliseconds: 500) wait. stop := true ] fork.
[ stop ] whileFalse: [ queue nextPut: false. Processor yield ].

Without the patch *and with any scheduler that executes same-priority
processes fairly* the program is guaranteed to finish.  With the patch,
the program might not finish.  The two producer processes might
ping-pong control between each other, and the delay won't even be started.

Paolo


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
In reply to this post by Andreas.Raab
On 08/02/2008, Andreas Raab <[hidden email]> wrote:

> Igor Stasenko wrote:
> > I'll write only a block code, if you don't mind:
>
> Sure.
>
> > [ i == 1 ifTrue:[ result at: i put: i] ifFalse: [ [ (result at: i-1)
> > isNil ] whileTrue: [].
> > result at: i put: i ] ]
> >
> > .. this will work with assumption that each block keeps own, separate
> > copy of i in it's own context. Otherwise there is need to add code to
> > ensure this.
>
> This doesn't even come close. You can easily see this simply by assuming
> that a theoretical scheduler schedules later threads with a slightly
> higher priority and runs them like todays scheduler. So it would be
> roughly equivalent to the following:
>
> 1 to: 5 do:[:i|
>    [ i == 1 ifTrue:[ result at: i put: i] ifFalse: [ [ (result at: i-1)
> isNil ] whileTrue: [].
>    result at: i put: i ] ] forkAt: Processor activePriority - 10 + i.
> ].
>
> This simply won't do. Try again please. (BTW, for clarification, I
> expect that when the "program" completes that the result array is filled
> in).
>
Well, I expect that your potential scheduler gives a chance to run
processes at _any_ priority (doing something from time to time to switch
active processes). Otherwise we're not in the parallel domain anymore.
If you want, at some point, to have all results filled in, you can use a
Semaphore, or a plain counter (just make sure you don't get into an ABA
mess with it):

sema := Semaphore new.
1 to: 5 do: [:i |
    [ i == 1 ifFalse: [ [ (result at: i-1) isNil ] whileTrue: [] ].
      result at: i put: i.
      sema signal ] forkAt: Processor activePriority - 10 + i ].

[ sema excessSignals < 5 ] whileTrue: [].
"here you got your array filled"

> >> *Regardless* of your implementation I will define a scheduling mechanism
> >> that breaks your program, e.g., make it behave "wrong". In other words
> >> if you believe what you write above you won't *ever* be able to write
> >> correct multi-threaded code.
> >
> > Unless you define a scheduling mechanism, which performs program
> > actions in random order (not in sequence, which developer defined).
> >
> > Now answer me, please, is such kind of tasks (where you need to
> > perform action only after previous is complete) worth using concurrent
> > threads of execution?
>
> It's not. But that's not my point. You claimed you can write "correct"
> code without assuming anything about scheduling. I'm simply disproving
> you. I'm willing to use any other non-trivial example you can come up
> with (with "trivial" being defined as non-interacting processes). I am
> only using the simplest set of interacting processes I could possibly
> think of - a sequence.
>
The only assumption about scheduling is that it allows processes to run
in parallel.
E.g., if you have N processes, then there is some K such that within K
seconds each process will have had a chance to perform at least a
single action.

> Cheers,
>    - Andreas
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2
In reply to this post by Andreas.Raab


On Feb 8, 2008 7:17 PM, Andreas Raab <[hidden email]> wrote:
Igor Stasenko wrote:
> I'd like to emphasize this (please increase the font size when
> reading the following phrase):
>
> ...  a program that relies on a particular implementation of
> scheduling is wrong.

Sorry but that's complete nonsense. If you really believe this then I'll
challenge you to write a program that you believe is "correct" for the
following problem: Create a program that will write the numbers 1
through five from five concurrent threads into an array in the order 1,
2, 3, 4, 5. Something that today you could do for example via:

result := Array new: 5.
1 to: 5 do:[:i|
  [result at: i put: i] forkAt: Processor activePriority+1.
].

*Regardless* of your implementation I will define a scheduling mechanism
that breaks your program, e.g., make it behave "wrong". In other words
if you believe what you write above you won't *ever* be able to write
correct multi-threaded code.


Complete with Squeak-isms:

n := 5.
result := Array new: n.
semaphores := Array new: n.
1 to: n do: [ :each | semaphores at: each put: Semaphore new ].

1 to: n do: [ :each | [
        (semaphores at: each) wait.
        result at: each put: each.
        (each >= semaphores size) ifFalse: [
            (semaphores at: each+1) signal.
        ]
    ] fixTemps fork.
].
(semaphores at: 1) signal.

I'm mostly sure it's correct, but as with most concurrent code there's probably a bug there somewhere. It's an algorithm that's unable to take advantage of any concurrency because you make the requirement that items get added sequentially.

If items could be added in any order, then the following should work:

n := 15.
result := Array new: n.

1 to: n do: [ :each | [
        result at: each put: each. "Should be an atomic operation"
    ] fixTemps fork.
].

Of course, even on a multi-CPU system, the following would execute faster because of the overhead of making Processes:

result := (1 to: 5) asArray.

This next version should fill the array, efficiently using the hypothetical CPU cores available:

go
    | anArray numProcesses step |
    anArray := Array new: 60000. "Assume an even larger number for realistic examples"
    numProcesses := 32.   
    step := anArray size // numProcesses.
    1 to: anArray size by: step do: [ :i |
        [   
            i to: (i+numProcesses-1) do: [ :j |
                anArray at: j put: j ]
        ] fixTemps fork.
    ].
    ^ anArray.

However, it doesn't work except for small numbers. I'd be happy if somebody would be able to provide a fixed version; I can't work it out.

Gulik.


--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Andreas.Raab
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
> Well, i expect that your potential scheduler gives a chance to run
> processes at _any_ priority (doing something sometimes to switch
> active processes). Otherwise we're not in the parallel domain anymore.

Ah. Now we're starting with assumptions ;-)

> If you want , at some point to get all results filled, you can use
> Semaphore, or plain counter (just make sure you get in to ABA mess
> with it):
>
> sema := Semaphore new.
>  1 to: 5 do:[:i|
>     [ i == 1 ifFalse: [ [ (result at: i-1)  isNil ] whileTrue: [] ].
>      result at: i put: i. sema signal ] forkAt: Processor
> activePriority - 10 + i.
> ].
>
> [ sema excessSignals < 5 ] whileTrue: [].
> <here you got your array filled>

That's a little better. Still not good enough though. Now let's assume
that my theoretical scheduler has limits on how many threads it can
have scheduled at the same time (because of memory limits, cores, etc.).
Not unusual at all. Would you like to try again?

Cheers,
   - Andreas



Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
On 08/02/2008, Andreas Raab <[hidden email]> wrote:
> Igor Stasenko wrote:
> > Well, i expect that your potential scheduler gives a chance to run
> > processes at _any_ priority (doing something sometimes to switch
> > active processes). Otherwise we're not in the parallel domain anymore.
>
> Ah. Now we're starting with assumptions ;-)

Not really. As I said, if you claim that your scheduler is able to run
two (or more) parallel processes, that should be true. Otherwise I
can't call it a scheduler of parallel processes :)

>
> > If you want , at some point to get all results filled, you can use
> > Semaphore, or plain counter (just make sure you get in to ABA mess
> > with it):
> >
> > sema := Semaphore new.
> >  1 to: 5 do:[:i|
> >     [ i == 1 ifFalse: [ [ (result at: i-1)  isNil ] whileTrue: [] ].
> >      result at: i put: i. sema signal ] forkAt: Processor
> > activePriority - 10 + i.
> > ].
> >
> > [ sema excessSignals < 5 ] whileTrue: [].
> > <here you got your array filled>
>
> That's a little better. Still not good enough though. Now let's assume
> that my theoretical scheduler has limits for how many threads it can
> have scheduled at the same time (because of memory limits, cores etc).
> Not unusal at all. Would you like to try again?
>

Hmm, do you want me to change the code because the scheduler can't
handle the problem itself?
That is exactly what I'm against: there should be an API which gives me
the feeling that my tasks are running in parallel, no matter what it's
doing inside.
My code should not depend on it: my task is to write a concurrent
program; the scheduler's task is to make that possible :)
Doesn't the low-level implementation serve to reach a higher level of
abstraction, allowing the programmer to write code without deep
knowledge of what is happening at the low levels?

> Cheers,
>    - Andreas
>

--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Michael van der Gulik-2
In reply to this post by Michael van der Gulik-2


On Feb 8, 2008 10:38 PM, Michael van der Gulik <[hidden email]> wrote:




go
    | anArray numProcesses step |
    anArray := Array new: 60000. "Assume an even larger number for realistic examples"
    numProcesses := 32.   
    step := anArray size // numProcesses.
    1 to: anArray size by: step do: [ :i |
        [   
            i to: (i+numProcesses-1) do: [ :j |
                anArray at: j put: j ]
        ] fixTemps fork.
    ].
    ^ anArray.

However, it doesn't work except for small numbers. I'd be happy if somebody would be able to provide a fixed version; I can't work it out.



That was a really obvious, stupid bug. Try again:

    anArray := Array new: 3200000.
    numProcesses := 64.  
    step := anArray size // numProcesses.
    1 to: anArray size by: step do: [ :i |
        [  
            i to: i+step-1 do: [ :j |
                anArray at: j put: j ]
        ] fixTemps fork.
    ].


Gulik.

--
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
On 08/02/2008, Michael van der Gulik <[hidden email]> wrote:

>
>
> On Feb 8, 2008 10:38 PM, Michael van der Gulik <[hidden email]> wrote:
> >
> >
> >
> >
> > go
> >     | anArray numProcesses step |
> >     anArray := Array new: 60000. "Assume an even larger number for
> realistic examples"
> >     numProcesses := 32.
> >     step := anArray size // numProcesses.
> >     1 to: anArray size by: step do: [ :i |
> >         [
> >             i to: (i+numProcesses-1) do: [ :j |
> >                 anArray at: j put: j ]
> >         ] fixTemps fork.
> >     ].
> >     ^ anArray.
> >
> > However, it doesn't work except for small numbers. I'd be happy if
> somebody would be able to provide a fixed version; I can't work it out.
> >
> >
> >
> >
>
>
> That was a really obvious, stupid bug. Try again:
>
>     anArray := Array new: 3200000.
>     numProcesses := 64.
>     step := anArray size // numProcesses.
>      1 to: anArray size by: step do: [ :i |
>         [
>             i to: i+step-1 do: [ :j |
>                 anArray at: j put: j ]
>         ] fixTemps fork.
>     ].
>
>
> Gulik.
>

To Andreas: please note that Mike's numerous code samples contain many
assertions (because of different concurrency issues), but not a single
assumption about how the scheduler handles things :)

> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/
>
>
>


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Igor Stasenko
And some notes about priority.
I don't know in much detail how the current Squeak scheduler arranges
things; I just want to point out that you are using relative priorities
in your samples and in your proposed patch, by writing Processor
activePriority +/- n.
This, in turn, uncovers another problem with the current scheduler
implementation: it's not designed with relative priorities in mind.
As people have pointed out:
Processor highestPriority + 1
or:
Processor lowestPriority - 1
makes no sense, and leads to errors or unpredictable behavior.
So, first, I think that if you really want to use relative priority
values, you should change the current implementation to do it well and
without errors.

Now, let's say you make things the way I proposed in my post before
(have the highest/lowest priorities depend only on the current set of
running processes); the only thing left is to introduce a scale.
I propose logarithmic time slicing:
a process with priority N performs roughly twice as many operations as
a process with priority N-1 in a given period of time.
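The proposed scale can be made concrete with a small weight table (a sketch, assuming the twice-per-level rule above): among priorities 1..n, priority p would get the fraction 2^(p-1) / (2^n - 1) of the available time.

```smalltalk
"Relative time slices for priorities 1..4 under logarithmic slicing."
| weights total |
weights := (1 to: 4) collect: [:p | 2 raisedTo: p - 1].    "#(1 2 4 8)"
total := weights inject: 0 into: [:sum :w | sum + w].      "15"
(1 to: 4) collect: [:p | (weights at: p) / total]
    "each priority gets twice the share of the one below it"
```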


--
Best regards,
Igor Stasenko AKA sig.


Re: #fork and deterministic resumption of the resulting process

Randal L. Schwartz
In reply to this post by Paolo Bonzini-2
>>>>> "Paolo" == Paolo Bonzini <[hidden email]> writes:

Paolo> Without the patch *and with any scheduler that executes same-priority
Paolo> processes fairly* the program is ensured to finish.  With the patch, the
Paolo> program might not finish.  The two producer processes might ping-pong control
Paolo> to each other, and the delay won't even be started.

A workaround that handles badly written beginner code but eventually
ruins the semantics for properly written expert code doesn't sound
very good.  It's like ensuring that every motorcycle sold automatically
comes with training wheels.  Ouch.

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<[hidden email]> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
