Pharo process scheduling


Pharo process scheduling

alistairgrant
Hi Everyone,

I'm trying to get a better grip on process scheduling in Pharo but
don't understand the behaviour I'm seeing.

My understanding is that the VM (which does the real work of
scheduling the processes) uses a pre-emptive, co-operative model,
meaning:

- Higher priority processes are always scheduled ahead of lower
priority processes.
- Processes at the same priority get the CPU until something is done
to explicitly stop processing, e.g.:
-- terminate the process, wait on semaphore (I/O, timer, mutex, etc.),
#yield the processor

But given the following:

| p x a index |

p := 35.
a := Array new: 20.
index := 1.
[ 1 to: 10 do: [ :i |
    [ a at: index put: i.
    index := index + 1.
    x := 0.
    300000000 timesRepeat: [ x := x + 1 ].
    a at: index put: i.
    index := index + 1. ] forkAt: p named: 'Proc', i asString ].
] forkAt: p+1.
a.


The code above was written to be as simple as possible (it doesn't
create streams, wait on semaphores, etc. somewhere deep within).  The
300000000 count was chosen manually so that each worker runs for a few
seconds on my laptop.
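
One quick way to calibrate that count, for what it's worth, is to time
the loop on its own:

| x |
x := 0.
[ 300000000 timesRepeat: [ x := x + 1 ] ] timeToRun.   "aim for a few seconds"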

Since the forking process is higher priority than the 10 worker
processes, I expect that all 10 processes will be created before the
first one gets any CPU, that they will then be executed in order (since
there is nothing obvious to cause a context switch), and that the
result will be:

#(1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10)


But it appears that each process gets some time as soon as it is
forked despite being a lower priority than the forking process, and
then the order becomes non-deterministic:

#(1 2 3 4 5 6 7 8 9 10 7 10 9 1 2 3 4 5 6 8)
#(1 2 3 4 5 6 7 8 9 10 5 9 2 10 4 7 1 6 3 8)
#(1 2 3 4 5 6 7 8 9 10 6 1 8 9 3 4 7 10 5 2)


Changing p := 45 (so it is higher priority than the Morphic UI
process) doesn't have any effect.

Can someone explain what I'm missing in my understanding?

Thanks!
Alistair


Re: Pharo process scheduling

Henrik Sperre Johansen
| p x a index first |

p := 35.
a := Array new: 21.
index := 1.
first := nil.
[ 1 to: 10 do: [ :i |
    [ first == nil ifTrue: [
          first := i.
          a at: 21 put: first ].
      a at: index put: i.
      index := index + 1.
      x := 0.
      300000000 timesRepeat: [ x := x + 1 ].
      a at: index put: i.
      index := index + 1 ] forkAt: p named: 'Proc', i asString ].
  first == nil ifTrue: [
      first := -1.
      a at: 21 put: first ].
] forkAt: p+1.
a.

If you inspect it, element 21 will be -1, so yes, the processes are all
created before any of them gets execution time.
They are added to the scheduler's list for their priority in order, and
pulled from it in order, so, given suspension points that interrupt the
work, they will write 1 2 3 4 5 6 7 8 9 10 to a first.
All non-special bytecoded message sends are potential suspension points
(if more than some time t > N has been spent in the current process, it
will be suspended and the next process with the same or higher priority
resumed).
If you inspect timesRepeat:, it contains such a message send (#value
sent to aBlock).
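
For reference, Integer>>#timesRepeat: is implemented roughly like this;
the aBlock value send in the loop body is the non-inlined send in
question:

timesRepeat: aBlock
    "Evaluate the argument, aBlock, the number of times represented by the receiver."
    | count |
    count := 1.
    [ count <= self ] whileTrue: [
        aBlock value.
        count := count + 1 ]
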
The non-determinism in which process finishes first depends on
differences in how far each process got in its assigned time slices
(due to clock granularity when checking t > N).

If you remove the suspension point (technically, at:put: is also a
suspension point, but let's assume it won't be far enough along in the
time slice when the first at:put: happens), and set p high enough that
the process won't be interrupted by the UI process (when a process is
suspended, it is put to the back of its run queue, so you'd still see
1 2 3 4 5 6 7 8 9 10), you get the behaviour you originally expected:

| p x a index first |

p := 50.
a := Array new: 21.
index := 1.
first := nil.
[ 1 to: 10 do: [ :i |
    [ first == nil ifTrue: [
          first := i.
          a at: 21 put: first ].
      a at: index put: i.
      index := index + 1.
      x := 0.
      [ x < 300000000 ] whileTrue: [ x := x + 1 ].
      a at: index put: i.
      index := index + 1 ] forkAt: p named: 'Proc', i asString ].
  first == nil ifTrue: [
      first := -1.
      a at: 21 put: first ].
] forkAt: p+1.
a.


Cheers,
Henry





Re: Pharo process scheduling

Henrik Sperre Johansen
Henrik Sperre Johansen wrote
> If you remove the suspension point (technically, at:put: is also a
> suspension point, but let's assume it won't be far enough along in the
> time slice when the first at:put: happens), and set p high enough that
> the process won't be interrupted by the UI process (when a process is
> suspended, it is put to the back of its run queue, so you'd still see
> 1 2 3 4 5 6 7 8 9 10), you get the behaviour you originally expected

( 1 1 2 3 3 4 5 5 6 7 7 8 9 9 10 2 4 6 8 10 -1 )
well, close enough, anyways ;)

Cheers,
Henry





Re: Pharo process scheduling

alistairgrant
Hi Henrik,

Thanks very much for your reply.


Good point, I didn't think this one through properly.  And changing
both #forkAt:'s to priority 80 produces the expected:

#(1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 -1)

So far, so good.



> when a process is suspended, it is put to the back of its run queue

And this was my misunderstanding.  I thought that preempting a process
left it at the front of the queue (for its priority).

The following produces what I was expecting:


| p x a index first |

Smalltalk vm processPreemptionYields: false.
p := 20.
a := Array new: 21.
index := 1.
first := nil.
[ 1 to: 10 do: [ :i |
    [ first == nil ifTrue: [
          first := i.
          a at: 21 put: first ].
      a at: index put: i.
      index := index + 1.
      x := 0.
      300000000 timesRepeat: [ x := x + 1 ].
      a at: index put: i.
      index := index + 1 ] forkAt: p named: 'Proc', i asString ].
  first == nil ifTrue: [
      first := -1.
      a at: 21 put: first ].
] forkAt: p+1.
a.
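
Since processPreemptionYields: changes a global VM setting, when
experimenting like this I save and restore the previous value (a
minimal sketch; the experiment goes inside the block):

| old |
old := Smalltalk vm processPreemptionYields.
Smalltalk vm processPreemptionYields: false.
[ "run the experiment here" ] ensure: [
    Smalltalk vm processPreemptionYields: old ].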



Thanks again!
Alistair



Re: Pharo process scheduling

alistairgrant
For the record, a more detailed discussion of the same thing:
http://forum.world.st/The-Trunk-Kernel-mt-1009-mcz-td4887889.html

Thanks to Andrei Chis for the link (offline).

Cheers
Alistair



Re: Pharo process scheduling

Henrik Sperre Johansen
In reply to this post by alistairgrant
Alistair Grant wrote
> The following produces what I was expecting:
>
> | p x a index first |
>
> Smalltalk vm processPreemptionYields: false.
>
>
> Thanks again!
> Alistair

Well... what your example shows is that, with processes
p50 (priority 50, periodic, becoming runnable in time slot 2)
p40.1 (taking 3 time slots to complete)
p40.2 (taking 3 time slots to complete)

When preemption yields, you get processor time allocations like this:
[p40.1][p50][p40.2][p40.1][p40.2][p40.1][p40.2]
With preemption NOT yielding, you get processor time allocations like this:
[p40.1][p50][p40.1][p40.1][p40.2][p40.2][p40.2]

Which clearly contradicts the statements in the linked post
"Nothing changes within priorities.  What the setting controls is what
happens when a process is preempted by a higher-priority process.  The old
code did an implicit yield of the preempted process, destroying the
guarantee of cooperative scheduling within a priority."

Which (to me) would imply slot allocations when not yielding would be like
this:
[p40.1][p50][p40.1][p40.2][p40.1][p40.2][p40.2]

Maybe that's what is intended and the statements are inaccurate, or
Pharo is lacking some related changes, but it doesn't make for a good
experience when you have multiple processes requiring "real-time"
execution at the same priority if you have to add explicit
Processor yield sends everywhere.
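
For illustration, this is the kind of loop you end up writing (just a
sketch, with an arbitrary chunk size):

| x |
x := 0.
[ x < 300000000 ] whileTrue: [
    x := x + 1.
    x \\ 1000000 = 0 ifTrue: [ Processor yield ] ].   "let peers at the same priority run"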

Cheers,
Henry






Re: Pharo process scheduling

Henrik Sperre Johansen
Henrik Sperre Johansen wrote
> Which clearly contradicts the statements in the linked post
> "Nothing changes within priorities.  What the setting controls is what
> happens when a process is preempted by a higher-priority process.  The old
> code did an implicit yield of the preempted process, destroying the
> guarantee of cooperative scheduling within a priority."

Or it could mean the understanding was that, even with
processPreemptionYields = true, processes at the same priority would
have to explicitly yield for other processes at the same priority to
run.
... which seems to be true.

It's just that we never see that in practice, even running at priority
80, as the DelaySemaphoreScheduler (which runs at 80) is at the head of
the list and gets woken every now and then.
Reducing its priority to 70, and then doing the same test with
processes at priority 80 (but containing plenty of suspension points),
we indeed get
1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10

TIL, I guess :)

Cheers,
Henry





Re: Pharo process scheduling

alistairgrant
Hi Henrik,

I have to admit that I'm having a bit of trouble following what you're
saying, so I'll make a few comments that I hope are helpful...

Just to start off, my understanding of the term "pre-emptive,
cooperative" processing is:

- a higher priority process will interrupt a lower priority process and
  run until it voluntarily waits (or is suspended or terminated).
- "cooperative" should be read as "unless the processes cooperate, one
  bad actor can stop anyone else from getting the CPU".
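
For example (a minimal sketch; the priorities, loop size and delay are
arbitrary), a higher priority process becoming runnable interrupts a
lower priority busy loop and runs to completion before the busy loop
continues:

| log |
log := OrderedCollection new.
[ 100000000 timesRepeat: [ ]. log add: #low ] forkAt: 40.
[ (Delay forMilliseconds: 10) wait. log add: #high ] forkAt: 50.
"inspect log once both have finished; it should read #high then #low"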


On Tue, Jul 30, 2019 at 05:32:13AM -0500, Henrik Sperre Johansen wrote:

> Alistair Grant wrote
> > The following produces what I was expecting:
> >
> > | p x a index first |
> >
> > Smalltalk vm processPreemptionYields: false.
> >
> >
> > Thanks again!
> > Alistair
>
> Well... what your example shows, is that with processes
> p50 (pri50) periodic, becoming runnable in time slot 2)
> p40.1 (taking 3 time slots to complete)
> p40.2 (taking 3 time slots to complete)
>
> When preemption yields, you get processor time allocations like this:
> [p40.1][p50][p40.2][p40.1][p40.2][p40.1][p40.2]

I think this might be demonstrating an incorrect expectation on your
part.

Your use of the term "time slot" seems to imply that, within a given
process priority, each process can expect roughly equal CPU time.  
However, that isn't what is implied.  With processPreemptionYields: true
*any* higher priority process becoming runnable will cause the currently
executing process to be moved to the back of the queue.  In practice,
where processes are running at a priority of less than 60, this means
that with every mouse move, keyboard input, timer event, etc., the
active process will be moved to the back of the queue.  The amount of
time allocated to each process is effectively random.

So in the above example, assuming that the p50 process only becomes
runnable once, and no other high priority process becomes runnable, we
can expect:

[p40.1][p50][p40.2][p40.1]


> With preemption NOT yielding, you get processor time allocations like this:
> [p40.1][p50][p40.1][p40.1][p40.2][p40.2][p40.2]

I'd expect:

[p40.1][p50][p40.1][p40.2]

(running the example code used earlier in VW also supports this)


> Which clearly contradicts the statements in the linked post
> "Nothing changes within priorities.

I haven't looked up the original post, but it seems out of
context.


> What the setting controls is what
> happens when a process is preempted by a higher-priority process.  The old
> code did an implicit yield of the preempted process, destroying the
> guarantee of cooperative scheduling within a priority."

This matches my understanding (remember that "cooperative" means that a
process has to voluntarily relinquish the CPU).


> Which (to me) would imply slot allocations when not yielding would be like
> this:
> [p40.1][p50][p40.1][p40.2][p40.1][p40.2][p40.2]
>
> Maybe it's what is intended and the statements are inaccurate, or Pharo is
> lacking some related changes, but it doesn't make for a good experience when
> you have multiple processes requiring "real-time" execution running at the
> same priority if you have to add explicit Processor yield's everywhere.  

Actually, this matches exactly my understanding of real-time
processing.  E.g. in VAX/VMS there were two tiers of process priority:

- the low priority tier, e.g. 1 - 40 (I can't remember the actual
  numbers) were time sliced.  So if multiple processes were at the same
  priority, they could expect roughly equal amounts of CPU time.
- the high priority tier, e.g. 41 - 80, were cooperative.  So if
  multiple processes were at the same priority, the second one had to
  wait for the first to voluntarily give up the CPU.
  The expectation in this case was that any time a real-time process had
  work to do, the time required would be reasonably short, and that it
  was important to let it have the CPU until it had completed that work.



On Tue, Jul 30, 2019 at 05:53:52AM -0500, Henrik Sperre Johansen wrote:

> Henrik Sperre Johansen wrote
> > Which clearly contradicts the statements in the linked post
> > "Nothing changes within priorities.  What the setting controls is what
> > happens when a process is preempted by a higher-priority process.  The old
> > code did an implicit yield of the preempted process, destroying the
> > guarantee of cooperative scheduling within a priority."
>
> Or it could mean the understanding was that processes at the same priority
> would have to explicitly yield for other processes of the same priority to
> run also with preemptionyields = true.
> ... which, seems to be true.
>
> It's just that we never see that in practice, even running at pri 80, as the
> DelaySemaphoreScheduler (which runs at 80) is at the head of the list, and
> gets woken every now and then.
> Reducing it's pri to 70, and then doing the same test with threads at pri80
> (but containing plenty of suspension points), we indeed get
> 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10

You get this result even if you don't lower the priority of the timer
process.  If the sample code is run at priority 80 it will continue
until it has finished, and the DelaySemaphoreScheduler has to wait for
it to finish (which has other bad side effects, but we don't care about
those here).


> TIL, I guess :)
>
> Cheers,
> Henry


Hope this helps,
Alistair


Re: Pharo process scheduling

ducasse
Thank you guys for the discussion.
It is still on my todo list to produce an English version of the chapter
we wrote for the Squeak book.

Stef



Re: Pharo process scheduling

Ben Coman
In reply to this post by alistairgrant



Wow, I find I've been working under a misunderstanding for years.  I thought the behaviour Alistair expected was how Pharo operated.
I just checked Squeak. It gives this result since it has...   Smalltalk vm processPreemptionYields "==> false"
Pharo has... Smalltalk vm processPreemptionYields "==> true" since at least Pharo 2.0.

Some pertinent background...

So would Pharo having "processPreemptionYields: false" be beneficial to match Squeak & VisualWorks?
btw, what does "safer" [1] mean?

Or... even though we are a long way from running multi-CPU, would a system working with "processPreemptionYields=>false"
be more ready to run threaded on multi-CPU?  Or is that too far away to be concerned with?

How would assumptions around "processPreemptionYields=>false" be affected by FFI callbacks?

cheers -ben