On Sat, 11 Jan 2020 at 06:31, Sven Van Caekenberghe <[hidden email]> wrote:

> Hi Ben,
> which is an identical result if the 'Original process' traces are filtered out.
>
> From this it would seem that the code in p2 continues after signal and only later does p1 get past its wait.

Yes, a #signal does not transfer execution unless the waiting process that received the signal is at a higher priority.
Within the same priority, it just makes the waiting process runnable, and the highest-priority runnable process is the one that is run.

> Playing with the priorities we can change that order (apparently);

The yield made no difference because it only facilitates other processes at-the-SAME-priority getting a chance to run.
Yield doesn't put the current process to sleep, it just moves the process to the back of its priority's run queue. It gets to run again before any lower-priority process gets a chance to run.

Yielding will never allow a lower-priority process to run.
For a lower-priority process to run, the current process needs to sleep rather than yield.

Compare...

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
p1 := [
    trace value: 'Process 1a waits for signal on semaphore'.
    semaphore wait.
    trace value: 'Process 1b received signal' ] forkAt: 30.
p2 := [
    trace value: 'Process 2a signals semaphore'.
    semaphore signal.
    trace value: 'Process 2b continues' ] forkAt: 20.
trace value: 'Original process pre-yield'.
Processor yield.
trace value: 'Original process post-yield'.

==>
'@40 Original process pre-yield'
'@40 Original process post-yield'
'@30 Process 1a waits for signal on semaphore'
'@20 Process 2a signals semaphore'
'@30 Process 1b received signal'
'@20 Process 2b continues'

with...

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
p1 := [
    trace value: 'Process 1a waits for signal on semaphore'.
    semaphore wait.
    trace value: 'Process 1b received signal' ] forkAt: 30.
p2 := [
    trace value: 'Process 2a signals semaphore'.
    semaphore signal.
    trace value: 'Process 2b continues' ] forkAt: 20.
trace value: 'Original process pre-delay'.
1 milliSecond wait.
trace value: 'Original process post-delay'.

==>
'@40 Original process pre-delay'
'@30 Process 1a waits for signal on semaphore'
'@20 Process 2a signals semaphore'
'@30 Process 1b received signal'
'@20 Process 2b continues'
'@40 Original process post-delay'

Stef, on further consideration I think your first examples should not have p1 and p2 at the same priority.
Scheduling of same-priority processes and how they interact with the UI process is an extra level of complexity that may be better covered shortly after.
Not needing to trace the "Original process" in the first example also gives the reader less to digest.

So your first example might compare...

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
p1 := [
    trace value: 'Process 1a waits for signal on semaphore'.
    semaphore wait.
    trace value: 'Process 1b received signal' ] forkAt: 30.
p2 := [
    trace value: 'Process 2a signals semaphore'.
    semaphore signal.
    trace value: 'Process 2b continues' ] forkAt: 20.

==>
'@30 Process 1a waits for signal on semaphore'
'@20 Process 2a signals semaphore'
'@30 Process 1b received signal'
'@20 Process 2b continues'

with the priority order swapped...

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
p1 := [
    trace value: 'Process 1a waits for signal on semaphore'.
    semaphore wait.
    trace value: 'Process 1b received signal' ] forkAt: 20.
p2 := [
    trace value: 'Process 2a signals semaphore'.
    semaphore signal.
    trace value: 'Process 2b continues' ] forkAt: 30.

==>
'@30 Process 2a signals semaphore'
'@30 Process 2b continues'
'@20 Process 1a waits for signal on semaphore'
'@20 Process 1b received signal'

cheers -ben
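To round out the comparison with the same-priority case, a minimal sketch in the same style (both processes forked at 30; the order shown assumes nothing preempts them in between, see the caveat about #processPreemptionYields later in the thread):

| trace semaphore |
semaphore := Semaphore new.
trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
[ trace value: 'Waiter a'.
  semaphore wait.
  trace value: 'Waiter b' ] forkAt: 30.
[ trace value: 'Signaller a'.
  semaphore signal.
  trace value: 'Signaller b' ] forkAt: 30.

==> (assuming no intervening preemption)
'@30 Waiter a'
'@30 Signaller a'
'@30 Signaller b'
'@30 Waiter b'

The signal only makes the waiter runnable again; the signaller, being at the same priority, keeps running until it blocks or terminates.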
I thought that I could use the following strategy: give a first simple version, then revisit it afterwards. The idea is that even if the reader's mental model is a bit skewed after the first example, they still get the result right, and after the full explanation they should get the model right too, instead of being overwhelmed with the full details at first. So I need to concentrate on having the full outline clear. Anyway, thanks for the discussion. Pedagogy sometimes follows paths that are not straight.
On Sat, 11 Jan 2020 at 18:01, ducasse <[hidden email]> wrote:
At first glance I thought using #fork would make a simpler example than using #forkAt:, but the former interacts with the implicit priority of the existing UI process, while the latter is explicit and so actually makes a simpler example. cheers -ben
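A quick way to see that implicit priority (a sketch to evaluate in a playground; both traces print 40, the user scheduling priority that the UI process runs at):

('forking from priority {1}' format: { Processor activePriority }) crLog.
[ ('forked child runs at priority {1}' format: { Processor activePriority }) crLog ] fork.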
Yes, this was really fun for me to discover in newProcess:
newProcess "Answer a Process running the code in the receiver. The process is not scheduled. IMPORTANT! Debug stepping this deep infrastructure may lock your Image If you are not sure what you are doing, close the debugger now." <primitive: 19> "Simulation guard" ^Process forContext: [self value. Processor terminateActive] asContext priority: Processor activePriority
In reply to this post by ducasse
On Thu, 9 Jan 2020 at 13:01, ducasse <[hidden email]> wrote:
> Hi
>
> I wanted to explain
>
> | semaphore p1 p2 |
> semaphore := Semaphore new.
> p1 := [ semaphore wait.
>        'p1' crTrace ] fork.
>
> p2 := [ semaphore signal.
>        'p2' crTrace ] fork.
>
> displays p2 and p1.
> but I would like explain clearly but it depends on the semantics of signal.

The way this is phrased seems to imply that 'p2' will always be displayed before 'p1', however in Pharo this is not guaranteed (when the processes are at the same priority, as they are in this example).

As Eliot implied in another reply, Pharo has #processPreemptionYields set to true, which means that any time a higher priority process preempts, the current process will be moved to the back of the queue.

So in the case above, after p2 signals the semaphore, if a timer was delivered or a keystroke pressed, p2 would be suspended and moved to the back of the queue. When the timer / keystroke / etc. had finished processing, p1 would be at the front of the queue and would complete first.

Since time and input events are (for practical purposes) unpredictable, it means that the execution order of processes at a given priority is also unpredictable.

While this isn't likely to happen in the example above, I have seen it regularly with TaskIt and multiple entries being run concurrently.

I agree with Eliot that changing #processPreemptionYields to true by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment.

Running the following variant, and then typing in to another window, demonstrates the behaviour:

| semaphore p1 p2 |
semaphore := Semaphore new.
[ 100 timesRepeat: [
    p1 := [ | z |
        semaphore wait.
        z := SmallInteger maxVal.
        10000000 timesRepeat: [ z := z + 1 ].
        'p1' crTrace ] fork.

    p2 := [ | z |
        1 second wait.
        semaphore signal.
        z := SmallInteger maxVal.
        10000000 timesRepeat: [ z := z + 1 ].
        'p2' crTrace ] fork.
    1 second wait.
] ] fork.

The tail of the Transcript:

'p2'
'p1'
'p1'
'p1'
'p1'
'p2'
'p2'
'p2'
'p1'
'p1'
'p2'
'p1'
'p2'
'p2'
'p1'
'p1'
'p2'
'p1'

Cheers,
Alistair
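For readers who want to inspect or experiment with this setting themselves, a sketch: the accessors below are the Squeak-derived ones and may or may not be present in your Pharo image (underneath, the setting is a bit flag in VM parameter 48):

Smalltalk vm processPreemptionYields.          "answer the current setting (true in a stock Pharo image)"
Smalltalk vm processPreemptionYields: false.   "ask the VM not to yield on preemption by a higher-priority process"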
Hi Alistair,
I will reread and rephrase the chapters.
What you should see is that we cannot explain everything in a single chapter. At least I cannot.
If only it were that easy. :) Consider the consequence: we would have to revisit all the processes of the system and understand if/where they should yield, because right now there is an implicit yield. So in an ideal world the new semantics is probably better. But are we ready to get new bugs and chase them when some old logic, for example a hidden process in Calypso, worked under the assumption that it was implicitly yielding after preemption? This is the question we asked ourselves, and we do not really know. Pharo has worked with this semantics for 12 years (which does not mean that we cannot change it, given our motto; the point is to understand and control the impact). Contrary to what people may think, we do not change things without assessing the impact. So to us this is not an easy decision (even though doing it takes a single line, one assignment).
I will investigate what I can do with it.
In reply to this post by Ben Coman
Hi Ben,
> On 11 Jan 2020, at 10:50, Ben Coman <[hidden email]> wrote:
>
> On Sat, 11 Jan 2020 at 06:31, Sven Van Caekenberghe <[hidden email]> wrote:
> Hi Ben,
>
> Great approach, though I would make one change to make your example completely copy/paste runnable.
>
> Stef's original example:
>
> | trace semaphore p1 p2 |
> semaphore := Semaphore new.
> trace := [ :message | ('[{1}] {2}' format: { Processor activeProcess priority. message }) crLog ].
> p1 := [ semaphore wait. trace value: 'Process 1' ] fork.
> p2 := [ semaphore signal. trace value: 'Process 2' ] fork.
> trace value: 'Original process pre-yield'.
> Processor yield.
> trace value: 'Original process post-yield'.
>
> Gives:
>
> '[40] Original process pre-yield'
> '[40] Process 2'
> '[40] Original process post-yield'
> '[40] Process 1'
>
> But not running the yield section gives:
>
> '[40] Process 2'
> '[40] Process 1'
>
> which is an identical result if the 'Original process' traces are filtered out.
>
> From this it would seem that the code in p2 continues after signal and only later does p1 get past its wait.
>
> Yes, a #signal does not transfer execution unless the waiting process that received the signal is at a higher priority.
> Within the same priority, it just makes the waiting process runnable, and the highest-priority runnable process is the one that is run.

OK, I can understand that, the question remains what happens when the processes have equal priorities.

> Playing with the priorities we can change that order (apparently);
>
> | trace semaphore p1 p2 |
> semaphore := Semaphore new.
> trace := [ :message | ('[{1}] {2}' format: { Processor activeProcess priority. message }) crLog ].
> p1 := [ semaphore wait. trace value: 'Process 1' ] forkAt: 30.
> p2 := [ semaphore signal. trace value: 'Process 2' ] forkAt: 20.
>
> Gives:
>
> '[30] Process 1'
> '[20] Process 2'
>
> Again, the yield section makes no difference. So something else happened.
>
> The yield made no difference because it only facilitates other processes at-the-SAME-priority getting a chance to run.
> Yield doesn't put the current process to sleep, it just moves the process to the back of its priority's run queue. It gets to run again before any lower-priority process gets a chance to run.
>
> Yielding will never allow a lower-priority process to run.
> For a lower-priority process to run, the current process needs to sleep rather than yield.

These are clear statements.

> Compare...
>
> | trace semaphore p1 p2 |
> semaphore := Semaphore new.
> trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
> p1 := [ trace value: 'Process 1a waits for signal on semaphore'. semaphore wait. trace value: 'Process 1b received signal' ] forkAt: 30.
> p2 := [ trace value: 'Process 2a signals semaphore'. semaphore signal. trace value: 'Process 2b continues' ] forkAt: 20.
> trace value: 'Original process pre-yield'.
> Processor yield.
> trace value: 'Original process post-yield'.
>
> ==>
> '@40 Original process pre-yield'
> '@40 Original process post-yield'
> '@30 Process 1a waits for signal on semaphore'
> '@20 Process 2a signals semaphore'
> '@30 Process 1b received signal'
> '@20 Process 2b continues'
>
> with...
>
> | trace semaphore p1 p2 |
> semaphore := Semaphore new.
> trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
> p1 := [ trace value: 'Process 1a waits for signal on semaphore'. semaphore wait. trace value: 'Process 1b received signal' ] forkAt: 30.
> p2 := [ trace value: 'Process 2a signals semaphore'. semaphore signal. trace value: 'Process 2b continues' ] forkAt: 20.
> trace value: 'Original process pre-delay'.
> 1 milliSecond wait.
> trace value: 'Original process post-delay'.
>
> ==>
> '@40 Original process pre-delay'
> '@30 Process 1a waits for signal on semaphore'
> '@20 Process 2a signals semaphore'
> '@30 Process 1b received signal'
> '@20 Process 2b continues'
> '@40 Original process post-delay'

OK, good example: I think/hope I understand.

Now, these further examples only strengthen my belief that it is simply impossible to talk about semaphores without talking about (the complexities of) process scheduling. Semaphores exist as a means to coordinate processes, hence when using them you have to understand what (will) happen, and apparently that is quite complex.

In any case, thanks again for the explanations,

Sven

> Stef, on further consideration I think your first examples should not have p1 and p2 at the same priority.
> Scheduling of same-priority processes and how they interact with the UI process is an extra level of complexity that may be better covered shortly after.
> Not needing to trace the "Original process" in the first example also gives the reader less to digest.
>
> So your first example might compare...
>
> | trace semaphore p1 p2 |
> semaphore := Semaphore new.
> trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
> p1 := [ trace value: 'Process 1a waits for signal on semaphore'. semaphore wait. trace value: 'Process 1b received signal' ] forkAt: 30.
> p2 := [ trace value: 'Process 2a signals semaphore'. semaphore signal. trace value: 'Process 2b continues' ] forkAt: 20.
>
> ==>
> '@30 Process 1a waits for signal on semaphore'
> '@20 Process 2a signals semaphore'
> '@30 Process 1b received signal'
> '@20 Process 2b continues'
>
> with the priority order swapped...
>
> | trace semaphore p1 p2 |
> semaphore := Semaphore new.
> trace := [ :message | ('@{1} {2}' format: { Processor activePriority. message }) crLog ].
> p1 := [ trace value: 'Process 1a waits for signal on semaphore'. semaphore wait. trace value: 'Process 1b received signal' ] forkAt: 20.
> p2 := [ trace value: 'Process 2a signals semaphore'. semaphore signal. trace value: 'Process 2b continues' ] forkAt: 30.
>
> ==>
> '@30 Process 2a signals semaphore'
> '@30 Process 2b continues'
> '@20 Process 1a waits for signal on semaphore'
> '@20 Process 1b received signal'
>
> cheers -ben
In reply to this post by alistairgrant
Hi Alistair,
> On 12 Jan 2020, at 09:33, Alistair Grant <[hidden email]> wrote:
>
> On Thu, 9 Jan 2020 at 13:01, ducasse <[hidden email]> wrote:
>>
>> Hi
>>
>> I wanted to explain
>>
>> | semaphore p1 p2 |
>> semaphore := Semaphore new.
>> p1 := [ semaphore wait.
>>        'p1' crTrace ] fork.
>>
>> p2 := [ semaphore signal.
>>        'p2' crTrace ] fork.
>>
>> displays p2 and p1.
>> but I would like explain clearly but it depends on the semantics of signal.
>
> The way this is phrased seems to imply that 'p2' will always be displayed before 'p1', however in Pharo this is not guaranteed (when the processes are at the same priority, as they are in this example).
>
> As Eliot implied in another reply, Pharo has #processPreemptionYields set to true, which means that any time a higher priority process preempts, the current process will be moved to the back of the queue.
>
> So in the case above, after p2 signals the semaphore, if a timer was delivered or a keystroke pressed, p2 would be suspended and moved to the back of the queue. When the timer / keystroke / etc. had finished processing, p1 would be at the front of the queue and would complete first.
>
> Since time and input events are (for practical purposes) unpredictable, it means that the execution order of processes at a given priority is also unpredictable.
>
> While this isn't likely to happen in the example above, I have seen it regularly with TaskIt and multiple entries being run concurrently.
>
> I agree with Eliot that changing #processPreemptionYields to true by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment.

I don't understand, in your second paragraph you say 'Pharo has #processPreemptionYields set to true' and now you say it should become the default. Is that already the case or not then ?

> Running the following variant, and then typing in to another window, demonstrates the behaviour:

I am not sure what you want to demonstrate: that it is totally random depending on external factors ;-) ?

Which is pretty bad: how should semaphores be used (safely) ? What are good examples of real world correct semaphore usage ?

Right now, all the explanations around scheduling of processes and their priorities make it seem as if the answer is 'it all depends' and 'there is no way to be 100% sure what will happen'.

Sven

> | semaphore p1 p2 |
> semaphore := Semaphore new.
> [ 100 timesRepeat: [
>     p1 := [ | z |
>         semaphore wait.
>         z := SmallInteger maxVal.
>         10000000 timesRepeat: [ z := z + 1 ].
>         'p1' crTrace ] fork.
>
>     p2 := [ | z |
>         1 second wait.
>         semaphore signal.
>         z := SmallInteger maxVal.
>         10000000 timesRepeat: [ z := z + 1 ].
>         'p2' crTrace ] fork.
>     1 second wait.
> ] ] fork.
>
> The tail of the Transcript:
>
> 'p2'
> 'p1'
> 'p1'
> 'p1'
> 'p1'
> 'p2'
> 'p2'
> 'p2'
> 'p1'
> 'p1'
> 'p2'
> 'p1'
> 'p2'
> 'p2'
> 'p1'
> 'p1'
> 'p2'
> 'p1'
>
> Cheers,
> Alistair
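As a partial answer to that last question, a sketch of a typical safe pattern (the names and counts here are illustrative only): protect shared state with a mutex semaphore and wait for completion with a plain one, so the final result does not depend on scheduling order at all.

| mutex done counter |
mutex := Semaphore forMutualExclusion.
done := Semaphore new.
counter := 0.
2 timesRepeat: [
    [ 1000 timesRepeat: [ mutex critical: [ counter := counter + 1 ] ].
      done signal ] fork ].
2 timesRepeat: [ done wait ].
('counter = {1}' format: { counter }) crLog.   "always 2000, whatever the interleaving"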
In reply to this post by ducasse
Hi Stef,
On Sun, 12 Jan 2020 at 11:28, ducasse <[hidden email]> wrote:
>
> Hi Alistair,
>
>>> Hi
>>>
>>> I wanted to explain
>>>
>>> | semaphore p1 p2 |
>>> semaphore := Semaphore new.
>>> p1 := [ semaphore wait. 'p1' crTrace ] fork.
>>> p2 := [ semaphore signal. 'p2' crTrace ] fork.
>>>
>>> displays p2 and p1.
>>> but I would like explain clearly but it depends on the semantics of signal.
>>
>> The way this is phrased seems to imply that 'p2' will always be displayed before 'p1', however in Pharo this is not guaranteed (when the processes are at the same priority, as they are in this example).
>
> No this is not what I implied.
> I will reread and rephrase the chapters.
>
>> As Eliot implied in another reply, Pharo has #processPreemptionYields set to true, which means that any time a higher priority process preempts, the current process will be moved to the back of the queue.
>
> Yes this is explained in the next chapter.
> What you should see is that we cannot explain everything in a single chapter.
> At least I cannot.

Agreed. Maybe just a footnote indicating that process scheduling will be explained in later chapters.

>> So in the case above, after p2 signals the semaphore, if a timer was delivered or a keystroke pressed, p2 would be suspended and moved to the back of the queue. When the timer / keystroke / etc. had finished processing, p1 would be at the front of the queue and would complete first.
>>
>> Since time and input events are (for practical purposes) unpredictable, it means that the execution order of processes at a given priority is also unpredictable.
>>
>> While this isn't likely to happen in the example above, I have seen it regularly with TaskIt and multiple entries being run concurrently.
>>
>> I agree with Eliot that changing #processPreemptionYields to true by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment.

As Sven kindly pointed out, I meant to say set #processPreemptionYields to false.

> If only it were that easy. :)
> Consider the consequence: we would have to revisit all the processes of the system and understand if/where they should yield, because right now there is an implicit yield.
> So in an ideal world the new semantics is probably better. But are we ready to get new bugs and chase them when some old logic, for example a hidden process in Calypso, worked under the assumption that it was implicitly yielding after preemption?
> This is the question we asked ourselves, and we do not really know.
> Pharo has worked with this semantics for 12 years (which does not mean that we cannot change it, given our motto; the point is to understand and control the impact).
> Contrary to what people may think, we do not change things without assessing the impact.
> So to us this is not an easy decision (even though doing it takes a single line, one assignment).

I also wasn't implying that we just go ahead and change it. But if it is a possibility then I'll try changing it in my image and see how it helps with tracking down inter-process interactions that we're currently experiencing, and if it does introduce any showstopper issues.

Cheers,
Alistair
In reply to this post by Sven Van Caekenberghe-2
Hi Sven,
In line below...

Cheers,
Alistair (on phone)

On Sun., 12 Jan. 2020, 13:00 Sven Van Caekenberghe, <[hidden email]> wrote:
>
> Hi Alistair,
>
>> On 12 Jan 2020, at 09:33, Alistair Grant <[hidden email]> wrote:
>>
>> I agree with Eliot that changing #processPreemptionYields to true by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment.
>
> I don't understand, in your second paragraph you say 'Pharo has #processPreemptionYields set to true' and now you say it should become the default. Is that already the case or not then ?

Oops, typo, sorry. I meant to say 'false'. (I shouldn't ever reply in a hurry.)

>> Running the following variant, and then typing in to another window, demonstrates the behaviour:
>
> I am not sure what you want to demonstrate: that it is totally random depending on external factors ;-) ?

If processPreemptionYields were false the output should be:

...
p2
p1
p2
p1
p2
p1
...

i.e. it is regular. What we're actually seeing is:

...
'p1'
'p2'
'p1'
'p2'
'p2'
...

which shows that p2 might get to signal multiple times before p1 gets to complete a single loop.

> Which is pretty bad: how should semaphores be used (safely) ? What are good examples of real world correct semaphore usage ?

Given a set of processes at the same priority, the amount of time allocated to any one of the processes is unpredictable. All the semaphore usage is working as described. My point wasn't that the semaphores are being used incorrectly, but that you can't make assumptions about the order in which processes will complete if they are CPU bound.

> Right now, all the explanations around scheduling of processes and their priorities make it seem as if the answer is 'it all depends' and 'there is no way to be 100% sure what will happen'.

The semaphore operations are clear, but which process, within a single process priority, gets the most CPU time is less so.

HTH,
Alistair
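And when a strict p1/p2 alternation really is what you want, it can be encoded with two semaphores instead of being assumed from the scheduler. A minimal sketch (the loop count is arbitrary; both processes at the default priority):

| ping pong |
ping := Semaphore new.
pong := Semaphore new.
[ 5 timesRepeat: [ ping wait. 'p1' crTrace. pong signal ] ] fork.
[ 5 timesRepeat: [ ping signal. pong wait. 'p2' crTrace ] ] fork.
"prints p1, p2, p1, p2, ... regardless of how the processes are preempted,
because each trace is gated by a semaphore rather than by scheduling order"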
In reply to this post by alistairgrant
Yes, this is clearly something that we should evaluate; right now we have more urgent problems to fix :). At least with the concurrency booklet we will set the stage and create material that everybody can read and understand. S.
In reply to this post by Sven Van Caekenberghe-2
On Sun, 12 Jan 2020 at 20:00, Sven Van Caekenberghe <[hidden email]> wrote:

> Hi Alistair,
> Which is pretty bad: how should semaphores be used (safely) ? What are good examples of real world correct semaphore usage ?

Whether it is bad depends on the assumptions you are working with. The issue is that it is generally promoted that our scheduling is "preemptive-across-priorities, cooperative-within-priorities", but that's not entirely true for Pharo, which is "preemptive-across-priorities, mostly-cooperative-within-priorities". The former is arguably a simpler model to reason about, and having consistent implicit behaviour between same-priority processes lessens the need for Semaphores between them. However, if you naively "assume" the former, you may get burnt in Pharo, since behaviour between same-priority processes is random, depending on "when" higher-priority processes are scheduled. But if you "assume" the latter (i.e. that your process can be preempted at any time) you'd use Semaphores as needed and have no problems.

So to reply directly to your last line: Semaphores can always be used safely. It's poor assumptions about when Semaphores aren't required that are bad.

Now a new consideration for whether Pharo might change the default processPreemptionYields to false is ThreadedFFI. Presumably it will be common for a callback to be defined at the same priority as an in-image process. I can't quite think through the implications myself. So a question... if a callback is at a lower priority than the current process, does it wait before grabbing the VM lock (IIUC how that is meant to work)?

> Right now, all the explanations around scheduling of processes and their priorities make it seem as if the answer is 'it all depends' and 'there is no way to be 100% sure what will happen'.

Reasoning about processes at different priorities is easy and explicit. Between processes at the same priority you are correct: currently 'there is no way to be 100% sure what will happen' (without Semaphores).

Examples showing same-priority processes interacting as if scheduling were cooperative will lead to student confusion when their results differ from the book. Currently Pharo must be taught with examples presuming a fully preemptive system (albeit one that only preempts at restricted locations like backward jumps).

cheers -ben

P.S. Now I wonder about the impact of the upcoming Idle VM. Currently same-priority processes are effectively round-robin scheduled because the high-priority DelayScheduler triggers often, bumping the current process to the back of its run queue. When it triggers less often, anything relying on this implicit behaviour may act differently.
Hi Ben,
On Sun, 12 Jan 2020 at 15:26, Ben Coman <[hidden email]> wrote:
>
> Now a new consideration for whether Pharo might change the default processPreemptionYields to false is ThreadedFFI. Presumably it will be common for a callback to be defined at the same priority as an in-image process.
> I can't quite think through the implications myself.
> So a question... if a callback is at a lower priority than the current process, does it wait before grabbing the VM lock (IIUC how that is meant to work)?

The version of Threaded FFI I'm using at the moment is about a month old, but assuming nothing has changed...

The callback queue is currently run at priority 70 (vs the UI process at 40). The reasoning as explained by Esteban (from memory) is that you may want callbacks to do some small amount of work and respond quickly.

> P.S. Now I wonder about the impact of the upcoming Idle VM. Currently same-priority processes are effectively round-robin scheduled because the high-priority DelayScheduler triggers often, bumping the current process to the back of its run queue.
> When it triggers less often, anything relying on this implicit behaviour may act differently.

Another reason to consider #processPreemptionYields set to false :-)

Cheers,
Alistair
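For reference, the named priority levels behind the numbers used throughout this thread; the values are those of a stock Pharo/Squeak image, so check Processor in your own image:

Processor userBackgroundPriority.    "30"
Processor userSchedulingPriority.    "40  (the UI / playground priority)"
Processor userInterruptPriority.     "50"
Processor lowIOPriority.             "60"
Processor highIOPriority.            "70  (the ThreadedFFI callback queue mentioned above)"
Processor timingPriority.            "80  (the DelayScheduler)"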
In reply to this post by alistairgrant
Hi Alastair,
On Jan 12, 2020, at 12:34 AM, Alistair Grant <[hidden email]> wrote:
You mean to write that

"I agree with Eliot that changing #processPreemptionYields to false by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment."

Preemption by a higher priority process should not cause a yield.
Cheers, Alistair!
_,,,^..^,,,_ (phone)
Hi Eliot,
On Sun, 12 Jan 2020 at 18:16, Eliot Miranda <[hidden email]> wrote:
>
> Hi Alastair,
>
>> On Jan 12, 2020, at 12:34 AM, Alistair Grant <[hidden email]> wrote:
>>
>> I agree with Eliot that changing #processPreemptionYields to true by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment.
>
> You mean to write that
>
> "I agree with Eliot that changing #processPreemptionYields to false by default would be an improvement in Pharo. It would make it easier to predict what is happening in a complex environment."
>
> Preemption by a higher priority process should not cause a yield.

Yes, sorry about that.

Cheers,
Alistair
In reply to this post by alistairgrant
> If processPreemptionYields were false the output should be:
>
> ...
> p2
> p1
> p2
> p1
> p2
> p1
> ...
>
> i.e. it is regular. What we're actually seeing is:
>
> ...
> 'p1'
> 'p2'
> 'p1'
> 'p2'
> 'p2'
> ...
>
> which shows that p2 might get to signal multiple times before p1 gets to complete a single loop.

Yes, this is why I prefer the new preemption semantics. The old semantics looks like it brings more "chaos". On one hand it may give some processes a chance to execute, but I also wonder whether code did not get more complex in order to protect against the extra race conditions (since the old way could schedule more randomly).

Now, we were discussing some weeks ago the risks of using the new semantics, and also whether we can identify the processes that we would have to change to yield explicitly.

S.
While working on the booklet this weekend I was wondering (now I'm too tired to think and am going to sleep) whether there is a semantic difference between resuming a suspended process using the message resume, and "resuming a process" as a side effect of its wait being signalled. S.
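A sketch of the two situations being compared, for anyone who wants to poke at them (the priorities are chosen only so that the order of events is deterministic when evaluated from a playground):

| sem p q |
sem := Semaphore new.
p := [ 'p: resumed after an explicit suspend' crTrace ] forkAt: 30.
p suspend.      "p is runnable but has not run yet; take it off its run queue"
q := [ sem wait. 'q: woken by a signal on the semaphore' crTrace ] forkAt: 50.
                "q runs immediately (priority 50 > 40) and blocks in the wait"
sem signal.     "the signal makes q runnable again; it preempts us and prints"
p resume.       "an explicit resume puts p back on its run queue; it prints last"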