Hi
I wanted to explain

| semaphore p1 p2 |
semaphore := Semaphore new.
p1 := [ semaphore wait.
	'p1' crTrace ] fork.

p2 := [ semaphore signal.
	'p2' crTrace ] fork.

which displays p2 and then p1. I would like to explain this clearly, but it depends on the semantics of signal.

- ==p1== is scheduled and its execution starts to wait on the semaphore, so it is removed from the run queue of the scheduler and added to the waiting list of the semaphore.
- ==p2== is scheduled and it signals the semaphore. The semaphore takes the first waiting process (==p1==) and reschedules it by adding it to the end of the suspended lists.

Now this sentence "The semaphore takes the first waiting process (==p1==) and reschedules it by adding it to the end of the suspended lists." is super naive. Is the semaphore signalling scheduled or not?

signal
	"Primitive. Send a signal through the receiver. If one or more processes
	 have been suspended trying to receive a signal, allow the first one to
	 proceed. If no process is waiting, remember the excess signal. Essential.
	 See Object documentation whatIsAPrimitive."

	<primitive: 85>
	self primitiveFailed

	"self isEmpty
		ifTrue: [excessSignals := excessSignals + 1]
		ifFalse: [Processor resume: self removeFirstLink]"

I wanted to know what is really happening when a semaphore is signalled. Now resume: does not exist on Processor.

I will look in the VM code.

S.
here is what I’m writing in the book.
Let us imagine that we have the following two processes and one semaphore.

[[[
| semaphore p1 p2 |
semaphore := Semaphore new.
p1 := [ semaphore wait.
	'p1' crTrace ] fork.

p2 := [ semaphore signal.
	'p2' crTrace ] fork.
]]]

- The first process waits on the semaphore. As soon as the semaphore is signalled, ==p1== resumes and prints its trace.
- The second process just signals the semaphore and prints.

Now the question is to understand why the generated trace is ==p2== then ==p1==. Let us explain why.

- ==p1== is scheduled and its execution starts to wait on the semaphore, so it is removed from the run queue of the scheduler and added to the waiting list of the semaphore.
- ==p2== is scheduled and it signals the semaphore. The semaphore takes the first waiting process (==p1==) and reschedules it by adding it to the end of the suspended lists. ==p2== continues its execution.

Let us slightly change the example to show how priorities are involved during the signalling process.

[[[
| semaphore p1 p2 |
semaphore := Semaphore new.
p1 := [ semaphore wait.
	'p1' crTrace ] forkAt: 30.

p2 := [ semaphore signal.
	'p2' crTrace ] forkAt: 20.
]]]

As a conclusion, when the semaphore is signalled, the first waiting process of the semaphore is removed from the semaphore list and resumed following the preemption rules of the scheduler.
In reply to this post by ducasse
Since Smalltalk does not have a preemptive scheduler, neither p1 nor p2 will start to run until something else happens after the execution of p1 := [...] fork. p2 := [...] fork. So for example, if there is Processor yield then p1 can start to run.

So you need to add code to your example to be able to determine what will happen. The easiest thing would be to delay long enough that both can run. 1 millisecond is more than enough.

> Now this sentence "The semaphore takes the first waiting process (==p1==) and reschedules it by adding it to the end of the suspended lists." is super naive. Is the semaphore signalling scheduled? or not?

I would say these three things, something like this:

"A semaphore is a queue (implemented as a linked list) and an excess signals count, which is a non-negative integer. On instance creation a new semaphore is empty and has a zero excess signals count. A semaphore created for mutual exclusion is empty and has an excess signals count of one."

"When a process waits on a semaphore, if the semaphore's excess signals count is non-zero, then the excess signals count is decremented, and the process proceeds. But if the semaphore has a zero excess signals count then the process is unscheduled and added to the end of the semaphore, after any other processes that are queued on the semaphore."

"When a semaphore is signaled, if it is not empty, the first process is removed from it and added to the runnable processes in the scheduler. If the semaphore is empty its excess signals count is incremented."

Given these three statements it is easy to see how they work, how to use them for mutual exclusion, etc.
_,,,^..^,,,_
best, Eliot
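A minimal sketch of the excess-signals rule described above, using only the Semaphore protocol already shown in the thread; it can be evaluated in a playground:

| semaphore |
semaphore := Semaphore new.   "empty, excess signals count = 0"
semaphore signal.             "no process is waiting, so the excess signals count becomes 1"
semaphore wait.               "consumes the stored signal and proceeds without blocking"
'proceeded without blocking' crTrace.

The same rule explains why a semaphore answered by Semaphore forMutualExclusion, which starts with an excess signals count of one, lets the first wait through immediately.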
Maybe add this to the class comment; this is the most concise and clear description of how it works I've ever seen.
Actually, it is just a, albeit concise, description of how Semaphores are implemented.
It does not help much in understanding them, in learning how they can/should be used, for what purposes, and how code behaves.

Understanding of Process, priorities, and Scheduling is also needed for a more complete understanding.

This is not a simple subject. Read https://en.wikipedia.org/wiki/Semaphore_(programming) and see how well you understand the subject.

In short, it does not answer Stef's concrete question(s).
I didn't claim expertise on the subject (although I use semaphores extensively), nor its simplicity, nor that the implementation description should be the only guide to its usage (hence the 'to add..., how it works' wording).

That said, to me this is a case where a clear description of what is going on helps a lot, instead of trying to define some rules and scenarios abstractly, to help a user reason about the system behavior (wasn't Stef willing to look into the VM code for this very reason?).

To me, both of Stef's scenarios can be explained: in the first case the 'signal' process is not preempted by the 'wait' process of the same priority, while in the second the preemption happens upon return from the primitive (hopefully my memory serves me well and my understanding is correct).

A tangent note on comments in general: I've noticed more than once that people tend to produce far clearer descriptions in exchanges like this, when discussing the matter with others. When a person is in documentation/comment-writing mode he/she sort of tenses up into a formal state and often produces something not very helpful. The current class comment of Semaphore is a perfect example; if I were not familiar with the concept from other sources, I would not be able to make any sense of it. So I would suggest using opportunities like this to improve comments/docs when a bit of knowledge shows up in a discussion.

regards,
Danil
For example, whether a Semaphore queues waiting processes by order of registration (through a linked list, for example) or by order of priority (through a Heap, for example) would completely change its behavior. So isn't that kind of implementation detail SUPER important, especially when hidden in the VM?

Also, describing HOW it works is very often used as a means to explain (and make understandable) a higher-level feature. IMO, understanding a feature from one implementation is as useful as understanding a feature by examples of usage. Even when the implementation is plain Smalltalk, it is already an added value to give the main guidelines that help in reading the code (see this particular class or message for understanding the core...), so all the more when it is in the VM...
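To make the ordering point concrete, here is a small sketch assuming the registration-order (linked list) queuing described in this thread; the expected trace is noted in the comments:

| s |
s := Semaphore new.
[ s wait. 'first waiter' crTrace ] fork.
[ s wait. 'second waiter' crTrace ] fork.
Processor yield.     "let both forked processes run and block on the semaphore, in order"
s signal; signal.    "each signal moves the process at the head of the queue back to the scheduler"
Processor yield.     "expected trace: 'first waiter' then 'second waiter'"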
In reply to this post by Eliot Miranda-2
For greater visibility and comprehension, it might be useful to define a utility method early in the manual...

Object >> crTracePriority
	self crTrace: '[', Processor activePriority printString, ']', self printString

On Fri, 10 Jan 2020 at 13:13, Eliot Miranda <[hidden email]> wrote:
> So you need to add code to your example to be able to determine what will happen. The easiest thing would be to delay long enough that both can run. 1 millisecond is more than enough.

This is a good point. It may be useful for the example to be expanded to...

| semaphore p1 p2 |
semaphore := Semaphore new.
p1 := [ semaphore wait.
	'Process 1' crTracePriority ] fork.

p2 := [ semaphore signal.
	'Process 2' crTracePriority ] fork.

'Original process pre-yield' crTracePriority.
1 milliSeconds wait.
'Original process post-yield' crTracePriority.

which would produce ==>

[40]'Original process pre-yield'
[40]'Process 2'
[40]'Process 1'
[40]'Original process post-yield'

with other examples producing...

[40]'Original process pre-yield'
[30]'Process 1'
[20]'Process 2'
[40]'Original process post-yield'

[60]'Process 2'
[50]'Process 1'
[40]'Original process pre-yield'
[40]'Original process post-yield'
For quick reference, here is some relevant VM code (the StackInterpreter code is a little simpler...)

https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/pharo/headless/smalltalksrc/VMMaker/StackInterpreter.class.st#L1422-L1432
StackInterpreter class >> initializePrimitiveTable [
	...
	"Control Primitives (80-89)"
	(85 primitiveSignal)
	(86 primitiveWait)
	...
]

https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/pharo/headless/smalltalksrc/VMMaker/InterpreterPrimitives.class.st#L5385-L5396
InterpreterPrimitives >> primitiveWait [
	| sema excessSignals activeProc |
	sema := self stackTop. "rcvr"
	excessSignals := self fetchInteger: ExcessSignalsIndex ofObject: sema.
	excessSignals > 0
		ifTrue:
			[self storeInteger: ExcessSignalsIndex ofObject: sema withValue: excessSignals - 1]
		ifFalse:
			[activeProc := self activeProcess.
			 self addLastLink: activeProc toList: sema.
			 self transferTo: self wakeHighestPriority]
]

https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/pharo/headless/smalltalksrc/VMMaker/InterpreterPrimitives.class.st#L4045-L4049
InterpreterPrimitives >> primitiveSignal [
	"Synchronously signal the semaphore.
	 This may change the active process as a result."
	self synchronousSignal: self stackTop
]

https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/pharo/headless/smalltalksrc/VMMaker/StackInterpreter.class.st#L21662-L21679
StackInterpreter >> synchronousSignal: aSemaphore [
	"Signal the given semaphore from within the interpreter.
	 Answer if the current process was preempted."
	| excessSignals |
	<inline: false>
	(self isEmptyList: aSemaphore) ifTrue:
		["no process is waiting on this semaphore"
		 excessSignals := self fetchInteger: ExcessSignalsIndex ofObject: aSemaphore.
		 self storeInteger: ExcessSignalsIndex
			ofObject: aSemaphore
			withValue: excessSignals + 1.
		 ^false].

	objectMemory ensureSemaphoreUnforwardedThroughContext: aSemaphore.

	^self resume: (self removeFirstLinkOfList: aSemaphore)
		preemptedYieldingIf: preemptionYields
]

cheers -ben
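Reading the earlier forkAt: 30 / forkAt: 20 variant against these primitives gives a rough walk-through; the comments are a sketch of which path runs, and the expected trace matches the runs reported later in this thread:

| semaphore |
semaphore := Semaphore new.
[ semaphore wait.       "primitiveWait: excessSignals = 0, so the process is added
                         to the end of the semaphore's list and suspended"
  'p1' crTrace ] forkAt: 30.
[ semaphore signal.     "synchronousSignal: the list is not empty, so p1 is resumed;
                         p1 has the higher priority, so the signalling process
                         (priority 20) is preempted right here"
  'p2' crTrace ] forkAt: 20.
"expected trace: 'p1' then 'p2'"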
In reply to this post by Danil Osipchuk-2
Done.
It was so simple that I preferred to do it myself so like that I will look like a great Pharo contributor.
In reply to this post by Nicolas Cellier
Yes, this is why in my chapter on Exceptions I show the VM code, even though some people told me that it was not interesting.
And this is why in the current chapter on semaphores I have a section on the implementation. Now it does not mean that we cannot have a higher-level view too :).
In reply to this post by ducasse
ducasse wrote
> It was so simple that I preferred to do it myself so like that I will look
> like a great Pharo contributor.

:)

Cheers,
Sean
> ducasse wrote
>> It was so simple that I preferred to do it myself so like that I will look
>> like a great Pharo contributor.
>
> :)

:)

Sometimes I do not know if my humour is strange or not :)
But strange humour is better than no humour :)

Stef

PS: I read too many Gary Larson comics to be safe :)
In reply to this post by ducasse
Hi Steph,
First we have a native critical section which is not a queuing semaphore, but a queueing lock, which is in one of three states. It is a replacement for the old Mutex class. It is not in Pharo yet but you can easily adapt the Squeak implementation (see below). Let me give a similar definition.

A native critical section is a queue that can have an owner. A new native critical section is empty and unowned.

A process attempts to enter the native critical section via primitiveEnterCriticalSection. This occurs in one of three ways.
- If the native critical section is unowned (which implies it is empty) then the process becomes the owner of the native critical section, the primitive answers false (meaning that it was previously unowned or owned by some other process), and the process proceeds.
- If the native critical section is already owned by the process then the process remains the owner, the primitive answers true (meaning that it is already owned) and the process proceeds.
- If the native critical section is already owned by some other process then the process is suspended and added to the end of the native critical section's queue, where it will wait until a primitiveExitCriticalSection resumes it and makes it owner.

A process leaves a native critical section via primitiveExitCriticalSection. It is the process's responsibility to use primitiveExitCriticalSection only when it entered via a primitiveEnterCriticalSection that answered false. If the native critical section is empty then on executing primitiveExitCriticalSection it becomes unowned. If the native critical section is not empty then on executing primitiveExitCriticalSection it becomes owned by the first process in its queue, that process is removed from the queue and is scheduled, proceeding from the primitiveEnterCriticalSection which caused it to block, with primitiveEnterCriticalSection answering false.

In addition a process may test and set its ownership of a native critical section without danger of blocking. A process tests and sets its ownership of a native critical section via primitiveTestAndSetOwnershipOfCriticalSection. On executing primitiveTestAndSetOwnershipOfCriticalSection, if the native critical section is unowned then the process becomes its owner and primitiveTestAndSetOwnershipOfCriticalSection answers false. If the native critical section is owned by the process, primitiveTestAndSetOwnershipOfCriticalSection answers true. If the native critical section is owned by some other process then primitiveTestAndSetOwnershipOfCriticalSection answers nil.

Using primitiveEnterCriticalSection and primitiveExitCriticalSection allows for an efficient implementation of reentrant critical sections:

critical: aBlock
	"Evaluate aBlock protected by the receiver."
	<criticalSection>
	^self primitiveEnterCriticalSection
		ifTrue: [aBlock value]
		ifFalse: [aBlock ensure: [self primitiveExitCriticalSection]]

Adding primitiveTestAndSetOwnershipOfCriticalSection makes it easy to implement and understand the following:

critical: aBlock ifLocked: lockedBlock
	"Answer the evaluation of aBlock protected by the receiver.
	 If it is already in a critical section on behalf of some other process
	 answer the evaluation of lockedBlock."
	<criticalSection>
	^self primitiveTestAndSetOwnershipOfCriticalSection
		ifNil: [lockedBlock value]
		ifNotNil: [:alreadyOwner|
			alreadyOwner
				ifTrue: [aBlock value]
				ifFalse: [aBlock ensure: [self primitiveExitCriticalSection]]]

Once a process owns a critical section it can enter the critical section as many times as it wants.
With the Semaphore it can only enter once per signal.

Of course we have constructed the class Mutex to operate similarly to a native critical section, but it is inefficient and not entirely safe (because we rely on implementation-defined behaviour to be able to assign the Mutex's owner without being preempted). In Squeak we already replaced the old Mutex with a Mutex built using the native critical section representation and primitives. The file-in is attached.

An implementation note is that semaphores and native critical sections look very similar; they are both queues, so their first and second instance variables are firstLink & lastLink, inherited from LinkedList. A Semaphore's third inst var is excessSignals, its excess signals count. A Mutex's third inst var is owner.

HTH

P.S. This is an interesting exercise. What we have done in specifying behavior here is focus on processes. The documentation on the primitive methods in the system focuses on the semaphore or native critical section. What (I think) programmers want is to understand how the process behaves, not how the semaphore or native critical section works. So documenting things from a process perspective is more useful.

P.P.S. If you compare the performance of the constructed Mutex against the native Mutex please report the results.

P.P.P.S. We had to step carefully to replace the old Mutex with the new one. I can't remember the details, but we handled it with Monticello load scripts and we can find the details if you need them.
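A minimal usage sketch of the reentrancy point, assuming a reentrant Mutex (the native Squeak one described above, or Pharo's current Semaphore-based Mutex):

| mutex |
mutex := Mutex new.
mutex critical: [
	'outer entry: this process now owns the mutex' crTrace.
	mutex critical: [ 'nested entry by the owner proceeds without blocking' crTrace ] ].
'left the critical section, the mutex is free again' crTrace.

Replacing the Mutex with a plain Semaphore forMutualExclusion would deadlock on the nested critical: call, which is exactly the "can only enter once per signal" limitation mentioned above.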
In reply to this post by Ben Coman
Thanks Ben, this is a nice idea to illustrate what is happening. I will use it.
Thanks for the VM code :). I like it too.

I will add a note also mentioning that there is indeed a process executing the snippets that the reader will write and execute :).

Stef

PS: What makes me laugh a bit is that people talk about documentation when there is nearly none on the topics I write about. I write to be able to fully forget everything and free my brain. I remember writing on Bloc and exceptions when there were only obscure texts or none, as if mail discussions could make a book. At least this is not the level I want. I would like to have a book like "Smalltalk-80: The Language and its Implementation" but about Pharo. This was a book! So I will write it piece by piece.
In reply to this post by Eliot Miranda-2
On Fri, Jan 10, 2020 at 2:01 PM Eliot Miranda <[hidden email]> wrote:
Smalltalk has a real-time, preemptive-across-priorities, cooperative-within-priorities scheduling model. No process at the same priority as the active process can preempt the active process. Instead it must wait until the active process yields (which moves a process to the back of its run queue, allowing all other runnable processes at its priority a chance to run before it will run again), is suspended (on a semaphore or mutex), or explicitly suspends (via the suspend primitive).

So when the original scheduler puts a process at the end of its run queue when a higher-priority process preempts it, that introduces an implicit yield, which violates the contract, a contract that can be used to implement cheapjack-free mutual exclusion between processes of the same priority. So the improvement, selected by a VM flag, is to cause preemption to add a process to the front of its run queue, maintaining the order and preserving the contract.

_,,,^..^,,,_
best, Eliot
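A minimal sketch of the cooperative-within-priorities rule, assuming it is evaluated in a playground (so it runs in the UI process at the user scheduling priority):

| ran |
ran := false.
[ ran := true ] fork.      "forked at the same priority as the active process"
ran printString crTrace.   "prints false: the forked process has not had a chance to run"
Processor yield.           "move to the back of the run queue, giving same-priority processes a turn"
ran printString crTrace.   "prints true: the forked process ran during the yield"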
In reply to this post by Nicolas Cellier
Hi Nicolas,
Sure, the implementation description is super important, and I did not say otherwise. All I said is that it is an implementation description and not a high-level one.

I just checked the descriptions starting page 257 of the Smalltalk-80 The Language book, as well as the VisualWorks 1.0 Cookbook, page 1029 - these are of a *totally* different, much higher level. That is what we need the most.

Sven
In reply to this post by Ben Coman
Hi Ben,
Great approach, though I would make one change to make your example completely copy/paste runnable.

Stef's original example:

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message |
	('[{1}] {2}' format: { Processor activeProcess priority. message }) crLog ].
p1 := [
	semaphore wait.
	trace value: 'Process 1' ] fork.
p2 := [
	semaphore signal.
	trace value: 'Process 2' ] fork.
trace value: 'Original process pre-yield'.
Processor yield.
trace value: 'Original process post-yield'.

Gives:

'[40] Original process pre-yield'
'[40] Process 2'
'[40] Original process post-yield'
'[40] Process 1'

But not running the yield section gives:

'[40] Process 2'
'[40] Process 1'

From this it would seem that the code in p2 continues after signal and only later does p1 get past its wait.

Playing with the priorities we can change that order (apparently):

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message |
	('[{1}] {2}' format: { Processor activeProcess priority. message }) crLog ].
p1 := [
	semaphore wait.
	trace value: 'Process 1' ] forkAt: 30.
p2 := [
	semaphore signal.
	trace value: 'Process 2' ] forkAt: 20.

Gives:

'[30] Process 1'
'[20] Process 2'

Again, the yield section makes no difference. So something else happened.

The other way around:

| trace semaphore p1 p2 |
semaphore := Semaphore new.
trace := [ :message |
	('[{1}] {2}' format: { Processor activeProcess priority. message }) crLog ].
p1 := [
	semaphore wait.
	trace value: 'Process 1' ] forkAt: 50.
p2 := [
	semaphore signal.
	trace value: 'Process 2' ] forkAt: 60.

Gives:

'[60] Process 2'
'[50] Process 1'

Obviously the details about scheduling, order, and priorities are really important to understanding this behaviour, and we should be able to explain this in simple terms, so that normal people can use this correctly.

Sven
Thanks Sven
I started to use the suggestions of Ben. And I'm thinking that I could show my version, then step back and explain that indeed there is another process (the UI one).

I will digest your example and produce a new version of the booklet, and people can give feedback.

S.
In reply to this post by ducasse
> On 10.01.2020 at 22:32, ducasse <[hidden email]> wrote:
>
> Sometimes I do not know if my humour is strange or not :)

It is! And you are the only one having that kind of humor ;)

Norbert
> It is! And you are the only one having that kind of humor ;)

Not sure that we are safe :))))

https://www.thefarside.com just arrived last month and it is super great.