I got a new VM build, which is ready to be tested with the new
scheduler (after I implement it).

I rewrote the external signaling stuff & interrupt checking.
It no longer signals any semaphores. Instead, I added a primitive which
explicitly fetches all pending signals into an array and flushes the
VM's internal buffer of pending signals. Then, in the interrupt checker,
I simply switch the active process to a special 'interrupt process' (or
scheduler process - Andreas) if there are any pending signals to handle.

What does this mean for the language side?
It means a very cool thing: you are no longer obliged to use
semaphores to respond to signals!
You can register any object in the external objects table,
and the new scheduler will simply do:

    externalObjects := Smalltalk externalObjects.
    signalIndexes do: [:index |
        (externalObjects at: index) handleExternalSignal ].

So, as long as your registered object responds to
#handleExternalSignal, you are free to choose what to do in response
to a signal. Semaphores, of course, will signal themselves.

After replacing the scheduler with the new model, the VM will no longer
need to know anything about semaphores, because everything related to
scheduling becomes 100% language-side specific.

So, with the new model, several primitives become obsolete:

primitiveYield
primitiveWait
primitiveSuspend
primitiveSignal
primitiveResume

Instead of them there are two new primitives:

primitiveTransferToProcess
    "sets ActiveProcess to the new process,
     sets InterruptedProcess to the process which was previously active,
     sets ProcessAction to anAction object"

primitiveFetchPendingSignals
    "fills an array (the first argument) with the indexes of the special
     objects that need to be signaled. Returns the number of signals
     filled in, or a negative number indicating that the array is not big
     enough to fetch all signals at once. The primitive fails if the
     first argument is not an array."
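To make the language-side half of this concrete, here is a rough sketch of
the drain-and-dispatch step. The pendingSignalBuffer instance variable and
the #primitiveFetchPendingSignals: wrapper selector are assumptions, and
the binding of that wrapper to the actual primitive is not shown; the
retry-after-grow step also assumes a negative return leaves the VM's
buffer untouched.

ProcessorScheduler>>handlePendingSignalsAndActions
    "Sketch only. Drain the VM's pending-signal buffer via the new
     primitive and dispatch each signal to the object registered at that
     index in the external objects table."
    | count externalObjects |
    pendingSignalBuffer ifNil: [pendingSignalBuffer := Array new: 64].
    count := self primitiveFetchPendingSignals: pendingSignalBuffer.
    count < 0 ifTrue: [
        "negative return: the buffer was too small - grow it and retry,
         assuming the pending signals are still buffered in the VM"
        pendingSignalBuffer := Array new: pendingSignalBuffer size * 2.
        ^self handlePendingSignalsAndActions].
    externalObjects := Smalltalk externalObjects.
    1 to: count do: [:i |
        (externalObjects at: (pendingSignalBuffer at: i)) handleExternalSignal]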
2009/4/29 Igor Stasenko <[hidden email]>:
> 2009/4/29 Andreas Raab <[hidden email]>:
>> Igor Stasenko wrote:
>>> I came to an idea you might be interested in.
>>> As many of us know, some CPUs have a special mode - interrupt mode.
>>> What if we introduce an interrupt mode for the scheduler?
>>
>> [... snip ...]
>>
>>> Now I am trying to imagine how the basic stuff might look (please
>>> correct me if it's an utterly wrong way ;) if we are able to use
>>> interrupt mode.
>>
>> This is actually along similar lines of thought that I had when I was
>> thinking of how to get rid of the built-in VM scheduling behavior. The
>> main thought that I had was that the VM may have a "special" process -
>> the scheduler process (duh!) which it runs when it doesn't know what
>> else to do. The VM would then not directly schedule processes after
>> semaphore signals but rather put them onto a "ready" queue that can be
>> read by the scheduler process, and switch to the scheduler process. The
>> scheduler process decides what to run next and resumes the process via
>> a primitive. Whenever an external signal comes in, the VM automatically
>> activates the scheduler process, and the scheduler process then decides
>> whether to resume the previously running process or to switch to a
>> different process.
>>
>> In a way this folds the timer process into the scheduler (which makes
>> good sense from my perspective because much of the work in the timer is
>> stuff that could more effectively take place in the scheduler). The
>> implementation should be relatively straightforward - just add a
>> scheduler process and a ready list to the special objects, and wherever
>> the VM would normally process switch you just switch to the scheduler.
>> Voila, there is your user-manipulable scheduler ;-) And obviously,
>> anything that is run out of the scheduler process is by definition
>> non-interruptable because there is simply nothing to switch to!
>>
> Very nice indeed. That's even better than my first proposal.
>
> ProcessorScheduler>>schedulingProcessLoop
>     [
>     self handlePendingSignalsAndActions.
>     activeProcess
>         ifNil: [self idle]
>         ifNotNil: [self primitiveTransferControlTo: activeProcess].
>     ] repeat.
>
> And when any process somehow stops running
> (suspend/wait/terminate/interrupted etc.), the VM will again switch to
> the scheduler process loop.
>
> What is important in having it is that there is a guarantee of not being
> preempted by anything. Simply by having this, many concurrency- and
> scheduling-related problems can be solved by a language-side
> implementation, without fear of gotchas from the VM side.
>
> Also, the VM doesn't need to know details about priorities, suspending,
> etc. - which means that we can simplify the VM considerably and
> implement the same parts on the language side, where everything is late
> bound :)
>
> As for moving to multi-cores... yes, as Gulik suggests, it's like adding
> a new dimension:
> - a local scheduler for each core
> - a single global scheduler for freezing everything
>
> This, of course, if we can afford running the same object memory over
> multiple cores. Handling interpreter/object memory state(s) with
> multiple cores is not a trivial thing.
>
> If we are going to keep a more isolated model (islands, Hydra) then we
> need no or minimal changes to the scheduler - each scheduler serves its
> own island and receives asynchronous signals from other colleagues
> through a shared queue.
>
>> Cheers,
>>   - Andreas
>
> --
> Best regards,
> Igor Stasenko AKA sig.

--
Best regards,
Igor Stasenko AKA sig.
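As an illustration of the point above that any object registered in the
external objects table can respond to signals (not only Semaphores), a
minimal handler might look like the sketch below. The class and its
selectors are made up for illustration; Smalltalk registerExternalObject:
is the existing Squeak registration entry point, and #handleExternalSignal
is the selector the proposed scheduler would send.

Object subclass: #ExternalSignalHandler
    instanceVariableNames: 'action'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Kernel-Processes'

ExternalSignalHandler class>>on: aBlock
    ^self new setAction: aBlock

ExternalSignalHandler>>setAction: aBlock
    action := aBlock

ExternalSignalHandler>>handleExternalSignal
    "Called by the new scheduler instead of signaling a Semaphore."
    action value

Usage would be along these lines:

    | handler index |
    handler := ExternalSignalHandler on:
        [Transcript show: 'external signal received'; cr].
    index := Smalltalk registerExternalObject: handler.
    "hand index to whatever external source will raise the signal"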
Igor Stasenko wrote:
> I got a new VM build, which is ready to be tested with the new
> scheduler (after I implement it).

One thing you should do is to implement the current scheduling policy
and compare the overhead when implementing it in user-land. If the
overhead is not too bad I think it would be worthwhile thinking about
pulling this in for real (I have some thoughts about how to make this
backwards compatible too).

Cheers,
  - Andreas

> [... snip ...]
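One concrete way to make that comparison would be a process-switch
micro-benchmark that can be run unchanged against the built-in scheduler
and against a language-side reimplementation. This is only a sketch; the
iteration count and priorities are arbitrary choices.

    | n ping pong done ms |
    n := 100000.
    ping := Semaphore new.
    pong := Semaphore new.
    done := Semaphore new.
    "two processes ping-pong on a pair of semaphores, forcing a
     process switch on every wait/signal round trip"
    [n timesRepeat: [ping wait. pong signal]]
        forkAt: Processor userBackgroundPriority.
    [n timesRepeat: [ping signal. pong wait]. done signal]
        forkAt: Processor userBackgroundPriority.
    ms := Time millisecondsToRun: [done wait].
    Transcript
        show: 'process switches per millisecond: ';
        show: (2 * n // (ms max: 1)) printString;
        cr

Under the proposed model, Semaphore>>wait and #signal would themselves be
implemented language-side, so the same snippet measures whichever
scheduler implementation happens to be installed.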
2009/4/29 Andreas Raab <[hidden email]>:
> Igor Stasenko wrote:
>> I got a new VM build, which is ready to be tested with the new
>> scheduler (after I implement it).
>
> One thing you should do is to implement the current scheduling policy
> and compare the overhead when implementing it in user-land. If the
> overhead is not too bad I think it would be worthwhile thinking about
> pulling this in for real (I have some thoughts about how to make this
> backwards compatible too).
>
The current VM retains all the backward-compatible stuff, but there are
places where it checks whether the new scheduler is in place:

hasNewScheduler
    "the old scheduler uses just two instance variables"
    ^ (self lastPointerOf: self schedulerPointer) >=
        (ProcessActionIndex * BytesPerWord + BaseHeaderSize)

You're right about the overhead. If it's too heavyweight, then we may
need some additional primitives. But I'm strongly against making
scheduling depend on early-bound VM behavior again :)

Also, a new scheduler is not obliged to use an 80-element array of lists.
It can use a more optimized structure, like a Heap, to maintain the set
of scheduled processes sorted by priority. Then the list iteration could
be shortened, and we could use any priority value for a process (not just
the range 1-80) and still schedule everything correctly.

> Cheers,
> - Andreas
>
> [... snip ...]

--
Best regards,
Igor Stasenko AKA sig.
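As a sketch of that Heap-based alternative (method and instance-variable
names here are illustrative, not part of the actual build), the ready
"queue" could simply be a priority queue sorted by process priority. Note
that a plain Heap does not by itself guarantee FIFO order among processes
of equal priority, which the current per-priority linked lists do give you.

ProcessorScheduler>>initializeReadyQueue
    "Sketch: replace the fixed array of 80 linked lists with a priority
     queue of runnable processes, highest priority first."
    readyQueue := Heap sortBlock: [:a :b | a priority >= b priority]

ProcessorScheduler>>makeRunnable: aProcess
    readyQueue add: aProcess

ProcessorScheduler>>nextRunnableProcess
    "Answer the highest-priority runnable process, or nil if none."
    ^readyQueue isEmpty ifTrue: [nil] ifFalse: [readyQueue removeFirst]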
On Wed, Apr 29, 2009 at 11:51 AM, Igor Stasenko <[hidden email]> wrote:
> [... snip ...]
Just to clear up any confusion, the current VM is not limited to 80; it
will use any size.

To shorten the list iteration I've done the following, first in
VisualWorks and now in the StackVM and Cog: simply maintain a
"high-tide", which is the highest currently runnable process priority.
The list search only has to start from this value rather than from the
highest priority, which most of the time saves scanning 40 empty lists
on every wakeHighestPriority.
I've attached the changes (just two methods) except for the initialization of highestRunnableProcessPriority to zero in the relevant initializeInterpreter:.
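Eliot's attached change isn't reproduced here, but the idea can be
sketched in interpreter-style Smalltalk roughly as follows. The body
below is a reconstruction of the idea, not the attachment itself; the
complementary half of the change is to raise
highestRunnableProcessPriority whenever a higher-priority process is made
runnable.

wakeHighestPriority
    "Return the highest-priority runnable process. Sketch of the
     'high-tide' optimization: start scanning at the highest priority
     known to have a runnable process instead of at the top of the
     scheduler's list array."
    | schedLists p processList |
    schedLists := self fetchPointer: ProcessListsIndex
        ofObject: self schedulerPointer.
    p := highestRunnableProcessPriority > 0
        ifTrue: [highestRunnableProcessPriority]
        ifFalse: [self fetchWordLengthOf: schedLists].
    [p > 0] whileTrue:
        [processList := self fetchPointer: p - 1 ofObject: schedLists.
         (self isEmptyList: processList) ifFalse:
            [highestRunnableProcessPriority := p.
             ^self removeFirstLinkOfList: processList].
         p := p - 1].
    self error: 'scheduler could not find a runnable process'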
Attachment: highestRunnableProcessPriority.st (2K)
2009/4/29 Eliot Miranda <[hidden email]>:
>
> On Wed, Apr 29, 2009 at 11:51 AM, Igor Stasenko <[hidden email]> wrote:
>> [... snip ...]
>>
>> Also, a new scheduler is not obliged to use an 80-element array of
>> lists. It can use a more optimized structure, like a Heap, to maintain
>> the set of scheduled processes sorted by priority. Then the list
>> iteration could be shortened, and we could use any priority value for a
>> process (not just the range 1-80) and still schedule everything
>> correctly.
>
> Just to clear up any confusion, the current VM is not limited to 80; it
> will use any size.

Sure. But in my experience, such limits change very rarely, if ever.
There are many constants scattered around everywhere. Many of them were
invented simply because there was a need to choose a 'reasonable' number,
like SemaphoresToSignalSize (for the semaphoresToSignalA and
semaphoresToSignalB twins), or 80 lists for the scheduler. Other
constants are based on empirical evidence.

But it would be best to write code which requires no constants, or as few
as possible. I bet that such code would serve much longer than code which
relies on constants - as hardware speed improves, it improves with it,
without the need to tune values which were valid once, 20 years ago :)

> To shorten the list iteration I've done the following, first in
> VisualWorks and now in the StackVM and Cog: simply maintain a
> "high-tide", which is the highest currently runnable process priority.
> The list search only has to start from this value rather than from the
> highest priority, which most of the time saves scanning 40 empty lists
> on every wakeHighestPriority.
> I've attached the changes (just two methods) except for the
> initialization of highestRunnableProcessPriority to zero in the relevant
> initializeInterpreter:.
>
Yes, this is the simplest thing which can be done to minimize the
looping. Of course, a loop of 80 iterations is hardly noticeable in
compiled C code on modern machinery. But big buildings are built from
small bricks.

> [... snip ...]

--
Best regards,
Igor Stasenko AKA sig.
In reply to this post by Eliot Miranda-2
Careful now. In the deep dark voids of Tweak we discovered one day that
it was incrementing the process priority in the semaphore logic, which
led to the interesting behavior that the process priority could reach
327... However, *IF* a walkback occurred, the fellow responsible for
launching the debug logic would fork off a new process at the same
priority as the current process, and in the new-process logic there was a
sanity check for process priority numbers, so it would go boom and the
house of cards would silently fall flat. Since it's in the debug logic,
it *was* rather hard at first to determine what was going on...

I do wonder. Mmm, latest Pharo image:

    [1 == 0] forkAt: 999.

Oops... I wonder what Tweak images do now?

On 29-Apr-09, at 12:44 PM, Eliot Miranda wrote:

> Just to clear up any confusion, the current VM is not limited to 80;
> it will use any size.
>
> [... snip ...]

--
===========================================================================
John M. McIntosh <[hidden email]>   Twitter: squeaker68882
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================
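A guard at fork time would surface that kind of priority runaway
immediately instead of much later inside the debugger machinery. The
sketch below is purely illustrative - it is not the actual Squeak, Pharo
or Tweak implementation of #forkAt:.

BlockClosure>>forkAt: priority
    "Illustrative variant only: reject priorities outside the range the
     current fixed-size scheduler can represent, so that e.g.
     [1 == 0] forkAt: 999 fails loudly right here."
    (priority between: 1 and: Processor highestPriority) ifFalse:
        [^self error: 'invalid process priority: ', priority printString].
    ^self newProcess
        priority: priority;
        resume;
        yourself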
In reply to this post by Igor Stasenko
I am sorry that this week I won't be able to really participate in this
thread, but I couldn't help mentioning that you guys are reinventing the
Self scheduler, so you might want to take a look at that.

It includes a single primitive called TWAINS (transfer and wait for an
interrupt or signal) for switching from one process to another. When
Self starts up, it runs a single thread. At some point the scheduler is
started - it is a process just like any other. The only thing special
about the scheduler is that it calls the TWAINS primitive and none of
the other processes ever do. There is no protection against this,
however.

-- Jecel

P.S.: the scheduler difference was what made a friend of mine drop
Squeak and use Self for his PhD project several years ago.
In reply to this post by Igor Stasenko
Hi Jecel -
That is actually great news. I had absolutely no idea that the Self
scheduler was implemented in Self (I had naturally assumed it was
implemented in C). The fact that it uses the same idea is a great
validation of our thoughts here ;-)

Do you by any chance know how Self performed in heavy process-switch
benchmarks? Has anyone ever assessed the overhead of the scheduler?

Cheers,
  - Andreas

Jecel Assumpcao Jr wrote:
> [... snip ...]
Andreas,
> That is actually great news. I had absolutely no idea that the Self
> scheduler was implemented in Self (I had naturally assumed it was
> implemented in C).

Given that even their parser (what they call the part that translates
source text into bytecodes) is in C++, it is natural to assume that the
scheduler would be as well. But that isn't the case.

> The fact that it uses the same idea is a great validation of our
> thoughts here ;-)

Exactly my point - it is always great to find a working example of
something you are thinking of doing. Anyone wanting to do multicore work
in Squeak should really watch David Ungar's talk at the OOPSLA 08 Squeak
BOF, for example.

> Do you by any chance know how Self performed in heavy process-switch
> benchmarks? Has anyone ever assessed the overhead of the scheduler?

I think my friend's PhD project was probably the application which
stressed the scheduler the most, but I don't think he made any
measurements. Note that with the adaptive compilation technology, it was
probably no slower than a VM-based scheduler. But I think it is likely
that it wasn't a hotspot, so the good compiler never got to it, in which
case the performance didn't matter.

Normal Self applications are based on Morphic and tend to be single
threaded. Given that Self doesn't have exception handling, on every pass
through the world redraw code a background thread was started up to
detect when the drawing logic got into an infinite loop. That would be
some 30 new threads being created and destroyed per second (normally
doing nothing but waiting for a timeout). I imagine we would need at
least a hundred times that before the scheduler performance would have a
significant effect.

-- Jecel