Hi all. Does anybody know why, on the Linux VM (3.9.8), short Delays
(<500ms) wait for significantly longer than under Windows (3.10.4)?

As an example, for the average time spent waiting when requesting a 10ms wait:

  ((1 to: 100) collect: [:i |
      Time millisecondsToRun: [(Delay forMilliseconds: 10) wait]]) sum / 100.0

This yields (on my machine) 10.35 for Windows and 43.48 for Linux.
For a 100ms wait: 100.5 (Windows) and 131.39 (Linux).
For a 1000ms wait: 1000.36 (Windows) and 1013.16 (Linux).

There seems to be a fixed overhead per call that is significantly larger
under Linux.

Any thoughts would be appreciated.
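For what it's worth, the same measurement can be widened to show the spread as well as the average; a minimal workspace sketch, using nothing beyond the standard Delay and Time protocol:

  "Min, max and mean of 100 measured waits for a 10ms Delay."
  | times |
  times := (1 to: 100) collect: [:i |
      Time millisecondsToRun: [(Delay forMilliseconds: 10) wait]].
  { times min. times max. (times sum / times size) asFloat }

If the extra cost really is a fixed per-call overhead, the minimum should already sit well above 10 on Linux, rather than a few outliers dragging up the mean.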
Hi Gary,
I got 11.52 to 12.58ms on my Linux box.

Best regards
Janko

Gary Chambers wrote:
> Hi all. Does anybody know why, on the Linux VM (3.9.8), short Delays
> (<500ms) wait for significantly longer than under Windows (3.10.4)?
> [...]

--
Janko Mivšek
AIDA/Web
Smalltalk Web Application Server
http://www.aidaweb.si
In reply to this post by Gary Chambers-4
On Monday 11 June 2007 7:24 pm, Gary Chambers wrote:
> Hi all. Does anybody know why, on the Linux VM (3.9.8), short Delays
> (<500ms) wait for significantly longer than under Windows (3.10.4)?

Delay for x ms means schedule Squeak to run *after* x ms. If another process
is hogging the CPU, then it could take as much as one time slice before Squeak
gets the CPU. So check your loadavg. On my lightly loaded Linux box, I got
100.12 and 100.22ms.

Regards .. Subbu
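One consequence of "after x ms" is that a Delay should only ever overshoot, never return early; the interesting question is how large the overshoot is. A quick workspace check, plain Delay/Time protocol only:

  "The measured wait should never be shorter than the requested duration; expect true."
  | requested measured |
  requested := 50.
  measured := Time millisecondsToRun: [(Delay forMilliseconds: requested) wait].
  measured >= requested

How far above the requested value it lands is then down to OS scheduling granularity and how often the VM polls for expired delays.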
In reply to this post by Gary Chambers-4
Well in May of 2002 ([ENH] relinquishProcessorForMicroseconds:)
We used

  | bag time delay |
  delay := Delay forMilliseconds: 1.
  bag := Bag new.
  1 to: 1000 do: [:i |
      time := Time millisecondClockValue.
      delay wait.
      bag add: (Time millisecondClockValue - time)].
  bag sortedCounts.

and Ian noted

> NetBSD ppc:
> a SortedCollection(950->20 19->19 19->21 3->18 3->22 2->30 1->25 1->8 1->10 1->14)
>
> Linux 386:
> a SortedCollection(294->1 213->2 146->3 100->4 84->5 55->6 33->7 33->8 11->10 9->11 8->9 6->13 5->12 1->14 1->17 1->18)
>
> Ian

On os-x 10.4.9 with a 3.8.17x VM I get

  a SortedCollection(878->10 122->11)

Plus checking the clock gives

  | bag time |
  bag := Bag new.
  time := Time millisecondClockValue.
  1 to: 1000 do: [:i |
      [Time millisecondClockValue = time] whileTrue.
      bag add: (Time millisecondClockValue - time).
      time := Time millisecondClockValue].
  bag sortedCounts

  a SortedCollection(1000->1)

However, when you boot os-x and other BSD-based unix systems it prints

  standard timeslicing quantum is 10000 us

This is a clue: it means the operating system switches processes as fine-grained as 10ms. Old Linux systems actually would switch on 100ms; those were called non-real-time unix systems.

However, cross-checking using a 3.2.8b9 VM (powerpc emulated on macintel) I get

  a SortedCollection(867->1 132->2 1->3)

The code base used for 3.2.8b9 is different, though: I used a non-portable delay semaphore which os-x gives good time accuracy to. But at some point I migrated towards the unix relinquishProcessorForMicroseconds code base because that tied into aioSleep().

Talk to your unix support person about how best to solve this...

On Jun 11, 2007, at 6:54 AM, Gary Chambers wrote:
> Hi all. Does anybody know why, on the Linux VM (3.9.8), short Delays
> (<500ms) wait for significantly longer than under Windows (3.10.4)?
> [...]

--
John M. McIntosh <[hidden email]>
Corporate Smalltalk Consulting Ltd.
http://www.smalltalkconsulting.com
In reply to this post by K. K. Subramaniam
Even with a loadavg (via uptime) it was still around 44ms (for a 10ms delay)
for me.

Found the culprit though... In sqUnixMain.c:

sqInt ioRelinquishProcessorForMicroseconds(sqInt us)
{
  int now;
  dpy->ioRelinquishProcessorForMicroseconds(us);
  now= ioLowResMSecs();
  if (now - lastInterruptCheck > (1000/25)) /* avoid thrashing intr checks from 1ms loop in idle proc */
    {
      setInterruptCheckCounter(-1000); /* ensure timely poll for semaphore activity */
      lastInterruptCheck= now;
    }
  return 0;
}

Note the (1000/25), which equals 40ms, around the same as the latency.
This is called from #relinquishProcessorForMicroseconds: in ProcessorScheduler
by the idle process, with a 1000 microsecond value. It relinquishes the
processor for the 1000 microseconds, then doesn't check for interrupts
(including pending Delay semaphores) until at least 40ms have passed.

Perhaps this could be made into a VM parameter?

-----Original Message-----
From: [hidden email] On Behalf Of subbukk
Sent: 11 June 2007 5:39 pm
To: [hidden email]
Subject: Re: Delay and Linux

On Monday 11 June 2007 7:24 pm, Gary Chambers wrote:
> Hi all. Does anybody know why, on the Linux VM (3.9.8), short Delays
> (<500ms) wait for significantly longer than under Windows (3.10.4)?

Delay for x ms means schedule Squeak to run *after* x ms. [...]
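The effect of that 40ms window can be reproduced outside the VM. Below is a small standalone C simulation, not the actual VM code, that just mimics the idle-loop behaviour described above: sleep roughly 1ms at a time, but only poll for expired delays once more than 40ms have passed since the last poll.

/* Standalone simulation of the interrupt-check throttling, not VM code.
   Compile: cc sim.c -o sim   (add -lrt on older glibc) */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define CHECK_PERIOD_MS 40   /* the (1000/25) threshold from sqUnixMain.c */
#define DELAY_MS        10   /* the requested Delay duration */

/* Current time in milliseconds from a monotonic clock. */
static long nowMSecs(void)
{
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

int main(void)
{
  long start = nowMSecs();
  long deadline = start + DELAY_MS;
  long lastCheck = start;
  struct timespec oneMs = { 0, 1000000 };     /* ~1ms, like the idle loop's relinquish */

  for (;;)
    {
      nanosleep(&oneMs, NULL);                /* "relinquish the processor" for ~1ms */
      long now = nowMSecs();
      if (now - lastCheck > CHECK_PERIOD_MS)  /* delays only noticed every 40ms or more */
        {
          lastCheck = now;
          if (now >= deadline)
            {
              printf("%dms delay observed after %ldms\n", DELAY_MS, now - start);
              return 0;
            }
        }
    }
}

On a quiet machine this should print a figure in the low 40s, in line with the ~43ms average above; lowering CHECK_PERIOD_MS brings it back down to just over 10ms, which is why exposing the threshold as a VM parameter (or simply reducing it) would help short Delays.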
A loadavg of 0.01, I meant :-)
-----Original Message-----
From: [hidden email] On Behalf Of Gary Chambers
Sent: 12 June 2007 7:15 pm
To: 'The general-purpose Squeak developers list'
Subject: RE: Delay and Linux

Even with a loadavg (via uptime) it was still around 44ms (for a 10ms delay)
for me.

Found the culprit though... [...]
In reply to this post by Gary Chambers-4
On Tuesday 12 June 2007 11:44 pm, Gary Chambers wrote:
> Note the (1000/25), which equals 40ms, around the same as the latency.
> This is called from #relinquishProcessorForMicroseconds: in ProcessorScheduler
> by the idle process, with a 1000 microsecond value. It relinquishes the
> processor for the 1000 microseconds, then doesn't check for interrupts
> (including pending Delay semaphores) until at least 40ms have passed.

You may want to post this on the squeak-vm mailing list for clarification.

Regards .. Subbu
Will do.
-----Original Message-----
From: [hidden email] On Behalf Of subbukk
Sent: 13 June 2007 5:22 pm
To: [hidden email]
Subject: Re: Delay and Linux

On Tuesday 12 June 2007 11:44 pm, Gary Chambers wrote:
> Note the (1000/25), which equals 40ms, around the same as the latency.
> [...]

You may want to post this on the squeak-vm mailing list for clarification.

Regards .. Subbu