Hi, I have been playing with changing the Delay InterruptPeriod from 100 milliseconds to 10 milliseconds. The Delay InterruptPeriod is used to interrupt a (running or sleeping) VA Smalltalk program so the dispatcher can then run the highest priority fork. Basically, the OS is requested to interrupt the program in 100 milliseconds; while the Smalltalk program processes that interrupt, the OS is again requested to interrupt the program in another 100 milliseconds. This normally seems fine, but it puts a lower limit of about 100 milliseconds on the reliable use of delays. Trying to use a shorter delay is of little value.

Most of the time this is no big deal, but I have lots of programs that talk to each other via TCP/IP connections. One program will send a message to another program and wait for a reply. The reply is probably ready in much less than 100 milliseconds, but if I wait for 10 milliseconds, the InterruptPeriod turns that into 100 milliseconds. This caps the throughput.

My guess is that InterruptPeriod has been at 100 milliseconds for 20 years or more without anyone noticing a problem, which on the one hand says the value is sound. On the other hand, computers are much, much faster now, and the number of instructions that can be executed in 100 milliseconds or 10 milliseconds is much greater. So waiting for only 10 milliseconds is reasonable. It might add slightly to system overhead, but that doesn't seem to be noticeable.

You can run the code below to see what I mean. Note that the delay time and repeat count are the same; the only difference is the InterruptPeriod. As you can see, dropping the InterruptPeriod from 100 to 10 makes a big difference. I have tried values of 5 and 1 milliseconds, but they aren't much better, at least not on my machine. If you don't want InterruptPeriod to stay at something other than 100, don't save your image after trying this code. I have been running packaged programs and the development image with an InterruptPeriod of 10 milliseconds for a while now with no ill effects.

    | delay |
    delay := Delay forMilliseconds: 10.
    Delay interruptPeriod: 100.
    [100 timesRepeat: [delay wait]] bench: '100> '.

    100> 9953

    | delay |
    delay := Delay forMilliseconds: 10.
    Delay interruptPeriod: 10.
    [100 timesRepeat: [delay wait]] bench: '10> '.

    10> 1591

A note to Instantiations: It would be great if someone could think about the impact of this change. If it seems to be harmless, then it would be nice to make the Delay class method #interruptPeriod: public instead of private. This doesn't mean much, other than giving us your blessing to use it. Also, adding #interruptPeriod (answering InterruptPeriod) as a public class method might be helpful. You might also think about changing InterruptPeriod to 10.

Lou
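P.S. For illustration only, here is roughly what the requested getter could look like. This is a hypothetical sketch, not the actual VA Smalltalk source; it assumes InterruptPeriod is the class variable that the existing private #interruptPeriod: method already writes.

    Delay class >> interruptPeriod
        "Answer the current interrupt period in milliseconds.
         Hypothetical public accessor; assumes the InterruptPeriod class variable."
        ^InterruptPeriod

With both accessors public, a packaged program could save the old value before shortening the period and restore it later, e.g. old := Delay interruptPeriod. Delay interruptPeriod: 10.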
Hi All, Just a little follow-up to changing the Delay InterruptPeriod from 100 milliseconds to 10 milliseconds.
I have three programs that access the same database. The first program gets data from an external host and updates rows in a table with the update information. The second program examines the updated data and updates another table. The third program (and possibly more) uses the updated data in the second table to send updates to another external host. This is a high-volume application (it was around 500,000 updates a day, now 1,000,000). All of these programs can get to a point where they have nothing to do, or have had a database lockout error and need to wait a little while before trying again.

Under VA Smalltalk 5.5.2 and 100 millisecond delays, throughput was acceptable. Under VA Smalltalk 8.0.3 (still with 100 millisecond delays) throughput dropped by 20% or more. This was hard to understand. What could make v8.0.3 20% slower than v5.5.2? Changing the Delay InterruptPeriod from 100 milliseconds to 10 milliseconds and changing my short delays to 10 milliseconds made all the difference in the world.

My guess is that v8.0.3 was a little FASTER than v5.5.2, causing my programs to reach the point where they had no work and needed to wait more often. The 100 millisecond wait was too long (because work would show up in much less than 100 milliseconds), causing poorer throughput. Changing the Delay InterruptPeriod from 100 milliseconds to 10 milliseconds and using small 10 millisecond delays has improved the throughput of all the programs, especially the second and third. So far I haven't seen any negative effect caused by the change.

Lou
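P.S. For anyone curious, the idle/retry path in these programs is basically the shape below. This is a minimal sketch only; #pollLoop, #nextUnitOfWork and #processUpdate: are hypothetical placeholders for the real worker and database access.

    pollLoop
        "Hypothetical worker loop: process updates as they appear,
         sleeping briefly when there is nothing to do."
        | idleDelay work |
        idleDelay := Delay forMilliseconds: 10.
        [true] whileTrue: [
            work := self nextUnitOfWork.    "hypothetical: answers nil when no rows are ready"
            work isNil
                ifTrue: [idleDelay wait]    "short sleep keeps latency low"
                ifFalse: [self processUpdate: work]]

With InterruptPeriod at 100 milliseconds every trip through the idle branch really costs about 100 milliseconds; at 10 milliseconds the loop picks up new work almost as soon as it appears.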