Hi,
I was checking the code in sqUnixHeartbeat.c to see how the heartbeat thread/itimer worked. It somehow bothers me that there are different compiled artifacts, one per option.

What do you think about having a VM that manages that as an argument provided when we launch the VM? This would add some flexibility that we don't have right now, because we make the decision at compile time. The code in sqUnixHeartbeat.c is neither long nor very complex, so it should not be difficult to do...

Also, what would be the drawbacks besides an increase in the VM size?

Guille
On Fri, Jan 6, 2017 at 3:44 PM Guillermo Polito <[hidden email]> wrote:
IMHO, the heartbeat thread vm makes it unnecessarily hard to get started with Squeak/Pharo for beginners. Power-users should know that heartbeat thread vms are faster and how to set them up, but requiring all users to have sudo permissions on their system in order to support a heartbeat thread vm seems wrong. So, +1 for vms that support both. It'd be even better if a vm automatically picks a heatbeat thread if the system supports it, so all you have to do is raising the rtprio limit and reopen the vm. In addition, we aren't able to run heartbeat thread vms on TravisCI's Docker-based infrastructure, in fact, it's quite complicated to run these vms on Docker anyway (poweruser expertise needed). However, we can currently run them on Travis' sudo-enabled containers, but those builds are much slower.
Then do it! :) Maybe on a branch so we can discuss your change.
I can't think of any drawbacks right now. But I don't think an increased vm size is a big problem, especially now that even Raspberry Pis ship with enough storage. :)
In reply to this post by Guillermo Polito
Hi Guille,

> On Jan 6, 2017, at 6:44 AM, Guillermo Polito <[hidden email]> wrote:
>
> Hi,
>
> I was checking the code in sqUnixHeartbeat.c to see how the heartbeat thread/itimer worked. It somehow bothers me that there are different compiled artifacts, one per option.
>
> What do you think about having a VM that manages that as an argument provided when we launch the VM? This would add some flexibility that we don't have right now because we make the decision at compile time.

I think it's a fine idea, but it isn't really the issue. The issue is that the itimer mechanism is problematic, especially for foreign code, and is therefore a stop gap. The itimer interrupts long-running system calls, which means that things like sound libraries break (at Qwaq I had to fix ALSA to get it to work with the itimer heartbeat). Since Pharo is becoming more reliant on external code, it may impact us more going forward.

The real issue is that Linux's requirement that thread priorities be set in a per-application file in /etc/security/limits.d (IIRC) is a big problem. Neither Windows nor Mac OS X requires such nonsense, and a threaded heartbeat is used on those systems without any issue at all. Why Linux erected this mess in the first place is something I don't understand.

I had to implement the itimer heartbeat to get Qwaq forums running on Linux on pre-2.6 kernels, but had many other problems to solve as a result (ALSA, database connects).

Were it the case that the VM merely had to detect whether it could use the threaded heartbeat, things would be easy. Instead, one can only use it if one has superuser permissions to install a file in /etc, just to use a thread of higher priority than the main one.

An alternative might be to lower the priority of the main thread. Then the file installation would be unnecessary.

To summarize, the itimer heartbeat is to be avoided as much as possible. It causes hard-to-debug issues with external code and has to be turned off and on around fork. It's a stop gap. Having to install a file in /etc just to be able to use a thread is insane (and AFAICT unique to Linux). Whatever you do in the short term to deal with these problems I'll support, but in the long term we simply want a threaded heartbeat without needing to install anything.

> The code in sqUnixHeartbeat.c is not a lot nor very complex, it should not be difficult to do...
>
> Also, what would be the drawbacks besides an increase on the vm size?

I hope I've explained above that I expect the drawbacks will be intermittent failures of external code.

> Guille
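To see concretely what "the itimer interrupts long-running system calls" means, here is a contrived sketch; the 2 ms period and the ITIMER_REAL/SIGALRM pairing are assumptions for illustration, not the VM's actual settings. Foreign code that does not retry on EINTR (as early ALSA did not) breaks exactly here:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static void tick(int sig) { /* a real heartbeat would do its work here */ }

int main(void)
{
    struct sigaction sa;
    struct itimerval it;
    char buf[64];
    ssize_t n;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = tick;              /* note: no SA_RESTART */
    sigaction(SIGALRM, &sa, NULL);

    it.it_interval.tv_sec = 0;
    it.it_interval.tv_usec = 2000;     /* a 2 ms "beat" */
    it.it_value = it.it_interval;
    setitimer(ITIMER_REAL, &it, NULL);

    /* any blocking call, e.g. deep inside a sound library: */
    n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && errno == EINTR)
        printf("read() was interrupted by the heartbeat signal (EINTR)\n");
    return 0;
}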
On Fri, Jan 6, 2017 at 6:33 PM Eliot Miranda <[hidden email]> wrote:
Thanks for the explanation, Eliot. I had no idea how bad the issues are with the itimer, but I'm glad you also see the user-facing issue with the heartbeat.
Could you elaborate a little bit more on this idea? How could this impact the VM? What could be the drawbacks here?

Fabio
Fabio, you seem to have skipped the important part here -

> On 06-01-2017, at 9:44 AM, Fabio Niephaus <[hidden email]> wrote:
>
> On Fri, Jan 6, 2017 at 6:33 PM Eliot Miranda <[hidden email]> wrote:
>
> > I had to implement the itimer heartbeat to get Qwaq forums running on Linux on pre-2.6 kernels, but had many other problems to solve as a result (ALSA, database connects).

One can never be sure when it comes to Linux, but at least in theory any post-2.6 kernel should be OK - as an example, the Raspbian distribution has no problems with the thread priority changing. Apparently that corresponds to around July 2011 at the worst. So anyone running a system newer than that ought to have no problem. Anyone running a system older than that probably has other interesting problems.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
The Static Typing Philosophy: Make it fast. Make it right. Make it run.
In reply to this post by fniephaus
Hi Fabio, Hi Guille,
On Fri, Jan 6, 2017 at 9:44 AM, Fabio Niephaus <[hidden email]> wrote:
First of all, for the heartbeat thread to work reliably it must run at higher priority than the thread running Smalltalk code. This is because its job is to cause Smalltalk code to break out at regular intervals to check for events. If the Smalltalk code is compute-intensive then it will prevent the heartbeat thread from running unless the heartbeat thread is running at a higher priority, and so it will be impossible to receive input keys, etc. (Note that if event collection were in a separate thread it would suffer the same issue; compute-intensive code would block the event collection thread unless it was running at higher priority.)

Right now, Linux restricts creating threads with priority higher than the default to those programs that have a /etc/security/limits.d/program.conf file that specifies the highest priority thread the program can create. And prior to the 2.6.12 kernel, only superuser processes could create higher-priority threads. I do know that prior to 2.6.12 one couldn't create threads of *lower* priority than the default either (I would have used this if I could).

If 2.6.12 allows a program to create threads with lower priorities *without* needing a /etc/security/limits.d/program.conf, or more conveniently allows a thread's priority to be lowered, then the idea is:

1. at start-up, create a heartbeat thread at the normal priority
2. lower the priority of the main VM thread below the heartbeat thread.

Alternatively, one could spawn a new lower-priority thread to run Smalltalk code, but this may be much more work.

The drawback is that running Smalltalk in a thread whose priority is lower than the default *might* impact performance with lots of other processes running. This depends on whether the scheduler conflates thread priorities with process priorities (which was the default with old Linux threads, which were akin to processes).

So there are some tests to perform:

a) see if one can lower the priority of a thread without having a /etc/security/limits.d/program.conf in place
b) write a simple performance test (nfib?) in a program that can be run either with its thread having normal or lower priority, and run two instances of the program at the same time to see if they take significantly different times to compute their result

If a) is possible and b) shows no significant difference in the wall-times of the two programs, then we can modify the Linux heartbeat code to *lower* the priority of the main Smalltalk thread if it finds it can't create a heartbeat thread with higher priority.

I hope this answers your questions.

As a footnote, let me describe why we use a heartbeat at all. When I started working on the VisualWorks VM (HPS) in the '90s it had no heartbeat (IIRC, it might have only been the Windows VM that worked like this). Instead there was a counter decremented in every frame-building send (i.e. in the jitted machine code that activated a Smalltalk send), and when this counter went to zero the VM broke out and checked for events. This counter was initialized to 256 (IIRC). Consequently there was an enormous frequency of event checks until, that is, one did something that reduced the frequency of frame-building sends. One day I was doing something which invoked lots of long-running large integer primitives and I noticed that when I tried to interrupt the program it took many seconds before the system stopped. What was happening was that the large integer primitives were taking so long that the counter took many seconds to count down to 0.
The system didn't check for events very often. So the problems with a counter are that:

a) a read-modify-write cycle for a counter is in itself very expensive in a high-frequency operation like building a frame
b) in normal operation the counter causes far too many check-for-event calls
c) in abnormal operation the counter causes infrequent check-for-event calls

One solution on Unix is an interval timer (which my old BrouHaHa VMs used, but they didn't have much of an FFI so the problems it caused weren't pressing).

The natural solution is a heartbeat thread, and this is used in a number of VMs. One gets a regular event-check frequency at very low cost. In Smalltalk VMs which do context-to-stack mapping it is natural to organize the stack as a set of pages and hence to have frame-building sends check a stack limit (guarding the end of the page). The heartbeat simply sets the stack limit to the highest possible address to cause a stack-limit check failure on the next send, and the stack-check failure code checks whether the stack limit has been set to the highest address and calls the event check instead of handling the stack-page overflow.

In the HotSpot Java VM, if the platform supports it, a frame-building send writes a byte to a guard page. Modern professors have write buffers so the write has very low cost (because it is never read) and is effectively free. So the heartbeat changes the guard page's permissions to take away write permission and cause an exception. The exception handler then checks and causes the VM to check for events. For this to work, writes, removing and restoring page write permissions, and handling exceptions must all be sufficiently cheap. Anyone looking for a low-level project for the Cog VM could take a look at this mechanism. I've chosen to stick with the simple stack limit approach.
_,,,^..^,,,_ best, Eliot
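For readers following along in the source, here is the shape of that stack-limit trick as a sketch; the variable and function names are illustrative stand-ins, not the actual Cog internals:

#include <stdint.h>

typedef uintptr_t usqInt;

static volatile usqInt stackLimit;      /* guards the end of the stack page */
static volatile int    checkForEvents;  /* set by the heartbeat */

/* heartbeat side: runs every few milliseconds */
void heartbeatFired(void)
{
    checkForEvents = 1;
    stackLimit = (usqInt)-1;   /* highest address: the next send must fail its check */
}

/* interpreter side: reached when a frame-building send fails its
 * stack-limit check */
void stackLimitCheckFailed(usqInt realLimit)
{
    if (checkForEvents) {               /* the heartbeat fired */
        checkForEvents = 0;
        stackLimit = realLimit;         /* restore the genuine page guard */
        /* poll for input, signal semaphores, maybe switch process here */
    } else {
        /* a genuine stack-page overflow: move to a fresh stack page */
    }
}

The per-send cost is thus a single compare against stackLimit, which the send needs anyway to guard the stack page.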
> On 06-01-2017, at 11:23 AM, Eliot Miranda <[hidden email]> wrote:
>
> Modern professors have write buffers so the write has very low cost

That would mostly be adjunct professors..

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Make sure your code "does nothing" gracefully.
In reply to this post by Eliot Miranda-2
Hi Eliot,
On Fri, Jan 6, 2017 at 8:23 PM Eliot Miranda <[hidden email]> wrote:
Yes, it does. Thanks!
And thanks for all this info. I didn't really know much about the itimer vs. heartbeat thread topic, TBH. I was just surprised that this is so complicated, because I thought that event handling would be relatively easy. However, I still don't completely understand why other applications (e.g. games) don't have these event problems, but that's something I can look into myself this weekend :)

Fabio
In reply to this post by Eliot Miranda-2
Hi,
On Fri, Jan 6, 2017 at 6:33 PM, Eliot Miranda <[hidden email]> wrote:
Yes, I know. I actually arrived here because I was having issues when using OSSubProcess. The itimer interrupts external commands executed through fork/exec. But even worse, fork calls the clone system call, which does not fail when interrupted: it restarts! And then it gets interrupted again. And this just freezes the process. :/
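The "turned off and on around fork" dance Eliot mentions looks roughly like this sketch (ITIMER_REAL/SIGALRM assumed; the function name is made up, and the real logic lives in the Unix VM sources):

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

pid_t forkWithoutHeartbeat(void)
{
    struct itimerval off = { {0, 0}, {0, 0} }, saved;
    pid_t child;

    setitimer(ITIMER_REAL, &off, &saved);       /* stop the beat before cloning */
    child = fork();
    if (child == 0)
        signal(SIGALRM, SIG_DFL);               /* child: no heartbeat at all */
    else
        setitimer(ITIMER_REAL, &saved, NULL);   /* parent: resume where we left off */
    return child;
}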
Yeap, I know it is a big problem. But the thing is that having two different compiled VMs forces us, on one hand, to modify all download scripts and duplicate options; on the other, it even makes me wonder, when I download a VM, whether I'm downloading the right one or not... I'd prefer to have a Linux VM with a decent default (let's say the itimer one, so as not to bother beginners), but with the possibility to say:

./vm --threaded-heartbeat myImage.image

for people who know what they are doing :)
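A sketch of what that launch-time switch could look like; the option name is Guille's, everything else (the function names, the default) is made up for illustration:

#include <stdio.h>
#include <string.h>

/* Hypothetical entry points standing in for the two implementations
 * in sqUnixHeartbeat.c; these names are invented for the sketch. */
static void startHeartbeatThread(void) { puts("threaded heartbeat"); }
static void startITimerHeartbeat(void) { puts("itimer heartbeat"); }

int main(int argc, char **argv)
{
    int threaded = 0;   /* default: itimer, so beginners need no limits.d setup */
    for (int i = 1; i < argc; i++)
        if (strcmp(argv[i], "--threaded-heartbeat") == 0)
            threaded = 1;
    threaded ? startHeartbeatThread() : startITimerHeartbeat();
    /* ...load the image and enter the interpreter here... */
    return 0;
}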
I'll play a bit to see if I can have a version using a flag in a separate branch.

Guille
> On 10-01-2017, at 7:21 AM, Guillermo Polito <[hidden email]> wrote:
>
> I'd prefer to have a linux VM with a decent default (let's say the itimer one to not bother beginners)

All my experience says that the thread timer *is* a decent default. How many systems are running pre 2.6 level kernels? And why? That's years out of date.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Useful Latin Phrases:- Braccae illae virides cum subucula rosea et tunica Caledonia-quam elenganter concinnatur! = Those green pants go so well with that pink shirt and the plaid jacket!
In reply to this post by Eliot Miranda-2
Hi.

Will an event-driven VM fix this problem completely? Or will a heartbeat be needed anyway?

2017-01-06 20:23 GMT+01:00 Eliot Miranda <[hidden email]>:
Hi Denis,
On Tue, Jan 10, 2017 at 12:08 PM, Denis Kudriashov <[hidden email]> wrote:
As I understand it, the heartbeat is always needed. In an event-driven VM it may be that the Smalltalk executive is called from the event loop, but the Smalltalk executive still has to break out of executing Smalltalk to return to the event loop to receive new events.

The thing to understand about the JIT VM is that Smalltalk is executing in machine code, just like simple code in some low-level language. That code can respond to Smalltalk events, such as attempting to wait on a Semaphore with no outstanding signals, which may cause it to switch processes. But it cannot respond to external asynchronous events unless it is informed of those events. And it is very difficult to construct a VM that can accept interrupts at arbitrary times that activate Smalltalk code (imagine receiving an interrupt in the middle of a GC, or mid-way through looking up a send not found in the cache, etc., etc.; essentially, interrupts can only be accepted at limited times). So the VM needs to check for events (and indeed interrupts) at safe points. The efficient implementation of safe points is checking on the next frame-building send. But if every frame-building send checked for interrupts and/or events, frame build would be very slow and the entire VM would crawl. The function of the heartbeat is to cause frame-building sends to check for interrupts and/or events at regular, but (relative to frame-building send frequency) infrequent occasions.

Compare that to interrupts in a real processor. The processor /also/ only tests e.g. the interrupt request pin at a safe point (perhaps at the end of each instruction decode cycle), and also provides means to disable interrupts for critical sections of code. It's just that the quanta are much smaller than in a Smalltalk VM.

HTH
_,,,^..^,,,_ best, Eliot
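The standard way to reconcile "interrupts can arrive at any time" with "Smalltalk code may only run at safe points" is the deferred-flag pattern; a small runnable sketch (illustrative names, not Cog code):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t eventPending;

static void onSignal(int sig)
{
    eventPending = 1;   /* the only safe thing to do mid-GC or mid-lookup */
}

int main(void)
{
    signal(SIGINT, onSignal);
    for (;;) {
        /* ... execute bytecodes / jitted machine code ... */
        if (eventPending) {          /* reached a safe point */
            eventPending = 0;
            printf("servicing event at a safe point\n");
        }
        usleep(1000);                /* stand-in for real work */
    }
}

The interrupt merely records that something happened; the VM acts on it only when its own state is consistent.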
2017-01-10 21:54 GMT+01:00 Eliot Miranda <[hidden email]>:
Thank's Eliot. It's very informative explanation |
In reply to this post by Eliot Miranda-2
On Wed, Jan 11, 2017 at 4:54 AM, Eliot Miranda <[hidden email]> wrote:
With the recent Pharo discussion on the heartbeat itimer versus threaded heartbeat, I got curious....

$ vi test.c

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>   /* sleep(), getpid() */

void fib(int n, int pid)
{
    int first = 0, second = 1, next, c;
    for (c = 0; c < n; c++) {
        if (c <= 1)
            next = c;
        else {
            next = first + second;
            first = second;
            second = next;
        }
        /* printf("%d - fib=%d\n", pid, next); */
    }
}

struct timespec timediff(struct timespec start, struct timespec end)
{
    struct timespec temp;
    temp.tv_sec = end.tv_sec - start.tv_sec;
    temp.tv_nsec = end.tv_nsec - start.tv_nsec;
    if (temp.tv_nsec < 0) {
        temp.tv_nsec += 1000000000;
        temp.tv_sec -= 1;
    }
    return temp;
}

int main(int argc, char *argv[])
{
    int which = PRIO_PROCESS;
    id_t pid;
    int priority, fibN;
    struct timespec start, stop, diff;

    if (argc < 3) {
        printf("Usage: %s ProcessPriority FibN\n", argv[0]);
        exit(1);
    }
    priority = atoi(argv[1]);
    fibN = atoi(argv[2]);
    pid = getpid();
    printf("%d\t ==> %d original priority\n", (int)pid, getpriority(which, pid));
    setpriority(which, pid, priority);
    priority = getpriority(which, pid);
    printf("%d\t ==> %d new priority\n", (int)pid, priority);
    clock_gettime(CLOCK_REALTIME, &start);
    sleep(1); /* allow all threads to be scheduled */
    fib(fibN, (int)pid);
    clock_gettime(CLOCK_REALTIME, &stop);
    diff = timediff(start, stop);
    printf("\n%d @ %d\t ==> execution time %ld:%ld\n",
           (int)pid, priority, (long)(diff.tv_sec - 1), diff.tv_nsec);
}

///////////////////////

$ gcc test.c
$ uname -a
Linux dom0 3.16.0-4-686-pae #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) i686 GNU/Linux
$ nproc --all
4

$ N=1000000000 ; for NPROC in 1 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
28175 @ 0  ==> execution time 5:137392274
28171 @ 19 ==> execution time 5:493271222
28173 @ 1  ==> execution time 5:678498982

...for NPROC in 1 2 ; do...
28339 @ 0  ==> execution time 5:891516242
28333 @ 0  ==> execution time 6:101871486
28331 @ 1  ==> execution time 6:197583303
28337 @ 1  ==> execution time 6:473926938
28335 @ 19 ==> execution time 11:19093473
28329 @ 19 ==> execution time 11:109494611

...for NPROC in 1 2 3 ; do...
28370 @ 0  ==> execution time 8:286661748
28364 @ 0  ==> execution time 8:346971535
28376 @ 0  ==> execution time 8:919760746
28362 @ 1  ==> execution time 9:943310436
28368 @ 1  ==> execution time 10:43977329
28374 @ 1  ==> execution time 10:251189507
28372 @ 19 ==> execution time 14:807238482
28360 @ 19 ==> execution time 15:48684466
28366 @ 19 ==> execution time 15:392610447

...for NPROC in 1 2 3 4 ; do...
28401 @ 0  ==> execution time 10:808863373
28407 @ 0  ==> execution time 11:144571568
28395 @ 0  ==> execution time 11:311897577
28389 @ 0  ==> execution time 12:49899167
28399 @ 1  ==> execution time 12:391939682
28387 @ 1  ==> execution time 12:922309497
28393 @ 1  ==> execution time 12:997908723
28405 @ 1  ==> execution time 13:116623935
28385 @ 19 ==> execution time 18:710195627
28391 @ 19 ==> execution time 19:160867082
28397 @ 19 ==> execution time 19:306339215
28403 @ 19 ==> execution time 19:340283641

...for NPROC in 1 2 3 4 5 ; do...
28431 @ 0  ==> execution time 13:512767819
28437 @ 0  ==> execution time 13:682814129
28449 @ 0  ==> execution time 13:695500843
28443 @ 0  ==> execution time 14:212788768
28425 @ 0  ==> execution time 14:384973010
28435 @ 1  ==> execution time 15:356897222
28423 @ 1  ==> execution time 15:638388578
28441 @ 1  ==> execution time 15:692913828
28429 @ 1  ==> execution time 15:822047741
28447 @ 1  ==> execution time 15:845915440
28445 @ 19 ==> execution time 23:221173838
28433 @ 19 ==> execution time 23:349862424
28421 @ 19 ==> execution time 23:388930822
28439 @ 19 ==> execution time 23:374234661
28427 @ 19 ==> execution time 23:517771887

cheers -ben
One of the things that bothers me a bit about this thread issue is quite why such old kernels are still in use. 2.6.12 was 2005 - even Debian, notoriously conservative, is using 4.4+. I'm sure there must be good reasons for such a long hold-back, but it seems strange to try to hobble the VM for such arcana.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Useful random insult:- An early example of the Peter Principle.
In reply to this post by fniephaus
Hi Fabio,
A Smalltalk VM, like a real processor, is a compute engine that does whatever it is told, which may include being told to do nothing but execute code, but it must still be able to receive interrupts, even when running at full tilt performing nothing but calculations.

If one wants the Smalltalk VM to run suboptimally, one will arrange that as part of its normal execution (e.g. on method activations) it will burn some cycles deciding if it should poll for events, e.g. by decrementing a counter. But as I explained elsewhere this has problems. If method invocation frequency falls, e.g. as a side effect of invoking long-running primitives, then it may poll for events too infrequently.

If, however, one wants the Smalltalk VM to run optimally, one wants the event check to occur at regular intervals, without slowing down Smalltalk execution, hence either the itimer interrupt or the heartbeat thread.

Does this make things clearer?
_,,,^..^,,,_ (phone)
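For comparison, the counter scheme Eliot contrasts with the heartbeat, as a sketch (the 256 comes from his description; pollForEvents and onMethodActivation are hypothetical stand-ins):

static int eventCheckCounter = 256;

static void pollForEvents(void) { /* read the OS event queue, etc. */ }

/* Imagine this called on every frame-building send. */
void onMethodActivation(void)
{
    if (--eventCheckCounter <= 0) {   /* read-modify-write on *every* send */
        eventCheckCounter = 256;
        pollForEvents();              /* far too frequent in send-heavy code */
    }
    /* A long-running primitive performs no activations at all, so the
       counter can take seconds to reach zero: too infrequent there. */
}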
On Tue, Mar 14, 2017 at 7:48 AM Eliot Miranda <[hidden email]> wrote:
Hi Eliot,

Yes, it makes sense when you keep in mind that the VM aims to behave like a processor. Thanks a lot!

Best,
Fabio
In reply to this post by Eliot Miranda-2
> Hi Fabio, Hi Guille,
>
> On Fri, Jan 6, 2017 at 9:44 AM, Fabio Niephaus <[hidden email]> wrote:
> [...]
> First of all, for the heartbeat thread to work reliably it must run at higher priority than the thread running Smalltalk code. [...] Right now, Linux restricts creating threads with priority higher than the default to those programs that have a /etc/security/limits.d/program.conf file that specifies the highest priority thread the program can create.
> [...]
> The drawback is that running Smalltalk in a thread whose priority is lower than the default *might* impact performance with lots of other processes running. This depends on whether the scheduler conflates thread priorities with process priorities (which was the default with old Linux threads, which were akin to processes).

If the VM gets an option for --heartbeat vs. --itimer, could the heartbeat thread's priority be an option too? The problems are not only with older Linux kernels: there are also problems with Docker (some complaints in this thread), and also with FreeBSD, where the VM needs to be run as root (due to the higher thread priorities), which is unusable in the real world (security).
In reply to this post by Eliot Miranda-2
On Sat, Jan 7, 2017 at 3:23 AM, Eliot Miranda <[hidden email]> wrote:
After reading pages 11-14 of [1], page 11 of [2], and then [3], I was wondering if the dynamic priorities of the Linux 2.6 scheduler would cause a sleeping heartbeat thread to *effectively* remain at a higher priority than the Smalltalk execution thread.
So I contrived the experiment below. I'm not sure it properly represents what is happening with the threaded heartbeat, but it still may make interesting discussion.

$ vi dynamicPriorityHeartbeat.c

#include <stdio.h>
#include <sys/time.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>   /* usleep() */
#include <pthread.h>

struct timespec programStartTime;
int heartbeat, heartbeatCount;

double elapsed(struct timespec start, struct timespec end)
{
    struct timespec temp;
    double elapsed;
    temp.tv_sec = end.tv_sec - start.tv_sec;
    temp.tv_nsec = end.tv_nsec - start.tv_nsec;
    if (temp.tv_nsec < 0) {
        temp.tv_nsec += 1000 * 1000 * 1000;
        temp.tv_sec -= 1;
    }
    elapsed = temp.tv_nsec / 1000 / 1000;
    elapsed = temp.tv_sec + elapsed / 1000;
    return elapsed;
}

void *heart(void *arg)
{
    int i;
    for (i = 0; i <= heartbeatCount; i++) {
        printf("Heartbeat %02d ", i);
        heartbeat = 1;
        usleep(500000);
    }
    heartbeat = 0;
    exit(0);
}

void runSmalltalk()
{
    struct timespec heartbeatTime;
    double intenseComputation;
    intenseComputation = 0;
    while (1) {
        if (!heartbeat) {
            intenseComputation += 1;
        } else {
            heartbeat = 0;
            clock_gettime(CLOCK_REALTIME, &heartbeatTime);
            printf("woke at time %f ", elapsed(programStartTime, heartbeatTime));
            printf("intenseComputation=%f\n", intenseComputation);
        }
    }
}

int main(int argc, char *argv[])
{
    clock_gettime(CLOCK_REALTIME, &programStartTime);
    if (argc < 2) {
        printf("Usage: %s heartbeatCount\n", argv[0]);
        exit(1);
    }
    heartbeatCount = atoi(argv[1]);
    heartbeat = 0;
    pthread_t heartbeatThread;
    if (pthread_create(&heartbeatThread, NULL, heart, NULL)) {
        fprintf(stderr, "Error creating thread\n");
        return 1;
    }
    runSmalltalk();
}

/////////////////////////////////////////

$ gcc dynamicPriorityHeartbeat.c -lpthread && ./a.out 50
Heartbeat 00 woke at time 0.000000 intenseComputation=19006.000000
Heartbeat 01 woke at time 0.500000 intenseComputation=124465517.000000
Heartbeat 02 woke at time 1.000000 intenseComputation=248758350.000000
Heartbeat 03 woke at time 1.500000 intenseComputation=373112100.000000
Heartbeat 04 woke at time 2.000000 intenseComputation=495566843.000000
Heartbeat 05 woke at time 2.500000 intenseComputation=619276640.000000
....
Heartbeat 45 woke at time 22.503000 intenseComputation=5583834266.000000
Heartbeat 46 woke at time 23.003000 intenseComputation=5708079390.000000
Heartbeat 47 woke at time 23.503000 intenseComputation=5831910783.000000
Heartbeat 48 woke at time 24.003000 intenseComputation=5955439495.000000
Heartbeat 49 woke at time 24.503000 intenseComputation=6078752118.000000
Heartbeat 50 woke at time 25.003000 intenseComputation=6202527610.000000

Those times are quite regular - a drift of 3 milliseconds over 25 seconds (0.012%). It seems the intense computation in the main thread does not block timely events in the heartbeat thread. Perhaps the heartbeat thread needed its static priority managed manually with Linux 2.4, but maybe that is not required in 2.6??

cheers -ben
Hi Ben,
On Mon, Mar 20, 2017 at 9:43 AM, Ben Coman <[hidden email]> wrote:
I wonder what happens if and when the main thread starts idling? Then there's the concern that while it *may* just happen to work in particular schedulers, it won't work if there's a scheduler change. It seems to me that the safe thing is to use priorities and be sure.

But you should easily be able to test the scheme above by modifying the VM to create a heartbeat thread at the same priority as the main thread and then give it some effectively infinite computation that won't cause an interrupt by itself, and see if you can interrupt it. e.g.

SmallInteger minVal to: SmallInteger maxVal do:
    [:i| | sum |
     sum := 0.
     0 to: SmallInteger maxVal // 2 do:
         [:j| sum := j even ifTrue: [sum + j] ifFalse: [sum - j]]]

This loop shouldn't do any allocations so shouldn't cause a scavenging GC, which would cause an interrupt check. So if you can break into this one in your modified VM then I'd say it was worth experimenting further with your scheme.
_,,,^..^,,,_ best, Eliot