Hi guys,
There is a bug in OSSubprocess that has been happening for quite some time and that we have never been able to fix. What happens is that at a certain point (after having used OSSubprocess for a while), a simple command like the following "hangs" Pharo:

OSSUnixSubprocess new
    command: 'ls';
    arguments: #();
    redirectStdout;
    runAndWaitOnExitDo: [ :process :outString | outString ]

Once an image starts doing that, it keeps doing it in about 80% of the cases (so it is easy enough to debug). Guille Polito kept one of these images in case someone can give us a hand. He also proposed the great idea of using `strace` to see what was going on. He (together with Pablo Tesone) suspected that the heartbeat could be interrupting the `clone()` function, which is (I think) called internally by `posix_spawn()`, which is what OSSubprocess uses.

When these images are "hung", they found an infinite loop like this:

[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 593498}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 600126}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 606282}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 612528}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 618689}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 624922}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 631145}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 637241}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 643578}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 649974}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---

As you can see, there is a SIGALRM involved. It also looks like `gettimeofday` is used by the heartbeat? Could it be that somehow the heartbeat is interrupting the `clone()`?
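For context on where that SIGALRM / gettimeofday pair comes from: an itimer-style heartbeat is essentially the following (only a sketch; the real code lives in the VM sources and its handler does more, and the 2 ms interval is taken from the it_interval={0, 2000} that getitimer reports in the working trace below):

/* Sketch of an itimer-based heartbeat, not the actual VM code. A 2 ms
 * ITIMER_REAL timer delivers SIGALRM; the handler samples the clock and asks
 * the interpreter to check for events. If SIGALRM arrives while a fork/clone
 * has not completed yet, the kernel restarts the call (ERESTARTNOINTR),
 * which is exactly the loop visible in the hung trace. */
#include <signal.h>
#include <stddef.h>
#include <sys/time.h>

static volatile sig_atomic_t checkForEvents = 0;
static struct timeval lastHeartbeat;

static void heartbeatHandler(int sig)
{
    (void)sig;
    gettimeofday(&lastHeartbeat, NULL);   /* the gettimeofday seen after each SIGALRM */
    checkForEvents = 1;                   /* tell the interpreter to poll for events */
}

static void startItimerHeartbeat(void)
{
    struct sigaction sa;
    struct itimerval it;

    sa.sa_handler = heartbeatHandler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, NULL);

    it.it_interval.tv_sec  = 0;
    it.it_interval.tv_usec = 2000;        /* 2 ms tick */
    it.it_value = it.it_interval;
    setitimer(ITIMER_REAL, &it, NULL);
}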
Guille also showed me the `strace` output of a regular / working image:

[pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 18647] gettimeofday({1482152829, 481014}, NULL) = 0
[pid 18647] sigreturn() (mask []) = -1 EINTR (Interrupted system call)
[pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 1917}}) = 0
[pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
[pid 18647] select(4, [3], [], [3], {0, 1000}) = 0 (Timeout)
[pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 797}}) = 0
[pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
[pid 18647] select(4, [3], [], [3], {0, 1000}) = ? ERESTARTNOHAND (To be restarted if no handler)
[pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---

Does anyone have any hint here?

Thanks in advance,
> On 19 Dec 2016, at 14:41, Mariano Martinez Peck <[hidden email]> wrote:
>
> Hi guys,
>
> Guille Polito kept one of these images in case someone can give us a hand. He also proposed the great idea of using `strace` to see what was going on. He (together with Pablo Tesone) suspected that the heartbeat could be interrupting the `clone()` function, which is (I think) called internally by `posix_spawn()`, which is what OSSubprocess uses.
>
> When these images are "hung", they found an infinite loop like this:

Okay, how many child processes do you have at that point? How many processes does the system have?

> [pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> [pid 17477] gettimeofday({1482152630, 593498}, NULL) = 0
> [pid 17477] sigreturn() (mask []) = 120
> [pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7578768) = ? ERESTARTNOINTR (To be restarted)
> [pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> [pid 17477] gettimeofday({1482152630, 600126}, NULL) = 0
> [pid 17477] sigreturn() (mask []) = 120
> [pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7578768) = ? ERESTARTNOINTR (To be restarted)

So above.. 7 ms between the two gettimeofday calls? Nothing else? Set a breakpoint on clone/fork in gdb and look at the C stack at this point? Could you strace with timestamps to see how much time is spent? Is the process suspicious in other ways?

So yes.. it sounds like clone doesn't complete.. the question is why? Is it out of resources? Is something in the VM blocking longer than the heartbeat, or is the heartbeat more frequent than expected?

> As you can see, there is a SIGALRM involved. It also looks like `gettimeofday` is used by the heartbeat? Could it be that somehow the heartbeat is interrupting the `clone()`?
>
> Guille also showed me the `strace` output of a regular / working image:
>
> [pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> [pid 18647] gettimeofday({1482152829, 481014}, NULL) = 0
> [pid 18647] sigreturn() (mask []) = -1 EINTR (Interrupted system call)
> [pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 1917}}) = 0
> [pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
> [pid 18647] select(4, [3], [], [3], {0, 1000}) = 0 (Timeout)
> [pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 797}}) = 0
> [pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
> [pid 18647] select(4, [3], [], [3], {0, 1000}) = ? ERESTARTNOHAND (To be restarted if no handler)
> [pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>
> Does anyone have any hint here?

Get timestamps in there. How long does it take to fail/end in this situation?

holger
Hi Holger,

Just to add to the issue: when we talk about hanging, the VM thread is stopped in an FFI call,

0xffdfe1c0 I OSSUnixSubprocess>primitivePosixSpawn:command:fileActions:attributes:argPointer:envPointer: 0x9482438: a(n) OSSUnixSubprocess
0xffdfe200 I OSSUnixSubprocess>internalRun 0x9482438: a(n) OSSUnixSubprocess
0xffdfe220 I [] in OSSUnixSubprocess>run 0x9482438: a(n) OSSUnixSubprocess
0xffdf9fe0 M BlockClosure>ensure: 0x9482750: a(n) BlockClosure
0xffdfa004 I OSSUnixSubprocess>run 0x9482438: a(n) OSSUnixSubprocess
0xffdfa024 I OSSUnixSubprocess>runAndWaitOnExitDo: 0x9482438: a(n) OSSUnixSubprocess

self ffiCall: #( int posix_spawnp (void* pidPointer, String commandString, void* fileActions, void * attributes, void* argPointer, void* envPointer) )

Now, here is an extract of the timestamped `strace -ftt` of the hanging process:

[pid 24108] 10:28:09.044534 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.044577 gettimeofday({1482226089, 44582}, NULL) = 0
[pid 24108] 10:28:09.044644 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.044664 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.050924 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.050970 gettimeofday({1482226089, 50976}, NULL) = 0
[pid 24108] 10:28:09.050996 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.051009 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.051016 gettimeofday({1482226089, 51020}, NULL) = 0
[pid 24108] 10:28:09.051028 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.051040 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.066154 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.066196 gettimeofday({1482226089, 66201}, NULL) = 0
[pid 24108] 10:28:09.066220 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.066233 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.072599 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.072643 gettimeofday({1482226089, 72648}, NULL) = 0
[pid 24108] 10:28:09.072716 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.072740 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.079552 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.079580 gettimeofday({1482226089, 79585}, NULL) = 0
[pid 24108] 10:28:09.079648 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.079672 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.089178 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.089233 gettimeofday({1482226089, 89239}, NULL) = 0
[pid 24108] 10:28:09.089264 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.089278 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.101007 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.101046 gettimeofday({1482226089, 101050}, NULL) = 0
[pid 24108] 10:28:09.101096 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.101116 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.107650 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.107699 gettimeofday({1482226089, 107706}, NULL) = 0
[pid 24108] 10:28:09.107771 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.107789 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.116188 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 24108] 10:28:09.116226 gettimeofday({1482226089, 116232}, NULL) = 0
[pid 24108] 10:28:09.116254 sigreturn() (mask []) = 120
[pid 24108] 10:28:09.116273 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7500768) = ? ERESTARTNOINTR (To be restarted)
[pid 24108] 10:28:09.129739 --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---

The second trace in Mariano's email was taken in the case of no FFI call, just to compare the two and their differences; there is no hang in there.

My knowledge ends about here, but I can help by providing any other traces or information. I asked Santa for an OSSubprocess that won't hang :)

Thanks,
Guille

-------- Original Message --------
>
>> On 19 Dec 2016, at 14:41, Mariano Martinez Peck <[hidden email]> wrote:
>>
>> Hi guys,
>>
>> Guille Polito kept one of these images in case someone can give us a hand. He also proposed the great idea of using `strace` to see what was going on. He (together with Pablo Tesone) suspected that the heartbeat could be interrupting the `clone()` function, which is (I think) called internally by `posix_spawn()`, which is what OSSubprocess uses.
>>
>> When these images are "hung", they found an infinite loop like this:
> Okay, how many child processes do you have at that point? How many processes does the system have?
>
>> [pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> [pid 17477] gettimeofday({1482152630, 593498}, NULL) = 0
>> [pid 17477] sigreturn() (mask []) = 120
>> [pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7578768) = ? ERESTARTNOINTR (To be restarted)
>> [pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> [pid 17477] gettimeofday({1482152630, 600126}, NULL) = 0
>> [pid 17477] sigreturn() (mask []) = 120
>> [pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7578768) = ? ERESTARTNOINTR (To be restarted)
> So above.. 7 ms between the two gettimeofday calls? Nothing else? Set a breakpoint on clone/fork in gdb and look at the C stack at this point? Could you strace with timestamps to see how much time is spent? Is the process suspicious in other ways?
>
> So yes.. it sounds like clone doesn't complete.. the question is why? Is it out of resources? Is something in the VM blocking longer than the heartbeat, or is the heartbeat more frequent than expected?
>
>> As you can see, there is a SIGALRM involved. It also looks like `gettimeofday` is used by the heartbeat? Could it be that somehow the heartbeat is interrupting the `clone()`?
>>
>> Guille also showed me the `strace` output of a regular / working image:
>>
>> [pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> [pid 18647] gettimeofday({1482152829, 481014}, NULL) = 0
>> [pid 18647] sigreturn() (mask []) = -1 EINTR (Interrupted system call)
>> [pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 1917}}) = 0
>> [pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
>> [pid 18647] select(4, [3], [], [3], {0, 1000}) = 0 (Timeout)
>> [pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 797}}) = 0
>> [pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
>> [pid 18647] select(4, [3], [], [3], {0, 1000}) = ? ERESTARTNOHAND (To be restarted if no handler)
>> [pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>>
>> Does anyone have any hint here?
> Get timestamps in there. How long does it take to fail/end in this situation?
>
> holger
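One possible mitigation on the spawning side (just an idea, not something OSSubprocess or the plugin does today) would be to keep SIGALRM blocked while posix_spawnp() runs, so the itimer heartbeat cannot keep restarting the clone(). A rough sketch, with error handling omitted; whether holding heartbeat ticks for the duration of the spawn is acceptable for the VM is exactly the open question, and in a multi-threaded VM pthread_sigmask on the calling thread would be the more precise call:

/* Sketch: block the heartbeat's SIGALRM around posix_spawnp() so clone()
 * cannot be interrupted and restarted by the 2 ms tick. Pending ticks are
 * delivered as soon as the old mask is restored; they are delayed, not lost. */
#include <signal.h>
#include <spawn.h>
#include <sys/types.h>

extern char **environ;

int spawnWithHeartbeatBlocked(pid_t *pid, const char *file, char *const argv[])
{
    sigset_t block, old;
    int result;

    sigemptyset(&block);
    sigaddset(&block, SIGALRM);
    sigprocmask(SIG_BLOCK, &block, &old);

    result = posix_spawnp(pid, file, NULL, NULL, argv, environ);

    sigprocmask(SIG_SETMASK, &old, NULL);   /* any pending SIGALRM fires here */
    return result;
}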
Ok, so following up a bit on this, I'll summarize some of our findings, some of them maybe obvious to some people on this list. There are two VM flavours with respect to the heartbeat [1]:

- threaded heartbeat
- itimer + signal heartbeat

In particular, I'd like to cite the following paragraph from [1] for the lazy:

"A distinction on linux is between VMs with an itimer heartbeat or a threaded heartbeat. VMs with an itimer heartbeat use setitimer to deliver a SIGALRM signal at regular intervals to interrupt the VM to check for events. These signals can be troublesome, interrupting foreign code that cannot cope with such signals. VMs with a threaded heartbeat use a high-priority thread that loops, blocking on nanosleep and then interrupting the VM, performing the same function as the itimer heartbeat but without using signals. These VMs are to be preferred but support for multiple thread priorities in user-level processes has only been available on linux in kernels later than 2.6.12."

So, I downloaded the threaded-heartbeat Squeak VM from Bintray [2]. This VM requires deploying some configuration files in /etc/security [3]. Under this configuration OSSubprocess worked like a charm (or at least I have not found the issue again so far).

Now, this threaded heartbeat VM is the one recommended in the README, and OSSubprocess shows exactly the issue stated there about itimer signals. The main problem remains for the moment, since Pharo's default download does not include this VM but the itimer one. I talked with Esteban about it and he was aware of these two VM flavours; the reason why we are using the itimer one is the need to deploy those permission files in /etc, which makes installation a bit less automatic.

Cheers,
Guille

[1] https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/e17db79411cfec767e04f3d94d12a642d920a30e/build.linux64x64/HowToBuild
[2] https://bintray.com/opensmalltalk/vm/download_file?file_path=cog_linux32x86_squeak.sista.spur_201612170124.tar.gz
[3] https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux
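To make the two flavours concrete: the threaded heartbeat described above is conceptually just a high-priority thread that sleeps and pokes the interpreter, with no signals involved. A sketch (not the VM's actual code; the 2 ms period and the priority value 2 are assumptions):

/* Sketch of a threaded heartbeat. The SCHED_FIFO priority is what requires
 * the rtprio permission; without it pthread_create with these attributes fails. */
#include <pthread.h>
#include <sched.h>
#include <time.h>

static volatile int checkForEvents = 0;

static void *heartbeatLoop(void *arg)
{
    struct timespec period = { 0, 2 * 1000 * 1000 };   /* 2 ms */
    (void)arg;
    for (;;) {
        nanosleep(&period, NULL);
        checkForEvents = 1;   /* interrupt the VM; no SIGALRM, no restarted syscalls */
    }
    return NULL;
}

static int startThreadedHeartbeat(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 2 };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    return pthread_create(&tid, &attr, heartbeatLoop, NULL);
}

The configuration files in /etc/security that Guille mentions are what allow a non-root user to create that realtime-priority thread; as far as I remember [3] describes a small limits fragment along these lines (double-check the exact file name and value against [3]):

*       hard    rtprio  2
*       soft    rtprio  2

It usually goes under /etc/security/limits.d/ and only takes effect at the next login.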
> On 21 Dec 2016, at 22:29, Guille Polito <[hidden email]> wrote:

Hi!

> Now, this threaded heartbeat VM is the one recommended in the README, and OSSubprocess shows exactly the issue stated there about itimer signals. The main problem remains for the moment, since Pharo's default download does not include this VM but the itimer one. I talked with Esteban about it and he was aware of these two VM flavours; the reason why we are using the itimer one is the need to deploy those permission files in /etc, which makes installation a bit less automatic.

It is not just a matter of configuration: more and more infrastructure is migrating into docker containers (I run the CI with docker containers), and even as root inside the container you might not be allowed to use realtime priority. And you might not be able to influence how your container is being executed.

holger

PS: Have you seen the fix David Lewis made to the UnixProcess plugin? Maybe it already helps with your bug as well?
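Depending on how a container is launched, the realtime limit can sometimes be granted from the outside, for example with Docker's --ulimit flag, assuming you control the invocation (which, as Holger says, you often don't). Something along these lines, where <your-image> and the launch command are placeholders:

docker run --ulimit rtprio=2 --cap-add=SYS_NICE <your-image> ./pharo Pharo.image

Whether the container runtime and its seccomp profile actually let the scheduling call through still has to be verified case by case.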
> Hi!
>
>> Now, this threaded heartbeat VM is the one recommended in the README, and OSSubprocess shows exactly the issue stated there about itimer signals. The main problem remains for the moment, since Pharo's default download does not include this VM but the itimer one. I talked with Esteban about it and he was aware of these two VM flavours; the reason why we are using the itimer one is the need to deploy those permission files in /etc, which makes installation a bit less automatic.
>
> It is not just a matter of configuration: more and more infrastructure is migrating into docker containers (I run the CI with docker containers), and even as root inside the container you might not be allowed to use realtime priority. And you might not be able to influence how your container is being executed.

Also on FreeBSD, a VM with a heartbeat thread can't be run in jails (secured chroot sandboxes) even as root, and on the main FreeBSD system the VM can't be run as a normal user, only as root :( I did not find a solution for how to allow real-time priority threads for the VM process on FreeBSD. I tested the Linux VM (pharo, oscog) with the Linux compatibility layer on FreeBSD + CentOS 7 userland (I am not able to compile a native VM under FreeBSD).

pf

> holger
>
> PS: Have you seen the fix David Lewis made to the UnixProcess plugin? Maybe it already helps with your bug as well?
> On 24 Feb 2017, at 13:56, Petr Fischer <[hidden email]> wrote:
>
> Also on FreeBSD, a VM with a heartbeat thread can't be run in jails (secured chroot sandboxes) even as root, and on the main FreeBSD system the VM can't be run as a normal user, only as root :( I did not find a solution for how to allow real-time priority threads for the VM process on FreeBSD. I tested the Linux VM (pharo, oscog) with the Linux compatibility layer on FreeBSD + CentOS 7 userland (I am not able to compile a native VM under FreeBSD).

What are the link failures for you? backtrace and iconv_open and later ft2build.h/freetype?

make LIBS+="-lexecinfo -lutil -lpthread -lm -liconv" LDFLAGS+=-L/usr/local/lib

and with some modifications to FT2Plugin/Makefile I get the rest to compile too.
In reply to this post by Holger Freyther
On Fri, Feb 24, 2017 at 11:57 AM, Holger Freyther <[hidden email]> wrote:
>
>> On 21 Dec 2016, at 22:29, Guille Polito <[hidden email]> wrote:
>
> Hi!
>
>> Now, this threaded heartbeat VM is the one recommended in the README, and OSSubprocess shows exactly the issue stated there about itimer signals. The main problem remains for the moment, since Pharo's default download does not include this VM but the itimer one. I talked with Esteban about it and he was aware of these two VM flavours; the reason why we are using the itimer one is the need to deploy those permission files in /etc, which makes installation a bit less automatic.
>
> It is not just a matter of configuration: more and more infrastructure is migrating into docker containers (I run the CI with docker containers), and even as root inside the container you might not be allowed to use realtime priority. And you might not be able to influence how your container is being executed.
>
> holger
>
> PS: Have you seen the fix David Lewis made to the UnixProcess plugin? Maybe it already helps with your bug as well?

If these two commits work out, the threaded VM may become a viable default option...
https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/32f321583c69ca27e61ffaff6decc2a3e4b6ca5e
https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/5418a415e9297f601f6d57ee732fd7fd942da08c

cheers -ben
I think this change restores a cleanup introduced by Eliot (nuking all those #defines and replacing them with exeName).

Esteban
In reply to this post by Ben Coman
> On Fri, Feb 24, 2017 at 11:57 AM, Holger Freyther <[hidden email]> wrote:
> >
> > Hi!
> >
> >> Now, this threaded heartbeat VM is the one recommended in the README, and OSSubprocess shows exactly the issue stated there about itimer signals. The main problem remains for the moment, since Pharo's default download does not include this VM but the itimer one. I talked with Esteban about it and he was aware of these two VM flavours; the reason why we are using the itimer one is the need to deploy those permission files in /etc, which makes installation a bit less automatic.
> >
> > It is not just a matter of configuration: more and more infrastructure is migrating into docker containers (I run the CI with docker containers), and even as root inside the container you might not be allowed to use realtime priority. And you might not be able to influence how your container is being executed.
> >
> > holger
> >
> > PS: Have you seen the fix David Lewis made to the UnixProcess plugin? Maybe it already helps with your bug as well?
>
> If these two commits work out, the threaded VM may become a viable default option...
> https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/32f321583c69ca27e61ffaff6decc2a3e4b6ca5e
> https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/5418a415e9297f601f6d57ee732fd7fd942da08c
>
> cheers -ben

Are these commits already available in the pharo-vm GitHub repository, or do I need to compile directly from opensmalltalk-vm? Is it really safe to run the VM with a normal-priority heartbeat?

Thanks! pf
On Fri, Mar 31, 2017 at 7:17 AM, Petr Fischer <[hidden email]> wrote:
I'm not clear on the process Esteban uses to keep pharo-vm up to date with opensmalltalk-vm, but I believe it's in. It's easy to test: just download the latest VM and run without the limits conf.
Of course user testing is required, but considering how Linux operates with dynamic thread priorities, I believe it is safe.

cheers -ben
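A quick way to check which flavour a given VM is, and whether its heartbeat thread actually got a realtime priority, is to list the VM's threads while an image is running (the column selection here is just a suggestion):

ps -L -o pid,tid,cls,rtprio,pri,comm -p <vm-pid>

A threaded-heartbeat VM shows an extra thread next to the main one, and the cls/rtprio columns tell you whether that thread is running under SCHED_FIFO (FF) or fell back to a normal priority; an itimer VM has no dedicated heartbeat thread at all.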
In reply to this post by Petr Fischer
On Thu, Mar 30, 2017 at 4:17 PM, Petr Fischer <[hidden email]> wrote:
I believe not. Do something like spawning a process waiting on a delay that will interrupt the active process after some time (say 1 second). Then have the active process enter a tight loop (e.g. a repeat loop that sends class to an object). It is very important that the loop /not/ cause a GC or a deep call chain, so that it doesn't incidentally poll for events and hence update the clock. Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
_,,,^..^,,,_
best, Eliot
In reply to this post by Ben Coman
There is a mirror that runs every hour… but I only publish builds that do not fail. Anyway, this will change again soon (™)… to also commit the generated sources (right now I commit packages and let the Travis CI generate the sources).

Esteban
In reply to this post by Eliot Miranda-2
On Thu, 30 Mar 2017, Eliot Miranda wrote:

> Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.

I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.

Levente
Hi Levente,

> On Mar 31, 2017, at 6:42 AM, Levente Uzonyi <[hidden email]> wrote:
>
>> On Thu, 30 Mar 2017, Eliot Miranda wrote:
>>
>> Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
>
> I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.

It did happen. In 2009 Andreas and I demonstrated it on Linux. Yes, it may be the case that certain Linux kernel versions fix the issue, but the only reliable way is to have the heartbeat thread run at a higher priority than the Smalltalk thread.

> Levente
Hi Eliot,

On Fri, 31 Mar 2017, Eliot Miranda wrote:

> Hi Levente,
>
>> On Mar 31, 2017, at 6:42 AM, Levente Uzonyi <[hidden email]> wrote:
>>
>>> On Thu, 30 Mar 2017, Eliot Miranda wrote:
>>>
>>> Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
>>
>> I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.
>
> It did happen. In 2009 Andreas and I demonstrated it on Linux. Yes, it may be the case that certain Linux kernel versions fix the issue, but the only reliable way is to have the heartbeat thread run at a higher priority than the Smalltalk thread.

Back in 2009 neither of those schedulers had been released. Of course, one can compile a kernel with a cooperative scheduler, but the average user is unlikely to ever use such a kernel.

Levente
In reply to this post by Levente Uzonyi
> On Thu, 30 Mar 2017, Eliot Miranda wrote:
>
>> Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
>
> I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.
>
> Levente

Is there some test case or code that I can run in Pharo to evaluate whether the kernel scheduler is working correctly (with the heartbeat thread at normal priority)? I need to test it under FreeBSD.

Thanks! pf
On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer <[hidden email]> wrote:
Just for starters, what result do you get for my multi-priority fibonacci stress test...
http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html

cheers -ben
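Besides the fibonacci stress test, a smaller standalone check along the same lines could be something like this (a sketch only; the 2 ms tick and the roughly 10 s duration are just values chosen to mimic the heartbeat): one thread spins at normal priority while another nanosleeps 2 ms per tick and records the worst gap between ticks.

/* Sketch: does a busy "image" thread starve a normal-priority "heartbeat"
 * thread? If the scheduler preempts equal-priority threads, the worst
 * tick-to-tick gap stays in the low milliseconds; if the heartbeat is being
 * starved, it grows towards seconds. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static volatile int running = 1;

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void *busy(void *arg)
{
    (void)arg;
    while (running) { }              /* like an image stuck in a tight loop */
    return NULL;
}

int main(void)
{
    pthread_t spinner;
    struct timespec tick = { 0, 2 * 1000 * 1000 };   /* 2 ms, like the heartbeat */
    uint64_t worst = 0, prev, t;

    pthread_create(&spinner, NULL, busy, NULL);
    prev = now_ns();
    for (int i = 0; i < 5000; i++) {                 /* roughly 10 s of ticks */
        nanosleep(&tick, NULL);
        t = now_ns();
        if (t - prev > worst)
            worst = t - prev;
        prev = t;
    }
    running = 0;
    pthread_join(spinner, NULL);
    printf("worst tick-to-tick gap: %.3f ms\n", worst / 1e6);
    return 0;
}

Build with cc -O2 -pthread and pin everything to one core (e.g. taskset -c 0 ./a.out) or start more spinners than cores, otherwise there is no contention and the test shows nothing. It should build on FreeBSD unchanged, which would give a direct answer for that platform too.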
On Sun, Apr 2, 2017 at 8:10 AM, Ben Coman <[hidden email]> wrote:
> On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer <[hidden email]> wrote:
>>
>>> On Thu, 30 Mar 2017, Eliot Miranda wrote:
>>>
>>>> Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
>>>
>>> I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.
>>>
>>> Levente
>>
>> Is there some test case or code that I can run in Pharo to evaluate whether the kernel scheduler is working correctly (with the heartbeat thread at normal priority)? I need to test it under FreeBSD.
>>
>> Thanks! pf
>
> Just for starters, what result do you get for my multi-priority fibonacci stress test...
> http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html
>
> cheers -ben

I got curious and read up on the FreeBSD scheduler. FreeBSD has the same constraint as Linux in that "Only the super-user may lower priorities."

Each CPU has (a KSeq) three arrays of run queues indexed by priority:
* The Current queue receives interactive, real-time and interrupt threads.
* The Next queue receives everything else except idle threads.
* When the Current queue is empty, the two queues swap.
* The third queue holds idle threads, and is only used when there are no other runnable threads.

ULE: A Modern Scheduler For FreeBSD

  A thread is assigned to a queue until it sleeps, or for the duration of a slice. The base priority, slice size, and interactivity score are recalculated each time a slice expires. The thread is assigned to the Current queue if it is interactive or to the Next queue otherwise. Inserting interactive tasks onto the Current queue and giving them a higher priority results in a very low latency response.

  In ULE the interactivity of a thread is determined using its voluntary sleep time and run time. The voluntary sleep time is recorded by counting the number of ticks that have passed between a sleep() and wakeup() or while sleeping on a condition variable. The run time is simply the number of ticks while the thread is running. The scheduler uses the interactivity score to determine whether or not a thread should be assigned to the Current queue when it becomes runnable.

  On x86, FreeBSD has a default HZ of 100, a minimum slice value of 10 ms and a maximum slice value of 140 ms. Interactive tasks receive the minimum slice value. This allows us to more quickly discover that an interactive task is no longer interactive.

The Design and Implementation of the FreeBSD Operating System

  The scheduling policy initially assigns a high execution priority to each thread and allows that thread to execute for a fixed time slice. Threads that execute for the duration of their slice have their priority lowered, whereas threads that give up the CPU (usually because they do I/O) are allowed to remain at their priority. Threads that are inactive have their priority raised.

  Some tasks, such as the compilation of a large application, may be done in many small steps in which each component is compiled in a separate process. No individual step runs long enough to have its priority degraded, so the compilation as a whole impacts the interactive programs. To detect and avoid this problem, the scheduling priority of a child process is propagated back to its parent. When a new child process is started, it begins running with its parent's current priority. As the program that coordinates the compilation (typically make) starts many compilation steps, its priority is dropped because of the CPU-intensive behavior of its children. Later compilation steps started by make begin running and stay at a lower priority, which allows higher-priority interactive programs to run in preference to them as desired.

  Resuming a thread ... If any threads are placed on the run queue and one of them has a scheduling priority higher than that of the currently executing thread, it will request that the CPU be rescheduled as soon as possible.

  Real-time and interrupt threads do preempt lower-priority threads. The kernel can be configured to preempt timeshare threads executing in the kernel with other higher-priority timeshare threads. This option is not used by default as the increase in context switches adds overhead and does not help make timeshare threads' response time more predictable.

And substituting defined constants...

PRIO_MIN               -20
PRIO_MAX                20
SCHED_INTERACT_THRESH   30
SCHED_INTERACT_HALF     50  = (SCHED_INTERACT_MAX / 2)
SCHED_INTERACT_MAX     100
PRI_MIN_TIMESHARE      120
PRI_MAX_TIMESHARE      223  = (PRI_MIN_IDLE - 1)
PRI_MIN_IDLE           224
SCHED_PRI_NRESV         40  = (PRIO_MAX - PRIO_MIN)
PRI_TIMESHARE_RANGE    104  = (PRI_MAX_TIMESHARE - PRI_MIN_TIMESHARE + 1)
PRI_INTERACT_RANGE      32  = ((PRI_TIMESHARE_RANGE - SCHED_PRI_NRESV) / 2)
PRI_MIN_INTERACT       120  = (PRI_MIN_TIMESHARE)
PRI_MAX_INTERACT       153  = (120 + PRI_INTERACT_RANGE - 1)
PRI_MIN_BATCH          152  = (PRI_MIN_TIMESHARE + PRI_INTERACT_RANGE)
PRI_MAX_BATCH          223  = (PRI_MAX_TIMESHARE)
SCHED_PRI_NHALF         20  = (SCHED_PRI_NRESV / 2)
SCHED_PRI_MIN          172  = (PRI_MIN_BATCH + SCHED_PRI_NHALF)
SCHED_PRI_MAX          203  = (PRI_MAX_BATCH - SCHED_PRI_NHALF)
SCHED_PRI_RANGE         30  = (SCHED_PRI_MAX - SCHED_PRI_MIN + 1)
sched_interact          30  = (SCHED_INTERACT_THRESH)

sched_interact_score()
    if (sleep/run) > 1, interact_score = 50 / (sleep/run)
    if (sleep/run) = 1, interact_score = 50
    if (sleep/run) < 1, interact_score = 50 * (2 - (sleep/run))

sched_priority()
    * If the score is interactive we place the thread in the realtime
    * queue with a priority that is less than kernel and interrupt
    * priorities. These threads are not subject to nice restrictions.
    *
    * Scores greater than this are placed on the normal timeshare queue
    * where the priority is partially decided by the most recent cpu
    * utilization and the rest is decided by nice value.
    *
    * The nice value of the process has a linear effect on the calculated
    * score. Negative nice values make it easier for a thread to be
    * considered interactive. Default nice is 0.
    score = sched_interact_score() + nice
    if (score < 30) priority = 120 + score*34/30   // = 120 + (153 - 120 + 1) / 30 * score
    else            priority = 201 + nice          // = 172 + 30 - 1 + nice

sched_add(struct thread *td, int flags)
    * Select the target thread queue and add a thread to it.
    * Request preemption or IPI a remote processor if required.
    * Recalculate the priority before we select the target cpu or run-queue.
    if (PRI_BASE(td->td_pri_class) == PRI_TIMESHARE) sched_priority(td);
    ...

So it seems that as long as "sleep/run > 2" the FreeBSD heartbeat thread will get an interactive priority bump.

cheers -ben
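To make the arithmetic concrete, here is the score computation summarized above as a tiny function (my own paraphrase of Ben's notes, not the real kernel code):

/* ULE interactivity score as summarized above (50 = SCHED_INTERACT_HALF). */
static int interactScore(double sleepTicks, double runTicks)
{
    double ratio = sleepTicks / runTicks;
    if (ratio > 1.0)
        return (int)(50.0 / ratio);          /* mostly sleeping */
    return (int)(50.0 * (2.0 - ratio));      /* mostly running */
}

A heartbeat thread sleeps about 2 ms per tick and runs for only microseconds, so sleep/run is large, the score lands near 0 (well under the threshold of 30), and the thread goes onto the interactive Current queue ahead of a busy image thread, whose score drifts towards 100.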
In reply to this post by Ben Coman
> On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer <[hidden email]> wrote:
>>
>>> On Thu, 30 Mar 2017, Eliot Miranda wrote:
>>>
>>>> Once the active process is in a tight loop the delay is effectively disabled because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
>>>
>>> I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.
>>>
>>> Levente
>>
>> Is there some test case or code that I can run in Pharo to evaluate whether the kernel scheduler is working correctly (with the heartbeat thread at normal priority)? I need to test it under FreeBSD.
>>
>> Thanks! pf
>
> Just for starters, what result do you get for my multi-priority fibonacci stress test...
> http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html
>
> cheers -ben

Output from your test C program (my laptop was about 30% loaded with other processes [sha256 summing + disk IO] during the test runs):

--------------------
Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz
No. of Cores: 4

N=1000000000 ; for NPROC in 1 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
29971 @ 19 ==> execution time 7:13689552
29973 @ 1  ==> execution time 7:329530217
29975 @ 0  ==> execution time 7:372957715

N=1000000000 ; for NPROC in 1 2 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
30354 @ 0  ==> execution time 8:503484639
30360 @ 0  ==> execution time 8:825420373
30350 @ 19 ==> execution time 10:419283219
30358 @ 1  ==> execution time 12:449180718
30352 @ 1  ==> execution time 12:607924228
30356 @ 19 ==> execution time 14:783309741

N=1000000000 ; for NPROC in 1 2 3 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31041 @ 0  ==> execution time 8:469494752
31045 @ 1  ==> execution time 12:300027654
31049 @ 19 ==> execution time 15:669175038
31047 @ 0  ==> execution time 16:625971781
31053 @ 0  ==> execution time 17:291503470
31039 @ 1  ==> execution time 17:532778854
31051 @ 1  ==> execution time 18:99766937
31037 @ 19 ==> execution time 20:579350446
31043 @ 19 ==> execution time 21:23873011

N=1000000000 ; for NPROC in 1 2 3 4 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31294 @ 0  ==> execution time 10:22735629
31288 @ 0  ==> execution time 17:412909550
31304 @ 1  ==> execution time 17:938611595
31292 @ 1  ==> execution time 17:950983991
31300 @ 0  ==> execution time 18:652930065
31298 @ 1  ==> execution time 21:613235002
31306 @ 0  ==> execution time 22:996656086
31286 @ 1  ==> execution time 23:865226663
31302 @ 19 ==> execution time 25:605616888
31296 @ 19 ==> execution time 25:825197854
31290 @ 19 ==> execution time 26:915729372
31284 @ 19 ==> execution time 26:995877168

N=1000000000 ; for NPROC in 1 2 3 4 5 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31636 @ 0  ==> execution time 18:804222157
31640 @ 1  ==> execution time 19:501420760
31658 @ 1  ==> execution time 20:657524560
31634 @ 1  ==> execution time 20:814068666
31648 @ 0  ==> execution time 24:936846447
31660 @ 0  ==> execution time 25:193719778
31654 @ 0  ==> execution time 25:510585576
31642 @ 0  ==> execution time 25:530504029
31652 @ 1  ==> execution time 25:975794455
31646 @ 1  ==> execution time 26:165684663
31632 @ 19 ==> execution time 28:938437090
31650 @ 19 ==> execution time 28:954028063
31638 @ 19 ==> execution time 33:65970826
31656 @ 19 ==> execution time 33:646365534
31644 @ 19 ==> execution time 33:838210078
--------------------

pf