I think I figured out why Cog sometimes terminates when downloading large files on OS X: an oversleep which is interrupted can be returned as a timespec struct with tv_sec == -1 and tv_nsec close to 999999999.

The while check is

    nanosleep(&naptime, &naptime) == -1 && (naptime.tv_sec > 0 || naptime.tv_nsec > MINSLEEPNS)

or so, so when an interrupt is encountered which leaves tv_sec == -1, you get the invalid argument error.

Rewriting it to

    naptime.tv_sec > -1 && naptime.tv_nsec > MINSLEEPNS && nanosleep(&naptime, &naptime) == -1

I no longer had crashes in 99% of the cases when evaluating the following test:

    HTTPSocket httpGetDocument: 'http://www.squeaksource.com/Pharo/Morphic-stephane_ducasse.334.mcz'.

I've attached sqUnixHeartbeat.c.

Cheers,
Henry
Thanks Henrik!!

On Fri, Aug 13, 2010 at 7:38 AM, Henrik Johansen <[hidden email]> wrote:
Thank you! I can confirm that this solves the last remaining Seaside issue. I simulated some heavy parallel requests on a Cog-Seaside image and it doesn't crash anymore.

Lukas

On 13 August 2010 21:20, Eliot Miranda <[hidden email]> wrote:
> Thanks Henrik!!

--
Lukas Renggli
www.lukas-renggli.ch
On 16 August 2010 16:48, Lukas Renggli <[hidden email]> wrote:
> I simulated some heavy parallel requests on a Cog-Seaside image and it
> doesn't crash anymore.

Lukas, could you share details on how you simulated the heavy load? I know there are some tools on Linux to flood a server with HTTP requests, but I have never used them myself.

--
Best regards,
Igor Stasenko AKA sig.
> Lukas, could you share details on how you simulated the heavy load?

I used 'ab', the Apache HTTP server benchmarking tool. That works out of the box on any machine with Apache installed and additionally gives a nice performance report. In this case, however, I just wanted to generate load to crash the VM (which I didn't manage).

For this test I used the following command with a single session I had created beforehand in the web browser:

    ab -n 1000 -c 10 "http://127.0.0.1:8080/tests/functional/WAHtml5ElementsTest?_s=As2WXbb5pqm18xuy&_k=IbUsjDo1UIRR-SeI"

Another easy thing to do is to fork multiple instances of 'wget' and tell it to randomly browse through the links of a web application:

    wget --recursive --no-parent --delete-after "http://127.0.0.1:8081/examples/multicounter"

Each of the started processes then clicks through an individual session. It is important to disable the toolbar, otherwise 'wget' starts to mess around with your image (through the configuration).

Previously I've also used JMeter. This tool is much more controllable and can be scripted to click through an application in a predefined way, but it also means a lot more work to set up.

Lukas
--
Lukas Renggli
www.lukas-renggli.ch
Thanks a lot!

On 16 August 2010 19:22, Lukas Renggli <[hidden email]> wrote:
> I used 'ab', the Apache HTTP server benchmarking tool. [...]

--
Best regards,
Igor Stasenko AKA sig.