I have a few images connecting regularly to a server using ZnEasy get: on
Windows 64-bit, with a recent VM and Pharo 7 image. At a certain point, the
server stops with 'CreateThread() failed (1450) - Insufficient system
resources exist to complete the requested service'.

I probably should switch to keeping connections open and avoid this problem,
but it made me wonder about the limitations I should expect here. What are
the limits here and can I change them?

Stephan
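The "keeping connections open" approach mentioned above can be sketched with a single reusable ZnClient instead of ZnEasy get:, which opens and closes a fresh connection for every request. This is only a sketch; the host, port, and paths are examples, not from the thread.

```smalltalk
"A reusable ZnClient keeps its connection to the same server open
between requests (HTTP/1.1 keep-alive), avoiding the connect/close
cycle that ZnEasy class>>#get: performs on every call."

| client result1 result2 |
client := ZnClient new.
[
	"each #get: against the same host reuses the open connection"
	result1 := client get: 'http://localhost:1701/status'.
	result2 := client get: 'http://localhost:1701/next-test' ]
	ensure: [ client close ]
```

Closing the client in an #ensure: block makes sure the socket is released even when a request signals an error.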
Hi Stephan,
> On 18 Jan 2019, at 14:03, Stephan Eggermont via Pharo-users <[hidden email]> wrote:
>
> I have a few images connecting regularly to a server using ZnEasy get: on
> Windows 64-bit, with a recent VM and Pharo 7 image. At a certain point, the
> server stops with 'CreateThread() failed (1450) - Insufficient system
> resources exist to complete the requested service'.
>
> I probably should switch to keeping connections open and avoid this
> problem, but it made me wonder about the limitations I should expect here.
> What are the limits here and can I change them?
>
> Stephan

I am not sure I understand correctly: does the error occur on the client or
the server side (assuming both are written in Pharo using Zinc HTTP
Components)? Because 'CreateThread() failed (1450) - Insufficient system
resources exist to complete the requested service' does not sound like a
Pharo error.

In any case, ZnEasy class>>#get: uses #beOneShot, so the client-side
connection is closed after one request; it also sends the Connection: close
header so that the server knows it does not have to keep the connection
open.

Server side, Zn should also clean up nicely (even for dirty requests). But
it also depends on what you do inside your server-side request handler.
Maybe something prevents normal GC from cleaning up?

Although Pharo processes are like green threads (not OS threads), you can
technically use too many. Typically you will run out of sockets for IO.
Unless you have a real leak, it might be a GC configuration problem: a
little-used server might not invoke enough real/full GC cycles.

In any case, I have many images with long-running HTTP servers and clients,
and I don't have any issues. How many requests are we talking about, how
many concurrent users, how long before things go wrong?

Sven
Sven Van Caekenberghe <[hidden email]> wrote:
> I am not sure I understand correctly: does the error occur on the client
> or the server side (assuming both are written in Pharo using Zinc HTTP
> Components)?

First the Pharo server side, and afterwards of course the clients complain
that they cannot connect to the server. Everything is Pharo, and running
perhaps a bit too close to the machine's capacity, at least memory-wise.

> Because 'CreateThread() failed (1450) - Insufficient system resources
> exist to complete the requested service' does not sound like a Pharo
> error.

It's a VM error, I suppose; it shows up when opening the output console.

> In any case, ZnEasy class>>#get: uses #beOneShot, so the client-side
> connection is closed after one request; it also sends the Connection:
> close header so that the server knows it does not have to keep the
> connection open.

OK. Do I understand correctly that the server does not forcibly release the
connection and thread after a timeout, but continues waiting?

> Server side, Zn should also clean up nicely (even for dirty requests).
> But it also depends on what you do inside your server-side request
> handler. Maybe something prevents normal GC from cleaning up?

That part is lightweight: telling the clients which tests to run and
collecting the results.

> Although Pharo processes are like green threads (not OS threads), you can
> technically use too many. Typically you will run out of sockets for IO.
> Unless you have a real leak, it might be a GC configuration problem: a
> little-used server might not invoke enough real/full GC cycles.

So when running out of sockets, no GC is triggered?

> In any case, I have many images with long-running HTTP servers and
> clients, and I don't have any issues. How many requests are we talking
> about, how many concurrent users, how long before things go wrong?

I haven't instrumented it yet, but it happens after several hours, and the
machine is pushed to its limits.

Stephan
Ah, found another consumer of sockets :). For reliable systems, putting
limits on the number of retries seems a useful idea.

Thanks Sven,

Stephan
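ZnClient has built-in support for bounding retries, which fits the "limits on the number of retries" idea above. A sketch — the URL and the specific counts are examples:

```smalltalk
"Retry a failing request a bounded number of times, then give up,
instead of retrying forever and slowly consuming sockets."

| client |
client := ZnClient new.
[
	client
		numberOfRetries: 3;
		retryDelay: 2. "seconds between attempts"
	client get: 'http://localhost:1701/status' ]
	ensure: [ client close ]
```

After the configured number of attempts the client signals an error instead of retrying, so a stuck server costs at most a fixed number of connection attempts per request.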