Seaside and Connection Reset by Peer problems

Seaside and Connection Reset by Peer problems

StormByte
I'm stress testing a Seaside Pharo image with the default configuration using
Apache's ab benchmark tool.

I have a ZnZincServerAdaptor listening on port 8080 (created via the Seaside
menu GUI).
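
For reference, the script equivalent of that GUI step is the usual Seaside
incantation:

  ZnZincServerAdaptor startOn: 8080.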

The problem is that I start to receive "connection reset by peer" errors when
the number of concurrent connections grows, which I think could be dangerous
on a loaded server.

These are the tests I've run (default configuration, Seaside welcome page):


( SUCCESSFUL )
ab -c 60 -n 5000 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        Zinc
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        4954 bytes

Concurrency Level:      60
Time taken for tests:   13.859 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      25620000 bytes
HTML transferred:       24770000 bytes
Requests per second:    360.78 [#/sec] (mean)
Time per request:       166.306 [ms] (mean)
Time per request:       2.772 [ms] (mean, across all concurrent requests)
Transfer rate:          1805.31 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   37 233.8      0    7010
Processing:     7  101 436.7     71   12857
Waiting:        7  101 436.7     71   12857
Total:          8  138 537.0     72   13858

Percentage of the requests served within a certain time (ms)
  50%     72
  66%     83
  75%     87
  80%     88
  90%    104
  95%    216
  98%   1087
  99%   1275
 100%  13858 (longest request)





( FAILING CONNECTIONS )
ab -c 600 -n 5000 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 3349 requests completed
stormbyte@zero ~ $ ab -c 600 -n 5000 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3 <$Revision: 1638069 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 2097 requests completed






I would appreciate any help or direction on configuring this better, or
anything else that would get me unstuck, as I am out of ideas (at first I
thought it was my code, but it is reproducible with "vanilla" Seaside).

Thanks.


Re: Seaside and Connection Reset by Peer problems

StormByte
David Carlos Manuelda wrote:

> [original message with the full ApacheBench output quoted; snipped]

I forgot to point out that the -c argument gives the number of concurrent
connections (in the tests above, -c 60 was successful, but -c 600 failed some
of the requests).


Re: Seaside and Connection Reset by Peer problems

sebastianconcept@gmail.co
To be safe, if you want to go beyond 10 or 15 concurrent connections, add additional Pharo image workers so you can scale your application horizontally. It also makes good use of the CPU.

There is a point at which every stack has to do this, so yes, I think you are testing the limits of one Pharo worker.

PS: when you use more than one, you have to design your app in a "more stateless" way and use sticky sessions.


On Feb 17, 2015, at 4:04 PM, David Carlos Manuelda <[hidden email]> wrote:

[original message and earlier follow-up quoted in full; snipped]



Re: Seaside and Connection Reset by Peer problems

StormByte
Sebastian Sastre wrote:

> [suggestion to add Pharo image workers and scale horizontally snipped]
>
Thanks for your response.

Yes, in previous tests I set up an array of 8 Pharo images behind nginx as a
load balancer with sticky sessions, and it handled ~1k concurrent requests
without problems, but it still failed beyond some point, which is why I
decided to run tests against a single image.

Isn't there any way to change this behavior, for example by setting a higher
timeout or something else, so those connections are not rejected so soon? In
my opinion, fewer than 300 requests per second on a single image is not such a
high load that it should start dropping connections.


Re: Seaside and Connection Reset by Peer problems

Sven Van Caekenberghe-2
David,

You are benchmarking 'session creation', not 'session use'; think about it. Correctly benchmarking Seaside is very hard to do because it is stateful by definition.

You need to increase #listenBacklogSize from its default of 32 if you want more concurrency.
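
For reference, a hedged sketch of what that change could look like. The exact
home of #listenBacklogSize differs between Zinc versions, so browse the
implementors in your image first; the class named below is an assumption, not
verified code:

  "Assumed location; check 'implementors of #listenBacklogSize' in your image."
  ZnNetworkingUtils class >> listenBacklogSize
      "Backlog handed to the listening server socket (the default has been 32).
       A larger value lets more pending connections queue up under load
       instead of being reset immediately."
      ^ 256

  "Afterwards, restart the adaptor so the server socket is re-created
   (e.g. from the Seaside control panel, or along these lines):"
  ZnZincServerAdaptor startOn: 8080.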

Normal performance for one Seaside image is 50-100 full dynamic req/s.

Performance for a pure Zinc HTTP server can be 10x as much, for example when serving a single byte over reused connections, but it will drop as the work and size per request increase.
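
As a point of comparison, a minimal pure-Zinc responder of that kind can be
set up with the standard Zinc one-liner below (port 1701 is arbitrary; adjust
as needed):

  (ZnServer startDefaultOn: 1701)
      onRequestRespond: [ :request |
          "Answer a single byte for every request, bypassing Seaside entirely."
          ZnResponse ok: (ZnEntity text: 'x') ].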

Load balancing is the answer, along with offloading static resource serving.

Sven

> On 17 Feb 2015, at 19:52, David Carlos Manuelda <[hidden email]> wrote:
>
> [previous message quoted in full; snipped]


Re: Seaside and Connection Reset by Peer problems

sebastianconcept@gmail.co

On Feb 17, 2015, at 4:52 PM, David Carlos Manuelda <[hidden email]> wrote:

> [...] it handled ~1k concurrent requests without problems, but it still
> failed beyond some point, which is why I decided to run tests against a
> single image.
>
> Isn't there any way to change this behavior, for example by setting a higher
> timeout or something else, so those connections are not rejected so soon? In
> my opinion, fewer than 300 requests per second on a single image is not such
> a high load that it should start dropping connections.

So now you know there is a number. Node.js, the Ruby VM, the JVM, even Google: they all have a number at which they have to scale horizontally.

The best question you can ask now is whether that number, for your concrete app, lets you scale horizontally cheaply enough to get safely to profitability (or not).

Pharo is getting better by the day, so you will have some “free rides” ahead to enjoy.


_______________________________________________
seaside mailing list
[hidden email]
http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside