Workaround to browser maximum connection limit ?

Mariano Martinez Peck
Hi guys,

In my app I have one scenario where we render huge reports. These reports can have, say, 20-30 large tables as well as quite a few charts and other report elements.

Previously we used a single AJAX request to generate the whole report HTML, but that was a pain because the client machine would see a really large TTFB (time to first byte). So I was wasting CPU and network on the client machine while waiting.

What we do now is that each report element renders a title plus a spinning wheel and fires an AJAX request which, on success, does a #replaceWith: with the real contents. The idea is to show the report skeleton as soon as possible and start replacing spinning wheels with real content as soon as it is ready. That way I maximize CPU and network use on the client side.
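As an aside, Seaside's #replaceWith: emits jQuery's replaceWith() on the client, so in plain JavaScript the placeholder pattern looks roughly like this (a sketch only; the data-report-element attribute and the /report/element/ endpoint are invented names, not Seaside API):

```javascript
// Swap each spinner placeholder for its server-rendered HTML fragment as
// soon as that fragment arrives. Attribute and endpoint names are
// hypothetical; a real Seaside app generates this via its jQuery bindings.
function loadReportElements(doc) {
  doc.querySelectorAll("[data-report-element]").forEach(function (placeholder) {
    fetch("/report/element/" + placeholder.dataset.reportElement)
      .then(function (response) { return response.text(); })
      .then(function (html) {
        var wrapper = doc.createElement("div");
        wrapper.innerHTML = html;          // parse the returned fragment
        placeholder.replaceWith(wrapper);  // drop the spinner
      });
  });
}
```

Each fetch is an independent request, which is exactly why the per-host connection limit described below starts to matter.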

The second benefit is that, by splitting this into many AJAX calls, the requests end up on different Gems in my GemStone, which performed very well. I have 10 Seaside gems on an 8-core CPU, so all those AJAX requests were load balanced via nginx over the 10 Seaside gems, which in turn were spread across all the cores. Previously, with a single request, only one Gem handled it and hence only one CPU core was used.

This change was nice and improved performance. However, when I analyze the requests, I see that many of them are "Stalled". And indeed, on Chrome they all stall once there are more than 6 requests to the same origin.

To conclude, it looks like I am doing what is called "Loading page content with many Ajax requests" [1]. But I still haven't found an easy workaround. I would like to be able to use my 10 Gems over the 8 CPU cores...

Any idea?

Thanks in advance,

[1]
http://sgdev-blog.blogspot.com.ar/2014/01/maximum-concurrent-connection-to-same.html

--
Mariano
http://marianopeck.wordpress.com



_______________________________________________
seaside mailing list
[hidden email]
http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/seaside

Re: Workaround to browser maximum connection limit ?

Paul DeBruicker
Are you using HTTP/2 in your nginx config? https://en.wikipedia.org/wiki/HTTP/2  It multiplexes requests over a single connection, so the browser's per-host connection limit has less of an effect, if any.



You could also maybe adapt the lazy-loading technique that is sometimes used for images. If a report element isn't on screen and hasn't been scrolled to, do you really need it in the DOM?
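That lazy-loading idea can be sketched with an IntersectionObserver, requesting each element's HTML only when its placeholder scrolls into view (again a sketch: the data-src attribute holding each element's URL is an invented convention):

```javascript
// Fetch a report element only once its placeholder becomes visible, so
// off-screen tables never hit the server (or the DOM) at all. The
// data-src attribute is a hypothetical convention, not Seaside API.
function lazyLoadReports(doc) {
  var observer = new IntersectionObserver(function (entries, obs) {
    entries.forEach(function (entry) {
      if (!entry.isIntersecting) return;
      obs.unobserve(entry.target);  // load each element at most once
      fetch(entry.target.dataset.src)
        .then(function (response) { return response.text(); })
        .then(function (html) { entry.target.outerHTML = html; });
    });
  });
  doc.querySelectorAll("[data-src]").forEach(function (el) {
    observer.observe(el);
  });
}
```

This also naturally caps the number of in-flight requests to whatever fits in the viewport.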







Re: Workaround to browser maximum connection limit ?

Mariano Martinez Peck


On Mon, Mar 20, 2017 at 10:11 PM, Paul DeBruicker <[hidden email]> wrote:
> Are you using HTTP/2 in your nginx config?
> https://en.wikipedia.org/wiki/HTTP/2  It multiplexes requests over a
> single connection, so the browser's per-host connection limit has less
> of an effect, if any.


Yes, this is a very interesting idea. Do you have such a setup working with Seaside and GemStone?
I am now building a newer version of nginx with ALPN support and a suitable OpenSSL version on CentOS 7.
But I am not sure what else must be done to close the loop and make sure HTTP/2 is used everywhere.
Do I need to do something on the Seaside adaptors?

 



Re: Workaround to browser maximum connection limit ?

Paul DeBruicker
As far as I know there isn't HTTP/2 support available in any of the Smalltalk HTTP servers, but since each gem can only process one request at a time, it shouldn't matter: do HTTP/2 to your nginx, then split those requests among your gems round-robin. Or, I think, if you pay for nginx you can choose which backend to send a request to based on how busy it is.







Re: Workaround to browser maximum connection limit ?

Mariano Martinez Peck


On Tue, Mar 21, 2017 at 11:57 AM, Paul DeBruicker <[hidden email]> wrote:
> As far as I know there isn't HTTP/2 support available in any of the
> Smalltalk HTTP servers, but since each gem can only process one request
> at a time, it shouldn't matter: do HTTP/2 to your nginx, then split
> those requests among your gems round-robin.


Yeah, I did that and it worked! Not sure if it helped this particular case, but using HTTP/2 is worthwhile nonetheless.
For the record, I followed this guide for CentOS 7:

> Or, I think, if you pay for nginx you can choose which backend to send
> a request to based on how busy it is.

Yes, I am already using that strategy (see least_conn below):


upstream xxx {
    least_conn;
    server localhost:40210;
    server localhost:40211;
    server localhost:40212;
    server localhost:40213;
    server localhost:40214;
    server localhost:40215;
    server localhost:40216;
    server localhost:40217;
    server localhost:40218;
    server localhost:40219;
    server localhost:40220;
}
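For completeness, a sketch of the server block that would terminate HTTP/2 in front of that upstream (certificate paths and server_name are placeholders; nginx >= 1.9.5 built against an ALPN-capable OpenSSL is assumed, and nginx keeps talking plain HTTP/1.1 to the gems):

```nginx
server {
    listen 443 ssl http2;     # HTTP/2 is negotiated via ALPN over TLS
    server_name example.com;                         # placeholder

    ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # The browser's single multiplexed HTTP/2 connection is fanned
        # out here over the gem ports listed in the upstream above.
        proxy_pass http://xxx;
        proxy_http_version 1.1;
    }
}
```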

 
 




