Swazoo web server optimizations

Janko Mivšek
I'm forwarding my Seaside post about the Swazoo optimization I made yesterday,
which allows you to serve large static files efficiently and in parallel
with other requests. I also just measured maximum throughput and achieved
100MB/sec! That was on VW on the local machine, to exclude network speed limitations.


-------- Original Message --------
Subject: Re: [Seaside] Swazoo web server
Date: Sun, 22 Jul 2007 20:20:59 +0200
From: Janko Mivšek <janko.mivsek na eranova.si>

>> Can Swazoo handle 20MB downloads while continuing to serve
>> my Seaside application?
>
> Currently not very efficiently. I just tried, and it managed to download
> a 20MB file at 56KB/sec (on a 100Mb optical line), but the CPU was at 100%
> and there was excessive garbage generation. A parallel wget was blocked
> during that time. That wget otherwise fetched an 11KB HTML file in about
> 120ms on average (around 100KB/sec) on a 4Mb DSL line.

Well, I have just made that optimization, and now Swazoo can serve large
files while serving other requests too. The problem was a badly implemented
flush to the TCP socket, which copied the whole file every time it sent a
small chunk to the socket. This wasn't noticeable with small responses but
became obvious with large files.
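
The post doesn't include the code, but the effect it describes is easy to
sketch. The workspace snippet below is illustrative only (not Swazoo's actual
classes or method names): it compares a flush that resends the whole
accumulated buffer on every call with one that remembers how much has already
been pushed to the socket. For 100 chunks of 4KB the first variant moves
roughly 20MB of data to deliver a 400KB response.

| buffer chunk flushedUpTo bytesPushedBad bytesPushedGood |
buffer := WriteStream on: ByteArray new.
chunk := ByteArray new: 4096.
flushedUpTo := 0.
bytesPushedBad := 0.
bytesPushedGood := 0.
100 timesRepeat: [
    buffer nextPutAll: chunk.
    "old flush: resend everything accumulated so far"
    bytesPushedBad := bytesPushedBad + buffer position.
    "fixed flush: resend only what was written since the last flush"
    bytesPushedGood := bytesPushedGood + (buffer position - flushedUpTo).
    flushedUpTo := buffer position].
Array with: bytesPushedBad with: bytesPushedGood
"=> #(20684800 409600): the old scheme pushes about 20MB to deliver 400KB"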

Now a 20MB file is served at about 4MB/sec on the 100Mb line, while 11KB
requests are only slightly slower: about 160ms instead of 120ms on the 4Mb line.

But it still takes about 10s to read that file into memory and copy it there
several times, and during that time the server is blocked. Streaming will
solve that problem too.
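
For context, output streaming of the kind described here typically amounts to
copying from the file stream to the socket stream in fixed-size chunks, so
only one chunk is held in memory at a time. Below is a minimal,
dialect-neutral sketch; in-memory streams stand in for the file and the
socket, and none of the names are Swazoo's API.

| data source destination chunkSize chunk remaining |
data := ByteArray new: 100000.                   "stands in for the large file"
source := ReadStream on: data.
destination := WriteStream on: ByteArray new.    "stands in for the TCP socket stream"
chunkSize := 16384.
remaining := data size.
[remaining > 0] whileTrue: [
    chunk := source next: (chunkSize min: remaining).   "read one chunk"
    destination nextPutAll: chunk.                       "push it out immediately"
    remaining := remaining - chunk size].
destination contents size
"=> 100000: the whole payload is transferred without ever buffering it twice"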

You can try it yourself: download
http://esug2003.esug.org/vrh-oktober.tar.gz while requesting the first page at
http://esug2003.esug.org.

This website is served by Aida/Web's static web serving, but I think the
results would be the same with Swazoo's own.

This optimization will already be included in the forthcoming 2.0 beta2.

Best regards
Janko


> The problem seems to be in the current implementation of response sending,
> which generates a lot of small, short-lived arrays to send segments of data
> to the TCP socket. Not to mention that the whole file is read into memory
> and copied there several times!
>
> I'm currently working on streaming support and have just finished input
> streaming (for file uploads). The idea is to stream files directly from/to
> TCP sockets, without intermediate buffering in memory. After that, and after
> careful GC-avoidance optimizations, I'm confident that Swazoo will be able
> to serve such large files in parallel with other requests.
>
> Input streaming will be released in a few days as Swazoo 2.0 beta2, and I
> expect to finish output streaming soon after that.
>
> Best regards
> Janko

--
Janko Mivšek
AIDA/Web
Smalltalk Web Application Server
http://www.aidaweb.si