Hello,
recently we measured the throughput of network streams in various LAN
environments in the following scenario:
- client sends a request (a few bytes) to a server
- server sends data (between 100 KB and 10 MB) to the client. The
throughput of sending this data is measured.
This request/response cycle is repeated for a while. Both client and
server use a plain binary stream on a socket connection.
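For illustration, the client side of one cycle looks roughly like this
(host, port, request bytes and payload size are placeholders, and the
timing is simplified):

    | socket stream request size start millis |
    socket := SocketAccessor newTCPclientToHost: 'server' port: 4711.
    stream := socket readAppendStream.
    stream binary.
    request := #[1 2 3].                    "the few request bytes"
    stream nextPutAll: request; commit.     "send the request"
    size := 1000000.                        "expected payload size"
    start := Time millisecondClockValue.
    stream next: size.                      "read the bulk response"
    millis := Time millisecondClockValue - start.
    Transcript showCR: (size * 1000 / 1024 / millis) printString , ' KB/sec'

(Real code would loop over many cycles, reuse the connection and guard
against millis = 0; this is just the shape of the measurement.)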
In some OS/hardware setups, the measured throughput was not even close
to the available capacity, although one would expect large bulk
transfers with little protocol overhead to come near it.
The "winner" managed to transfer about 70 KB/sec. In the same setup,
Iperf reported 10 MB/sec.
After some fruitless socket option tweaks (usually counter-productive
due to the auto-tuning features of modern OSes), we experimented with
modifying the buffer size of the stream.
VW uses a hardcoded buffer size (20480 bytes) for socket streams, so we
added some overrides to be able to specify it ourselves, and measured
again...
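The override boils down to something like this (the selector and the
bufferSize: hook are our additions, not stock VW API, and where exactly
the 20480 is hardcoded differs between versions, so treat this as a
sketch):

    SocketAccessor>>readAppendStreamWithBufferSize: anInteger
        "Answer a binary stream on the receiver using a buffer of
         anInteger bytes instead of the hardcoded 20480. bufferSize:
         stands in for wherever your image sets up the buffer."
        | stream |
        stream := self readAppendStream.
        stream binary.
        stream bufferSize: anInteger.
        ^stream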
Some highlights:
- the setup with the original throughput of 70 KB/sec finally reached 5
MB/sec when using a buffer size of 4096 bytes
- an experimental setup with a virtualized server running Windows 2000
(yeah, not your typical server OS) reached good performance after
switching to a buffer size of 20440 bytes (40 bytes less than the
default!). 20450 was already bad: throughput dropped from 15 MB/sec to
100 KB/sec!
- on the other hand, Linux-based servers seem to prefer values >=
20480. The penalty for smaller buffer sizes was much lower there,
though, e.g. dropping from 10 MB/sec to 5 MB/sec
Conclusion:
- Measure, measure, measure
- the default buffer size of 20480 bytes is often unsuitable, sometimes
even disastrous. Currently I would suggest 4096 bytes for Windows-based
environments
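With the override above, a Windows client would then be set up e.g. as

    stream := socket readAppendStreamWithBufferSize: 4096.

(using the selector introduced earlier, not a stock VW one).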