Strange HTTP Performance Variations Between OSes

[Update 2013-12-19: I let latentsee.com lapse, but you can still install LatentSee on your own webserver]

In a recent talk at VelocityConf, John Rauser explained the effect of TCP Slow Start and Congestion Control on web performance. He pointed out that RFC 1122 states:

Recent work by Jacobson on Internet congestion and TCP retransmission stability has produced a transmission algorithm combining “slow start” with “congestion avoidance”. A TCP MUST implement this algorithm.
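
As a rough back-of-the-envelope sketch of what slow start implies for retrieval times, the model below counts the round trips needed to deliver a payload when the sender starts with a small congestion window and doubles it every round trip. The 1460-byte MSS and the initial window of 3 segments are assumptions for illustration only, not values taken from any of the servers tested here.

```python
# Rough slow-start model: count the round trips needed to deliver `size_bytes`
# if the sender starts with `initcwnd` segments and doubles its window every
# round trip (no loss, receiver window never the bottleneck).
# Assumed values for illustration only: MSS = 1460 bytes, initcwnd = 3 segments.

MSS = 1460        # bytes per TCP segment (typical Ethernet-sized MSS)
INITCWND = 3      # initial congestion window, in segments

def round_trips(size_bytes, initcwnd=INITCWND, mss=MSS):
    """Number of round trips to deliver size_bytes under idealised slow start."""
    segments_left = -(-size_bytes // mss)   # ceiling division
    cwnd, rtts = initcwnd, 0
    while segments_left > 0:
        segments_left -= cwnd
        rtts += 1
        cwnd *= 2                           # exponential growth during slow start
    return rtts

for kb in (1, 4, 8, 16, 32, 64, 128):
    print(f"{kb:>4} KB -> {round_trips(kb * 1024)} round trip(s)")
```

On a high-latency link this produces a step pattern: retrieval time jumps each time the file size outgrows another window, which is the sort of shape the "Theory" chart below is meant to capture.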

While examining the impact of these with my new HTTP performance testing tool (LatentSee), I noticed that the charts generated on my Mac and Windows machines didn’t seem to match the theory. Usually we would expect to receive 3 packets (< 4500 bytes) in the first round trip. Instead I am seeing up to 67KB on the Mac and around 35KB on Windows 7.
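
For a sense of scale, assuming a typical 1460-byte MSS (my assumption, not a measured value), 3 segments is about 4380 bytes, whereas 67KB or 35KB arriving before the first acknowledgement would imply an initial window of roughly 47 or 25 segments:

```python
# Convert the observed "first burst" sizes into an implied initial window,
# assuming a 1460-byte MSS (an assumption for illustration, not measured).

MSS = 1460

observations = (
    ("expected (3 segments)", 3 * MSS),
    ("Mac against Slicehost", 67 * 1024),
    ("Windows 7 against Slicehost", 35 * 1024),
)

for label, size in observations:
    segments = -(-size // MSS)   # ceiling division
    print(f"{label}: {size} bytes ≈ {segments} segments")
```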

Effect of file size on retrieval time from slicehost.latentsee.com

Original images: Theory, Ubuntu, Windows 7, Mac OS X

When I ran the same tests against a different hosting provider (Brightbox), Windows and Mac OS X behaved more like my Ubuntu box.

Effect of file size on retrieval time from brightbox.latentsee.com

Original images: Theory, Ubuntu, Windows 7, Mac OS X

I’m very curious about these differences. It takes the same time for my Mac to retrieve any file up to 67KB in size from Slicehost. Have they tuned their TCP stack differently? Why then does Ubuntu behave similarly against both Slicehost and Brightbox? Is everyone conforming to the RFCs?

I’d love to hear if people can reproduce these results.

You can experiment with LatentSee using your web browser.

You can also put latentsee.php on your Apache webserver to test your own servers (be sure to disable compression on the vhost).
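
If you want to reproduce the measurements without LatentSee, the sketch below does the same basic thing: repeatedly fetch responses of increasing size and time each one, then look for the step pattern. The base URL and its size parameter are hypothetical placeholders; point it at whatever sized test files or script your own server exposes, and keep compression off so the byte counts on the wire are real.

```python
# Minimal timing sketch in the spirit of LatentSee: fetch increasingly large
# responses and record the wall-clock time for each. The endpoint and its
# "size" query parameter are hypothetical -- substitute your own test URLs.
import time
import urllib.request

BASE_URL = "http://your-server.example/latentsee.php?size={}"  # hypothetical

def fetch_time(size_bytes, samples=5):
    """Median time (seconds) to fetch a response of roughly size_bytes."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(BASE_URL.format(size_bytes)) as resp:
            resp.read()
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

for kb in range(1, 101, 10):
    print(f"{kb:>3} KB: {fetch_time(kb * 1024) * 1000:.1f} ms")
```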