Brian's Waste of Time

Sat, 12 Aug 2006

HTTP, Mongrel, and Pipelining

Mongrel is getting a lot of good (and deserved, in my opinion) attention lately as an app server for Ruby. One of the things that bothered me about it, for a good while, was this decision, explained in a comment in mongrel.rb:

  # A design decision was made to force the client to not pipeline requests.  HTTP/1.1
  # pipelining really kills the performance due to how it has to be handled and how
  # unclear the standard is.  To fix this the HttpResponse gives a "Connection: close"
  # header which forces the client to close right away.  The bonus for this is that it
  # gives a pretty nice speed boost to most clients since they can close their connection
  # immediately.

Interestingly, the whole HTTP status line and first couple of headers are a constant, frozen string -- short of patching Mongrel or doing your own TCP connection handling in your Handler, it *will* close the connection a la HTTP/1.0.
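As a quick illustration of why that matters (a hypothetical Ruby sketch, not Mongrel's actual source), a frozen response constant can't be edited in place by a handler:

```ruby
# Hypothetical sketch, not Mongrel's actual source: the fixed part of the
# response lives in a frozen constant, so a handler can't rewrite it in place.
STATUS_LINE = "HTTP/1.1 200 OK\r\nConnection: close\r\n".freeze

begin
  STATUS_LINE.sub!("close", "keep-alive")   # attempt an in-place edit
rescue => e
  # Raises because the string is frozen (FrozenError on modern Rubies,
  # a TypeError on the 1.8-era interpreters Mongrel ran on).
  puts "refused: #{e.class}"
end
```

So the only ways around it really are patching or bypassing Mongrel's response writing entirely.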

I know Zed is an awfully good programmer, so this decision really irked me. I recently asked why this was so, and the answer amounted to ~"because it fits the use case for which mongrel is intended, and makes life easier," which is valid. So, how does it fit this use case?

If you think of Mongrel as designed to run fairly big sites with one dynamic element and mostly static elements, then this decision works. Basically, you have Mongrel serve the dynamic page (possibly from Rails) and go ahead and close the connection, because you know the same server isn't going to receive a follow-up resource request immediately -- those are handled by servers optimized for that, or by a content distribution network. In this case the Connection: close on the initial request makes sense: the browser is going to open additional connections to a different host (or hosts, for a CDN or round-robined static setup), which will keep connections open and pipeline requests for resources.
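To make the connection reuse concrete, here is a minimal sketch (toy code, not Mongrel's) of an HTTP/1.1 server that honors persistent connections, with a client issuing two requests over one TCP connection -- exactly the reuse that Mongrel's Connection: close forgoes:

```ruby
require 'socket'

# Toy HTTP/1.1 server honoring keep-alive; port 0 picks a free local port.
server = TCPServer.new('127.0.0.1', 0)
port   = server.addr[1]

Thread.new do
  conn = server.accept
  2.times do
    while (line = conn.gets) && line != "\r\n"; end    # consume one header block
    body = "ok"
    conn.write "HTTP/1.1 200 OK\r\n"                  \
               "Content-Length: #{body.bytesize}\r\n" \
               "Connection: keep-alive\r\n\r\n#{body}"
  end
  conn.close
end

# Client side: one TCP connection, two requests.
statuses = []
sock = TCPSocket.new('127.0.0.1', port)
2.times do |i|
  sock.write "GET /#{i} HTTP/1.1\r\nHost: localhost\r\n\r\n"
  statuses << sock.gets.strip                          # status line
  length = 0
  while (line = sock.gets) && line != "\r\n"
    length = line.split(':', 2)[1].to_i if line =~ /\AContent-Length/i
  end
  sock.read(length)                                    # drain the body
end
sock.close
puts statuses.inspect
```

A server that sends Connection: close instead forces the client to pay the TCP setup cost again for every follow-up request -- which is fine if the follow-ups are going somewhere else anyway.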

Yahoo! is a good example of this. We see the initial response headers for the front page return the Connection: close header:

HTTP/1.x 200 OK
Date: Thu, 27 Jul 2006 16:53:56 GMT
P3P: policyref="", ...
Vary: User-Agent
Cache-Control: private
Set-Cookie: FPB=3r0o6jmqh12chrt4; expires=Thu, 01 ....
Set-Cookie: D=_ylh=X3oDMTFmdWZsNGY1BF9TAzI ...
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
Content-Encoding: gzip

but subsequent image loads are made against their CDN hosts, and do keep the connection alive:

HTTP/1.x 200 OK
Last-Modified: Thu, 11 May 2006 20:46:13 GMT
Accept-Ranges: bytes
Content-Length: 7857
Content-Type: image/png
Cache-Control: max-age=2345779
Date: Thu, 27 Jul 2006 16:53:56 GMT
Connection: keep-alive
Expires: Thu, 12 May 2016 20:29:52 GMT

HTTP/1.x 200 OK
Last-Modified: Wed, 27 Jul 2005 00:18:07 GMT
Etag: "29990d3-122-42e6d2bf"
Accept-Ranges: bytes
Content-Length: 290
Content-Type: image/gif
Cache-Control: max-age=979924
Date: Thu, 27 Jul 2006 16:53:56 GMT
Connection: keep-alive
Expires: Sat, 01 Aug 2015 00:46:23 GMT

And so on...

Mongrel is not designed to be a general HTTP server. However, put Apache 2.2 with the worker MPM and mod_proxy in front of it (making sure to strip out the Connection: close header) and you have a pretty decent setup for a high-load system. Just make sure static resources (including page caching) get served up by Apache, not Mongrel :-) This works best when Apache and Mongrel are on the same machine, to reduce the overhead of mod_proxy's connection establishment, but given a fast network, the local connect will be far from the bottleneck for dynamic pages (and Apache is serving the statics directly).
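For concreteness, here is a sketch of what that front end might look like -- the paths, the backend port (3000), and the rewrite-based dispatch are illustrative assumptions, not a tested configuration:

```apache
# Sketch only: assumes a single Mongrel on 127.0.0.1:3000 and a
# Rails-style public/ directory; adjust paths and modules for your build.
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/app/public

    # Serve anything that exists on disk (statics, page-cached .html)
    # straight from Apache; proxy everything else to Mongrel.
    RewriteEngine On
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
    RewriteRule ^/(.*)$ http://127.0.0.1:3000/$1 [P,QSA,L]
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```

The rewrite conditions keep the file-exists check on Apache's side, so Mongrel only ever sees requests that actually need dynamic handling.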

Anyway, nice stuff, all told.
