Justin Gehtland posted some loose macro benchmarks for an app he's built in both Rails and J2EE. The performance numbers were interesting, if only in that without caching the two performed about the same. Yawn, yawn -- perl folks have been saying this for years. When the heavy lifting is done in C (web server, database) and the app logic is in a scripting language (the, er, app-specific stuff), you get the best of both worlds.
Now, where it does get interesting is when he turns caching on. Enabling caching on the Rails version jumped requests per second from ~70 to ~1750. The same change on the J2EE side went from ~70 to ~80. Now, there are a ton of possible explanations for this, ranging from servlets being theoretically unable to scale as high as a select-based web server written in C, to Justin having set up caching better on the Rails side of things, to the caching mechanisms employed by the different stacks.
All of these probably play a part, though I think the theoretical-limit one is pretty bogus, so I'll discard it as rubbish (you can break the servlet spec in a transparent-to-the-programmer way and use NIO to get far better throughput). The other two, on the other hand, are very relevant.
Caching being better configured on the Rails implementation is easy to explain -- caching is dirt easy to configure in Rails. It's a one-liner in the controller. Cleaning up after the cache is also dirt easy: you attach a sweeper, which is basically a callback that picks up on when a cached element should be discarded. This does two things: 1) it lets you enable caching as a separate concern from the logic/view very, very easily, and 2) it lets you cache aggressively, since clearing out the cache is easy to configure. The equivalent in J2EE space is much trickier. Cocoon probably has the nicest cache setup in terms of making it easy to cache things, but it lacks the easy cache invalidation. The much more commonly used OSCache is really easy to use, but it litters your JSPs with caching markup, and invalidation, again, is much trickier. There are a bunch of other options, but I don't know of one as easy to use.
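To make the cache-plus-sweeper shape concrete, here's a rough plain-Ruby sketch of the mechanism. This is not the Rails API -- every class and method name here is made up for illustration -- it just shows why a callback-style sweeper keeps invalidation separate from the rendering logic:

```ruby
# Plain-Ruby sketch of the page-cache + sweeper idea.
# All names here are hypothetical, not the Rails API.

class PageCache
  def initialize
    @store = {}     # path => rendered body
    @sweepers = []  # callbacks fired on model changes
  end

  def fetch(path)
    @store[path] ||= yield  # render once, serve the cached copy after
  end

  def on_change(&sweeper)
    @sweepers << sweeper
  end

  def expire(path)
    @store.delete(path)
  end

  # Called when a "model" record changes; sweepers decide
  # which cached pages are now stale.
  def record_changed(record)
    @sweepers.each { |s| s.call(self, record) }
  end
end

cache = PageCache.new
# The sweeper: one callback, registered once, separate from view logic.
cache.on_change { |c, record| c.expire("/posts/#{record[:id]}") }

renders = 0
body = cache.fetch("/posts/1") { renders += 1; "post 1, v1" }
body = cache.fetch("/posts/1") { renders += 1; "post 1, v2" }  # cache hit, no render
cache.record_changed(id: 1)                                    # sweeper expires the page
body = cache.fetch("/posts/1") { renders += 1; "post 1, v2" }  # re-rendered
```

The point is that the controller/view code only ever calls `fetch`; everything about when the cache goes stale lives in the one registered sweeper.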
The other point is the caching mechanism itself. The Java-only route still generally has higher overhead when throwing cached data at the client than Apache or lighttpd shoving static content at the client -- which is basically what Rails page caching is. You can get this with the Java stack -- it is one of the reasons people give for putting Apache in front of the servlet container -- but it is much nastier to cache semi-dynamic content via mod_cache on a different server, communicating via headers, than it is to do it via the page and sweeper mechanism in Rails.
It reminds me, a great deal, of a conversation I had with a really bright guy who re-implemented (okay, actually pre-implemented, or co-implemented) something very popular in the open source world (written in C) in ocaml. After beating on it for a while he concluded that the basis for the whole design was broken, but he attributes being able to see why the whole design was broken to the expressiveness of the language, not to any abstract conceptual model. The C version is in widespread use, releases bug fix versions quite frequently, and a lot of people wonder if it will ever actually be stable.
Whether he was right or not, in this case, is a guessing game. He has a PhD in this stuff, I just use it. His arguments seemed quite valid, and the tangent of noticing design flaws because the language got out of his way (to paraphrase him into PragDave's description of Ruby) is what really stuck with me. The caching stuff Justin found is in the same vein. Getting the massive performance boost out of rails from caching is easier to do because it is easy to see what is being done. The framework is expressive. The same can very easily be said of ocaml compared to C. It is an argument that has come up again, and again, and again, (and again) in discussing the relative merits of Java, and C, and assembly languages, and Fortran, and Python, and Perl, and Ruby (oh my!). Language matters. Expressiveness matters.
I think most people would agree that algorithm selection matters more than implementation language for performance. That is what this is. The simple fact is that Justin could get a 20x speed improvement out of the Ruby-based implementation not because Ruby is faster (it definitely isn't), but because the language, the framework, and the stack make it easier to do what he wants to do. The theoretical limit of optimization is definitely in J2EE's favor, but if actually doing those optimizations is significantly more difficult (complexity, for the most part) -- does it matter that they can be done?
The other interesting thing here is that the boost Rails gets from caching is directly tied to it being closer to the metal. Rails page caching is basically "not caching": it generates static content and lets the web server spew it out. Java-based caching almost always sits at a much higher level of abstraction -- specifically to make it more flexible. OSCache, for example, lets you cache arbitrary fragments of JSPs and uses a servlet filter to fill in the blanks when the cached version is used. Rails doesn't do this. It has different levels of caching, but it trades away some flexibility in order to be easier to use and to take advantage of the services otherwise available from the web server (in the case of page caching, anyway). The tradeoffs are interesting, and are the nature of design.
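The "not caching" point can be sketched in a few lines of plain Ruby. This is a toy, not Rails internals (the paths and names are illustrative), but it shows the shape: the first request renders and writes a static file, and every request after that could be served by Apache or lighttpd without the app framework ever running:

```ruby
# Sketch of page caching as "generate static content and let the
# web server serve it". Paths and names are illustrative only.
require "fileutils"
require "tmpdir"

public_dir = Dir.mktmpdir  # stands in for the app's public/ directory

def serve(public_dir, path)
  static = File.join(public_dir, "#{path}.html")
  if File.exist?(static)
    [:static, File.read(static)]        # web server territory; no app code needed
  else
    body = "<h1>rendered #{path}</h1>"  # the expensive dynamic render
    FileUtils.mkdir_p(File.dirname(static))
    File.write(static, body)            # write the page out as a static file
    [:dynamic, body]
  end
end

first  = serve(public_dir, "posts/1")  # dynamic: renders and writes the file
second = serve(public_dir, "posts/1")  # static: just reads the file back
```

Once the file exists, the second branch is what a C web server does for a living -- which is where the ~1750 requests per second comes from.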
Another good place to see this kind of tradeoff is in SpringMVC. Most web frameworks build their abstractions around "physical" elements: actions, pages, requests, etc. A lot of the SpringMVC stuff builds abstractions around "behavioral" elements like wizards. You see this all over Spring, in fact, not just in SpringMVC. I think Rod et al. are closet FP junkies. These are underlying design assumptions and ideas. Sure, it has actions, and requests, and the standard structural abstractions. The interesting bits are where it goes beyond that and into the behavioral abstractions.
Okay, am rambling now, gonna shut up =)
Andris Birkmanis posted a reference on LtU to a paper by David Teller on resource recovery in the pi-calculus. I'd not really thought of dead-process collection in terms of garbage collection before, but it certainly makes sense once you think about it.
This is also something to think about given the growing interest in and importance of modal web applications (continuation-based, a la Wee, Seaside, RIFE, Cocoon Flow), which effectively model a session as a continuable process. I'm wandering away from David's paper, but how to handle the growing memory requirements of a bunch of continuations in the session gets hairy. Cocoon handles it, thinking in GC terms, via explicit "memory" management: continuations are stored in a tree structure and you are responsible for invalidating them yourself. I'm not sure there is an equivalent of dead-process detection for detecting dead back-button states.
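The tree-of-continuations idea with explicit invalidation can be sketched like so. This is my own toy model, not Cocoon's actual API: it stands in continuations with plain closures, but the explicit-management shape is the same -- invalidating a node kills its whole subtree, i.e. all the back-button states hanging off it:

```ruby
# Toy model of Cocoon-flow-style explicit continuation management.
# Continuations are modeled as plain closures; all names are made up.

class ContinuationTree
  Node = Struct.new(:id, :k, :children)

  def initialize
    @nodes = {}
  end

  # Store a continuation, optionally as a child of an existing one.
  def store(id, parent_id = nil, &k)
    node = Node.new(id, k, [])
    @nodes[id] = node
    @nodes[parent_id].children << node if parent_id
    node
  end

  def resume(id)
    node = @nodes[id] or raise "continuation #{id} expired"
    node.k.call
  end

  # Explicit "memory management": drop a node and everything under it.
  def invalidate(id)
    node = @nodes.delete(id) or return
    node.children.each { |c| invalidate(c.id) }
  end

  def live_count
    @nodes.size
  end
end

tree = ContinuationTree.new
tree.store("step1")          { "showing step 1" }
tree.store("step2", "step1") { "showing step 2" }
tree.store("step3", "step2") { "showing step 3" }

resumed = tree.resume("step2")  # back button still works here
tree.invalidate("step2")        # user committed; steps 2-3 are now dead
remaining = tree.live_count     # only step1 survives
```

The GC framing makes the missing piece obvious: here the programmer must call `invalidate` by hand, where a "dead back button detector" would be the collector doing it automatically.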
Ah well, food for thought after I've had some more coffee.