Shortly after my last entry, I switched from a CPU-bound WSGI app to something more I/O-heavy: serving a static 10K page. I then tried it out with the threadpool version of scgi-wsgi. The results surprised me: scgi-wsgi sustained well over 700 requests per second (nearly 800 at times), while ajp-wsgi only mustered 300 requests/sec.
Of course, as before, when I switched to the non-preforking scgi-wsgi, its throughput dropped to a little over 50 requests/sec. ajp-wsgi maintained its 300 requests/sec even while forking.
Given the poor non-threadpool performance of scgi-wsgi, I was spurred to write my own preforking server code (again), this time in C. The result can be found here. Unlike flup's preforking server, this one is based on descriptor passing. (And since I couldn't find my copy of UNIX Network Programming, I have to thank Google for having it browsable online.)
Hooking up the prefork server to scgi-wsgi, I now get performance similar to the threaded version: 700+ requests/sec. With that, scgi-wsgi graduated from 'limbo' to 'alpha.' And since mod_proxy_scgi is now included with vanilla Apache HTTPD, I will probably be transitioning my stuff from ajp-wsgi to scgi-wsgi. (Eat your own dog food and all that.)
As for the future, I would like to merge the two. I'm still entertaining the idea of hooking up my C WSGI code to an embedded HTTP server, similar to what PyCaduceus did (but remaining a top-level program rather than a Python module). I haven't really found an embeddable HTTP server with an interface that I like, though mongoose looks promising.
In the meantime, I'll go ahead and release ajp-wsgi 1.1 and scgi-wsgi 1.1 (eventually).