Evaluating proxy engines and load balancers for Mongrel-driven Ruby on Rails applications: an introduction and an open call

Zed Shaw’s Mongrel “is a fast HTTP library and server for Ruby that is intended for hosting Ruby web applications of any kind using plain HTTP rather than FastCGI or SCGI.”

And saying that it’s “fast” is no exaggeration. The performance you get from a single Mongrel process listening on a port is quite good. You can see how such a benchmark relates to your network traffic in an older post of mine.

For example, here’s a Sun Fire X4100 with dual Opteron 285s (one of the standard container servers; the 285 is a dual-core Opteron) running Solaris with 16 GB of RAM:

$ uname -a
SunOS 69-12-222-41 5.11 snv_45 i86pc i386 i86pc
$ prtconf
System Configuration:  Sun Microsystems  i86pc
Memory size: 16256 Megabytes
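
The Mongrel itself is just a single process started the usual way against the app; nothing is tuned here, and the port and environment below are simply what this benchmark happens to use:

$ mongrel_rails start -e production -p 8000 -d

(The -d flag daemonizes it; drop it if you want the log output in your terminal.)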

A simple “Hello World” Rails app will serve 250 req/sec just fine over a gigabit network (I wasn’t trying to push it, and involving a database isn’t the point yet).
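
The action behind /hello is as trivial as it sounds: a controller that renders a literal string and touches nothing else. Roughly (the names here are just my sketch, nothing special about them):

# app/controllers/hello_controller.rb
class HelloController < ApplicationController
  def index
    # No templates, no models, no database -- just a string back to the client.
    render :text => 'Hello World'
  end
end

With the default routes, GET /hello lands on HelloController#index. The httperf run against it: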

[benchmark-client1:/] root# httperf --hog --server 69.12.222.41 --uri /hello --port 8000 --num-conn 10000 --rate 250 --timeout 5
httperf --hog --timeout=5 --client=0/1 --server=69.12.222.41 --port=8000 --uri=/hello --rate=250 --send-buffer=4096 --recv-buffer=16384 --num-conns=10000 --num-calls=1

Total: connections 10000 requests 10000 replies 10000 test-duration 40.041 s

Connection rate: 249.7 conn/s (4.0 ms/conn, <=26 concurrent connections)
Connection time [ms]: min 3.4 avg 20.7 max 114.2 median 14.5 stddev 18.0
Connection time [ms]: connect 0.7
Connection length [replies/conn]: 1.000

Request rate: 249.7 req/s (4.0 ms/req)
Request size [B]: 68.0

Reply rate [replies/s]: min 247.2 avg 249.7 max 250.4 stddev 1.0 (8 samples)
Reply time [ms]: response 19.9 transfer 0.1
Reply size [B]: header 251.0 content 21.0 footer 0.0 (total 272.0)
Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0
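
That’s a single Mongrel keeping pace with the offered rate, not its ceiling. To find where it actually tops out, you’d step --rate (and --num-conn, so the run stays long enough to be meaningful) upward and watch for the point where the reply rate stops tracking the request rate, e.g. (these particular numbers are just an illustrative next step, not results):

[benchmark-client1:/] root# httperf --hog --server 69.12.222.41 --uri /hello --port 8000 --num-conn 20000 --rate 500 --timeout 5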



Post written by Jason