apocryph.org Notes to my future self


A more in-depth analysis of Ruby HTTP client performance

As a follow-up to my previous article on Ruby HTTP client performance, I've revamped my test rig and revised my tests to cover more variables, more implementations, and more Ruby versions.

This time I compare Ruby 1.8.6, 1.8.7, and 1.9.0, exclusively on CentOS 5 Linux. I compare the stock Net::HTTP, rfuzz, libcurl, eventmachine, right_http_connection, and a number of Net::HTTP variations with slight performance tweaks. I evaluate clock time, CPU time, and CPU time over clock time, for five sites with varying network characteristics. As before, my test rig and results are available in my SVN repository for further experimentation.

For the conclusion and pretty pictures skip to the bottom. For the gory details read on.

Test Methodology

I have a simple HTTP client task, downloading a 10MB zip file from each of five data centers around the world (Seattle, Dallas, Chicago, Washington DC, and London). I’ve implemented that task using each of the HTTP client implementations I’m testing. I then use each implementation to download the file from each of the data centers using my CentOS 5 VPS box located in Future Hosting’s Dallas data center. I keep track of how much wall clock and CPU time is consumed by each implementation with the help of the Ruby benchmark library, and log that information to a CSV file.
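The measurement step can be sketched like this (a minimal sketch, not my exact rig code — the method name and the CSV column layout here are illustrative):

```ruby
require 'benchmark'
require 'csv'

# Time one download with the benchmark library and append the result to
# a CSV log. Benchmark::Tms#real is wall clock seconds; #total is
# user + system CPU seconds.
def record_run(impl_name, site_name, csv_path)
  times = Benchmark.measure do
    yield  # perform the actual download
  end
  CSV.open(csv_path, "a") do |csv|
    csv << [impl_name, site_name, times.real, times.total]
  end
end
```

Each implementation/site pair gets one call like `record_run("stock_net_http", "dallas", "results.csv") { download(...) }`.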

I repeat this for ruby 1.8.6, 1.8.7, and 1.9.0. I then aggregate the results and generate pretty graphs.

Test implementations

I have tested the following HTTP client implementations:

  • stock_net_http – The Net::HTTP library that ships with Ruby. In 1.8.7 and beyond this library benefits from a larger 16K read buffer, but is otherwise unchanged between versions
  • net_http_notimeout – A subclass of Net::HTTP that overrides the timeout logic to eliminate the timeout feature, which is the cause of some shitty performance. This implementation also forces a 16K buffer size even under Ruby 1.8.6
  • net_http_select – A subclass of Net::HTTP that uses select() to implement timeouts instead of the rather inefficient stock timeout implementation. This, like all my custom HTTP impls, forces a 16K buffer size.
  • net_http_zerocopy – Another Net::HTTP subclass that has a modified read loop which uses the same pre-allocated String buffer for each read. This implementation also uses select() for timeout, and a hard-coded 16K buffer size.
  • net_http_zerocopy_sysread – A variation of net_http_zerocopy that uses readpartial with no timeout for socket reads, along with the existing preallocated buffer optimizations of its parent.
  • rfuzz – Uses a slightly modified version of the lightweight HTTP client in the rfuzz library. Neither the rfuzz base implementation nor this tweaked one implements timeouts. The rfuzz gem is required; without it this implementation is skipped.
  • right_http_connection – Uses the right_http_connection HTTP client implementation from the rightaws library. Annoyingly, right_http_connection works by monkey-patching Net::HTTP, which is why I had to modify my test rig to run each implementation test in a new instance of the Ruby interpreter. Bad form.
  • eventmachine – Uses the EventMachine HTTP client unmodified
  • libcurl – Uses the Ruby bindings for the curl native HTTP library

Unfortunately, I could not get rev or revactor working on any of my Ruby versions, so I was unable to evaluate those implementations.
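To make the Net::HTTP tweaks concrete, here's a minimal sketch of the two ideas behind the net_http_select and net_http_zerocopy variants: waiting for readability with IO.select instead of wrapping every read in the timeout library, and reusing one preallocated String as the read buffer. The method name is mine for illustration; the real subclasses patch the Net::BufferedIO internals.

```ruby
require 'timeout'  # only for the Timeout::Error class

BUFSIZE = 16 * 1024

# select()-based timeout: block in IO.select until the socket is
# readable (or the deadline passes), then do one cheap read.
def read_with_select(io, timeout_secs, buf = String.new)
  ready = IO.select([io], nil, nil, timeout_secs)
  raise Timeout::Error, "read timed out" unless ready
  # Passing buf as the second argument makes readpartial fill and
  # return that same String on every call instead of allocating a
  # fresh one each time (the "zero copy" tweak).
  io.readpartial(BUFSIZE, buf)
end
```

This avoids the stock implementation's habit of spinning up timeout machinery around every single small read, which is where much of the CPU time goes.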

Try this at home

To reproduce my results, do the following:

  • Check out my SVN repository at http://svn.apocryph.org/svn/projects/rubyhttp/trunk revision 145.
  • Run the tests with Ruby 1.8.6. On my machine that’s the version in the path:
      ruby -v
      ruby 1.8.6 (2007-09-24 patchlevel 111) [i686-linux]

    which means the following command runs all available impls:

      ruby -w -rubygems test_all_impls.rb

    That will run all the available impls (some, like rfuzz, aren’t available if you haven’t installed the necessary gem), and log the results to ./results/(date), where (date) is today’s date in YYYY-MM-DD format.

  • Run the tests with Ruby 1.8.7. On my machine I had to build that from source:
       ~/ruby18/bin/ruby -v
       ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-linux]

    To run the tests you have to pass the Ruby command on the command line, since I couldn’t figure out how to programmatically determine the path to the Ruby interpreter. On my system that’s:

      ~/ruby18/bin/ruby -w -rubygems test_all_impls.rb "~/ruby18/bin/ruby -w -rubygems"

    Again, gems are required for some impls.

  • Run the tests with Ruby 1.9.0. Same deal as 1.8.7.
      ~/ruby19/bin/ruby -v
      ruby 1.9.0 (2008-10-06 revision 19702) [i686-linux]

    The command is similar:

      ~/ruby19/bin/ruby -w -rubygems test_all_impls.rb "~/ruby19/bin/ruby -w -rubygems"
  • After running all three, you’ll have a bunch of CSV files in the results subdirectory for today’s date.
  • To generate aggregate data files suitable for analysis, cd into the results directory and run:

    ruby -w -rubygems ../../combine_csv.rb

    This will aggregate all the .csv files in the directory and format them into three aggregate files: clock_time.txt, cpu_percentage.txt, and total_cpu_time.txt. These files have columns for the server locations, rows for the HTTP implementation names, and values corresponding to the wall clock time, CPU time over clock time, and CPU time for each implementation and site. They are ready-made for generating the bar charts below in Excel. Note that you’ll need the FasterCSV library in order for this to work.
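    The pivot that combine_csv.rb performs can be approximated like this (modern Ruby’s csv stdlib is the old FasterCSV; the column order here is illustrative, not the exact format my rig writes):

```ruby
require 'csv'  # Ruby's bundled csv library is the old FasterCSV

# Read every per-run CSV in a directory and pivot rows of
# [impl, site, clock, cpu] into an impl => { site => clock } table,
# ready to dump as one of the aggregate files.
def aggregate_clock_times(dir)
  table = Hash.new { |h, k| h[k] = {} }
  Dir[File.join(dir, "*.csv")].each do |path|
    CSV.foreach(path) do |impl, site, clock, _cpu|
      table[impl][site] = clock.to_f
    end
  end
  table
end
```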


Running all the tests is a pain in the ass. If you fetch my SVN repository, you’ll find the raw data files that I got from my tests under results/2008-11-09. Or, you can just read my analysis below.

Clock time

Wall clock time

As you can see above, each implementation takes more or less the same amount of wall clock time to download from a given site, with significant variation between sites. This is expected, as downloading a file over the Internet is a mostly network-bound operation. We don’t care so much how long it takes as how hard the CPU has to work while it’s happening. Which brings us to…

CPU Time

CPU time

Wow, stock 1.8.6 Net::HTTP really is teh suck! At least twice as much CPU usage as the nearest competitor. Going from a 1K read buffer to 16K in 1.8.7 made a big difference.
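The buffer bump is easy to backport. Here's a hedged sketch of the kind of monkey-patch that gets 1.8.6 reading in 1.8.7-sized chunks (the real 1.8.6 rbuf_fill hard-codes a 1024-byte sysread inside its timeout block; this is written against the 1.8 internals, and later Rubies already read 16K and structure this differently):

```ruby
require 'net/http'
require 'timeout'

# Reopen Net::BufferedIO (the class Net::HTTP reads the socket
# through) and replace its fill method with one that reads 16K per
# sysread instead of 1.8.6's hard-coded 1K. Sketch only; not safe to
# leave in place on Ruby versions with different internals.
class Net::BufferedIO
  def rbuf_fill
    Timeout.timeout(@read_timeout) do
      @rbuf << @io.sysread(16 * 1024)
    end
  end
end
```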

Going further down the list, you can see the Ruby 1.9.0 Net::HTTP implementations with zero copy reads and readpartial, and the notimeout variant, are the best performers, with rfuzz, libcurl, and eventmachine close behind. It’s encouraging that a pure-ruby impl like rfuzz can compete with a mostly native impl like libcurl.

It’s also important to note that each of the downloads, be it the super-fast Dallas or the slow London, hits the CPU the same way. This really jumps out when you look at CPU time over wall clock time:

Percent of wall clock time spent using the CPU

CPU time over wall clock time

Here you can still see the influence of each site’s transfer speed, but the widely ranging efficiency of the various HTTP implementations stands out. No real surprises; rfuzz, libcurl, and eventmachine are doing very well, while the 1.8.6 stock Net::HTTP continues to blow.


Conclusion

If you need an HTTP client in Ruby, DO NOT use the 1.8.6 Net::HTTP. The 1.8.7 version is considerably better, but libcurl, rfuzz, and eventmachine are all better still.

Within the Net::HTTP family, dropping the inefficient timeout implementation and optimizing the read code to reuse the same buffer are both pretty low-hanging optimizations which should be considered for a future Ruby release. For now, I’d recommend libcurl if you’re on Linux, or 1.8.7 Net::HTTP on Windows (since rfuzz doesn’t have timeouts, and eventmachine is hard to get going under Ruby on Windows).

UPDATE: It turns out there is a binary gem release of eventmachine 0.12.0 for Windows, so if you’re doing Windows development and need a performant HTTP client implementation, you should definitely look into eventmachine. Thanks to Abdul-Rahman Advany for the tip.

Comments (6)
  1. We saw the same issues getting ready for launch at Powerset. The frontend talks to a lot of backend services, some of which run over http. CPU time for http transfer was saturating the machines. As a stopgap measure I monkey-patched Net::HTTP so that it had a larger buffer and didn’t do a timeout every read, at least until we could port all the code over to use libcurl.

  2. Indeed. The more I learn how shitty the stock 1.8.6 HTTP client impl is, the more surprised I am that I haven’t read more about it. It’s basically useless if you’re doing any sort of HTTP interface on the server side, particularly with miserly shared hosts. Really lame.

  3. What problems did you encounter with Rev?

  4. A flurry of warnings, followed by:

    In file included from rev_loop.c:9:
    /home/anelson/ruby19/include/ruby-1.9.0/ruby/backward/rubysig.h:14:2: warning: #warning rubysig.h is obsolete
    In file included from rev.h:11,
    from rev_loop.c:14:
    /home/anelson/ruby19/include/ruby-1.9.0/ruby/backward/rubyio.h:2:2: warning: #warning use "ruby/io.h" instead of "rubyio.h"
    rev_loop.c: In function 'Rev_Loop_ev_loop_oneshot':
    rev_loop.c:211: error: 'RB_UBF_DFL' undeclared (first use in this function)
    rev_loop.c:211: error: (Each undeclared identifier is reported only once
    rev_loop.c:211: error: for each function it appears in.)
    rev_loop.c:287:2: warning: no newline at end of file
    make: *** [rev_loop.o] Error 1

    I admit I didn’t spend a whole lot of time looking into it, though I at least got as far as getting libev installed.

  5. is it? I thought eventmachine had binary releases for windows…

  6. I didn’t think so, but I just checked and sure enough, there is a binary gem for version 0.12.0. Thanks for pointing that out.
