Results for a PerformanceEvaluation benchmark on a cluster of
19 RegionServers running HBase 0.92.1. Each test operated on 30M
KeyValues, each with a 100-byte value (PerformanceEvaluation normally
uses 1000-byte values). At 100 bytes per value, 30M KeyValues amount to
roughly 3GB of data, which is negligible compared to the memory available
to the cluster (each RS has a 16GB heap, for a total of 304GB of RAM across
RS JVM heaps), so everything fits comfortably in the block cache and no
memstore needs to be flushed. This is a best-case scenario for HBase, and
it lets us assess the performance of the client itself. The test is run by the driver in
bench.sh. GC stats were obtained by
running a slightly modified version of
PrintGCStats on the GC
logs. To plug asynchbase into PerformanceEvaluation,
see HBASE-5539.
In all graphs, "lower is better", the X axis is always the number of client
threads producing HBase requests. Note that tests using HTable
always use 10x more threads than those using asynchbase in order to have a
fairer comparison (otherwise HTable suffers from poor concurrency
due to blocking RPC calls).
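
To make the threading difference concrete, here is a minimal sketch (not part
of the benchmark itself) contrasting one read through each client; the table
name, row key, and ZooKeeper quorum are placeholders. An HTable get() blocks
its thread until the RPC completes, so sustaining N outstanding requests
requires N threads, whereas asynchbase's get() returns a Deferred immediately,
so a single thread can keep many RPCs in flight.

    import java.util.ArrayList;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    import org.hbase.async.GetRequest;
    import org.hbase.async.HBaseClient;
    import org.hbase.async.KeyValue;

    import com.stumbleupon.async.Callback;
    import com.stumbleupon.async.Deferred;

    public class BlockingVsAsync {
      public static void main(String[] args) throws Exception {
        // HTable: get() blocks the calling thread until the RPC completes,
        // so N concurrent reads need N threads.
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TestTable");  // placeholder table name
        Result result = table.get(new Get(Bytes.toBytes("row-0000000001")));
        System.out.println("HTable read " + result.size() + " cells");
        table.close();

        // asynchbase: get() returns a Deferred right away, so one thread
        // can issue many RPCs and react to the results via callbacks.
        HBaseClient client = new HBaseClient("localhost");  // placeholder ZK quorum
        Deferred<Object> done =
          client.get(new GetRequest("TestTable", "row-0000000001"))
            .addCallback(new Callback<Object, ArrayList<KeyValue>>() {
              public Object call(final ArrayList<KeyValue> row) {
                System.out.println("asynchbase read " + row.size() + " cells");
                return null;
              }
            });
        done.join();  // wait only so this demo prints before exiting
        client.shutdown().joinUninterruptibly();
      }
    }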