I/O Benchmarking tools

December 2nd, 2013

This blog post is a place to park ideas and experiences with I/O benchmarking tools, and will be updated on an ongoing basis.

Please feel free to share your own experiences with these tools or others in the comments!

There are a number of tools out there for I/O benchmark testing, such as:

  • fio
  • IOZone
  • bonnie++
  • FileBench
  • Tiobench
  • orion

My choice for best of breed is fio (thanks to Eric Grancher for suggesting it).


For Oracle I/O testing, Orion from Oracle would be the normal choice, but I've run into install errors (which were solved) and, more importantly, runtime bugs.
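For reference, here is a hedged sketch of a basic Orion invocation (flag names per Oracle's Orion documentation; the test name and disk count are placeholders). Orion reads the devices to test from a `<testname>.lun` file:

```shell
# mytest.lun lists the raw devices or files to test, one per line
./orion -run simple -testname mytest -num_disks 1
```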


IOZone, available at http://linux.die.net/man/1/iozone, is the tool I see referenced most often on the net and in Google searches. The biggest drawback of IOZone is that there seems to be no way to limit the test to 8K random reads.
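To illustrate, here is a hedged sketch of an IOZone invocation (flags per the iozone man page; the file path and sizes are placeholders). Tests are selected with -i, and test 2 covers random read and random write together, which is why random reads cannot be isolated:

```shell
# -i 0 = write/rewrite (needed to create the test file), -i 2 = random read + random write
# -r 8k = 8K record size, -s 200m = 200MB file, -I = direct I/O where supported
iozone -i 0 -i 2 -r 8k -s 200m -I -f /tmp/iozone.tmp
```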



Bonnie++ is close to IOZone, but not quite as flexible, and both are less flexible than fio.
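A hedged sketch of a bonnie++ run (flags per the bonnie++ man page; the directory, size, and user are placeholders):

```shell
# -d = test directory, -s = test file size (should exceed RAM to defeat caching),
# -n 0 = skip the small-file creation tests, -u = user to run as
bonnie++ -d /tmp -s 4g -n 0 -u nobody
```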


I haven't investigated FileBench yet, though it looks interesting.



Tiobench: not much info available on it.


fio: flexible I/O tester

Here is a description from the fio project page:
“fio is an I/O tool meant to be used both for benchmark and stress/hardware verification. It has support for 13 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. fio displays all sorts of I/O performance information. Fio is in wide use in many places, for both benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OS X, OpenSolaris, AIX, HP-UX, and Windows.”
With fio, you set up options in a job configuration file describing the benchmark. The key settings are explained below:


# job name goes between brackets (except for the special name "global")

# overwrite: if true, will create the file if it doesn't exist;
# if the file exists and is large enough, nothing happens.
# Here it is set to false because the file should already exist.

# rw=
#   read        Sequential reads
#   write       Sequential writes
#   randwrite   Random writes
#   randread    Random reads
#   rw          Sequential mixed reads and writes
#   randrw      Random mixed reads and writes

# ioengine=
#    sync       Basic read(2) or write(2) io. lseek(2) is
#               used to position the io location.
#    psync      Basic pread(2) or pwrite(2) io.
#    vsync      Basic readv(2) or writev(2) IO.
#    libaio     Linux native asynchronous io.
#    posixaio   glibc posix asynchronous io.
#    solarisaio Solaris native asynchronous io.
#    windowsaio Windows native asynchronous io.

# direct If value is true, use non-buffered io. This is usually
#        O_DIRECT. Note that ZFS on Solaris doesn't support direct io.

# bs The block size used for the io units. Defaults to 4k.


# fadvise_hint if set to true fio will use fadvise() to advise the kernel
#               on what IO patterns it is likely to issue.

# nrfiles= Number of files to use for this job. Defaults to 1.
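The job file itself did not survive in the post, so here is a hypothetical reconstruction based on the settings visible in the run output below (job name read_8k_200MB, rw=read, bs=8k, ioengine=libaio, iodepth=1, 200MB of I/O). The filename and direct=1 are assumptions:

```ini
[read_8k_200MB]
rw=read
bs=8k
size=200m
ioengine=libaio
iodepth=1
direct=1
overwrite=0
; assumption: point this at the pre-existing file to test
filename=/tmp/fio.tmp
```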

Then run:

$ fio config_file

read_8k_200MB: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=1
fio 1.50
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [8094K/0K /s] [988 /0  iops] [eta 00m:00s]
read_8k_200MB: (groupid=0, jobs=1): err= 0: pid=27041
  read : io=204800KB, bw=12397KB/s, iops=1549 , runt= 16520msec
    slat (usec): min=14 , max=2324 , avg=20.09, stdev=15.57
    clat (usec): min=62 , max=10202 , avg=620.90, stdev=246.24
     lat (usec): min=203 , max=10221 , avg=641.43, stdev=246.75
    bw (KB/s) : min= 7680, max=14000, per=100.08%, avg=12407.27, stdev=1770.39
  cpu          : usr=0.69%, sys=2.62%, ctx=26443, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=25600/0/0, short=0/0/0
     lat (usec): 100=0.01%, 250=2.11%, 500=20.13%, 750=67.00%, 1000=3.29%
     lat (msec): 2=7.21%, 4=0.23%, 10=0.02%, 20=0.01%

Run status group 0 (all jobs):
   READ: io=204800KB, aggrb=12397KB/s, minb=12694KB/s, maxb=12694KB/s, mint=16520msec, maxt=16520msec
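As a quick sanity check when reading the summary, bandwidth should be approximately iops × block size. Using the numbers from the run above:

```shell
# bw should be roughly iops * block size; values taken from the output above
iops=1549
bs_kb=8
echo "approx bw: $((iops * bs_kb)) KB/s"   # close to the reported bw=12397KB/s
```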




  2. khailey
  3. olivier bernhard
  4. | #4

    No SLOB?

  5. Noons | #5

    Most of these tools do IO out of context of an Oracle database.
    The one that vaguely approaches it is Orion. But only for RAW.
    It is no match for the directed and SQL-based physical/logical IO that SLOB does.
    Hence why SLOB is a much better tool: I don’t care how many IOs some dedicated tool can pump if a database instance can’t match it in any way shape or format.
    Hence why I prefer to use the db itself to test. The actual release I use for my data.
    In some cases, the actual server and database itself. Not out of context code.

  6. Mahmoud | #6


    I am trying to benchmark the performance of virtual machines through benchmarking the performance of the disk. I am using FIO, and I am wondering what would be a typical file size and block size; I am currently using 2GB, 4GB, 8GB as file sizes, and 4KB, 16KB, 64KB, 512KB as block sizes (knowing that I have a 2GB RAM).

