How to check hard disk performance


How do I check the performance of a hard drive (either via terminal or GUI)? I want to measure the write speed, the read speed, cache size and speed, and random access speed.










hard-drive performance

asked Dec 12 '11 at 0:22 by Luis Alvarado (last edited Dec 11 '12 at 5:22)

  • 2





    Similar question has been asked over on unix.stackexchange.com/questions/108838/… , stackoverflow.com/questions/1198691/… and serverfault.com/questions/219739/… .

    – Anon
    Jun 22 '18 at 5:16

















9 Answers
429 votes



















Terminal method



hdparm is a good place to start.



sudo hdparm -Tt /dev/sda

/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec


sudo hdparm -v /dev/sda will give information as well.



dd will give you information on write speed.



If the drive doesn't have a file system (and only then), use of=/dev/sda.
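
For example, a minimal sketch of that raw-device case (assumption: /dev/sdX stands for an empty disk with nothing you care about on it; this writes straight to the raw device and destroys any existing data or partition table):

sudo dd if=/dev/zero of=/dev/sdX bs=8k count=10k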



Otherwise, mount it on /tmp and write then delete the test output file.



dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output

10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s


Graphical method



  1. Go to System -> Administration -> Disk Utility.

    • Alternatively, launch the Gnome disk utility from the command line by running gnome-disks


  2. Select your hard disk in the left pane.

  3. Now click the “Benchmark – Measure Drive Performance” button in the right pane.

  4. A new window with charts opens. You will find two buttons: one for “Start Read Only Benchmark” and another for “Start Read/Write Benchmark”. Clicking either button starts benchmarking the hard disk.




How to benchmark disk I/O



Article



Is there something more you want?






answered Dec 12 '11 at 0:34 by Panther
  • 11





    I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

    – Ian Mackinnon
    Nov 8 '12 at 16:23
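
    A minimal sketch of that suggestion (same temporary file as in the answer above; note that /dev/urandom itself can be the bottleneck, as a later answer points out):

    dd if=/dev/urandom of=/tmp/output bs=8k count=10k; rm -f /tmp/output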







  • 3





    There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I don't see that disk tool within System Settings either... O_o But I finally managed to run it: /usr/bin/palimpsest

    – Fran Marzoa
    Nov 30 '12 at 22:00







  • 6





    Note that since 12.10 it's simply called Disks and can be found through Unity.

    – Paul Lammertsma
    Feb 14 '14 at 11:18






  • 1





    On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of us who hate Unity.

    – Ken Sharp
    Mar 13 '14 at 15:12






  • 2





    The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

    – Zoredache
    Mar 27 '14 at 16:44
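
    A quick way to check whether /tmp is RAM-backed on your system (a filesystem type of tmpfs means writes there are testing memory, not the disk):

    df -T /tmp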


















99 votes



















Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:



dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s





answered Aug 18 '13 at 18:31 by Tele
  • 28





    It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

    – Alaa Ali
    Aug 18 '13 at 19:01






  • 2





    Why have you used 384k as block size?

    – Diego F. Durán
    Jun 2 '14 at 14:39






  • 1





    @Diego There is no reason. It was just an example; you can use anything else (between about 4k and 1M). Of course a bigger block size will give better performance. And of course decrease the count when you use a big bs, or it will take a year to finish.

    – Tele
    Jul 25 '14 at 0:17
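
    As a sketch, the same total amount of data written with different block sizes (each run writes the same 384 MiB as the bs=384k example above, just in smaller or larger chunks):

    dd if=/dev/zero of=/tmp/output conv=fdatasync bs=4k count=96k; rm -f /tmp/output
    dd if=/dev/zero of=/tmp/output conv=fdatasync bs=1M count=384; rm -f /tmp/output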











  • It's not reliable; benchmark tools like iozone and sysbench report much, much lower numbers.

    – MSS
    Aug 27 '16 at 10:00







  • 1





    Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

    – Anon
    Nov 12 '18 at 7:38


















50 votes



















I would not recommend using /dev/urandom because it's software-based and slow as a pig. Better to take a chunk of random data on a ramdisk. Randomness doesn't matter for hard disk testing, because every byte is written as-is (also on an SSD with dd). But if we test a deduplicated ZFS pool with pure zero versus random data, there is a huge performance difference.
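
A minimal sketch of that idea (assumes /dev/shm is a tmpfs with ~100 MB free; the file names are just placeholders): stage the random data in RAM first, then write it out using the same timing approach as below.

dd if=/dev/urandom of=/dev/shm/random.bin bs=1M count=100
time sh -c "dd if=/dev/shm/random.bin of=testfile bs=100k && sync"
rm -f /dev/shm/random.bin testfile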



Another point to consider is including the sync time; all modern filesystems use caching for file operations.



To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:



time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"


With that method you get this output:



sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile 
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s

real 0m0.441s
user 0m0.004s
sys 0m0.124s


So the disk data rate is just 104857600 / 0.441 = 237772335 B/s --> 237 MB/s.



That is over 100MB/s lower than with caching.



Happy benchmarking,






answered Dec 6 '12 at 23:18 by Pasi Suominen
  • 3





    Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

    – Anon
    Nov 12 '18 at 7:40



















36 votes



















If you want to monitor the disk read and write speed in real time, you can use the iotop tool.



This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and the total read/write speed for the server, much like top.



To install iotop:



sudo apt-get install iotop 


To run it:



sudo iotop
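
iotop also takes a few useful flags; for example, -o should limit the display to processes that are actually doing I/O at the moment (see man iotop for the full list):

sudo iotop -o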





answered Sep 17 '13 at 14:24 by Lars
    35 votes



















    If you want accuracy, you should use fio. It requires reading the manual (man fio) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:



    Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):



    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting


    Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):



    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting


    Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):



    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting


    Mixed random 4K read and write QD1 with sync (this is the worst-case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):



    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting


    Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is nearly empty, using a file big enough to almost fill the drive will get you the worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.



    However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs may have significantly faster performance with pre-erased blocks, or they might have a small SLC flash area that's used as a write cache, and the performance changes once the SLC cache is full. As another example, Seagate SMR HDDs have about a 20 GB PMR cache area that has pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original. And the only way to see this performance degradation is to first write 20+ GB as fast as possible. Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean the internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of IO, you need to increase both the --io_size and --runtime parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if any device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case.
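
    As a sketch, the sequential write test above with a larger --io_size and --runtime, which is the kind of run needed to push past a cache area in the tens of gigabytes (adjust the numbers to your device; this writes a lot of data and will add wear to flash media):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=30g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=300 --group_reporting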



    In addition, some high-quality SSD devices may have even more intelligent wear-leveling algorithms, where the internal SLC cache has enough smarts to replace data in place if it is re-written during the test and hits the same address space (that is, the test file is smaller than the total SLC cache). For such devices, the file size starts to matter again. If you need to measure your actual workload, it's best to test with file sizes that you'll actually see in real life. Otherwise your numbers may look too good.



    Note that fio will create the required temporary file on first run. It will be filled with random data to avoid getting too-good numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat in the above examples and stored in the current working directory. So you should first change to a directory that is mounted on the device you want to test.



    If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency for the reads and writes. The above examples all have numjobs set to 1, so the test is about a single-threaded process reading and writing (possibly with a queue, set with iodepth). High-end SSDs (e.g. Intel Optane) should get high numbers even without increasing numjobs a lot (e.g. 4 should be enough to get the highest spec numbers), but some "Enterprise" SSDs require going to 32-128 to get the spec numbers because the internal latency of those devices is higher but the overall throughput is insane.
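
    For example, a sketch of the random 4K read test above with more parallelism (numjobs=4, iodepth=32), which is only representative if your real workload can actually keep that many requests in flight:

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting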






    • 1





      I just re-tested some devices. Using above sequential read test (2MB block size) I got 280 MB/s from Samsung SSD 850 EVO and 1070 MB/s from Intel 910 SSD. With 64k block size and otherwise identical commandline I got 268 MB/s from 850 EVO and 1055 MB/s from 910 SSD. At least for this kind of devices, using 2 MB block size seems to improve results around 1-5% even though it causes kernel to split requests to hardware. I guess even with kernel optimizations the overhead of submitting more syscalls is worse than splitting inside kernel.

      – Mikko Rantalainen
      Jun 26 '18 at 7:07






    • 1





      Upon further testing it seems that I get the highest sequential throughput using power of 2 value that is less than max_sectors_kb. I changed the above example commands to use 1 MB block size because that seems to work with real world hardware. And I also tested that fsync does not matter for reading.

      – Mikko Rantalainen
      Jun 26 '18 at 7:18






    • 1





      Depending on how the drive is connected you may find that your iodepth was too low. You would have to watch what Linux is actually sending down to the device and what depth it's doing it at...

      – Anon
      Jun 26 '18 at 20:28






    • 1





      I set iodepth to 1 for random access exactly because real world programs often run algorithms/logic that does not work with depth any higher than 1. As a result, if such depth is "too low" your I/O device is bad. It's true that some SSD devices will benefit from depth higher than 32. However, can you point to any real world workload that requires read access and is able to keep up iodepth higher than 32? TL; DR: if you want to reproduce some insanely high read benchmark number with high latency device, use iodepth=256 --numjobs=4 but never expect to see such numbers for real.

      – Mikko Rantalainen
      Jul 3 '18 at 7:46






    • 1





      Most "real world" programs aren't actually submitting I/O (o_)directly let alone asynchronously so all of our examples are in unusual workloads to push the limits benchmark territory (as they say, the best benchmark is your real workload). Having said that doing things like running multiple busy virtual machines are easily able to generate workloads with crazy high depths but where the I/O often looks random from the disk perspective and is a simple example of where you can see a huge speedup from things like NVMe. PS: setting numbers too high will reduce throughput so there's a sweet spot...

      – Anon
      Jul 4 '18 at 18:33


















    25 votes



















    bonnie++ is the ultimate benchmark utility I know of for Linux.



    (I'm currently preparing a Linux live CD at work with bonnie++ on it to test our Windows-based machine with it!)



    It takes care of caching, syncing, random data, random locations on disk, small-size updates, large updates, reads, writes, etc. Comparing a USB key, a hard disk (rotational), a solid-state drive and a RAM-based filesystem can be very informative for the newbie.



    I have no idea if it is included in Ubuntu, but you can compile it from source easily.
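
    (It is normally available from the Ubuntu repositories, so installing it should be as simple as the line below; compiling from source is only needed if the package is unavailable.)

    sudo apt-get install bonnie++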



    http://www.coker.com.au/bonnie++/






    • 1





      Bonnie is flawed for disk benchmarking and can easily generate numbers that actually reflect non-disk aspects of your system so a high degree of care is required if you choose to use it. See Brendan Gregg's Active Benchmarking: Bonnie++ for details.

      – Anon
      Jan 12 at 5:52



















    22 votes



















    Write speed



    $ dd if=/dev/zero of=./largefile bs=1M count=1024
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s


    Block size is actually quite large. You can try with smaller sizes like 64k or even 4k.
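
    For example, the same 1 GiB write with 4 KiB blocks (expect noticeably lower throughput on most drives):

    $ dd if=/dev/zero of=./largefile bs=4k count=256k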




    Read speed



    Run the following command to clear the memory cache



    $ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"


    Now read back the file that was created in the write test:



    $ dd if=./largefile of=/dev/null bs=4k
    165118+0 records in
    165118+0 records out
    676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s





    • 1





      Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

      – Anon
      Nov 12 '18 at 7:35


















    14 votes



















    Some hints on how to use bonnie++:



    bonnie++ -d [TEST_LOCATION] -s [TEST_SIZE] -n 0 -m [TEST_NAME] -f -b -u [TEST_USER] 
    bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james


    A bit more at: SIMPLE BONNIE++ EXAMPLE.






      1 vote



















      f3 - Fight Flash Fraud



      Check integrity, detect fake flash drives and test performance, all three in one shot.
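
      A minimal sketch of how it is typically used (the package is called f3 on Ubuntu; /media/usbdrive is a placeholder for wherever the drive under test is mounted):

      sudo apt-get install f3
      f3write /media/usbdrive && f3read /media/usbdrive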



      More on this answer.






      share|improve this answer



























        protected by Community Feb 12 '14 at 16:23



        Thank you for your interest in this question.
        Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).



        Would you like to answer one of these unanswered questions instead?














        9 Answers
        9






        active

        oldest

        votes








        9 Answers
        9






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        429



















        Terminal method



        hdparm is a good place to start.



        sudo hdparm -Tt /dev/sda

        /dev/sda:
        Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
        Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec


        sudo hdparm -v /dev/sda will give information as well.



        dd will give you information on write speed.



        If the drive doesn't have a file system (and only then), use of=/dev/sda.



        Otherwise, mount it on /tmp and write then delete the test output file.



        dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output

        10240+0 records in
        10240+0 records out
        83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s


        Graphical method



        1. Go to System -> Administration -> Disk Utility.

          • Alternatively, launch the Gnome disk utility from the command line by running gnome-disks


        2. Select your hard disk at left pane.

        3. Now click “Benchmark – Measure Drive Performance” button in right pane.

        4. A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.

        test



        How to benchmark disk I/O



        Article



        Is there something more you want?






        share|improve this answer























        • 11





          I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

          – Ian Mackinnon
          Nov 8 '12 at 16:23







        • 3





          There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest

          – Fran Marzoa
          Nov 30 '12 at 22:00







        • 6





          Note that since 12.10 it's simply called Disks and can be found through Unity.

          – Paul Lammertsma
          Feb 14 '14 at 11:18






        • 1





          On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.

          – Ken Sharp
          Mar 13 '14 at 15:12






        • 2





          The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

          – Zoredache
          Mar 27 '14 at 16:44















        429



















        Terminal method



        hdparm is a good place to start.



        sudo hdparm -Tt /dev/sda

        /dev/sda:
        Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
        Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec


        sudo hdparm -v /dev/sda will give information as well.



        dd will give you information on write speed.



        If the drive doesn't have a file system (and only then), use of=/dev/sda.



        Otherwise, mount it on /tmp and write then delete the test output file.



        dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output

        10240+0 records in
        10240+0 records out
        83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s


        Graphical method



        1. Go to System -> Administration -> Disk Utility.

          • Alternatively, launch the Gnome disk utility from the command line by running gnome-disks


        2. Select your hard disk at left pane.

        3. Now click “Benchmark – Measure Drive Performance” button in right pane.

        4. A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.

        test



        How to benchmark disk I/O



        Article



        Is there something more you want?






        share|improve this answer























        • 11





          I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

          – Ian Mackinnon
          Nov 8 '12 at 16:23







        • 3





          There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest

          – Fran Marzoa
          Nov 30 '12 at 22:00







        • 6





          Note that since 12.10 it's simply called Disks and can be found through Unity.

          – Paul Lammertsma
          Feb 14 '14 at 11:18






        • 1





          On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.

          – Ken Sharp
          Mar 13 '14 at 15:12






        • 2





          The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

          – Zoredache
          Mar 27 '14 at 16:44













        429















        429











        429









        Terminal method



        hdparm is a good place to start.



        sudo hdparm -Tt /dev/sda

        /dev/sda:
        Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
        Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec


        sudo hdparm -v /dev/sda will give information as well.



        dd will give you information on write speed.



        If the drive doesn't have a file system (and only then), use of=/dev/sda.



        Otherwise, mount it on /tmp and write then delete the test output file.



        dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output

        10240+0 records in
        10240+0 records out
        83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s


        Graphical method



        1. Go to System -> Administration -> Disk Utility.

          • Alternatively, launch the Gnome disk utility from the command line by running gnome-disks


        2. Select your hard disk at left pane.

        3. Now click “Benchmark – Measure Drive Performance” button in right pane.

        4. A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.

        test



        How to benchmark disk I/O



        Article



        Is there something more you want?






        share|improve this answer
















        Terminal method



        hdparm is a good place to start.



        sudo hdparm -Tt /dev/sda

        /dev/sda:
        Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
        Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec


        sudo hdparm -v /dev/sda will give information as well.



        dd will give you information on write speed.



        If the drive doesn't have a file system (and only then), use of=/dev/sda.



        Otherwise, mount it on /tmp and write then delete the test output file.



        dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output

        10240+0 records in
        10240+0 records out
        83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s


        Graphical method



        1. Go to System -> Administration -> Disk Utility.

          • Alternatively, launch the Gnome disk utility from the command line by running gnome-disks


        2. Select your hard disk at left pane.

        3. Now click “Benchmark – Measure Drive Performance” button in right pane.

        4. A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.

        test



        How to benchmark disk I/O



        Article



        Is there something more you want?







        share|improve this answer















        share|improve this answer




        share|improve this answer








        edited Aug 2 at 20:06









        Community

        1




        1










        answered Dec 12 '11 at 0:34









        PantherPanther

        84.4k14 gold badges164 silver badges265 bronze badges




        84.4k14 gold badges164 silver badges265 bronze badges










        • 11





          I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

          – Ian Mackinnon
          Nov 8 '12 at 16:23







        • 3





          There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest

          – Fran Marzoa
          Nov 30 '12 at 22:00







        • 6





          Note that since 12.10 it's simply called Disks and can be found through Unity.

          – Paul Lammertsma
          Feb 14 '14 at 11:18






        • 1





          On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.

          – Ken Sharp
          Mar 13 '14 at 15:12






        • 2





          The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

          – Zoredache
          Mar 27 '14 at 16:44












        • 11





          I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

          – Ian Mackinnon
          Nov 8 '12 at 16:23







        • 3





          There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest

          – Fran Marzoa
          Nov 30 '12 at 22:00







        • 6





          Note that since 12.10 it's simply called Disks and can be found through Unity.

          – Paul Lammertsma
          Feb 14 '14 at 11:18






        • 1





          On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.

          – Ken Sharp
          Mar 13 '14 at 15:12






        • 2





          The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

          – Zoredache
          Mar 27 '14 at 16:44







        11




        11





        I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

        – Ian Mackinnon
        Nov 8 '12 at 16:23






        I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD as the compressibility of the data can have a massive effect on write speed.

        – Ian Mackinnon
        Nov 8 '12 at 16:23





        3




        3





        There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest

        – Fran Marzoa
        Nov 30 '12 at 22:00






        There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest

        – Fran Marzoa
        Nov 30 '12 at 22:00





        6




        6





        Note that since 12.10 it's simply called Disks and can be found through Unity.

        – Paul Lammertsma
        Feb 14 '14 at 11:18





        Note that since 12.10 it's simply called Disks and can be found through Unity.

        – Paul Lammertsma
        Feb 14 '14 at 11:18




        1




        1





        On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.

        – Ken Sharp
        Mar 13 '14 at 15:12





        On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.

        – Ken Sharp
        Mar 13 '14 at 15:12




        2




        2





        The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

        – Zoredache
        Mar 27 '14 at 16:44





        The /tmp filesystem is often using a ramdisk these days. So writing to /tmp would seem to be testing your memory, not your disk subsystem.

        – Zoredache
        Mar 27 '14 at 16:44













        99



















        Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:



        dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
        1024+0records in
        1024+0 records out
        402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s





        share|improve this answer





















        • 28





          It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

          – Alaa Ali
          Aug 18 '13 at 19:01






        • 2





          Why have you used 384k as block size?

          – Diego F. Durán
          Jun 2 '14 at 14:39






        • 1





          @Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.

          – Tele
          Jul 25 '14 at 0:17











        • it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower

          – MSS
          Aug 27 '16 at 10:00







        • 1





          Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

          – Anon
          Nov 12 '18 at 7:38















        99



















        Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:



        dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
        1024+0records in
        1024+0 records out
        402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s





        share|improve this answer





















        • 28





          It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

          – Alaa Ali
          Aug 18 '13 at 19:01






        • 2





          Why have you used 384k as block size?

          – Diego F. Durán
          Jun 2 '14 at 14:39






        • 1





          @Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.

          – Tele
          Jul 25 '14 at 0:17











        • it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower

          – MSS
          Aug 27 '16 at 10:00







        • 1





          Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

          – Anon
          Nov 12 '18 at 7:38













        99















        99











        99









        Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:



        dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
        1024+0records in
        1024+0 records out
        402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s





        share|improve this answer














        Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:



        dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
        1024+0records in
        1024+0 records out
        402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s






        share|improve this answer













        share|improve this answer




        share|improve this answer










        answered Aug 18 '13 at 18:31









        TeleTele

        1,2319 silver badges6 bronze badges




        1,2319 silver badges6 bronze badges










        • 28





          It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

          – Alaa Ali
          Aug 18 '13 at 19:01






        • 2





          Why have you used 384k as block size?

          – Diego F. Durán
          Jun 2 '14 at 14:39






        • 1





          @Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.

          – Tele
          Jul 25 '14 at 0:17











        • it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower

          – MSS
          Aug 27 '16 at 10:00







        • 1





          Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

          – Anon
          Nov 12 '18 at 7:38












        • 28





          It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

          – Alaa Ali
          Aug 18 '13 at 19:01






        • 2





          Why have you used 384k as block size?

          – Diego F. Durán
          Jun 2 '14 at 14:39






        • 1





          @Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.

          – Tele
          Jul 25 '14 at 0:17











        • it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower

          – MSS
          Aug 27 '16 at 10:00







        • 1





          Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

          – Anon
          Nov 12 '18 at 7:38







        28




        28





        It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

        – Alaa Ali
        Aug 18 '13 at 19:01





        It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.

        – Alaa Ali
        Aug 18 '13 at 19:01




        2




        2





        Why have you used 384k as block size?

        – Diego F. Durán
        Jun 2 '14 at 14:39





        Why have you used 384k as block size?

        – Diego F. Durán
        Jun 2 '14 at 14:39




        1




        1





        @Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.

        – Tele
        Jul 25 '14 at 0:17





        @Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.

        – Tele
        Jul 25 '14 at 0:17













        it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower

        – MSS
        Aug 27 '16 at 10:00






        it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower

        – MSS
        Aug 27 '16 at 10:00





        1




        1





        Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

        – Anon
        Nov 12 '18 at 7:38





        Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

        – Anon
        Nov 12 '18 at 7:38











        50



















        I would not recommend using /dev/urandom because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.



        Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.



        To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:



        time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"


        with that method you get output:



        sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile 
        1024+0 records in
        1024+0 records out
        104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s

        real 0m0.441s
        user 0m0.004s
        sys 0m0.124s


        so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s



        That is over 100MB/s lower than with caching.



        Happy benchmarking,






        share|improve this answer





















        • 3





          Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

          – Anon
          Nov 12 '18 at 7:40
















        50



















        I would not recommend using /dev/urandom because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.



        Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.



        To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:



        time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"


        with that method you get output:



        sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile 
        1024+0 records in
        1024+0 records out
        104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s

        real 0m0.441s
        user 0m0.004s
        sys 0m0.124s


        so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s



        That is over 100MB/s lower than with caching.



        Happy benchmarking,






        share|improve this answer





















        • 3





          Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

          – Anon
          Nov 12 '18 at 7:40














        50















        50











        50









        I would not recommend using /dev/urandom because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.



        Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.



        To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:



        time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"


        with that method you get output:



        sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile 
        1024+0 records in
        1024+0 records out
        104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s

        real 0m0.441s
        user 0m0.004s
        sys 0m0.124s


        so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s



        That is over 100MB/s lower than with caching.



        Happy benchmarking,






        share|improve this answer














        I would not recommend using /dev/urandom because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.



        Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.



        To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:



        time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"


        with that method you get output:



        sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile 
        1024+0 records in
        1024+0 records out
        104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s

        real 0m0.441s
        user 0m0.004s
        sys 0m0.124s


        so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s



        That is over 100MB/s lower than with caching.



        Happy benchmarking,







        share|improve this answer













        share|improve this answer




        share|improve this answer










        answered Dec 6 '12 at 23:18









        Pasi SuominenPasi Suominen

        5014 silver badges2 bronze badges




        5014 silver badges2 bronze badges










        • 3





          Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

          – Anon
          Nov 12 '18 at 7:40













        • 3





          Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

          – Anon
          Nov 12 '18 at 7:40








        3




        3





        Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

        – Anon
        Nov 12 '18 at 7:40






        Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...

        – Anon
        Nov 12 '18 at 7:40












        36



















        If you want to monitor the disk read and write speed real-time you can use the iotop tool.



        This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top.



        To install iotop:



        sudo apt-get install iotop 


        To run it:



        sudo iotop





        share|improve this answer
































          36



















          If you want to monitor the disk read and write speed real-time you can use the iotop tool.



          This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top.



          To install iotop:



          sudo apt-get install iotop 


          To run it:



          sudo iotop





          share|improve this answer






























            36















            36











            36









            If you want to monitor the disk read and write speed real-time you can use the iotop tool.



            This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top.



            To install iotop:



            sudo apt-get install iotop 


            To run it:



            sudo iotop





            share|improve this answer
















            If you want to monitor the disk read and write speed real-time you can use the iotop tool.



            This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top.



            To install iotop:



            sudo apt-get install iotop 


            To run it:



            sudo iotop






            share|improve this answer















            share|improve this answer




            share|improve this answer








            edited Oct 6 '15 at 20:28

























            answered Sep 17 '13 at 14:24









            LarsLars

            4614 silver badges6 bronze badges




            4614 silver badges6 bronze badges
























                35



















                If you want accuracy, you should use fio. It requires reading the manual (man fio) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:



                Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):



                fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting


                Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):



                fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting


                Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):



                fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting


                Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):



                fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting


                Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is near empty, using file big enough to almost fill the drive will get you the worst case behavior for each test. In case of SSD, the file size does not matter that much.



                However, note that for some storage media the size of the file is not as important as total bytes written during short time period. For example, some SSDs may have significantly faster performance with pre-erased blocks or it might have small SLC flash area that's used as write cache and the performance changes once SLC cache is full. As an another example, Seagate SMR HDDs have about 20 GB PMR cache area that has pretty high performance but once it gets full, writing directly to SMR area may cut the performance to 10% from the original. And the only way to see this performance degration is to first write 20+ GB as fast as possible. Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean the internal cache, shorter test sequences will reflect your real world performance better. If you need to do lots of IO, you need to increase both --io_size and --runtime parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if any device is poor enough not to handle this kind of testing, it should not be used to hold any valueable data in any case.



                In addition, some high-quality SSD devices have even more intelligent wear-leveling algorithms, where the internal SLC cache has enough smarts to replace data in place if it is re-written during the test and hits the same address space (that is, the test file is smaller than the total SLC cache). For such devices, the file size starts to matter again. If you care about your actual workload, it's best to test with file sizes that you'll actually see in real life; otherwise your numbers may look too good.
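                For example, to see this kind of cache exhaustion you could re-run the sequential write test with a much larger file and more total I/O. This is only a sketch, not part of the original answer: the 50g/100g sizes and the 600 second runtime are placeholders that should be chosen to exceed the cache of your particular device (and still fit on it).

                fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=50g --io_size=100g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=600 --group_reporting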



                Note that fio will create the required temporary file on the first run. It will be filled with random data to avoid getting too-good numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file is called fio-tempfile.dat in the examples above and is stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test.



                If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency for the reads and writes. The examples above all have numjobs set to 1, so the test is about a single-threaded process reading and writing (possibly with a queue depth set via iodepth). High-end SSDs (e.g. Intel Optane) should get high numbers even without increasing numjobs a lot (e.g. 4 should be enough to get the highest spec numbers), but some "enterprise" SSDs require going to 32-128 to reach the spec numbers, because the internal latency of those devices is higher while the overall throughput is insane.
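                As a minimal sketch that combines the last two points (not part of the original answer; /mnt/target is a placeholder for wherever the device under test is mounted, and iodepth=32 with numjobs=4 is just one example of raised concurrency):

                cd /mnt/target
                fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting

                Because of --group_reporting the result is the aggregate over all four jobs, so it shows what the drive can do with parallel random reads rather than the QD1 figure above.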






                edited Aug 14 at 6:26

























                answered Jan 1 '18 at 18:14









                Mikko Rantalainen











                • 1





                  I just re-tested some devices. Using the above sequential read test (2 MB block size) I got 280 MB/s from a Samsung SSD 850 EVO and 1070 MB/s from an Intel 910 SSD. With a 64k block size and an otherwise identical command line I got 268 MB/s from the 850 EVO and 1055 MB/s from the 910 SSD. At least for these kinds of devices, using a 2 MB block size seems to improve results by around 1-5% even though it causes the kernel to split requests to the hardware. I guess even with kernel optimizations the overhead of submitting more syscalls is worse than splitting inside the kernel.

                  – Mikko Rantalainen
                  Jun 26 '18 at 7:07






                • 1





                  Upon further testing it seems that I get the highest sequential throughput using a power-of-2 value that is less than max_sectors_kb. I changed the above example commands to use a 1 MB block size because that seems to work with real-world hardware. I also verified that fsync does not matter for reading.

                  – Mikko Rantalainen
                  Jun 26 '18 at 7:18






                • 1





                  Depending on how the drive is connected you may find that your iodepth was too low. You would have to watch what Linux is actually sending down to the device and what depth it's doing it at...

                  – Anon
                  Jun 26 '18 at 20:28






                • 1





                  I set iodepth to 1 for random access exactly because real-world programs often run algorithms/logic that cannot work with a depth any higher than 1. As a result, if such a depth is "too low", your I/O device is bad. It's true that some SSD devices will benefit from a depth higher than 32. However, can you point to any real-world workload that requires read access and is able to keep an iodepth higher than 32? TL;DR: if you want to reproduce some insanely high read benchmark number with a high-latency device, use iodepth=256 --numjobs=4, but never expect to see such numbers for real.

                  – Mikko Rantalainen
                  Jul 3 '18 at 7:46






                • 1





                  Most "real world" programs aren't actually submitting I/O (O_)DIRECTly, let alone asynchronously, so all of our examples are in unusual "push the limits" benchmark territory (as they say, the best benchmark is your real workload). Having said that, things like running multiple busy virtual machines can easily generate workloads with crazy high depths where the I/O often looks random from the disk's perspective, and that is a simple example of where you can see a huge speedup from things like NVMe. PS: setting the numbers too high will reduce throughput, so there's a sweet spot...

                  – Anon
                  Jul 4 '18 at 18:33













                25



















                bonnie++ is the ultimate benchmark utility I know of for Linux.

                (I'm currently preparing a Linux live CD at work with bonnie++ on it, to test our Windows-based machine with it!)

                It takes care of caching, syncing, random data, random locations on disk, small updates, large updates, reads, writes, etc. Comparing a USB key, a hard disk (rotational), a solid-state drive and a RAM-based filesystem can be very informative for a newbie.

                I have no idea whether it is included in Ubuntu, but you can compile it from source easily.



                http://www.coker.com.au/bonnie++/
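                As far as I know, bonnie++ is packaged in Ubuntu nowadays, so a quick start could look like the following sketch (the test directory and the 4G size are placeholders; the size should comfortably exceed your RAM so that caching cannot mask the disk):

                sudo apt-get install bonnie++
                bonnie++ -d /tmp -s 4G -n 0 -u $(whoami)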






                edited Jul 30 '18 at 13:51









                0xF2

                answered Feb 3 '14 at 16:13









                Corto











                • 1





                  Bonnie is flawed for disk benchmarking and can easily generate numbers that actually reflect non-disk aspects of your system so a high degree of care is required if you choose to use it. See Brendan Gregg's Active Benchmarking: Bonnie++ for details.

                  – Anon
                  Jan 12 at 5:52













                22



















                Write speed



                $ dd if=/dev/zero of=./largefile bs=1M count=1024
                1024+0 records in
                1024+0 records out
                1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s


                The 1M block size used here is quite large. You can also try smaller sizes such as 64k or even 4k.
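                Also note that by default dd reports the speed as soon as the data has been handed to the page cache, and /dev/zero produces highly compressible data, so the result can come out too high. As a hedged variation (not part of the original answer) you can include the final flush in the timing, or bypass the page cache entirely:

                $ dd if=/dev/zero of=./largefile bs=1M count=1024 conv=fdatasync
                $ dd if=/dev/zero of=./largefile bs=1M count=1024 oflag=direct

                On drives or filesystems that compress or deduplicate data, writing from a pre-generated file of random data instead of /dev/zero avoids artificially inflated numbers.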




                Read speed



                Run the following command to clear the memory cache



                $ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"


                Now read back the file that was created in the write test:



                $ dd if=./largefile of=/dev/null bs=4k
                165118+0 records in
                165118+0 records out
                676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s





                answered May 5 '16 at 22:12









                Limon Monte











                • 1





                  Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...

                  – Anon
                  Nov 12 '18 at 7:35












                14



















                Some hints on how to use bonnie++:



                bonnie++ -d [TEST_LOCATION] -s [TEST_SIZE] -n 0 -m [TEST_NAME] -f -b -u [TEST_USER] 
                bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james


                A bit more at: SIMPLE BONNIE++ EXAMPLE.
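                For reference, here is my reading of those flags as an annotated sketch (double-check against man bonnie++; /mnt/target, the 8G size and the TEST label are placeholders):

                # -d    directory to create the test files in (a mount point on the disk under test)
                # -s    total size of the test data; pick a value well above your RAM
                # -n 0  skip the small-file creation/stat/delete tests
                # -m    label for this machine in the report
                # -f    fast mode: skip the slow per-character I/O tests
                # -b    no write buffering, i.e. fsync() after every write
                # -u    user to run the test as (required when started as root)
                bonnie++ -d /mnt/target -s 8G -n 0 -m TEST -f -b -u "$(whoami)"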






                    edited Sep 28 '15 at 19:51

























                    answered Sep 28 '15 at 19:02









                    nyxee

























                        1



















                        f3 - Fight Flash Fraud



                        Check integrity, detect fake flash drives and test performance, all three in one shot.



                        More on this answer.
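                        A typical run, as far as I know (f3 should be installable from the Ubuntu archives, and /media/usb is a placeholder for wherever the drive under test is mounted):

                        sudo apt-get install f3
                        f3write /media/usb
                        f3read /media/usb

                        f3write fills the free space with test files and reports the write speed; f3read then reads them back, verifies their contents and reports the read speed.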






                            edited Jun 6 at 20:40

























                            answered Mar 19 at 2:11









                            Pablo A


















