How to check hard disk performance
How can I check the performance of a hard drive, either via terminal or GUI? I want to measure the write speed, the read speed, cache size and speed, and random access speed.
Tags: hard-drive, performance
Similar questions have been asked on unix.stackexchange.com/questions/108838/… , stackoverflow.com/questions/1198691/… and serverfault.com/questions/219739/… .
– Anon
Jun 22 '18 at 5:16
asked Dec 12 '11 at 0:22 by Luis Alvarado, edited Dec 11 '12 at 5:22
9 Answers
Terminal method
hdparm is a good place to start:
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec
sudo hdparm -v /dev/sda
will give information as well.
dd will give you information on write speed.
If the drive doesn't have a file system (and only then), use of=/dev/sda.
Otherwise, mount it on /tmp, write a test output file, and then delete it:
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s
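Note that /tmp is often a tmpfs (RAM-backed) mount on modern systems, so a result there can reflect memory rather than the disk, and zeroed input can be compressed away by some SSDs and filesystems (both points are raised in the comments on this answer). A minimal sketch, assuming the current directory sits on the disk under test, that flushes to disk and compares zeroed against random input as the comments suggest; note that /dev/urandom itself can be the bottleneck on fast drives:
dd if=/dev/zero of=./ddtest bs=8k count=10k conv=fdatasync; rm -f ./ddtest
dd if=/dev/urandom of=./ddtest bs=8k count=10k conv=fdatasync; rm -f ./ddtest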
Graphical method
- Go to System -> Administration -> Disk Utility. Alternatively, launch the Gnome disk utility from the command line by running gnome-disks.
- Select your hard disk in the left pane.
- Click the “Benchmark – Measure Drive Performance” button in the right pane.
- A new window with charts opens. You will find two buttons, one for “Start Read Only Benchmark” and another for “Start Read/Write Benchmark”. Clicking either one starts benchmarking the hard disk.
See also: How to benchmark disk I/O (article).
Is there something more you want?
I would recommend testing /dev/urandom as well as /dev/zero as inputs to dd when testing an SSD, as the compressibility of the data can have a massive effect on write speed.
– Ian Mackinnon
Nov 8 '12 at 16:23
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of us who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
The /tmp filesystem is often using a ramdisk these days, so writing to /tmp would seem to be testing your memory, not your disk subsystem.
– Zoredache
Mar 27 '14 at 16:44
Suominen is right that we should use some kind of sync, but there is a simpler method: conv=fdatasync will do the job:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s
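A hedged alternative, assuming the filesystem supports O_DIRECT and the test file lives on the disk under test rather than a tmpfs-backed /tmp, is to bypass the page cache for every block instead of flushing once at the end; this usually gives a lower but steadier figure:
dd if=/dev/zero of=./output bs=384k count=1k oflag=direct; rm -f ./output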
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
@Diego There is no particular reason; it was just an example. You can use anything else (between about 4k and 1M). Of course a bigger block size will give better performance, and of course you should decrease the count when you use a big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
It's not reliable; benchmark tools like iozone and sysbench give much, much lower numbers.
– MSS
Aug 27 '16 at 10:00
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
I would not recommend using /dev/urandom, because it's software-based and slow as a pig. Better to take a chunk of random data from a ramdisk. Random data doesn't matter for hard disk testing, because every byte is written as-is (also on an SSD with dd). But if we test a deduplicated ZFS pool with pure zero versus random data, there is a huge performance difference.
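A minimal sketch of that ramdisk approach, assuming /dev/shm is a tmpfs mount with enough free space and that the current directory sits on the disk under test:
# Stage incompressible data in RAM first, so /dev/urandom's own speed is not measured.
head -c 256M /dev/urandom > /dev/shm/randomdata
# Write the staged data to the disk under test, flushing before dd reports the rate.
dd if=/dev/shm/randomdata of=testfile bs=1M conv=fdatasync
rm -f /dev/shm/randomdata testfile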
Another point to consider is including the sync time: all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can easily be done by:
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
With that method you get output like this:
sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s
real 0m0.441s
user 0m0.004s
sys 0m0.124s
So the disk data rate is just 104857600 / 0.441 = 237772335 B/s --> 237 MB/s.
That is over 100MB/s lower than with caching.
Happy benchmarking,
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
If you want to monitor the disk read and write speed in real time, you can use the iotop tool.
This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much like top.
To install iotop:
sudo apt-get install iotop
To run it:
sudo iotop
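iotop can also be run non-interactively, which is handy for logging; a small sketch using its standard options (-o shows only processes actually doing I/O, -b is batch mode, -n limits the number of iterations, -d sets the delay in seconds between them):
sudo iotop -o -b -n 10 -d 2 > iotop.log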
If you want accuracy, you should use fio. It requires reading the manual (man fio) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Mixed random 4K read and write QD1 with sync (this is the worst-case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is near empty, using a file big enough to almost fill the drive will get you the worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.
However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that is used as a write cache, and the performance changes once the SLC cache is full. As another example, Seagate SMR HDDs have a PMR cache area of about 20 GB that has pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original, and the only way to see this performance degradation is to first write 20+ GB as fast as possible. Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of IO, you need to increase both the --io_size and --runtime parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if any device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case.
In addition, some high-quality SSD devices may have even more intelligent wear-leveling algorithms, where the internal SLC cache has enough smarts to replace data in place that is being re-written during the test if it hits the same address space (that is, the test file is smaller than the total SLC cache). For such devices, the file size starts to matter again. If you care about your actual workload, it's best to test with file sizes that you'll actually see in real life; otherwise your numbers may look too good.
Note that fio will create the required temporary file on first run. It will be filled with random data to avoid getting too-good numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat in the above examples and stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test.
If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency for the reads and writes. The above examples all have numjobs set to 1, so the test is about a single-threaded process reading and writing (possibly with a queue set with iodepth). High-end SSDs (e.g. Intel Optane) should get high numbers even without increasing numjobs much (e.g. 4 should be enough to get the highest spec numbers), but some "Enterprise" SSDs require going to 32-128 to get the spec numbers, because the internal latency of those devices is higher but the overall throughput is insane.
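The same options can also be kept in an fio job file instead of a long command line; a minimal sketch (the file and section names are arbitrary), equivalent to the random 4K read test above, run with fio disk-test.fio:
# disk-test.fio
[global]
filename=fio-tempfile.dat
size=500m
io_size=10g
blocksize=4k
ioengine=libaio
fsync=1
iodepth=1
direct=1
numjobs=1
runtime=60
group_reporting

[randread-qd1]
rw=randread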
I just re-tested some devices. Using the above sequential read test (2 MB block size) I got 280 MB/s from a Samsung SSD 850 EVO and 1070 MB/s from an Intel 910 SSD. With a 64k block size and an otherwise identical command line I got 268 MB/s from the 850 EVO and 1055 MB/s from the 910 SSD. At least for this kind of device, using a 2 MB block size seems to improve results by around 1-5% even though it causes the kernel to split requests to the hardware. I guess even with kernel optimizations the overhead of submitting more syscalls is worse than splitting inside the kernel.
– Mikko Rantalainen
Jun 26 '18 at 7:07
Upon further testing it seems that I get the highest sequential throughput using a power-of-2 value that is less than max_sectors_kb. I changed the above example commands to use a 1 MB block size because that seems to work with real-world hardware. And I also tested that fsync does not matter for reading.
– Mikko Rantalainen
Jun 26 '18 at 7:18
Depending on how the drive is connected you may find that your iodepth was too low. You would have to watch what Linux is actually sending down to the device and what depth it's doing it at...
– Anon
Jun 26 '18 at 20:28
I set iodepth to 1 for random access exactly because real-world programs often run algorithms/logic that does not work with any depth higher than 1. As a result, if such a depth is "too low", your I/O device is bad. It's true that some SSD devices will benefit from a depth higher than 32. However, can you point to any real-world workload that requires read access and is able to keep the iodepth higher than 32? TL;DR: if you want to reproduce some insanely high read benchmark number with a high-latency device, use iodepth=256 --numjobs=4, but never expect to see such numbers for real.
– Mikko Rantalainen
Jul 3 '18 at 7:46
Most "real world" programs aren't actually submitting I/O (o_)directly let alone asynchronously so all of our examples are in unusual workloads to push the limits benchmark territory (as they say, the best benchmark is your real workload). Having said that doing things like running multiple busy virtual machines are easily able to generate workloads with crazy high depths but where the I/O often looks random from the disk perspective and is a simple example of where you can see a huge speedup from things like NVMe. PS: setting numbers too high will reduce throughput so there's a sweet spot...
– Anon
Jul 4 '18 at 18:33
bonnie++ is the ultimate benchmark utility I know of for Linux.
(I'm currently preparing a Linux live CD at work with bonnie++ on it to test our Windows-based machines with it!)
It takes care of caching, syncing, random data, random locations on disk, small-size updates, large updates, reads, writes, etc. Comparing a USB key, a hard disk (rotational), a solid-state drive and a RAM-based filesystem can be very informative for the newbie.
I have no idea if it is included in Ubuntu, but you can compile it from source easily.
http://www.coker.com.au/bonnie++/
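For what it's worth, bonnie++ has long been available in the Ubuntu repositories, so compiling from source is usually unnecessary:
sudo apt-get install bonnie++
See the answer further down for example invocations.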
Bonnie is flawed for disk benchmarking and can easily generate numbers that actually reflect non-disk aspects of your system so a high degree of care is required if you choose to use it. See Brendan Gregg's Active Benchmarking: Bonnie++ for details.
– Anon
Jan 12 at 5:52
Write speed
$ dd if=/dev/zero of=./largefile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s
The block size here is actually quite large; you can try smaller sizes like 64k or even 4k.
Read speed
Run the following command to clear the memory cache
$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
Now read the file that was created in the write test:
$ dd if=./largefile of=/dev/null bs=4k
165118+0 records in
165118+0 records out
676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s
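Putting the two halves together, a small sketch that runs the write test, drops the caches, and reads the file back in one go (the file name and sizes are arbitrary):
#!/bin/sh
# Write test: 1 GiB of zeros, flushed to disk before dd reports the rate.
dd if=/dev/zero of=./largefile bs=1M count=1024 conv=fdatasync
# Drop the page cache so the read test actually hits the disk.
sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
# Read test.
dd if=./largefile of=/dev/null bs=1M
rm -f ./largefile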
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:35
Some hints on how to use bonnie++:
bonnie++ -d [TEST_LOCATION] -s [TEST_SIZE] -n 0 -m [TEST_NAME] -f -b -u [TEST_USER]
bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james
A bit more at: SIMPLE BONNIE++ EXAMPLE.
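bonnie++ also ships a bon_csv2html helper that turns its machine-readable (CSV) output into an HTML table, which is handy for comparing runs; a sketch along the lines of the command above, assuming -q is added so only the CSV goes to stdout:
bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james -q | bon_csv2html > bonnie-results.html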
f3 - Fight Flash Fraud
Check integrity, detect fake flash drives and test performance, all three in one shot.
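A minimal sketch of how f3 is typically used, assuming the drive under test is mounted at /media/usb and the f3 package from the Ubuntu repositories is installed:
sudo apt-get install f3
f3write /media/usb/   # fills the free space with test files, reporting write speed
f3read /media/usb/    # reads them back, reporting read speed and any corruption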
9 Answers
9
active
oldest
votes
9 Answers
9
active
oldest
votes
active
oldest
votes
active
oldest
votes
Terminal method
hdparm
is a good place to start.
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec
sudo hdparm -v /dev/sda
will give information as well.
dd
will give you information on write speed.
If the drive doesn't have a file system (and only then), use of=/dev/sda
.
Otherwise, mount it on /tmp and write then delete the test output file.
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s
Graphical method
- Go to System -> Administration -> Disk Utility.
- Alternatively, launch the Gnome disk utility from the command line by running
gnome-disks
- Alternatively, launch the Gnome disk utility from the command line by running
- Select your hard disk at left pane.
- Now click “Benchmark – Measure Drive Performance” button in right pane.
- A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.
How to benchmark disk I/O
Article
Is there something more you want?
11
I would recommend testing/dev/urandom
as well as/dev/zero
as inputs todd
when testing an SSD as the compressibility of the data can have a massive effect on write speed.
– Ian Mackinnon
Nov 8 '12 at 16:23
3
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
6
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
1
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
2
The/tmp
filesystem is often using a ramdisk these days. So writing to/tmp
would seem to be testing your memory, not your disk subsystem.
– Zoredache
Mar 27 '14 at 16:44
|
show 10 more comments
Terminal method
hdparm
is a good place to start.
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec
sudo hdparm -v /dev/sda
will give information as well.
dd
will give you information on write speed.
If the drive doesn't have a file system (and only then), use of=/dev/sda
.
Otherwise, mount it on /tmp and write then delete the test output file.
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s
Graphical method
- Go to System -> Administration -> Disk Utility.
- Alternatively, launch the Gnome disk utility from the command line by running
gnome-disks
- Alternatively, launch the Gnome disk utility from the command line by running
- Select your hard disk at left pane.
- Now click “Benchmark – Measure Drive Performance” button in right pane.
- A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.
How to benchmark disk I/O
Article
Is there something more you want?
11
I would recommend testing/dev/urandom
as well as/dev/zero
as inputs todd
when testing an SSD as the compressibility of the data can have a massive effect on write speed.
– Ian Mackinnon
Nov 8 '12 at 16:23
3
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
6
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
1
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
2
The/tmp
filesystem is often using a ramdisk these days. So writing to/tmp
would seem to be testing your memory, not your disk subsystem.
– Zoredache
Mar 27 '14 at 16:44
|
show 10 more comments
Terminal method
hdparm
is a good place to start.
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec
sudo hdparm -v /dev/sda
will give information as well.
dd
will give you information on write speed.
If the drive doesn't have a file system (and only then), use of=/dev/sda
.
Otherwise, mount it on /tmp and write then delete the test output file.
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s
Graphical method
- Go to System -> Administration -> Disk Utility.
- Alternatively, launch the Gnome disk utility from the command line by running
gnome-disks
- Alternatively, launch the Gnome disk utility from the command line by running
- Select your hard disk at left pane.
- Now click “Benchmark – Measure Drive Performance” button in right pane.
- A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.
How to benchmark disk I/O
Article
Is there something more you want?
Terminal method
hdparm
is a good place to start.
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec
sudo hdparm -v /dev/sda
will give information as well.
dd
will give you information on write speed.
If the drive doesn't have a file system (and only then), use of=/dev/sda
.
Otherwise, mount it on /tmp and write then delete the test output file.
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s
Graphical method
- Go to System -> Administration -> Disk Utility.
- Alternatively, launch the Gnome disk utility from the command line by running
gnome-disks
- Alternatively, launch the Gnome disk utility from the command line by running
- Select your hard disk at left pane.
- Now click “Benchmark – Measure Drive Performance” button in right pane.
- A new window with charts opens.You will find and two buttons. One is for “Start Read Only Benchmark” and another one is “Start Read/Write Benchmark”. When you click on anyone button it starts benchmarking of hard disk.
How to benchmark disk I/O
Article
Is there something more you want?
edited Aug 2 at 20:06
Community♦
1
1
answered Dec 12 '11 at 0:34
PantherPanther
84.4k14 gold badges164 silver badges265 bronze badges
84.4k14 gold badges164 silver badges265 bronze badges
11
I would recommend testing/dev/urandom
as well as/dev/zero
as inputs todd
when testing an SSD as the compressibility of the data can have a massive effect on write speed.
– Ian Mackinnon
Nov 8 '12 at 16:23
3
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
6
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
1
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
2
The/tmp
filesystem is often using a ramdisk these days. So writing to/tmp
would seem to be testing your memory, not your disk subsystem.
– Zoredache
Mar 27 '14 at 16:44
|
show 10 more comments
11
I would recommend testing/dev/urandom
as well as/dev/zero
as inputs todd
when testing an SSD as the compressibility of the data can have a massive effect on write speed.
– Ian Mackinnon
Nov 8 '12 at 16:23
3
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
6
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
1
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
2
The/tmp
filesystem is often using a ramdisk these days. So writing to/tmp
would seem to be testing your memory, not your disk subsystem.
– Zoredache
Mar 27 '14 at 16:44
11
11
I would recommend testing
/dev/urandom
as well as /dev/zero
as inputs to dd
when testing an SSD as the compressibility of the data can have a massive effect on write speed.– Ian Mackinnon
Nov 8 '12 at 16:23
I would recommend testing
/dev/urandom
as well as /dev/zero
as inputs to dd
when testing an SSD as the compressibility of the data can have a massive effect on write speed.– Ian Mackinnon
Nov 8 '12 at 16:23
3
3
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
There is no such "System ->" on my Ubuntu 12.04 Unity. Or at least I haven't found it. And I do not see that disk tool neither within System Settings... O_o But I finallly managed to run it: /usr/bin/palimpsest
– Fran Marzoa
Nov 30 '12 at 22:00
6
6
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
Note that since 12.10 it's simply called Disks and can be found through Unity.
– Paul Lammertsma
Feb 14 '14 at 11:18
1
1
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
On Gnome this has moved to Applications --> System Tools --> Preferences --> Disk Utility. For those of use who hate Unity.
– Ken Sharp
Mar 13 '14 at 15:12
2
2
The
/tmp
filesystem is often using a ramdisk these days. So writing to /tmp
would seem to be testing your memory, not your disk subsystem.– Zoredache
Mar 27 '14 at 16:44
The
/tmp
filesystem is often using a ramdisk these days. So writing to /tmp
would seem to be testing your memory, not your disk subsystem.– Zoredache
Mar 27 '14 at 16:44
|
show 10 more comments
Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s
28
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
2
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
1
@Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower
– MSS
Aug 27 '16 at 10:00
1
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
|
show 1 more comment
Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s
28
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
2
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
1
@Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower
– MSS
Aug 27 '16 at 10:00
1
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
|
show 1 more comment
Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s
Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s
answered Aug 18 '13 at 18:31
TeleTele
1,2319 silver badges6 bronze badges
1,2319 silver badges6 bronze badges
28
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
2
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
1
@Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower
– MSS
Aug 27 '16 at 10:00
1
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
|
show 1 more comment
28
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
2
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
1
@Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower
– MSS
Aug 27 '16 at 10:00
1
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
28
28
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
It's an answer using a different command/option than the others. I see it's an answer worthy of a post of its own.
– Alaa Ali
Aug 18 '13 at 19:01
2
2
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
Why have you used 384k as block size?
– Diego F. Durán
Jun 2 '14 at 14:39
1
1
@Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
@Diego There is no reason. It was just an example. You can use anything else. (between about 4k ... 1M ) Of course bigger blocksize will give better performance. And of course decrease the count number when you use big bs, or it will take a year to finish.
– Tele
Jul 25 '14 at 0:17
it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower
– MSS
Aug 27 '16 at 10:00
it's not reliable by bench mark tools like iozone and sysbench numbers are much much lower
– MSS
Aug 27 '16 at 10:00
1
1
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:38
|
show 1 more comment
I would not recommend using /dev/urandom
because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.
Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
with that method you get output:
sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s
real 0m0.441s
user 0m0.004s
sys 0m0.124s
so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s
That is over 100MB/s lower than with caching.
Happy benchmarking,
3
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
add a comment
|
I would not recommend using /dev/urandom
because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.
Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
with that method you get output:
sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s
real 0m0.441s
user 0m0.004s
sys 0m0.124s
so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s
That is over 100MB/s lower than with caching.
Happy benchmarking,
3
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
add a comment
|
I would not recommend using /dev/urandom
because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.
Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
with that method you get output:
sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s
real 0m0.441s
user 0m0.004s
sys 0m0.124s
so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s
That is over 100MB/s lower than with caching.
Happy benchmarking,
I would not recommend using /dev/urandom
because it's software based and slow as pig. Better to take chunk of random data on ramdisk. On hard disk testing random doesn't matter, because every byte is written as is (also on ssd with dd). But if we test dedupped zfs pool with pure zero or random data, there is huge performance difference.
Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
with that method you get output:
sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s
real 0m0.441s
user 0m0.004s
sys 0m0.124s
so the disk datarate is just 104857600 / 0.441 = 237772335 B/s --> 237MB/s
That is over 100MB/s lower than with caching.
Happy benchmarking,
answered Dec 6 '12 at 23:18
Pasi SuominenPasi Suominen
5014 silver badges2 bronze badges
5014 silver badges2 bronze badges
3
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
add a comment
|
3
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
3
3
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
Be careful with using zeros for your write data - some disks (such as SSDs) and some filesystems will have a special case path for it. This results in artificially high benchmark numbers when using zero buffers. Other highly compressible data patterns can also distort results...
– Anon
Nov 12 '18 at 7:40
add a comment
|
If you want to monitor the disk read and write speed real-time you can use the iotop tool.
This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top
.
To install iotop:
sudo apt-get install iotop
To run it:
sudo iotop
add a comment
|
If you want to monitor the disk read and write speed real-time you can use the iotop tool.
This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top
.
To install iotop:
sudo apt-get install iotop
To run it:
sudo iotop
add a comment
|
If you want to monitor the disk read and write speed real-time you can use the iotop tool.
This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top
.
To install iotop:
sudo apt-get install iotop
To run it:
sudo iotop
If you want to monitor the disk read and write speed real-time you can use the iotop tool.
This is useful to get exact information about how a disk performs for a particular application or task. The output will show you read/write speed per process, and total read/write speed for the server, much similar to top
.
To install iotop:
sudo apt-get install iotop
To run it:
sudo iotop
edited Oct 6 '15 at 20:28
answered Sep 17 '13 at 14:24
LarsLars
4614 silver badges6 bronze badges
4614 silver badges6 bronze badges
add a comment
|
add a comment
|
If you want accuracy, you should use fio
. It requires reading the manual (man fio
) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Increase the --size
argument to increase the file size. Using bigger files may reduce the numbers you get depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is near empty, using file big enough to almost fill the drive will get you the worst case behavior for each test. In case of SSD, the file size does not matter that much.
However, note that for some storage media the size of the file is not as important as total bytes written during short time period. For example, some SSDs may have significantly faster performance with pre-erased blocks or it might have small SLC flash area that's used as write cache and the performance changes once SLC cache is full. As an another example, Seagate SMR HDDs have about 20 GB PMR cache area that has pretty high performance but once it gets full, writing directly to SMR area may cut the performance to 10% from the original. And the only way to see this performance degration is to first write 20+ GB as fast as possible. Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean the internal cache, shorter test sequences will reflect your real world performance better. If you need to do lots of IO, you need to increase both --io_size
and --runtime
parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if any device is poor enough not to handle this kind of testing, it should not be used to hold any valueable data in any case.
In addition, some high quality SSD devices may have even more intelligent wear leveling algorithms where internal SLC cache has enough smarts to replace data in place that is being re-written during the test if it hits the same address space (that is, test file is smaller than total SLC cache). For such devices, the file size starts to matter again. If you need your actual workload it's best to test with file sizes that you'll actually see in real life. Otherwise your numbers may look too good.
Note that fio
will create the required temporary file on first run. It will be filled with random data to avoid getting too good numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat
in above examples and stored in current working directory. So you should first change to directory that is mounted on the device you want to test.
If you have a good SSD and want to see even higher numbers, increase --numjobs
above. That defines the concurrency for the reads and writes. The above examples all have numjobs
set to 1
so the test is about single threaded process reading and writing (possibly with a queue set with iodepth
). High end SSDs (e.g. Intel Optane) should get high numbers even without increasing numjobs
a lot (e.g. 4
should be enough to get the highest spec numbers) but some "Enterprise" SSDs require going to 32
-128
to get the spec numbers because the internal latency of those devices is higher but the overall throughput is insane.
1
I just re-tested some devices. Using above sequential read test (2MB block size) I got 280 MB/s from Samsung SSD 850 EVO and 1070 MB/s from Intel 910 SSD. With 64k block size and otherwise identical commandline I got 268 MB/s from 850 EVO and 1055 MB/s from 910 SSD. At least for this kind of devices, using 2 MB block size seems to improve results around 1-5% even though it causes kernel to split requests to hardware. I guess even with kernel optimizations the overhead of submitting more syscalls is worse than splitting inside kernel.
– Mikko Rantalainen
Jun 26 '18 at 7:07
1
Upon further testing it seems that I get the highest sequential throughput using power of 2 value that is less thanmax_sectors_kb
. I changed the above example commands to use 1 MB block size because that seems to work with real world hardware. And I also tested thatfsync
does not matter for reading.
– Mikko Rantalainen
Jun 26 '18 at 7:18
1
Depending on how the drive is connected you may find that your iodepth was too low. You would have to watch what Linux is actually sending down to the device and what depth it's doing it at...
– Anon
Jun 26 '18 at 20:28
1
I setiodepth
to1
for random access exactly because real world programs often run algorithms/logic that does not work with depth any higher than 1. As a result, if such depth is "too low" your I/O device is bad. It's true that some SSD devices will benefit from depth higher than 32. However, can you point to any real world workload that requires read access and is able to keep up iodepth higher than 32? TL; DR: if you want to reproduce some insanely high read benchmark number with high latency device, useiodepth=256 --numjobs=4
but never expect to see such numbers for real.
– Mikko Rantalainen
Jul 3 '18 at 7:46
1
Most "real world" programs aren't actually submitting I/O (o_)directly let alone asynchronously so all of our examples are in unusual workloads to push the limits benchmark territory (as they say, the best benchmark is your real workload). Having said that doing things like running multiple busy virtual machines are easily able to generate workloads with crazy high depths but where the I/O often looks random from the disk perspective and is a simple example of where you can see a huge speedup from things like NVMe. PS: setting numbers too high will reduce throughput so there's a sweet spot...
– Anon
Jul 4 '18 at 18:33
|
show 11 more comments
If you want accuracy, you should use fio
. It requires reading the manual (man fio
) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
If you want accuracy, you should use fio. It requires reading the manual (man fio) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Mixed random 4K read and write QD1 with sync (this is the worst-case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is nearly empty, using a file big enough to almost fill the drive will get you the worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.
However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that is used as a write cache, and the performance changes once the SLC cache is full. As another example, Seagate SMR HDDs have about a 20 GB PMR cache area that has pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original. And the only way to see this performance degradation is to first write 20+ GB as fast as possible. Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of IO, increase both the --io_size and --runtime parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if any device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case.
In addition, some high-quality SSD devices may have even more intelligent wear-leveling algorithms, where the internal SLC cache is smart enough to replace data in place while it is being re-written during the test, as long as the test hits the same address space (that is, the test file is smaller than the total SLC cache). For such devices, the file size starts to matter again. If you care about your actual workload, it's best to test with file sizes that you'll actually see in real life; otherwise your numbers may look too good.
Note that fio will create the required temporary file on first run. It will be filled with random data to avoid getting too-good numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat in the above examples and stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test.
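For example, to test a drive mounted at /mnt/data (a hypothetical mount point used here only for illustration), the sequence could look like this; the temporary file can be deleted once you are done:
# change to a directory on the device you want to test (example mount point)
cd /mnt/data
# run one of the fio commands above, e.g. the random 4K read QD1 test
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
# remove the temporary test file afterwards
rm fio-tempfile.dat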
If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency for the reads and writes. The above examples all have numjobs set to 1, so the test is about a single-threaded process reading and writing (possibly with a queue depth set with iodepth). High-end SSDs (e.g. Intel Optane) should get high numbers even without increasing numjobs much (e.g. 4 should be enough to get the highest spec numbers), but some "Enterprise" SSDs require going to 32-128 to reach the spec numbers, because the internal latency of those devices is higher while their overall throughput is very high.
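As a rough sketch (not part of the original examples), the random 4K read test could be scaled up for a fast NVMe SSD like this; the numjobs and iodepth values here are assumptions and the sweet spot depends on the device, so treat them only as a starting point:
# random 4K read with more parallelism for fast SSDs; fsync is omitted because it does not matter for reads
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting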
edited Aug 14 at 6:26
answered Jan 1 '18 at 18:14
Mikko Rantalainen
I just re-tested some devices. Using the above sequential read test (2 MB block size) I got 280 MB/s from a Samsung SSD 850 EVO and 1070 MB/s from an Intel 910 SSD. With a 64k block size and an otherwise identical command line I got 268 MB/s from the 850 EVO and 1055 MB/s from the 910 SSD. At least for these kinds of devices, using a 2 MB block size seems to improve results by around 1-5% even though it causes the kernel to split requests to the hardware. I guess even with kernel optimizations the overhead of submitting more syscalls is worse than splitting inside the kernel.
– Mikko Rantalainen
Jun 26 '18 at 7:07
Upon further testing it seems that I get the highest sequential throughput using a power-of-2 value that is less than max_sectors_kb. I changed the above example commands to use a 1 MB block size because that seems to work with real-world hardware. I also tested that fsync does not matter for reading.
– Mikko Rantalainen
Jun 26 '18 at 7:18
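If you want to check this limit on your own system, it is exposed in sysfs (replace sda with your device name):
cat /sys/block/sda/queue/max_sectors_kb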
Depending on how the drive is connected, you may find that your iodepth was too low. You would have to watch what Linux is actually sending down to the device and what depth it's doing it at...
– Anon
Jun 26 '18 at 20:28
I set iodepth to 1 for random access exactly because real-world programs often run algorithms/logic that does not work with a depth any higher than 1. As a result, if such a depth is "too low", your I/O device is bad. It's true that some SSD devices will benefit from a depth higher than 32. However, can you point to any real-world workload that requires read access and is able to keep an iodepth higher than 32? TL;DR: if you want to reproduce some insanely high read benchmark number with a high-latency device, use iodepth=256 --numjobs=4, but never expect to see such numbers for real.
– Mikko Rantalainen
Jul 3 '18 at 7:46
Most "real world" programs aren't actually submitting I/O (o_)directly let alone asynchronously so all of our examples are in unusual workloads to push the limits benchmark territory (as they say, the best benchmark is your real workload). Having said that doing things like running multiple busy virtual machines are easily able to generate workloads with crazy high depths but where the I/O often looks random from the disk perspective and is a simple example of where you can see a huge speedup from things like NVMe. PS: setting numbers too high will reduce throughput so there's a sweet spot...
– Anon
Jul 4 '18 at 18:33
bonnie++ is the best benchmark utility I know for Linux.
(I'm currently preparing a Linux live CD at work with bonnie++ on it to test our Windows-based machine!)
It takes care of caching, syncing, random data, random locations on disk, small updates, large updates, reads, writes, etc. Comparing a USB key, a hard disk (rotational), a solid-state drive and a RAM-based filesystem can be very informative for a newbie.
It is available in Ubuntu's repositories (package bonnie++), and you can also compile it from source easily.
http://www.coker.com.au/bonnie++/
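On Ubuntu, bonnie++ should be installable straight from the repositories; otherwise it can be built from the source linked above (a typical autoconf-style build is assumed here):
# install the packaged version
sudo apt-get install bonnie++
# or build from the downloaded source tarball
./configure && make && sudo make install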
edited Jul 30 '18 at 13:51
0xF2
answered Feb 3 '14 at 16:13
Corto
Bonnie is flawed for disk benchmarking and can easily generate numbers that actually reflect non-disk aspects of your system, so a high degree of care is required if you choose to use it. See Brendan Gregg's Active Benchmarking: Bonnie++ for details.
– Anon
Jan 12 at 5:52
Write speed
$ dd if=/dev/zero of=./largefile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s
The block size here is quite large; you can try smaller sizes like 64k or even 4k.
Read speed
Run the following command to clear the memory cache:
$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
Now read the file which was created in write test:
$ dd if=./largefile of=/dev/null bs=4k
165118+0 records in
165118+0 records out
676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s
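Note that dd by default reports the speed as soon as the data has been handed to the page cache, not when it has reached the disk. Adding conv=fdatasync makes dd flush the data before reporting, and oflag=direct bypasses the page cache entirely (not supported on every filesystem); a sketch of a more cache-resistant write test:
# flush to disk before the timing is reported
$ dd if=/dev/zero of=./largefile bs=1M count=1024 conv=fdatasync
# or bypass the page cache entirely
$ dd if=/dev/zero of=./largefile bs=1M count=1024 oflag=direct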
answered May 5 '16 at 22:12
Limon Monte
Be careful with using zeros for your write data - some filesystems and disks will have a special case path for it (and other compressible data) which will cause artificially high benchmark numbers...
– Anon
Nov 12 '18 at 7:35
Some hints on how to use bonnie++:
bonnie++ -d [TEST_LOCATION] -s [TEST_SIZE] -n 0 -m [TEST_NAME] -f -b -u [TEST_USER]
bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james
A bit more at: SIMPLE BONNIE++ EXAMPLE.
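bonnie++ also prints its results as a CSV line at the end of the run; the bon_csv2html helper that ships with bonnie++ can turn that line into an HTML report (the pipeline below is a sketch and assumes the CSV line is the last line of output):
bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james | tail -n 1 | bon_csv2html > result.html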
edited Sep 28 '15 at 19:51
answered Sep 28 '15 at 19:02
nyxee
f3 - Fight Flash Fraud
Check integrity, detect fake flash drives and test performance, all three in one shot.
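In practice f3 is used through two commands: f3write fills the mounted drive with test files and reports the write speed, and f3read reads them back, checks their integrity and reports the read speed (the mount point below is just an example):
# fill the drive with test files and measure write speed
f3write /media/username/usbdrive/
# read the files back, verify them and measure read speed
f3read /media/username/usbdrive/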
edited Jun 6 at 20:40
answered Mar 19 at 2:11
Pablo A
A similar question has been asked on unix.stackexchange.com/questions/108838/…, stackoverflow.com/questions/1198691/… and serverfault.com/questions/219739/….
– Anon
Jun 22 '18 at 5:16