A question I get asked occasionally about enterprise-class arrays is: “How many IOPS can my RAID group sustain?”
Obviously the first step is to determine the data profile; if it isn’t known, assume the I/O will be 100% random. If all I/O is random, each request requires a seek (moving the head to the correct cylinder on the disk) plus rotational latency (waiting for the disk to rotate to the start of the area to read), which averages 2ms for a 15K RPM drive. Taking the latest Seagate Cheetah 15K Fibre Channel drives, each has a read seek time of 3.4ms, for a total of 5.4ms per I/O, or 185 IOPS (1000/5.4). The same calculation for a Seagate SATA drive gives a worst-case throughput of 104 IOPS, roughly half that of the Fibre Channel drive.
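The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a sizing tool; the 3.4ms seek and 15K RPM figures are the ones quoted in the text, and the average rotational latency is half a revolution.

```python
def avg_rotational_latency_ms(rpm):
    # Average rotational latency is half a revolution: 0.5 * 60,000 ms per minute / rpm.
    # For a 15,000 RPM drive this works out to 2 ms.
    return 0.5 * 60_000 / rpm

def worst_case_iops(seek_ms, rpm):
    # Worst case: every random I/O pays a full seek plus average rotational latency.
    service_time_ms = seek_ms + avg_rotational_latency_ms(rpm)
    return 1000 / service_time_ms

# Seagate Cheetah 15K figures from the text: 3.4 ms read seek, 15,000 RPM.
fc_iops = worst_case_iops(3.4, 15_000)
print(round(fc_iops))  # 185
```

The same function applies to any drive whose seek time and spindle speed are known, which is why the comparison works across Fibre Channel and SATA.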
For a RAID-5 3+1 group of Fibre Channel drives, data is striped across all four spindles, so the group has a potential worst-case I/O throughput of 740 IOPS (4 × 185).
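Extending the sketch to the RAID group is just a multiplication, since worst-case read throughput scales with the number of spindles the data is striped across:

```python
def raid_group_worst_case_iops(per_drive_iops, spindles):
    # Reads in a RAID-5 3+1 group are striped across all four spindles,
    # so worst-case read throughput scales linearly with spindle count.
    return per_drive_iops * spindles

# 4 x 185 IOPS per 15K Fibre Channel drive, as derived above.
print(raid_group_worst_case_iops(185, 4))  # 740
```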
Clearly this is a “rule of thumb”: in practice, not every I/O is completely random, so not every I/O incurs the full seek and latency penalties. Enterprise arrays also have cache (as do the drives themselves) and plenty of clever algorithms to mask the limitations of the moving parts.
There are also plenty of other points of contention within the host-to-array stack which make this whole subject more complicated; however, when comparing drives of different speeds, a worst-case calculation gives a good indication of how they will perform relative to each other.
Incidentally, as I just mentioned, the latest Seagate 15K drives (146GB, 300GB and 460GB) all share the same performance characteristics, so tiering based on drive size isn’t that useful. The one exception is when high I/O throughput is required: with smaller drives, the same data must be spread across more spindles, increasing the available bandwidth. That’s why I think tiering should be done on drive speed, not size.