I’ve not had much exposure to HP EVA storage, but recently I’ve had a need (as part of a software tool project) to get into the depths of EVA and understand how it all works. The following is my understanding as I see it, plus some comments of my own. I’d be grateful for any feedback which helps improve my enlightenment or equally, knocks me back for plain stupidity!
So, here goes. EVA arrays place disks into disk groups. The EVA system automagically sub-groups the disks into redundant storage sets (RSS). An RSS is simply a logical grouping of disks rather than some RAID implementation as there’s no underlying RAID deployment at the disk level.
Within each disk group, it is possible to assign a protection level. This figure is “none”, “one” or “two”, indicating the amount of storage to reserve for disk failure rebuilds. The figure doesn’t represent an actual disk, but rather an amount of disk capacity that will be reserved across the whole pool. So, setting “one” in a pool of 16 disks will reserve 1/16th of each disk for rebuilds.
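The arithmetic above can be sketched out quickly. This is a hypothetical calculation based on my reading of the protection-level behaviour, not an official EVA formula; the function name and parameters are my own:

```python
def reserved_per_disk(disk_size_gb: float, protection_level: int,
                      disks_in_group: int) -> float:
    """Capacity reserved on each disk for rebuilds.

    protection_level: 0 ("none"), 1 ("one") or 2 ("two"); my assumption
    is that each level reserves the equivalent of one disk's capacity,
    spread evenly across every disk in the group.
    """
    return disk_size_gb * protection_level / disks_in_group

# "One" in a pool of 16 x 300 GB disks reserves 1/16th of each disk:
print(reserved_per_disk(300, 1, 16))  # 18.75 GB per disk
```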
Now we get to LUNs themselves and it is at this point that RAID protection comes in. A LUN can be created in a group with either vRAID0 (no protection), vRAID1 (mirrored) or vRAID5 (RAID-5) protection. vRAID5 uses a RAID5 (4+1) configuration with 4-data and 1-parity.
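Based on the overheads just described, here is my rough sketch (an assumption, not vendor-documented behaviour) of the raw capacity each vRAID type would consume for a given usable LUN size: vRAID0 has no overhead, vRAID1 doubles it, and vRAID5 (4+1) adds a quarter for parity:

```python
# Multiplier from usable LUN capacity to raw disk capacity consumed.
OVERHEAD = {
    "vRAID0": 1.0,        # striping only, no protection
    "vRAID1": 2.0,        # every segment written twice (mirrored)
    "vRAID5": 5.0 / 4.0,  # 4 data + 1 parity per stripe
}

def raw_capacity_gb(lun_size_gb: float, vraid: str) -> float:
    """Raw capacity a LUN of the given usable size would consume."""
    return lun_size_gb * OVERHEAD[vraid]

print(raw_capacity_gb(100, "vRAID5"))  # 125.0 GB raw for 100 GB usable
```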
From the literature I’ve read and from playing with the EVA simulator, it appears that the EVA spreads a LUN across all physical disks within a disk group. I’ve tried to show this allocation in the diagram on the right, using a different colour for each protection type, within a disk pool of 16 drives.
The picture shows two RSSs and a LUN of each RAID protection type (vRAID0, vRAID1, vRAID5). Understanding vRAID0 is simple; the capacity of the LUN is striped across all physical disks, providing no protection against the loss of any disk within the configuration. In large disk groups, vRAID0 is clearly pointless as it will almost always lead to data loss in the event of a physical failure.
vRAID1 mirrors each segment of the LUN, which is striped across all disks twice, once for each mirror. I’ve shown these as A & B and assumed they will be allocated on separate RSS groups. In the event that a disk fails, a vRAID1 LUN can be recreated from the other mirror, using the spare space set aside on the remaining drives to achieve this.
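A toy model of the layout I’m assuming (mine, not EVA internals): each segment is stored twice, with the A and B copies on disks in different RSSs, so a failed disk’s segments can always be re-read from the surviving mirror:

```python
# segment id -> (disk holding mirror A, disk holding mirror B);
# the disk names are invented for illustration.
segments = {
    0: ("rss1-disk0", "rss2-disk0"),
    1: ("rss1-disk1", "rss2-disk1"),
}

def surviving_copy(segment: int, failed_disk: str) -> str:
    """Return a disk that still holds a good copy of the segment."""
    a, b = segments[segment]
    if failed_disk == a:
        return b
    return a  # B failed, or the failure didn't touch this segment

print(surviving_copy(0, "rss1-disk0"))  # rss2-disk0
```

A rebuild would then copy each surviving segment into the spare capacity reserved by the disk group’s protection level.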
Question: Does the EVA actively re-create failed mirrors immediately on failure of a physical disk? If so, does the EVA then actively rebuild the replaced disk once it has been swapped in?
Now, vRAID5, a little more tricky. My understanding is that EVA uses RAID-5 (4+1), so there will never be an even layout of data and parity stripes across the disk group. I haven’t shown it on the diagram, but I presume that as data is written to a vRAID5 LUN it is split into smaller chunks (I think 128KB) and striped across the physical disks. In this way, there will be as close to an even distribution of data and parity as possible. In the event of a disk failure, the lost data can be recreated from the other data and parity components that make up that stripe.
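The recovery principle here is the standard RAID-5 one: a lost chunk is the XOR of the remaining data and parity chunks in its stripe. A minimal illustration (the 8-byte chunk size is just for brevity; 128KB is only my guess at what the EVA actually uses):

```python
def xor_chunks(chunks):
    """XOR a list of equal-length byte chunks together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# A 4+1 stripe: four data chunks, parity is their XOR.
data = [bytes([d] * 8) for d in (1, 2, 3, 4)]
parity = xor_chunks(data)

# Lose data[2]; rebuild it from the surviving chunks plus parity.
rebuilt = xor_chunks([data[0], data[1], data[3], parity])
assert rebuilt == data[2]
```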
Question: What size block does the EVA use for RAID-5 stripes?
At this point, I’m not sure of the benefit of Redundant Storage Sets. They aren’t RAID groups, so there’s no inherent protection if a disk in an RSS fails. If LUNs are created within the same RSS, then perhaps this minimises the impact of a disk failure to just that group of disks; see the second diagram.
The upshot is that I think the technique of dispersing a LUN across all disks is good for performance but bad for availability, especially as it isn’t easy to see what the impact of a double disk failure can be. My assumption is that *all* data will be affected if a double disk failure occurs within the same RSS group. I may be wrong, but that doesn’t sound good.
Feel free to correct me if I’ve got any of this wrong!