So HDS’s announcement has turned out to be a complete disappointment. What it’s not:
- It’s not new hardware.
- It’s not providing more physical capacity.
- It’s not providing dynamic tiering.
- It’s not providing enhanced replication technology.
What is on offer is the ability to cluster USPs – a feature called Hitachi High Availability Manager. By "cluster", HDS means connecting two USP arrays together and having them work in an active-active configuration, with data replicated in either direction. This new feature seems to answer only one problem - how do I get off my USP?
Back in 2004 (I think it was), when I first sat with HDS for a presentation on the USP, the key question (especially with virtualisation) was how to cope with the fact that a single USP is a SPOF (Single Point of Failure). People in the room viewed the USP like a network switch - subject to failure, requiring upgrades and so on. HDS was at pains to say that clustering USPs simply wasn't necessary because the hardware was fully resilient. In fact, HDS have been at pains ever since to promise 100% availability of data on the USP. So why is clustering a USP now so necessary? How can we have higher than 100% availability? Let's not forget, the availability of any system is bounded by the availability of its least resilient component, so if we have a USP (with 100.00% availability) virtualising external storage (with 99.9% availability), the weak point is the external storage. Clustering USPs doesn't improve this and never will. All HDS have addressed with this offering is the migration issue. That isn't a new feature.
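The arithmetic behind that point is worth spelling out: for components in series, availabilities multiply, so the combined figure can never exceed the weakest link. A minimal sketch (the function name and figures are illustrative, using the 100% / 99.9% numbers above):

```python
def serial_availability(*availabilities: float) -> float:
    """Combined availability of components in series: the system is only
    up when every component is up, so the individual figures multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# A USP at a claimed 100% availability virtualising external storage at 99.9%:
combined = serial_availability(1.0, 0.999)
print(f"{combined:.3%}")  # → 99.900% - the external storage dominates
```

Adding a second USP in a cluster improves only the USP term, which was already the strongest in the chain; the 99.9% external storage still caps the result.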
Here are some questions that arise from the presentation and which weren’t answered:
- How far apart can my USP cluster arrays be?
- What’s the impact on latency?
- How is data integrity maintained?
- Does clustering also support TrueCopy, ShadowImage and COW Snapshots?
- Does clustering change my array World Wide Names, and if so, how?
- Can cluster arrays be at different microcode levels?
- Can clustered arrays act as TrueCopy secondary devices, and if so, what replication links are required?
- Do I need specific multipathing software?
So, you may be asking about the odd title of this post. Have a look here. It's what the dolphins said before they left Earth. Time to say goodbye to the USP-V as a player in the Enterprise array space.