This week we’ve seen an update on NFS benchmarks, with Hitachi announcing their latest results (via Hu’s blog) using HUS File and FMDs (Flash Module Drives). Of course, Hitachi wouldn’t be announcing if they hadn’t beaten EMC on the figures; no-one releases data that makes them look worse, after all. But what are the data behind those numbers, and can they really be taken at face value?
The SPECsfs2008_nfs.v3 benchmark isn’t a simple IOPS measure but rather an NFS workload comparison, using a blend of common NFS commands. At a high level, results show a “throughput” figure and an overall response time (ORT) value in milliseconds. In their latest published results, Hitachi achieved a score of 607,647 at 0.89ms with a four-node system, and 298,648 at 0.59ms, the lowest response time for any system tested so far. At first glance, Hitachi beat EMC’s VNX8000, submitted the previous quarter. I’ve charted the figures from the last two years and added some additional metrics for comparison. As you can see from Figures 1 & 2, a Hitachi system did indeed beat the EMC array on both throughput and ORT; Hitachi should rightly be pleased.
But wait – surely on pure throughput, the Huawei Oceanstor and Avere FXT 3800 are better, with much higher figures? Yes, it’s true, but we have to question the configurations. Huawei used a 24-node system in their test; Avere used a 32-node system. So, let’s review the node counts and redo the figures as throughput per node (Figure 3). Now we see Hitachi much further ahead of EMC, who had to use eight X-blades to achieve their throughput, putting them well down the list. Using this metric, Avere and Huawei don’t look anywhere near as good, and Hitachi have three of the top four spots. This calculation also shows that Oracle’s ZFS appliance stands out. It may not be the fastest in absolute terms, but based on the number of nodes, the Oracle solution is a clear winner.
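The per-node normalisation above is simple arithmetic, but it's worth being explicit about it. A minimal sketch in Python: Hitachi's throughput and node count are quoted above, while the other systems' throughput values here are illustrative placeholders (the article references Figure 3 rather than quoting them), marked as such in the comments.

```python
# Normalise SPECsfs2008 throughput by node count.
# Hitachi's figures are quoted in the article; the other
# throughput values are illustrative placeholders only.
submissions = {
    "Hitachi (4-node)":          {"throughput": 607_647,   "nodes": 4},
    "EMC VNX8000 (8 X-blades)":  {"throughput": 500_000,   "nodes": 8},   # placeholder
    "Huawei Oceanstor (24-node)": {"throughput": 1_000_000, "nodes": 24},  # placeholder
    "Avere FXT 3800 (32-node)":  {"throughput": 1_500_000, "nodes": 32},  # placeholder
}

# Throughput per node: the metric behind Figure 3.
per_node = {name: s["throughput"] / s["nodes"] for name, s in submissions.items()}

for name, value in sorted(per_node.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:,.0f} ops/sec per node")
```

Even with generous placeholder figures for the multi-node systems, dividing by node count reorders the table dramatically, which is the point the normalisation makes.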
However, again we have to look at the specification. Figure 4 shows a graph of the presented system storage (TB) versus memory per system (GB). Oracle’s submitted solutions deploy masses of memory (as do Avere and Huawei), which clearly confers a distinct advantage in throughput for this test. What does this mean? Well, deploying additional memory or many nodes means additional cost. Some other solutions have hundreds of hard drives deployed, all of which takes up space, power and cooling.
For the sake of completeness, one graph where EMC does outperform Hitachi is throughput per TB of capacity deployed (Figure 5). This does have to be weighed against the number of controllers deployed, however, as raw storage alone can’t drive throughput.
The Architect’s View
Benchmark comparisons aren’t about the raw numbers. We need to be very careful to ensure the hardware comparisons are fair and genuine. They should be about a number of things:
- What the customer’s requirements are – whether that’s low latency or high throughput
- What the customer’s efficiency concerns are – power, space, cooling
- TCO – ultimately it’s all about how much these systems cost per TB and per IO
Unfortunately, costs (even list prices) aren’t included in these submissions, unlike the Storage Performance Council’s price-performance figures. This makes it easy to deploy a system with huge capacity that wins the performance test but would be a poor choice financially. Getting back to HDS, their systems performed well across all the tests. It would be good to see them and other vendors insisting that cost is factored into the calculations, as ultimately the price paid has a direct impact on customer behaviour.
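To make the TCO point concrete, here's a sketch of an SPC-style price-performance calculation in Python. The list prices used are entirely hypothetical (no vendor pricing appears in the SPECsfs submissions discussed above); only the 607,647 throughput figure comes from the article.

```python
# Sketch of an SPC-style price-performance metric: dollars per
# op/sec of delivered throughput. All prices are hypothetical.
def price_performance(list_price_usd: float, throughput_ops: float) -> float:
    """Return dollars per op/sec of delivered throughput."""
    return list_price_usd / throughput_ops

# A big-memory, many-node system can top the raw throughput chart
# yet still lose on cost per op/sec:
big_config = price_performance(2_000_000, 1_500_000)  # hypothetical price and throughput
small_config = price_performance(600_000, 607_647)    # hypothetical price, article throughput

print(f"{big_config:.2f} vs {small_config:.2f} $/op/sec")
```

The design choice here mirrors the SPC approach: dividing price by performance gives a single comparable figure, which is exactly the normalisation the SPECsfs results lack.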
- Hitachi NAS Architecture for HUS and HUS VM (11 OCTOBER 2013, HU’S BLOG, HDS.COM)
- Hitachi Performance Leadership, The Teddy Roosevelt Way (10 OCTOBER 2013, DATA CENTER ADVISORS BLOG, HDS.COM)
- HDS gives EMC and NetApp a good benchmark kicking (10 OCTOBER 2013, CHRIS MELLOR, THEREGISTER.CO.UK)
Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2013 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.