NetApp has introduced new AFF A-Series storage systems targeted at the high-performance, low-latency requirements of AI training and inference workloads. Is this an incremental evolution of the hardware, or something fundamentally different?