Flash Performance, Integrated AI Put Hitachi Vantara in Spotlight

By KC Phua, APAC Technical Director for Data Management & Protection, Hitachi Vantara

To improve productivity, manage risk and drive down costs, businesses are moving towards artificial intelligence (AI)-based applications that act on real-time data telemetry using machine-learning (ML) infrastructure. Some 40 percent of businesses are also expected to invest in predictive analytics, which, together with IoT data stream collection, generates huge volumes of data that must be processed rapidly.

To meet these emerging demands, Hitachi Vantara has spent considerable time re-imagining architecture from edge to core to multi-cloud to address the storage and operational data challenges customers now face. Chief amongst these are the massive silos of infrastructure and stranded data that do not work in the new, digitally transforming world of interconnectivity.

Accordingly, we redesigned our core array with elastic scale and AI-driven automation, in a unique architecture built around Hitachi Accelerated Fabric, and extended that capability across our entire infrastructure so that it works across entire data centres, whether the equipment is Hitachi's or another vendor's.

We have also integrated AI into our Hitachi Ops Center software, which provides root-cause analysis, predictive remediation, and AI-assisted resource placement, balancing and forecasting. This allows up to 70 percent of data centre workloads to be automated, delivering faster, more accurate insights to diagnose system health and keep operations running.
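To illustrate the forecasting side of such AIOps tooling, a capacity forecast can be as simple as fitting a linear trend to historical usage samples and extrapolating forward. This is a minimal sketch with hypothetical function names and figures, not Hitachi Ops Center's actual model:

```python
def forecast_capacity(samples, horizon_days):
    """Least-squares linear trend over (day, used_tb) samples,
    extrapolated horizon_days past the last sample.

    Illustrative heuristic only; real AIOps forecasting models
    are more sophisticated than a straight-line fit.
    """
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    last_day = samples[-1][0]
    return intercept + slope * (last_day + horizon_days)

# Hypothetical usage history: ~10 TB of growth per month
usage = [(0, 100.0), (30, 110.0), (60, 120.0)]
print(forecast_capacity(usage, 90))  # projected TB used in 3 months: 150.0
```

A forecast like this is what lets an operations platform raise a "capacity exhausted in N days" alert before the array actually fills.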

Application trends are also driving the need for all-flash performance that is accelerated by nonvolatile memory express (NVMe) technology, in environments where colder datasets can be automatically tiered to the cost-effective capacity of hard disk drives (HDD), or migrated to the cloud.
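A tiering policy of this kind boils down to classifying datasets by access recency and placing each on the cheapest tier that still meets its performance needs. The sketch below uses hypothetical thresholds and tier names; Hitachi's actual tiering logic is not public:

```python
from datetime import datetime, timedelta

# Illustrative tiering thresholds (hypothetical values, not Hitachi defaults)
TIER_RULES = [
    (timedelta(days=7), "nvme_ssd"),   # hot: accessed within the last week
    (timedelta(days=30), "sas_ssd"),   # warm: accessed within the last month
    (timedelta(days=180), "hdd"),      # cold: cost-effective HDD capacity
]

def pick_tier(last_access: datetime, now: datetime) -> str:
    """Return the storage tier for a dataset based on access recency."""
    age = now - last_access
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return "cloud"  # frozen data migrates to cloud object storage

now = datetime(2019, 10, 1)
print(pick_tier(datetime(2019, 9, 29), now))  # nvme_ssd
print(pick_tier(datetime(2019, 1, 1), now))   # cloud
```

In a real array this classification runs continuously in the background, so data drifts down the tiers as it cools without application involvement.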

To meet those varied enterprise workload requirements, Hitachi Vantara has now brought out the industry's first storage array to offer a mixed NVMe solid-state drive (SSD), serial-attached SCSI (SAS) SSD and HDD environment that can scale up in capacity and scale out for performance.

Covering everything from cloud integration to mainframes, the Virtual Storage Platform 5000 was introduced at the Hitachi Next 2019 conference in Las Vegas and has already drawn interest from Tier 1 financial institutions and telcos around the region, with a large bank in India purchasing 10 units even before the October launch.

NVMe flash arrays promise an unprecedented level of performance thanks to higher command counts, deeper queues and PCIe connectivity, yet most NVMe arrays top out at close to one million IOPS, with latency in the low hundreds of microseconds.

At 21 million IOPS and 70 microseconds of latency, the next-generation, high-performance VSP 5000 series storage array with NVMe from Hitachi Vantara exceeds the performance of any other system on the market.

How was this done? Wouldn't NVMe provide the same performance improvements for any storage array? The answer is that NVMe is just one component of a storage array; the difference lies in how it is implemented in the storage controller. The VSP 5000, with Hitachi Storage Virtualization Operating System RF (Resilient Flash), is flash-optimized for NVMe as well as SAS intermix, and is built around a new internal PCIe-based switching fabric and an AI/ML-based operations centre for greater performance, scalability, availability and ease of use.

While other storage systems vendors have rushed to deliver NVMe in their storage arrays, they haven’t made the necessary changes to release it from the constraints of older disk-based controllers.

The new offering is massively scalable: it can be configured with anywhere from two to 12 controllers and more than 69 petabytes of capacity. In addition, it can provide eight nines, or 99.999999 percent, availability.

The series comes with dedupe optimization that uses advanced machine-learning (ML) algorithms. This intelligence determines in real time whether in-line or post-process dedupe should be used, providing maximum data reduction with minimal performance impact and up to 7:1 total reduction. To deliver these advanced capabilities, Hitachi designed a new architecture that utilizes an FPGA-driven acceleration engine together with low-latency NVMe technologies such as flash and storage class memory (SCM).
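The in-line versus post-process decision can be pictured as a per-write trade-off: deduplicating in the write path saves space immediately but costs controller CPU and latency. The heuristic below is purely illustrative, with invented inputs and thresholds; the VSP 5000's actual ML model is not public:

```python
def choose_dedupe_mode(cpu_util: float, write_latency_us: float,
                       predicted_dup_ratio: float) -> str:
    """Decide whether to deduplicate a write inline, defer it, or skip it.

    cpu_util            -- controller CPU utilisation, 0.0-1.0
    write_latency_us    -- current write-path latency in microseconds
    predicted_dup_ratio -- model's estimate of how much of this data
                           duplicates existing blocks, 0.0-1.0
    Illustrative heuristic only, not Hitachi's algorithm.
    """
    # Little to gain: store as-is rather than burn cycles hashing.
    if predicted_dup_ratio < 0.1:
        return "skip"
    # Headroom available: dedupe now and save the space immediately.
    if cpu_util < 0.6 and write_latency_us < 200:
        return "inline"
    # Busy system: land the write fast, dedupe later in the background.
    return "post_process"
```

The point of driving this with ML rather than fixed thresholds is that the system can learn per-workload duplicate ratios and load patterns instead of relying on static cut-offs like the ones shown here.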

And with its own operating system, the Hitachi Storage Virtualization Operating System, customers can mix NVMe and SAS flash media in the same system, with support for storage technologies including storage class memory (SCM) and NVMe over Fabrics.

For on-premises use, it stores data in both block and file formats, supporting a range of workloads from traditional mission-critical business applications to containers to mainframe data. And because its architecture provides immediate processing power without wait time or interruption to maximize I/O throughput, applications suffer no latency increase: access to data between nodes remains accelerated even when the system is scaled out.

The new storage array is also cloud-integrated, with the ability to seamlessly tier file and object workloads to the cloud, supporting providers such as Amazon Web Services and Microsoft Azure via a file storage gateway package that moves data to public clouds.

And because it is designed to accelerate and consolidate transactional systems, containerised applications, analytics and mainframe storage workloads to reduce data centre floor space and costs, we expect the VSP 5000 to be the new challenger technology that large telcos and the financial sector will consider adopting, along with large enterprise organisations, putting Hitachi Vantara back into the storage spotlight in a big way.