
Enterprise users know that flash storage is unmatched for speed, but there has been plenty of fear, uncertainty and doubt surrounding its reliability and cost.
It used to be that the major roadblock to flash storage adoption was cost: the price point of flash made it prohibitive for all but the most mission-critical, high-performance applications. Now, with flash proving reliable and its effective cost (helped by deduplication and compression) rivalling HDD, those days may be over.
Hewlett Packard Enterprise’s (HPE) general manager of storage, Paul Shaw, has penned a brief overview of the important considerations when moving to flash storage.
He discusses the trend of organisations switching to flash and the way it changes the performance balance between servers, networks and storage. For businesses to gain the full benefit from their flash investment, Shaw lays out three essential considerations: network capacity, storage architecture and resiliency.
1. Network capacity
Picture yourself on the freeway at 5:00 in the morning. Traffic is relatively light, and everyone is moving along at or close to the speed limit. Now add more cars as the morning commute kicks in, and things gradually slow down. Add more cars, and eventually, you’re approaching gridlock.
Flash media and suitable storage networks can solve the gridlock. Flash is fast – up to hundreds of thousands of I/O operations per second (IOPS) with sub-millisecond latency. That’s orders of magnitude beyond the performance of spinning disk. But that ability to generate more read and write operations means more traffic for your storage network, moving data back and forth between storage and servers. And as network traffic piles up, latency increases. The result is a traffic jam that slows down application performance.
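To make the traffic-jam analogy concrete, the toy queueing sketch below shows how the average response time of a single shared link grows as its utilisation climbs towards 100%. This is a minimal single-queue model for illustration only; the 0.5 ms service time is an assumed figure, not a measurement of any particular array or fabric.

```python
# Toy queueing sketch: response time vs. link utilisation (simple M/M/1 approximation).
# Illustrative only; the service time below is an assumed figure for a flash-class I/O.

def response_time_ms(service_time_ms: float, utilisation: float) -> float:
    """Average response time for a single queue as utilisation approaches 1."""
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be in [0, 1)")
    return service_time_ms / (1 - utilisation)

SERVICE_TIME_MS = 0.5  # assumed per-I/O service time on an uncongested path

for load in (0.2, 0.5, 0.8, 0.9, 0.95):
    print(f"utilisation {load:.0%}: ~{response_time_ms(SERVICE_TIME_MS, load):.1f} ms")
```

The numbers are not predictions for any real fabric; the point is the shape of the curve – latency stays flat at light load and climbs steeply as the link nears saturation, exactly the morning-commute effect described above.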
For example, a common online transaction processing (OLTP) workload connected to flash storage can quickly saturate 8Gb/s fibre channel (FC) network components like host bus adapters (HBAs), network switches, and target adapters. Your storage network can become a bottleneck, preventing servers from fully utilising their compute and storage resources.
To get the most from your flash investment, you need to consider a network upgrade. In the OLTP example, upgrading from 8Gb/s FC to 16Gb/s FC can increase bandwidth and IOPS by at least 35% and improve storage latency by 2.5X or more. That’s the equivalent of adding extra lanes to a freeway to support more traffic. A network upgrade provides the added benefit of requiring fewer components (switches, adapters, etc.) to achieve bandwidth and latency targets, resulting in lower costs.
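As a rough back-of-the-envelope check on the saturation claim, the sketch below compares the bandwidth an assumed OLTP-style profile (100,000 IOPS at 8 KiB per I/O) needs against approximate usable per-port throughput for 8Gb/s and 16Gb/s FC. Both the workload profile and the usable-throughput figures are illustrative assumptions, not HPE’s numbers.

```python
# Back-of-the-envelope check: does an OLTP-style workload saturate an FC link?
# Usable throughput figures are approximations (8b/10b encoding for 8G FC,
# 64b/66b for 16G FC); the workload profile is an assumption for illustration.

USABLE_MBPS = {"8G FC": 800, "16G FC": 1600}  # approx. usable MB/s per direction, per port

def required_mbps(iops: int, io_size_kib: int) -> float:
    """Bandwidth an I/O stream needs, in MB/s (KiB converted to decimal MB)."""
    return iops * io_size_kib * 1.024 / 1000

# Assumed OLTP-like profile: 100,000 IOPS at 8 KiB per I/O
workload_mbps = required_mbps(iops=100_000, io_size_kib=8)

for link, capacity in USABLE_MBPS.items():
    util = workload_mbps / capacity
    note = " (saturated)" if util >= 1 else ""
    print(f"{link}: needs ~{workload_mbps:.0f} MB/s of ~{capacity} MB/s usable -> {util:.0%}{note}")
```

Under these assumptions a single 8G FC port is already at the edge of saturation, while 16G FC runs at roughly half capacity – the extra-lanes effect described above.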
2. Storage architecture
It’s not just the network that can slow you down; the architecture of the flash array itself could prevent you from realising the full benefits of flash media.
Some vendors have entered the flash market by simply re-equipping existing disk storage arrays with flash media. In reality, it’s like dropping a finely tuned racing engine into the family minivan. Sure, it will run faster, but the minivan has no hope of extracting the full benefit of the horsepower the racing engine can generate: viewed end to end, the engine is only one of many elements that determine performance.
Similarly, the characteristics of flash require re-thinking performance through the end-to-end I/O path, including server connectivity, switches, storage controllers and backend connectivity to the solid-state drive (SSD) media. Much like a race car is optimised to get the full performance from its racing engine, a flash array and the supporting architecture should be optimised to support flash media.
Storage controllers and algorithms not designed specifically for the rigours of flash will not deliver the desired latency and I/O performance. Having sufficient bandwidth in the array is another consideration.
Typical dual-controller storage designs cannot scale effectively to keep up with flash, because the performance of a flash array is ultimately a function of its controllers’ performance.
3. Resiliency
Some vendors have taken a different approach, designing flash-optimised storage arrays from the ground up. While this can alleviate the bottlenecks associated with legacy storage architectures, it can pose another set of challenges. Often the redesign comes at the expense of the Tier-1 resiliency and data services that you rely on, which can be a little like driving that race car without a helmet – everything is fine, until it’s not. Features like hardware and software redundancy, non-disruptive upgrades, transparent active-active failover, and remote synchronous/asynchronous replication are critical to your data centre but are not yet always standard offerings with all-flash arrays.
Deploying one of these systems can also mean accepting another separate and distinct storage architecture into your data centre, creating an additional storage silo and complicating your data protection strategy. To provide the best value, flash storage arrays should integrate with the tools you already use, enabling hypervisor and application owners to control backup and recovery processes directly from their preferred system management consoles. To achieve true data protection, you will also need to go beyond snapshots and create fully independent backup volumes that can be restored at the volume level in the event of a disaster.
Flash storage changes the performance balance between servers, network and storage, requiring you to re-think your architecture. Realising the full benefits of your flash investment requires a balance of the right storage, the right storage architecture, the right data services and features, and the right network solution.