Any storage conversation that does not include thoughtful consideration of what is being stored is incomplete. The data in every environment has a unique profile that defines its value to the business, and that value determines what services are needed to store, persist, protect, move and manage it. However, that profile changes over time, and what was important yesterday may be less important today. That assumption may have held for the data profile years ago, but not anymore: data's temporal relevance is changing as businesses discover new ways to mine it for actionable intelligence.
Yesterday’s Data Profile
We know that the most critical data will always demand the lowest-latency (highest-performance) access. All-flash array messaging and value propositions focus almost exclusively on performance, but what about other equally or more important aspects such as reliability, scale and cost?
There is an accepted and sometimes justifiable expectation that high performance inherently carries a higher cost. By the same token, the cost expectation for data that doesn't require maximum performance is correspondingly lower. As a result, the "one size fits all" all-flash array approach is often simply too expensive to align with workload profiles and budget realities. To make things even more challenging, another interesting phenomenon is developing today.
Tomorrow’s Data Profile
In the past, the value of data was tied to its creation date: older data was said to be less valuable. This notion came from IT folks who were trying to justify the cost of storing data that may not have been touched for days, weeks or months. It is no longer the right measure of data value. The data itself intrinsically determines its value: what it can be used for, and the decisions it can enable the business to make. This shift has a profound effect on what data needs from the storage on which it resides.
As a result, the way IT thinks about the cost and performance of storage is changing. The need to access data quickly is becoming more and more important at a time when, unfortunately, budgets aren't growing to accommodate more expenditure on storage, especially expensive all-flash capacity. Quick access at low cost, which is in many ways antithetical to the current state of the storage market, is nonetheless a critical design pattern for storage systems. It is also important to note that all storage requires high reliability and high data availability. Moreover, as time moves forward and data volumes continue to grow, sometimes exponentially, high-performance access will by definition be required for an ever-increasing population of data. We have, in effect, forces in direct opposition to one another: more and more data requiring faster and faster access at a time when the cost of high-performance media is not dropping fast enough to keep pace with data growth.
So, while all-flash arrays do deliver high-performance access to critical data, they still lack the broad applicability demanded by a diverse range of high-growth enterprise workloads. They can be, and often are, employed to serve only the highest performance requirements for a small subset of enterprise workloads. But this can and does create independent "silos" of storage, which render the overall environment more difficult to manage and inherently more OpEx-intensive. All-flash arrays also generally lack the ultra-high levels of reliability required for mission-critical enterprise data, forcing IT to buy even more expensive generalized storage capacity and use high-overhead RAID algorithms to provide the necessary data protection. Lastly, contemporary all-flash arrays are not known to scale particularly well and are typically compact solutions targeted at smaller, more static datasets.
So, as your data volumes continue to grow, which they inevitably will, and their value increases, you may need to buy multiple, disparate arrays, again adding to management complexity and reducing the aggregate reliability of the environment.
This piece was excerpted from the executive brief titled “INFINIDAT Flash-Optimized Hybrid Array vs. Current All-Flash Arrays (AFAs).” Download the executive brief to learn where and how these solutions are best utilized in the data center.
About Randy Arseneau
Randy Arseneau is Chief Marketing Officer at INFINIDAT. He has been involved in information technology management for over 28 years in roles including developer, DBA, architect, performance engineer, strategic planner and senior executive. He began his career with a ten-year stint at Motorola, and later spent nine years in various management roles culminating in Director of APM Systems Engineering for VERITAS Software (via its acquisition of Precise Software). Randy has also been active in leadership and advisory capacities for a number of emerging technology startups. Randy joined Infinidat from Nutanix, where he ran the global sales enablement team during a period in which the company grew from an annualized run rate of $150M to over $500M in revenue, and more than tripled headcount to over 1,100 employees worldwide.