The True Cost of Deploying and Managing Storage Arrays
It’s 2016 and I’m still amazed how many enterprise storage solutions are still bound to concepts, designs and challenges I originally learned to deal with nearly 20 years ago.
Back then, managing storage was a truly complicated, hands-on practice.
Today, I’m amazed how many of those same pains, archaic concepts and designs are still around in flagship and even newer all-flash platforms. Unfortunately, the old problems are not gone; GUIs have gone to great lengths to sweep the “dust” under the rug, treating the symptoms and reacting to failures rather than truly solving the core problems. For example: Why do we still have to count drives and RAID groups? Should I put 16 disks in this aggregate, or maybe 24? What’s the impact of using SSDs, SAS drives or a combination? How will my database behave if I reduce the write cache? What happens to performance if I turn on compression and deduplication? What does it mean for an application to run at 600us (microseconds) versus 2ms response times? When should I ask my manager about buying those replication licenses? How long will it take the support engineer to replace a failed drive?
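To see why the 600us-versus-2ms question matters, consider a rough back-of-the-envelope sketch: with a single outstanding I/O, throughput is capped at one operation per response time. This is a deliberate simplification that ignores queuing and parallelism, and the figures are just the illustrative response times mentioned above:

```python
def max_serial_iops(latency_seconds: float) -> float:
    """Upper bound on IOPS for one outstanding I/O at a given response time."""
    return 1.0 / latency_seconds

fast = max_serial_iops(600e-6)  # 600 microseconds -> ~1,667 IOPS per outstanding I/O
slow = max_serial_iops(2e-3)    # 2 milliseconds   -> 500 IOPS per outstanding I/O
print(f"{fast:.0f} vs {slow:.0f} IOPS per outstanding I/O")
```

For a latency-sensitive application with limited concurrency, that is more than a threefold difference in throughput from response time alone.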
All of the above questions can be summarized in these two core, absolutely non-technical questions that everyone here should ask themselves, now and every day:
1. So What?
2. Who Cares?
Yes! So what? What’s up with this long list of complaints and rants, Adrian? After all, there’s always someone, somewhere, whether an internal resource or one provided by a vendor, to get these “little” issues solved so the business can move on.
Here’s why this is important to any customer or end user: Storage technologies in their environment must minimize Complexity, reduce Risk and increase Value. These are the driving forces to a better business model, one that is profitable and sustainable. Any storage solution being considered must address these business requirements.
Why do storage customers get stuck paying for complexity? How many times have you decided which power plug to use based on which power grid the energy comes from? The answer is “never,” and that’s precisely the point. Storage should be a non-issue. It should be easy, reliable, always available, high-performing, scalable and, of course, low cost.
Storage providers deliver capacity today with many levels of inherited risk, whether IT shops are aware of it or not; these are the dark secrets of many storage providers. What if the team implementing a storage solution forgets a “cluster setting,” sets the wrong “queue_depth” value for best performance, skips a key yet simple HBA driver “environment variable,” or runs out of “inodes” on a file system? What’s the risk involved, and how much financial impact could each of these mistakes inflict on your organization? If you have to pause to think about the answer, you should revisit your current implementation, or you may be bound for an unpleasant surprise.
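The inode-exhaustion risk above is a good example of a failure mode that is trivial to check but easy to forget. As a minimal sketch, assuming a POSIX system where `os.statvfs` reports inode counts (some filesystems report zero), an admin could watch usage like this:

```python
import os

def inode_usage_pct(path: str) -> float:
    """Percentage of inodes in use on the filesystem that holds `path`."""
    st = os.statvfs(path)
    if st.f_files == 0:  # filesystem does not report inode counts
        return 0.0
    return 100.0 * (st.f_files - st.f_ffree) / st.f_files

# Example: warn when the root filesystem nears inode exhaustion
usage = inode_usage_pct("/")
if usage > 90.0:
    print(f"WARNING: {usage:.1f}% of inodes in use on /")
```

The point is not that this script is hard to write; it is that a modern storage platform should make this entire class of babysitting unnecessary.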
1. So What?
Well, what if storage vendors were simply to eliminate the vast majority of these issues? Would your business benefit, and how? Would there be financial savings, and would the business value created make a difference? I certainly hope so.
I have written bin files, configured hundreds of RAID groups based on IO profiles, sized applications based on number of users, IOPS and Oracle AWR reports, yet all I ever really wanted was to have the best performing, most reliable, scalable, easy to manage and lowest cost capacity. If I can deliver this to my customers without the list of issues and complexities I have listed in this blog, I truly believe that I can make a difference in a customer’s business, and this is why I work for INFINIDAT.
2. Who Cares?
I care and our customers care. It’s all about value and the cost of such value. Our customers want everything, and they want it at low cost. “Everything looks great in PowerPoint,” I always say. But when it’s time to deploy, things break and the promised business value and technical specs are in jeopardy.
So, how do we achieve all of these cool ideas without my digging out a Ph.D. in Storage?
I don’t believe that throwing hardware at the challenge for the heck of it is the right answer. As an example, adding SSD/Flash will not guarantee absolute performance, simplicity or reliability; even worse, it will not guarantee lower cost. Nor can selecting among multiple platforms and models from a “trusted” storage vendor guarantee less complexity or less risk.
New concepts need to be developed to make a storage system capable of eliminating the trickier nuances of storage management and of simplifying deployment, installation, monitoring and even the acquisition of new capacity. Something revolutionary has to break the mold of “me too” storage vendors.
After years of seeing storage solutions upgraded (on spec sheets) while the same old issues persist in today’s data centers, I believe that no one, not the “flashy” flash arrays nor the older legacy storage vendors, is solving the true problems and delivering on the real business requirements. That is, until now, with INFINIDAT and the InfiniBox.
Of course, I won’t ask anyone here to blindly believe me (except my mother). But, I challenge you to challenge yourselves. Challenge your data center status-quo and challenge your current data center storage providers.
After all, it’s not about me, but about your data and the true cost of deploying enterprise storage.
About Adrian Flores-Serafin
Adrian Flores-Serafin is General Manager for Mexico and Latin America at INFINIDAT. He has been involved in IT and storage technologies for over 20 years and has performed numerous roles, including storage administration, tech support, solutions architect, technical sales manager and worldwide business unit executive. He is a veteran of Sun Microsystems, Perot Systems, EMC and IBM.