
The Rise Of Hyper Converged Infrastructure

Updated: Feb 16, 2018

When I was a customer managing storage, it was quite a laborious job to design, configure and migrate storage. You got to know a storage vendor's specific way of doing things, whether that was RAID creation and the different levels and decisions, raw vs. usable capacity, LUN creation and administration, hybrid block and file, etc. This was OK until the next storage refresh cycle, where potentially a new storage vendor was selected. Then a migration was planned and a new learning curve kicked off to understand how to do the same tasks on the new platform.


What I always found was that I was balancing the usage of the LUN/datastore to fully utilize the investment while at the same time maintaining performance levels across the disk array enclosures. Sometimes I found that certain LUNs used for certain workloads were never fully utilized. Along came storage auto-tiering, which helped ease some of this burden, as did the vSphere APIs for Storage Awareness. It was still the case that for major work on arrays I needed to log into a separate management portal on the controllers. I also sized storage requirements for a new array based on growth over the previous 3 to 5 years. The new array would then potentially have a percentage of free capacity with a comfort level of "we will grow into it in the next few years", so there was a lot of over-provisioning happening. Spinning rust and flash capacity sitting there in a ready state. And I have not even mentioned the FC zoning and masking that was also required! Ugh...
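To make that over-provisioning concrete, here is a rough back-of-the-envelope sketch of the traditional sizing exercise I describe above. All of the numbers (current capacity, growth rate, refresh horizon) are made up purely for illustration.

```python
# Rough illustration of traditional array sizing: project several years of
# growth up front, buy it all on day one, and watch most of it sit idle
# early on. All numbers here are hypothetical.

def size_traditional_array(current_tb: float, annual_growth: float, years: int) -> float:
    """Capacity to purchase up front to cover projected growth."""
    return current_tb * (1 + annual_growth) ** years

current_tb = 100          # usable TB in use today (hypothetical)
annual_growth = 0.25      # 25% growth per year (hypothetical)
purchase_tb = size_traditional_array(current_tb, annual_growth, years=5)

print(f"Array purchased: {purchase_tb:.0f} TB usable")
print(f"Idle on day one: {purchase_tb - current_tb:.0f} TB "
      f"({(1 - current_tb / purchase_tb) * 100:.0f}% over-provisioned)")
```

With those made-up figures, roughly two thirds of the purchased capacity sits idle on day one, which is exactly the "we will grow into it" comfort buffer.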


Fast forward to now and we can see that Hyper Converged Infrastructure is really gathering momentum in simplifying the process of storage provisioning. Now I can use policies to specify, at a VM or VMDK level, the capabilities I want. What I love about policies is that they are elastic; we are no longer limited to something hardware-defined. As an example, if I have 4 hosts running HCI and then add another host, the policy simply sees more storage to consume, and the policies tied to my VMs stretch across the additional host in the cluster. I can also potentially change a policy or policies, and this will change the characteristics of the workloads in terms of performance and/or availability. Utilizing local storage/disks per host and pooling them into a cluster-wide datastore makes this process simple, and when we add in all the enterprise data services we would expect, such as all-flash deduplication, compression, checksums, availability and stretched clusters, we have an awesome next-gen solution.
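As a simplified sketch of what "changing a policy changes the workload's characteristics" means for capacity, the snippet below maps a few common availability schemes to the raw space a 100 GB VMDK would consume. The policy names are illustrative, the multipliers follow the usual mirroring and erasure-coding arithmetic (an FTT=1 mirror keeps two full copies, FTT=2 keeps three, RAID-5/6 add parity instead), and witness components, checksum overhead and slack space are ignored.

```python
# Simplified sketch: how an availability policy translates into raw capacity
# consumed for a single VMDK. Multipliers are the standard mirroring /
# erasure-coding ratios; real implementations add witness, checksum and
# slack-space overhead that is ignored here.

POLICIES = {
    "ftt1-mirror": 2.0,    # two full copies of the data
    "ftt2-mirror": 3.0,    # three full copies
    "ftt1-raid5": 4 / 3,   # 3 data + 1 parity
    "ftt2-raid6": 1.5,     # 4 data + 2 parity
}

def raw_capacity_gb(vmdk_gb: float, policy: str) -> float:
    """Raw capacity consumed on the cluster for a VMDK under a policy."""
    return vmdk_gb * POLICIES[policy]

for name in POLICIES:
    print(f"{name:12s} -> {raw_capacity_gb(100, name):6.1f} GB raw for a 100 GB VMDK")
```

Switching a VM from a mirroring policy to an erasure-coding policy, for instance, trades some write performance for a noticeably smaller raw-capacity footprint, without touching the hardware at all.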


The other area we see is predictable costs: HCI allows us to grow in a much more linear way, with more of an add-as-you-grow, Lego-building-block approach.
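The linear growth is easy to see when each node contributes a fixed slice of storage and cost. The per-node figures below are hypothetical and only there to show the straight-line relationship.

```python
# Sketch of the "add as you grow" model: each node contributes a fixed slice
# of storage and cost, so both grow in straight lines. Per-node figures are
# hypothetical.

NODE_USABLE_TB = 20     # usable storage contributed per node (hypothetical)
NODE_COST = 30_000      # cost per node (hypothetical)

for nodes in range(4, 13, 2):
    print(f"{nodes:2d} nodes -> {nodes * NODE_USABLE_TB:4d} TB usable, "
          f"${nodes * NODE_COST:,} spent")
```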


Generally it's not block or file but object-based storage. It's not a distributed filesystem but multiple mirrored copies with IO distributed evenly, which makes it very efficient and scalable. Think of it as RAIN (a Redundant Array of Independent Nodes).
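To picture the RAIN idea, here is a toy placement sketch: each object is stored as mirrored components on different nodes, so losing any single node still leaves a full copy available. The round-robin placement logic is a naive illustration of the concept, not any vendor's actual algorithm.

```python
# Toy illustration of RAIN: each object is kept as mirrored components placed
# on distinct nodes, and objects are spread evenly across the cluster.
# Naive round-robin placement for illustration only.

NODES = ["node-1", "node-2", "node-3", "node-4"]

def place_object(obj_id: int, copies: int = 2) -> list[str]:
    """Pick `copies` distinct nodes for an object's mirrored components."""
    start = obj_id % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(copies)]

for obj in range(6):
    print(f"object-{obj}: mirrored on {place_object(obj)}")
```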


Traditionally I had a bunch of ESXi hosts, FC switches and storage arrays; with HCI we now really only need the hosts and a 10 GbE network. I sometimes do whiteboards for customers, drawing the traditional SAN, FC switches and hosts, and then beside it drawing what we need to run HCI. Looking at it this way, it is quite powerful to see the components we no longer necessarily need.


The public cloud has given us a complete abstraction layer where we care about the workloads and not so much the underlying infrastructure anymore. HCI brings this cloud-like simplicity to an on-premises environment.


As I work for VMware I am biased towards, and really like, vSAN, but there are many vendors in this space. The slide below is an interesting look at where the HCI storage market is headed.

