Benefits of hyperconverged infrastructure

How does hyperconvergence bring together all the important trends that enterprise IT struggles to handle? Here’s a look:

- Hyperconvergence is the embodiment of the software-defined data center. Because it is based in software, it provides the flexibility and agility that the business demands from IT.
- Cloud operators have their economic model figured out. Hyperconvergence brings that cloud-like economic model to enterprise IT: faster time to value for data center expenditures and lower total cost of ownership for the entire solution, while still delivering the performance, high availability, and reliability the enterprise demands.
- Flash solves performance issues but has to be used carefully. Hyperconverged options combine flash and spinning disk to provide the ideal blend of capacity and performance, allowing IT to eliminate resource islands.
- The converged infrastructure market provides a single-vendor approach to procurement, implementation, and operation. There's no more vendor blame game, and there's just one number to call when a data center problem arises.
In this article, I dive a bit deeper into hyperconvergence, showing you ten ways that it solves the challenges inherent in virtualized data centers.

Hyperconvergence is the epitome of the software-defined data center (SDDC), discussed in Chapter 3. The software-based nature of hyperconvergence provides the flexibility required to meet current and future business needs without having to rip and replace infrastructure components. Better yet, as vendors add new features in updated software releases, customers gain the benefits of those features immediately, without having to replace hardware.

Commodity hardware equals lower cost. The software layer is designed to accommodate the fact that hardware will eventually fail, so customers get the benefit of these failure-avoidance and availability features without having to break the bank on premium hardware.

Financial officers really like the opportunity to save a few bucks. The business gets to enjoy better outcomes than legacy data center systems offer, often at a much lower cost.

In hyperconvergence, all components — compute, storage, backup to disk, cloud gateway functionality, and so on — are combined in a single shared resource pool with hypervisor technology. This simple, efficient design enables IT to manage aggregated resources across individual nodes as a single federated system.
Mass centralization and integration also happen at the management level. Regardless of how widespread physical resources happen to be, hyperconverged systems handle them as though they were all sitting next to one another. Resources spread across multiple physical data centers are managed from a single, centralized interface. All system and data management functions are handled within this interface, too.
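The idea of one management view over nodes in many sites can be sketched in a few lines. This is an illustrative sketch only, not any vendor's API: it shows per-node resources being summed into a single federated pool regardless of which data center each node sits in.

```python
# Illustrative sketch (not a vendor API): aggregating per-node resources
# into one federated pool, the way a hyperconverged manager presents
# nodes across data centers as a single system.
from dataclasses import dataclass

@dataclass
class Node:
    site: str          # physical data center the node lives in
    cpu_cores: int
    ram_gb: int
    storage_tb: float

def pool_totals(nodes):
    """Sum resources across all nodes, regardless of site."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

cluster = [
    Node("dc-east", 32, 512, 20.0),
    Node("dc-east", 32, 512, 20.0),
    Node("dc-west", 32, 512, 20.0),
]
print(pool_totals(cluster))  # one view over nodes in two data centers
```

The administrator queries and manages the pooled totals; the software worries about which site each node belongs to.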

Agility is a big deal in modern IT. Business expects IT to respond quickly as new needs arise, yet legacy environments force IT to employ myriad resources to meet those needs. Hyperconverged infrastructure enables IT to achieve positive outcomes much faster.

Part of being agile is being able to move workloads as necessary. In a hyperconverged world, all resources in all physical data centers reside under a single administrative umbrella (see the preceding article). Workload migration in such environments is a breeze, particularly in a solution that enables consistent deduplication as a core part of its offering. Reduced data is far easier to work with than fully expanded data and helps IT get things done faster.
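The reason deduplicated data migrates faster is worth making concrete. Below is a minimal sketch of the general content-addressed deduplication technique, not any vendor's implementation: identical chunks hash to the same digest, so only unique chunks need to cross the wire when a workload moves.

```python
# Illustrative sketch of content-addressed deduplication: identical
# chunks collapse to one stored copy keyed by their hash, so a
# migration only has to move the unique chunks.
import hashlib

def dedupe(chunks):
    """Return unique chunks keyed by SHA-256 digest."""
    store = {}
    for chunk in chunks:
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

data = [b"block-A", b"block-B", b"block-A", b"block-A"]
unique = dedupe(data)
print(len(data), "logical chunks ->", len(unique), "unique chunks to move")
```

Four logical chunks reduce to two unique ones here; at data-center scale, that reduction is what makes workload migration "a breeze."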

Hyperconvergence is a scalable building-block approach that allows IT to expand by adding units, just like in a LEGO set. Granular scalability is one of the hallmarks of this infrastructure. Unlike integrated systems products, which often require large investments, hyperconverged solutions have a much smaller step size. Step size is the amount of infrastructure that a company needs to buy to get to the next level of capacity. The bigger the step size, the bigger the up-front cost.
The bigger the step size, the longer it also takes to fully utilize the new resources added through the expansion. A smaller step size results in far more efficient use of resources. As new resources are required, it's easy to add nodes to a hyperconverged infrastructure.
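The step-size argument is simple arithmetic, shown here with made-up numbers purely for illustration: right after an expansion, a big step leaves most of the new capacity idle, while a small step keeps utilization high.

```python
# Back-of-the-envelope step-size math (illustrative numbers):
# a big expansion step sits mostly idle at first; a small step
# tracks demand closely.
def utilization_after_expansion(demand, current, step):
    """Fraction of total capacity in use right after buying one step."""
    capacity = current + step
    return demand / capacity

# Demand has grown to 110 "units" against 100 units of capacity:
print(utilization_after_expansion(110, 100, 100))  # big step: ~55% used
print(utilization_after_expansion(110, 100, 20))   # small step: ~92% used
```

The smaller step reaches roughly 92 percent utilization on day one versus about 55 percent for the large step, which is exactly the efficiency difference the text describes.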

Very Borg-like, eh (in a good way)?
Hyperconverged systems have a low cost of entry compared with their integrated system counterparts and legacy infrastructure.

Automation is a key component of the SDDC and goes hand in hand with hyperconvergence. When all resources are truly combined and when centralized management tools are in place, administrative functionality includes scheduling opportunities as well as scripting options.

Also, IT doesn't need to worry about trying to create automated structures with hardware from different manufacturers or product lines. Everything is encapsulated in one nice, neat environment.
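A short sketch shows what single-endpoint automation looks like in practice. The client class and method names below are hypothetical stand-ins, since each hyperconverged vendor exposes its own API; the point is that one scriptable endpoint covers provisioning and data protection alike, with no per-vendor tooling.

```python
# Illustrative automation sketch. ClusterClient and its methods are
# hypothetical, standing in for whatever API a given hyperconverged
# vendor exposes; the point is that one endpoint drives compute and
# data protection together.
class ClusterClient:
    def __init__(self):
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb, "backup": None}

    def set_backup_policy(self, name, schedule):
        self.vms[name]["backup"] = schedule

client = ClusterClient()
for i in range(3):                          # provision a small web tier
    vm = f"web-{i}"
    client.create_vm(vm, cpus=4, ram_gb=16)
    client.set_backup_policy(vm, "hourly")  # protection via the same API
print(sorted(client.vms))
```

Scheduling the same script (via cron or the platform's own scheduler) is what turns this scripting capability into the automation the text describes.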

Virtualization is the foundation of the SDDC (see Chapter 3). Hyperconverged infrastructure options use virtual machines (VMs) as the most basic constructs of the environment. All other resources — storage, backup, replication, load balancing, and so on — support individual VMs.

As a result, policy in the hyperconverged environment also revolves around VMs, as do all of the management options available in the system. Data protection policies, for example, are often defined in third-party tools in legacy environments; with hyperconvergence, integrated data protection policies and controls are applied right at the VM level.
VM-centricity is also apparent as workloads need to be moved around to different data centers and between services, such as backup and replication. The administrator always works with the virtual machine as the focus, not the data center and not underlying services, such as storage.
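VM-centricity can be illustrated with a small data-model sketch (the class and field names are illustrative, not a product API): the protection policy lives on the VM object itself rather than on a LUN or datastore, so moving the VM never detaches its policy.

```python
# Illustrative sketch of VM-centric policy: protection settings are
# attributes of the VM object, so they travel with it on migration.
from dataclasses import dataclass, field

@dataclass
class ProtectionPolicy:
    backup_interval_min: int
    replicate_to: str

@dataclass
class VirtualMachine:
    name: str
    site: str
    policy: ProtectionPolicy = field(
        default_factory=lambda: ProtectionPolicy(60, "dc-west"))

def migrate(vm, new_site):
    """Move the VM; its policy moves with it automatically."""
    vm.site = new_site
    return vm

vm = VirtualMachine("erp-01", "dc-east")
migrate(vm, "dc-west")
print(vm.site, vm.policy.backup_interval_min)  # policy intact after move
```

Contrast this with legacy designs, where the equivalent policy is bound to the storage array and has to be reconfigured when a workload lands somewhere new.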

Hyperconvergence enables organizations to deploy many kinds of applications in a single shared resource pool without worrying about the dreaded IO blender effect, in which the IO streams of many VMs mix into highly random IO that wrecks storage performance.

How does hyperconvergence make this type of deployment possible? Hyperconverged systems include multiple kinds of storage — both solid-state storage and spinning-disk — in each appliance. A single appliance can have multiple terabytes of each kind of storage installed. Because multiple appliances are necessary to achieve full environment redundancy and data protection, there’s plenty of both kinds of storage to go around. The focus on the VM in hyperconverged systems also allows the system to see through the IO blender and to optimize based on the IO profile of the individual VM.
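The per-VM optimization the text describes can be sketched as a tiering decision. This heuristic is purely illustrative; real systems are far more sophisticated, but the shape of the decision is the same: random-heavy or IOPS-hungry workloads land on flash, while large sequential workloads do fine on spinning disk.

```python
# Illustrative tiering heuristic: choose a storage tier per VM based on
# its IO profile. Thresholds are made up for the example.
def choose_tier(random_io_fraction, iops_demand):
    """Random-heavy or IOPS-hungry workloads go to flash; sequential,
    low-IOPS workloads are fine on spinning disk."""
    if random_io_fraction > 0.5 or iops_demand > 5000:
        return "flash"
    return "spinning-disk"

print(choose_tier(0.9, 8000))  # OLTP database profile
print(choose_tier(0.1, 500))   # backup-stream profile
```

Because the system sees each VM's profile individually, it can make this call per VM instead of per volume, which is how it "sees through" the IO blender.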

Hyperconverged infrastructure's mix of storage enables systems to handle both random and sequential workloads deftly. Even better, with so many solid-state storage devices in a hyperconverged cluster, there are more than enough IO operations per second (IOPS) to support even the most intensive workloads — including virtual desktop infrastructure (VDI) boot and login storms.
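A rough supply-versus-demand calculation makes the boot-storm claim concrete. Every number here is an assumption chosen for illustration, not a benchmark or a vendor specification.

```python
# Rough aggregate-IOPS arithmetic for a VDI boot storm.
# All figures below are illustrative assumptions, not benchmarks.
ssds_per_node = 4
nodes = 8
iops_per_ssd = 50_000           # assumed flash device rating
desktops = 500
iops_per_booting_desktop = 300  # assumed boot-storm demand per desktop

supply = ssds_per_node * nodes * iops_per_ssd
demand = desktops * iops_per_booting_desktop
print(f"supply={supply} IOPS, demand={demand} IOPS, ok={supply >= demand}")
```

Even with these modest per-device assumptions, aggregate flash IOPS across the cluster dwarfs the storm's demand, which is why boot and login storms stop being a sizing crisis.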

The shared resource pool also enables efficient use of resources for improved performance and capacity, just like those very first server consolidation initiatives that you undertook on your initial journey into virtualization. Along the way, though, you may have created new islands thanks to the post-virtualization challenges discussed earlier. Resource islands carry with them the same utilization challenges that your old physical environments featured.

With hyperconvergence, you no longer need to create resource islands just to meet the IO needs of particular applications. The environment itself handles all of the CPU, RAM, capacity, and IOPS assignments so that administrators can focus on the application rather than on individual resource needs.
The business benefits as IT spends less while providing improved overall service. On the performance front, the environment handles far more varied workloads than legacy infrastructure can. IT itself performs better, with more focus on the business and less on the technology.

Although it's not always the most enjoyable task in the world, protecting data is critical. The sad fact is that many organizations do only the bare minimum to protect their critical data, for two main reasons: comprehensive data protection can be really expensive, and it can be really complex.

To provide data protection in a legacy system, you have to make many decisions and purchase a wide selection of products. In a hyperconverged environment, however, backup, recovery, and disaster recovery are built in. They’re part of the infrastructure, not third-party afterthoughts to be integrated.

The benefits of hyperconvergence are clear:

- Comprehensive backup and recovery, plus affordable disaster recovery.
- Efficient protection without data rehydration and re-deduplication, and without the inefficient use of resources that results.
- A single centralized console that enables IT to respond quickly.
Even though integrated systems don't deliver the broad benefits associated with hyperconvergence, their single-vendor support model is superb. Fortunately, single-vendor design, delivery, and support are also features of hyperconvergence. Customers get one point of contact for the life of the system, from initial inquiry to system stand-down. Because hyperconverged systems are pre-tested, customers have less need for pilot projects, and those that choose to run them can do so for shorter periods. Whereas integrated systems companies must pre-test and publish lists of supported firmware versions for the various devices within the stack, a hyperconverged system has only a single manufacturer and only one upgrade to perform. Reduced complexity in these processes translates directly to saved time and lower operational costs.

A single-vendor approach provides economies of scale in procurement, operations, and support.
