Cloud Computing & Hosted PBX News – Dallas, TX

Virtualization 2.0: A Foundation for Successful Cloud Computing

Virtualization – not quite the nirvana it was promised to be. We expected exponentially better efficiency, higher availability and huge savings for IT budgets. However, now that the honeymoon is over, most organizations feel slighted. Not only have the promised benefits failed to materialize, but IT organizations have also been saddled with ever-increasing user demand and out-of-control costs, not to mention virtual sprawl, vendor lock-in and high provisioning effort.

With all of these issues, enterprises are looking to solve the problems of “Virtualization 1.0.” Last year, a Gartner study showed that CIOs were looking to cloud computing in more strategic ways, in the hope that the cloud would improve IT operations.

So cloud computing will fix all this, won’t it?

Actually, cloud computing will just compound the problems of virtualization unless we adopt a new management model. The problems of Virtualization 1.0 largely stem from a single undeniable fact: the average human brain cannot keep up with the complexity of a virtualized environment.

In every virtualized IT organization, there is a smart guy, or group of guys, spending a significant portion of their time provisioning virtual machines (VMs). While provisioning a VM is conceptually simple, there is a vital decision to be made: On which physical machine should the VM run? The importance of this simple question cannot be overstated, and determining the right answer can be a genuinely complex task.

Let’s start with the easy stuff: Which physical machines have the capacity to run the workload? Which are running the right hypervisor? Now, here are the harder questions: On which physical machine would the workload most efficiently fit (perhaps you have 1,000 of them)? Which machines have been reserved for a particular task (perhaps because of their high cost or particular configuration)? Are there any special security or governance requirements that limit where this VM can be geographically placed? And now for the killer: Is there anything already running on the physical machine that would cause a compliance issue if we place the new VM there?
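To make the combinatorics of that decision concrete, here is a minimal sketch in Python of the questions above expressed as a filter chain. Every class, field and function name is hypothetical, invented for illustration rather than taken from any real product:

```python
# A minimal, hypothetical sketch: each placement question becomes one
# predicate in a filter chain. All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    hypervisor: str
    free_cpu: int                 # vCPUs still available
    free_ram_gb: int
    region: str
    reserved: bool = False        # set aside for a particular task?
    running_tags: set = field(default_factory=set)  # tags of VMs already here

@dataclass
class VmRequest:
    cpu: int
    ram_gb: int
    hypervisor: str
    allowed_regions: set          # governance: where may this VM live?
    conflict_tags: set            # compliance: must not share a host with these

def candidate_hosts(req, hosts):
    """The decision a human currently makes for every provision and restart."""
    return [
        h for h in hosts
        if h.free_cpu >= req.cpu and h.free_ram_gb >= req.ram_gb  # capacity
        and h.hypervisor == req.hypervisor                        # right hypervisor
        and not h.reserved                                        # not a reserved box
        and h.region in req.allowed_regions                       # geography rules
        and not (h.running_tags & req.conflict_tags)              # compliance clash
    ]
```

Even this toy version shows why the task does not scale: every new constraint multiplies against a fleet of hundreds or thousands of hosts.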

You are getting the idea, but we are not done yet. Remember, this needs to be done every time you provision a new VM. But since VMs come and go, you really need to do it every time you restart a VM.

Even if your guys are all Einsteins, this is going to be practically impossible. And even if you could get it exactly right, every time, there’s another problem: high-end Virtualization 1.0 solutions include features like high availability and resource scheduling that move VMs automatically – and break everything you just worked out.

Far from fixing it, cloud computing just makes this problem exponentially worse. More machines, more locations and more people provisioning machines equals more complexity. Far from being the enabler of the cloud, virtualization becomes the inhibitor.

How do we solve the problems of Virtualization 1.0?

Virtualization strategy needs to evolve past relying on humans to make each deployment and management decision ad hoc. Enterprises need automated, business-policy-driven provisioning and management. Virtualization 2.0 is that evolution and is built upon three key foundations: separation, delegation and allocation.

Separate the physical from the virtual, and separate the application team from the IT infrastructure organization. IT contributes compute, network and storage resources to a resource cloud, and virtual enterprises (logical units of users) consume resources from it. Virtual enterprises never access the physical layer, and they neither know nor care where their resources come from. IT maintains control of the physical infrastructure and can delegate control over aspects of the virtual infrastructure to authorized users in a multi-tenant model.
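As a rough illustration of this separation (again with invented names and interfaces, not drawn from any particular product), the physical pool and the tenant view can be modeled as two objects that meet only through an abstract grant of capacity:

```python
# Hypothetical sketch: tenants hold a capacity grant, never a host list.
class ResourceCloud:
    """IT-owned pool; physical hosts are registered here and stay hidden."""
    def __init__(self):
        self._hosts = []                 # the physical layer, private to IT

    def add_host(self, host):
        self._hosts.append(host)         # only IT calls this

    def grant(self, cpu, ram_gb):
        # Hand a virtual enterprise a slice of capacity, not machines.
        return VirtualEnterprise(self, cpu, ram_gb)

class VirtualEnterprise:
    """A logical unit of users; consumes resources with no view of hardware."""
    def __init__(self, cloud, cpu_quota, ram_quota_gb):
        self._cloud = cloud              # opaque handle, no host access
        self.cpu_quota = cpu_quota
        self.ram_quota_gb = ram_quota_gb
```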

Delegate self-service provisioning to virtual enterprises in complete safety because of that separation. Virtual enterprise users access image libraries to spin up pre-configured corporate images that maintain company standards. IT no longer needs to spend days or weeks provisioning according to user demand.
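A sketch of that delegation, building on the hypothetical objects above: tenants can launch only images from the IT-curated library, so corporate standards hold even with no administrator in the loop.

```python
# Hypothetical sketch: self-service limited to the corporate image library.
APPROVED_IMAGES = {                      # curated by IT; names are invented
    "web-base": {"cpu": 2, "ram_gb": 4},
    "db-base":  {"cpu": 8, "ram_gb": 32},
}

def self_service_provision(enterprise, image_name):
    spec = APPROVED_IMAGES.get(image_name)
    if spec is None:
        raise PermissionError(f"{image_name!r} is not a corporate image")
    if spec["cpu"] > enterprise.cpu_quota or spec["ram_gb"] > enterprise.ram_quota_gb:
        raise RuntimeError("request exceeds this enterprise's allocation")
    enterprise.cpu_quota -= spec["cpu"]      # draw down the grant, not a host
    enterprise.ram_quota_gb -= spec["ram_gb"]
    return f"VM from {image_name} queued; placement is decided by policy"
```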

Allocate resources to self-service virtual enterprises according to business policies. When a new VM is created (or restarted), the policies determine how that VM is deployed. For example, the CIO sets a policy for the compliance rules his enterprise must follow; that enterprise’s VMs would be automatically deployed based on that policy. Or let’s say the CIO wants only the most expensive hardware used for certain applications – IT sets a policy to make sure the VMs are automatically deployed accordingly. The same could apply for a green policy or even performance. Policies ensure that VMs are deployed automatically according to security, compliance, efficiency, cost and performance rules.
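To show the shape of such policies (a sketch with invented names, reusing the hypothetical Host and VmRequest fields from the earlier example, not a real engine), each rule can be an ordinary predicate evaluated on every deployment and every restart, with a ranking function choosing among the hosts that pass:

```python
# Hypothetical sketch: business policies as predicates over (host, vm).
def compliance_policy(host, vm):
    return not (host.running_tags & vm.conflict_tags)

def geography_policy(host, vm):
    return host.region in vm.allowed_regions

def deploy(vm, hosts, policies, rank):
    """Runs on every create *and* restart; no human picks the host."""
    eligible = [h for h in hosts if all(p(h, vm) for p in policies)]
    if not eligible:
        raise RuntimeError("no host satisfies every business policy")
    return max(eligible, key=rank)

# Example: pack for efficiency by preferring the fullest host that still fits.
# choice = deploy(vm, hosts, [compliance_policy, geography_policy],
#                 rank=lambda h: -h.free_ram_gb)
```

The point of the design is that adding a green, cost or performance rule means adding one more predicate, not retraining the staff who place VMs by hand.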

How do we really benefit?

IT responsiveness skyrockets because the time that was previously devoted to provisioning can be used elsewhere. Value-add activities like capacity planning are now possible. Increased agility due to on-demand deployment enables development teams to test what-if scenarios. Utilization greatly improves and server efficiency can be optimized. Security and compliance concerns are mitigated because the system cannot deploy anything unless it adheres to policy. Virtual sprawl is minimized because virtual enterprises manage their VMs under resource limits – encouraging them to take down defunct machines to free up unused resources when they approach their limits. Users are empowered to control their own VMs, IT has better control over resources, and the CIO can control costs and budgets with business policies.

How do we actually implement this?

This kind of business-policy-driven automation is only possible with the right management tool, one that integrates with your existing management tools and is fully customizable to your needs. It should also help you avoid vendor lock-in, which limits how competitive your business can be. Gartner Group reports that by 2012, 49 percent of enterprises expect to have a heterogeneous virtual environment. Enterprises will want to use free hypervisors for non-critical applications while still being able to use the expensive hypervisors when necessary.
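In the policy model sketched earlier, that heterogeneous-hypervisor preference reads as just one more rule (all names here, including the criticality attribute, are invented for illustration):

```python
# Hypothetical tiering policy for a mixed-hypervisor environment.
FREE_HYPERVISORS = {"kvm", "xen"}        # illustrative free tier

def hypervisor_tier_policy(host, vm):
    # 'criticality' is an invented attribute on the VM request
    if vm.criticality == "low":
        return host.hypervisor in FREE_HYPERVISORS  # keep cheap work on free stacks
    return True                                     # critical VMs may go anywhere
```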

To fully realize the benefits of cloud computing, IT departments must be empowered with enterprise-class cloud management software, built on open standards and the three fundamentals above, that lets them manage their entire, globally deployed infrastructure. Without the Virtualization 2.0 trifecta of separation, delegation and allocation, any cloud solution will suffer from the same problems as Virtualization 1.0. With the new model, however, the load on IT staff shrinks, and savings follow from policy-based dynamic provisioning and minimal management effort.

Without the capabilities and policies of Virtualization 2.0 in place, CIOs may find their heads stuck where their data is not – in the clouds.

Source: Brian