Server virtualisation is a great advance in server technology: it makes servers more efficient, helping companies not only get more for their money but actually save some of it to spend on other things. It is breaking down a lot of barriers even as it erects virtual ones inside servers of all stripes and sizes.
For one thing, the barriers between platforms are eroding because of server virtualisation hypervisors. And system administrators have to deal with multiple operating systems, a whole new paradigm for computing, and a new management software layer, one that is not entirely cooked on most platforms.
The amount of complexity in virtualised server environments is growing at a staggering rate, mainly because being virtual instead of physical allows so many things to happen. A virtual machine is really a large disk file that encapsulates the state of an entire software stack, minus the bits of it that are actually running in a slice of main memory, doing useful work.
That means you can back it up in one fell swoop, move it while it is running from one physical machine to another (provided you move memory state at the same time), or do one-to-many backups for disaster recovery or to propagate system images to remote locations and myriad departments. You can keep a jukebox full of zillions of different software stacks and do regression testing on pre-production applications, allowing developers to test code much more quickly and across a wider variety of scenarios than was possible in the physical world, where it might take days or weeks to get a real server provisioned properly for a test. You can lock down a server so only specific functions are accessible and, provided there are no security holes in the hypervisor itself, create a software stack that is much more secure as well as portable.
These are all great things. But you are going to pay for them twice: once to buy them, and once again to use them. A few years from now, when virtualisation seamlessly spans servers and storage, it will be incredibly complex. Managing the wheels within wheels will be the hard task that companies like VMware, Citrix Systems, IBM, Microsoft, Virtual Iron, and so many others will try to make money on. With VMware sporting a market capitalisation above $33bn, you can bet anyone with some code is trying to sell the idea to a venture capitalist, private equity firm, or anyone else who may want to ride this virtualisation wave up.
This complexity and the cost of managing virtual server sprawl, which knows no physical bounds, is the inevitable counterbalance to the great benefits virtualisation provides.
When virtualisation first took off on mainframes, it was a matter of letting VM host multiple instances of a mainframe operating system such as MVS or OS/390, but eventually VM/ESA took on a life of its own. Today, it is mostly used to support the Integrated Facility for Linux on mainframes, and it is safe to say that the IFL has been one of the key fuels keeping the mainframe business chugging along. This virtualisation has also engendered massive footprint consolidation: there are only 10,000 mainframe footprints in the world.
When IBM brought logical partitioning to OS/400 in 1999, it started out with OS/400 acting as the control domain for a number of subdomains also running OS/400, and it engendered a wave of server consolidation. It was not long before customers started asking IBM to plunk Linux inside an OS/400 guest partition, and soon thereafter IBM rolled out the so-called Virtualization Engine hypervisor, which allowed OS/400 and Linux to sit side by side atop a much thinner hypervisor layer than OS/400 itself. It took two more years for AIX and this hypervisor to be tweaked so they could support dynamic AIX logical partitions.
Because IBM controls all of the hardware in the Power-based servers as well as two main operating systems in the machine, it can certify all the bits to work with the Virtualization Engine hypervisor, which might be getting a new name (PowerVM) in the future. It is the same microcode that came from OS/400 way back when, which stole a lot of good ideas from the IBM mainframe and its VM environment.
Operating system providers and hypervisor providers are trying to get every possible operating system running on their hypervisor as a guest and every possible hypervisor supporting their operating system. The very same vendors who could never say anything nice about each other in a physical server world find that they have to be nice to each other in a virtual world because if they do not play nice, they do not get to play at all.
This is an unintended consequence of server virtualisation, but it is perhaps the most important and one that will serve IT customers well.
Without cooperation, customers cannot virtualise and consolidate platforms in a fluid manner.
There will be some issues, though. The heat is on server component makers to get certified for an ever-wider array of hypervisors, and those companies supplying hypervisors and management tools on top of them are working feverishly to get RAID controllers, network adapters, and other peripherals certified.
The list of certified hardware is embarrassingly small compared to the wealth of support for Linux, Windows, and Unix operating systems for the same peripherals. The relatively captive Unix product lines and the absolutely captive z/OS, i5/OS, and OpenVMS lines have less of an issue here. There is less hardware choice to begin with and one company supplying the hypervisor, the operating system, and the iron. This makes things simpler and more integrated, but customers pay a premium for their iron to avoid hassles.
No virtualisation software vendor admits to this gap between the myriad kinds of hardware on the market and the kinds that are technically supported. You have to read the hardware compatibility lists very carefully before buying anything these days. Even if you are not going to virtualise a server today, you might want to in the future, and if you invest in the wrong technology, you are stuck in the physical world.
Moreover, just because you can get a hypervisor to run does not mean it is necessarily supported by its vendor, or that the version and release of the operating system you want to run is supported by its creator. In some ways, the virtual server world is a lot more restrictive than the physical one.
The fact that it takes money to do the kinds of certifications that are necessary is why there are commercial Linux distributors (Linux has the same issues, and has for a decade), and why Citrix and its hardware partners are going to have to invest heavily in the testing of components for the Xen hypervisor. Ditto Microsoft with Hyper-V (formerly known as 'Viridian'), Virtual Iron with its eponymous hypervisor, Sun with xVM, and even VMware with ESX Server.
The very restricted nature of hardware support for hypervisors is actually going to be a case of the software dog wagging the hardware tail. Those devices that are most broadly supported by the various hypervisors are going to sell, and those whose makers lag in getting certified for hypervisors and operating systems simply will not.