The cost of provisioning a server or desktop has collapsed thanks to virtualization, thin clients, multicore CPUs, and ubiquitous gigabit networking in the data centre. Indeed, in the last 3 years virtualization software itself has tumbled from thousands of dollars per unit to being given away free with many operating systems.
So what happens when server hardware reaches true commodity pricing levels? What happens when the necessity for new capital equipment expenditure goes away, and the power to spawn whole IT estates ends up in the hands of business units or end users? Virtual system instances surge to meet demand (no bad thing), but the CIO and his team are left responsible for the reliability, security, and compliance of an uncontrollable virtual estate. Not all organisations have a powerful CIO or IT function, and not everyone is able to effectively enforce central policy on such a fluid infrastructure.
Early adopters of virtualization have already found this out the hard way, and are now trying to cope with this uncontrolled growth in the number of virtual systems.
We have been here before. During the '90s the cost per megabyte of hard drive storage (remember when we used to think about storage in terms of megabytes?) plummeted. Storage proliferated in the data centre, on the desktop, and in a hundred types of portable devices. The struggle to manage this storage still rages today. Do you know where your confidential data is? Perhaps it's on the SAN, and on John's desktop PC, or his laptop; you know, the one he lost on the train. We waited more than 10 years for tools to help us manage this uncontrolled growth in storage. Even now, with data de-duplication, filesystem snapshots, disk encryption, and (somewhat) affordable SAN and NAS, managing storage is a daily struggle with huge associated costs.
Common sense tells us that it is better to fix problems now, before they become chronic. How much easier would it have been to manage today's terabytes of storage if all those powerful tools had been available to us in 1990?
How can we take the lessons learned from storage and apply them to today's problem of virtualization sprawl?
What if you could devolve the power to create, destroy, and hibernate virtual machines to your authorized users in a carefully controlled way? What if virtual machines were automatically decommissioned after a project's pre-determined end-date? What if you could report on and produce billing records for virtual machines according to their consumption of physical resources?
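To make that concrete, here is a minimal sketch of what such a policy sweep might look like. All of the names, rates, and dates are hypothetical illustrations, not the actual VRM API: each machine is billed for the resources it has consumed, and any machine past its project end-date is flagged for decommissioning.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VirtualMachine:
    name: str
    owner: str
    project_end: date    # agreed decommission date
    cpu_hours: float     # consumption since the last billing run
    gb_ram_hours: float

# Illustrative chargeback rates (per CPU-hour and per GB-hour of RAM)
RATE_CPU = 0.05
RATE_RAM = 0.01

def billing_record(vm: VirtualMachine) -> str:
    """Produce a simple chargeback line from resource consumption."""
    cost = vm.cpu_hours * RATE_CPU + vm.gb_ram_hours * RATE_RAM
    return f"{vm.name} ({vm.owner}): {cost:.2f}"

def sweep(inventory: list[VirtualMachine], today: date) -> None:
    """Bill every machine; flag those past their project end-date."""
    for vm in inventory:
        print(billing_record(vm))
        if today > vm.project_end:
            # A real platform would call the hypervisor's API here;
            # this sketch only flags the machine for decommissioning.
            print(f"  -> {vm.name} is past its end-date; decommission")

if __name__ == "__main__":
    estate = [
        VirtualMachine("web01", "sales", date(2008, 6, 30), 720.0, 1440.0),
        VirtualMachine("dev02", "engineering", date(2008, 12, 31), 200.0, 400.0),
    ]
    sweep(estate, date(2008, 7, 15))
```

The point is not the arithmetic but the principle: when lifecycle and billing policy are enforced automatically across the whole estate, sprawl stops being a manual clean-up job.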
If you could do all these things today, how would that help you manage virtual server sprawl now and in the future? How much time would it save, and what would all that be worth to you?
For more than a year 360is have been working with a group of experienced virtualization practitioners at one of our clients, who have recently spun out a software company and are taking just such a Virtual Resource Management platform to market. DynamicOps announced the launch of VRM in June and, based upon its maturity in production environments, is already welcoming new users. We recommend it for medium and large organisations looking to rein in existing virtualization sprawl or stop the problem before it starts. VRM is vendor-neutral, multi-platform, and security-conscious; DynamicOps and 360is are planning an autumn seminar in the heart of London's financial centre to introduce it and explore how it allows you to regain control of your (virtual) IT estate.
If you would like to be invited, please get in touch.