When it comes to IT management, every task is often treated as a singular event. That not only forces IT administrators into a reactive (rather than proactive) stance, it also makes maintaining a data center more labor-intensive and expensive as the environment scales.

In theory, virtualization was supposed to lower the cost of IT by increasing server utilization rates. In reality, virtual machine sprawl has increased the complexity of the IT environment while having only a marginal impact on those utilization rates. There’s a reason for this: without an effective process in place for the dynamic allocation of resources, IT organizations allocate the peak amount of resources a virtual machine might require, in order to make sure the applications running on that VM meet their performance requirements. Compounding the issue, the advent of virtualization has increased the actual number of workloads in need of management, even as utilization rates have improved only marginally. “A lot of organizations have grown the number of application workloads that need to be managed without really changing the physical footprint all that much,” said Edward Haletky, CEO of the IT consulting firm The Virtualization Practice, LLC.

The end result is x86 server environments where the state of the art in utilization is around 50 percent. And in practice, it tends to be much lower. By way of comparison, mainframe environments routinely see utilization rates well over 90 percent.

When Robert Reynolds became Lead Virtualization Architect for Indiana University, the first thing he noticed was that utilization gap. The university simply couldn’t afford to waste that kind of server capacity, which required Reynolds, who has a background in mainframes, to get a little inventive.
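The peak-provisioning math behind those numbers can be sketched with a toy example. The hourly demand figures below are hypothetical, chosen only to illustrate the mechanism: sizing every VM for its individual peak caps cluster utilization near the 50 percent mark, while allocating against aggregate demand does noticeably better because the workloads' peaks don't coincide.

```python
# Sketch: why peak-based VM sizing depresses utilization.
# Hypothetical hourly CPU demand (in vCPUs) for three VM workloads;
# the numbers are illustrative, not taken from the article.
workloads = [
    [2, 3, 8, 3, 2, 2],   # bursty batch job: peaks at 8
    [4, 4, 4, 4, 4, 4],   # steady service
    [1, 1, 2, 6, 2, 1],   # interactive app: peaks at 6
]

# Static allocation: each VM is sized for its own peak demand.
static_capacity = sum(max(w) for w in workloads)   # 8 + 4 + 6 = 18 vCPUs

# Dynamic allocation: capacity only has to cover the aggregate peak,
# which is smaller because the individual peaks don't line up.
hourly_demand = [sum(hour) for hour in zip(*workloads)]
dynamic_capacity = max(hourly_demand)              # peak of the sum, not sum of peaks

avg_demand = sum(hourly_demand) / len(hourly_demand)
print(f"static utilization:  {avg_demand / static_capacity:.0%}")   # ~53%
print(f"dynamic utilization: {avg_demand / dynamic_capacity:.0%}")  # ~68%
```

The gap widens as more workloads with uncorrelated peaks share the pool, which is the opportunity dynamic resource allocation is meant to capture.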
The school turned to VMTurbo, a provider of workload management tools for virtual machine environments, for tools that allowed it to manage the data center environment holistically. The end result, said Reynolds, was x86 server utilization rates in the 80 percent range. A big part of the issue with x86 server environments, he added, is that IT organizations have been conditioned to expect low utilization rates. VMTurbo let the university parse the environment on an hourly basis, and thus anticipate most issues before they occurred. As a result, Reynolds said, the university could now more easily see where it could maximize utilization rates: “You really have to be willing to take an objective look at your infrastructure.”


VMTurbo CTO Shmuel Kliger says most organizations, inundated as they are with systems-management data, never get the chance to take an objective look at their systems: “They wind up collecting a lot of data that has no context because they are not looking at it with high enough abstraction.” VMTurbo works by organizing all that data in a way that essentially creates a marketplace in the data center: the VMTurbo engine determines which available resources can process a particular workload in the least costly manner possible, given the performance goals. “The goal is to give the IT organization the information they really need to make an intelligent decision,” said Kliger.

IT vendors are trying to solve this fundamental problem in ways that usually require an IT organization to engage in forklift upgrades, the better to gain access to more intelligent servers and more sophisticated networks. But it could be years before most IT organizations have the financial means to accomplish such goals. The real issue, believes Neebula Systems CEO Yuval Cohen, is that most IT organizations don’t have the tools needed to discover applications and their dependencies. To address that, Neebula came up with ServiceWatch, a Software-as-a-Service offering that identifies which business services are enabled by which specific IT infrastructure components. “Without the ability to do root-cause analysis everybody just winds up playing the blame game,” Cohen said.

Determining which application workloads are most important to the business has always been a major challenge for IT. To help IT organizations address that issue, EMC recently funded the creation of a Technology Business Management Council (TBMC), which is publishing an eBook on how to better manage IT.
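The marketplace idea can be sketched in miniature. The following is an illustrative toy, not VMTurbo's actual engine: each workload "shops" among hosts, hosts "price" capacity so that the cost of a placement rises as a host fills up, and the lowest bid wins. The host names, demand figures, and cost function are all hypothetical.

```python
# Illustrative sketch of the "data center as marketplace" idea:
# each workload buys capacity from the host offering it most cheaply.
# Cost model and numbers are hypothetical, not VMTurbo's engine.
hosts = {"host-a": 16, "host-b": 16, "host-c": 8}   # free vCPUs per host
workloads = [("web", 4), ("db", 8), ("batch", 6), ("cache", 2)]

placement = {}
for name, demand in workloads:
    # Price rises as a host fills, so bids steer load toward spare capacity.
    bids = {h: demand / free for h, free in hosts.items() if free >= demand}
    if not bids:
        raise RuntimeError(f"no host can meet {name}'s performance requirement")
    winner = min(bids, key=bids.get)   # lowest-cost offer wins
    placement[name] = winner
    hosts[winner] -= demand            # capacity is consumed, prices adjust

print(placement)
```

Even this greedy version shows the appeal of the model: placement decisions fall out of local price comparisons rather than a human eyeballing per-host dashboards, which is the "high enough abstraction" Kliger describes.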

Supply and Demand

According to TBMC President Chris Pick, the goal of the council is to create a definitive framework for running IT organizations like a business, by managing the supply of and demand for IT services. Far too many IT organizations have a hard time identifying which business processes are associated with particular applications, as well as what IT infrastructure supports those applications. The end result is an overreliance on IT tools that generate irrelevant reports. “There needs to be a lot less geek speak between IT and the business,” Pick said. IT organizations are also facing a good deal of external pressure, especially from executives weighing whether their needs are better served by a third-party service than by internal IT’s offerings. “Internal IT organizations are trying to cope with the loss of their monopoly on IT,” Pick added.

Cory Miller, vice president of engineering services for Bremer Financial Services, believes it’s critical to delineate which levels of service are coming from a third-party provider versus internal IT. A critical first step toward that goal, says Miller, is to think like a service provider. That means defining tiers of applications and the level of service required by each. None of that can happen, however, until IT organizations have repeatable processes in place that scale. Frameworks such as the IT Infrastructure Library (ITIL), or any number of alternatives, are a good start in that direction.

Still, there is no substitute for a proactive approach to management in the data center. The days of waiting for something to break and then fixing it are coming to a close, if for no other reason than that the growing complexity of enterprise IT means something critical will break every day. Anything short of a proactive approach to managing IT as a truly automated service is simply waiting for the next accident to happen.

Image: .shock/Shutterstock.com