The New Normal

March 22nd, 2013

The IT world is being shaped by a confluence of changes: the proliferation of virtualization, the expectation of “always on” IT, and the increasing speed of business enabled by technology. Every advance in lowering costs, increasing agility and reducing risk takes us farther from the “normal” mode of operations of recent years. For many organizations, this is not a series of incremental improvements but a wholesale restructuring of how IT delivers services: the “new normal.”

For many IT shops, the present mode of operation presumes that an environment will inevitably deviate from its ideal or “desired” state. Managing the environment often comes down to being alerted to abnormal conditions and dealing with them. In fact, some management solutions “learn” what is normal in the environment so they can recognize when things are not normal and alert you.

Can this approach keep pace in the “new normal” IT environment? Not likely.

Virtualization “breaks” processes, tools and approaches designed for the physical world. It forces us to adapt. And that’s why legacy approaches, like learning what’s “normal,” alerting on anomalies and predicting when something could go awry, can’t keep pace with the “new normal” virtualization brings.

As I speak with more and more customers about how they manage real-time operations in this new paradigm, I am very pleased to see how receptive people are to hearing about and trying a new approach: one that assumes an environment can be kept in its desired “healthy” state, where you can assure performance while driving the environment toward its most efficient configuration. VMTurbo’s approach makes the question of “how long does it take your management system to learn what’s normal?” moot.
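
To make the contrast concrete, here is a minimal sketch of a desired-state control loop. It is purely illustrative and not VMTurbo’s actual implementation; the workload names, thresholds and actions are assumptions. Rather than learning a baseline and alerting on deviations, each pass observes current utilization, decides what would restore the desired state, and acts on it.

    # Illustrative sketch of a desired-state control loop.
    # Not VMTurbo's implementation; names, thresholds and actions are assumed.
    from typing import Optional
    from dataclasses import dataclass

    @dataclass
    class DesiredState:
        cpu_target: float        # ideal CPU utilization, e.g. 0.65 = 65%
        tolerance: float = 0.10  # acceptable deviation before acting

    # Stand-in telemetry for a few VMs (a real system would pull this live).
    workloads = {
        "vm-web-01":  {"cpu": 0.92},  # running hot
        "vm-db-01":   {"cpu": 0.61},  # inside the desired band
        "vm-idle-01": {"cpu": 0.12},  # wasting allocated capacity
    }

    def plan_action(current_cpu: float, desired: DesiredState) -> Optional[str]:
        """Decide what, if anything, moves this workload back to its desired band."""
        if current_cpu > desired.cpu_target + desired.tolerance:
            return "add resources or move to a less contended host"
        if current_cpu < desired.cpu_target - desired.tolerance:
            return "resize down to reclaim unused capacity"
        return None  # already healthy; no alert, no action needed

    def reconcile(workloads: dict, desired: DesiredState) -> None:
        """One pass of the loop: observe, decide, act. Run continuously in practice."""
        for name, state in workloads.items():
            action = plan_action(state["cpu"], desired)
            print(f"{name}: cpu={state['cpu']:.0%} -> {action or 'healthy, no action'}")

    reconcile(workloads, DesiredState(cpu_target=0.65))

In practice such a loop runs continuously against live telemetry and balances many resources at once, but the shape is the same: the system’s job is to keep workloads in the desired state, not to report when they leave it.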

Simple questions to ask yourself:

  • Would you prefer to keep your environment humming along in the state you desire, or to be notified every time it becomes abnormal (or is about to) so that you can make a decision?
  • Would you prefer to automate decision-making and leverage software-defined control to continuously tune your environment, or to rely on administrators interpreting dashboard data and making decisions whenever the environment deviates from its desired state?
  • Would you prefer to run your virtual environment on “autopilot” by leveraging software-defined control, or to have administrators make manual modifications as data comes in?

The challenge of keeping a virtual environment in a continuous state of health is so complex that most people have resigned themselves to the fact that the environment will break, and that they need to get better at finding out if and when it will happen rather than preventing it.

Software-defined control is here, and it is capable of solving this incredibly complex problem in the modern data center. To contend that learning “normal” is an important problem to solve would be to deny what you already know: normal is an application getting the resources it needs when it needs them, while utilizing those resources as efficiently as possible. No more, no less.