by Shmuel Kliger February 12th, 2013
The world of IT is moving from a hardware infrastructure to a software-defined one, as in Software-Defined Data Center, Software-Defined Storage, Software-Defined Networking, and Software-Defined “X”. To deliver and enable this new software-defined world, we must have control.
The traditional IT operation process involves collecting a huge amount of data, monitoring the environment to detect anomalies, alerting Operations when the environment is either in a “bad” state or about to be in a “bad” state, and then forcing Operations to navigate, drill down, and analyze a massive number of metrics, trends, and reports to troubleshoot and identify the root cause. At the end of it all, you’re (hopefully) able to fix the problem and restore the environment to a “more healthy” state. However, these labor-intensive, time-consuming activities take place while application performance is impacted and quality of service is degraded. This heavy lifting lies in the critical path of restoring service and, therefore, is usually done under pressure. Operational teams are continuously firefighting and finger-pointing, and the quality of service is unpredictable. Is this control? Far from it!
Hundreds of tools exist to support this mode of operation. But are you really in control using them? No! At the end of the day, all these tools provide is visibility into your environment. That almost worked in the pre-virtualization era, but in the highly dynamic software-defined world, this approach doesn’t cut it. Visibility is NOT control. Here’s an illustration of my point:
Prior to the arrival of Hurricane Sandy on the East Coast of the US in October 2012, we had all the possible information collected through a variety of mechanisms. We had the best predictive analytics to process and analyze the huge amount of data and, as a result, we knew exactly where and when the storm would hit. We knew that it would be a Category 1 hurricane, and we knew many other statistics. We had perfect visibility! But did this mean we were in control? Unfortunately, no. Having all the information only meant that we knew what was going to happen and when. We could brace for the resulting disruption, but we were never in control of dealing with it.
The software-defined world requires Software-Defined Control (SDC). SDC is a platform to drive to and maintain the environment in a desired state. The desired state is the “healthy” state in which application performance is assured while the environment is utilized as efficiently as possible. It is a tight, closed loop of monitoring, analysis and control in which the software continuously analyzes the monitored metrics across the entire environment and drives the control actions to keep the environment in the desired state.
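The tight, closed loop described above can be sketched in a few lines of code. This is a minimal illustration only: the host names, the single utilization metric, the 70% target defining the “healthy” state, and the load-shedding action are all assumptions made for the sketch, not part of SDC itself.

```python
# Minimal sketch of a closed monitor -> analyze -> act control loop that
# drives an environment toward a desired state. All names and numbers
# here are illustrative assumptions.

DESIRED_MAX_UTILIZATION = 0.70  # assumed threshold for the "healthy" state

def monitor(hosts):
    """Collect the current utilization metric for each host."""
    return {name: load / capacity for name, (load, capacity) in hosts.items()}

def analyze(metrics):
    """Flag hosts that have drifted out of the desired state."""
    return [name for name, util in metrics.items()
            if util > DESIRED_MAX_UTILIZATION]

def act(hosts, overloaded):
    """Control action: shed load from overloaded hosts (stand-in for,
    say, moving a workload elsewhere)."""
    for name in overloaded:
        load, capacity = hosts[name]
        hosts[name] = (load * 0.8, capacity)  # toy action: shed 20% of load

def control_loop(hosts, max_iterations=20):
    """Repeat monitor/analyze/act until every host is in the desired state."""
    for _ in range(max_iterations):
        overloaded = analyze(monitor(hosts))
        if not overloaded:
            return True  # environment has converged on the desired state
        act(hosts, overloaded)
    return False

hosts = {"esx1": (95.0, 100.0), "esx2": (40.0, 100.0)}
converged = control_loop(hosts)
```

The point of the sketch is the shape of the loop, not the action: the software continuously compares monitored metrics against the desired state and keeps issuing control actions until the gap is closed, rather than alerting a human to close it.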
Employing SDC transforms IT operation. It separates the alerting, heavy lifting, troubleshooting and fixing from the control process. It eliminates the constant firefighting and finger-pointing, and assures quality of service and smooth operations. SDC simplifies IT operation. To deliver SDC one must solve the Intelligent Workload Management (IWM) problem, an intractable problem of how to assure application performance while utilizing the environment as efficiently as possible. You can read all about how we do that here.
Delivering SDC requires a unified control system that spans the infrastructure from the top down, solves the IWM problem, and focuses on utilizing the controls – now exposed in software thanks to all the “software-defining” – to maintain a desired state. To solve the IWM problem we use a federated heuristic, the Economic Scheduling Engine (ESE), which abstracts the environment and is able to drive the intelligent actions required to keep IT in the desired state. It’s the only platform that delivers a closed control loop of planning, onboarding and operating virtual data centers. It is Software-Defined Control and it is transforming IT operation.
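To give a flavor of the economic abstraction, here is a toy market-based placement sketch: hosts “sell” capacity at a price that rises with utilization, and each workload “buys” from the cheapest host that can fit it. The pricing function, the greedy clearing, and every name in it are assumptions made for illustration; this is not the actual ESE algorithm.

```python
# Toy market-based workload placement. Hosts price their capacity by
# utilization; workloads are placed on the cheapest feasible host.
# Purely illustrative -- not the real Economic Scheduling Engine.

def price(used, capacity):
    """Price rises steeply as a host approaches full utilization."""
    utilization = used / capacity
    return 1.0 / (1.0 - utilization) if utilization < 1.0 else float("inf")

def place(workloads, hosts):
    """Greedy market clearing: biggest workloads shop first, each buying
    capacity from the host that would be cheapest after the purchase."""
    placement = {}
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        feasible = [h for h, (used, cap) in hosts.items() if used + demand <= cap]
        if not feasible:
            raise RuntimeError(f"no host can fit workload {name}")
        best = min(feasible,
                   key=lambda h: price(hosts[h][0] + demand, hosts[h][1]))
        used, cap = hosts[best]
        hosts[best] = (used + demand, cap)
        placement[name] = best
    return placement

hosts = {"hostA": (0.0, 100.0), "hostB": (0.0, 100.0)}
workloads = {"vm1": 60.0, "vm2": 50.0, "vm3": 30.0}
placement = place(workloads, hosts)
```

The attraction of the economic framing is that a single signal, price, encodes both performance risk (a nearly full host is expensive) and efficiency (an idle host is cheap), so the same mechanism pushes toward assured performance and efficient utilization at once.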
Join our New Release webcast on February 14th and I’ll explain it in greater detail, in person (virtually).