by Geeta Sachdev June 4th, 2013
I recently met a CIO at a very large New England-based service provider. This company likes to collect data and keep detailed metrics on everything. In fact, as part of a drive to automate, the company calculated the quantity of alerts their large data center generated in the last year. How many do you think?
The answer? 56 million.
Wow. To appreciate how incredible a number that is, let me break it down.
That’s more than 153,000 alerts every day, over 6,300 every hour, or more than 100 per minute! How on earth could any human, even a really hard-working, highly efficient, intelligent person, possibly track and address 153,000 alerts a day?
This company also calculated that of the 56 million alerts per year, only 133,000 were tracked and addressed. That’s 0.24%, or less than one-quarter of one percent. So why bother instrumenting everything, setting thresholds, and collecting so many alerts when almost all of them are ignored? That’s a good question.
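If you want to sanity-check those figures yourself, the arithmetic works out in a few lines of Python (a back-of-the-envelope sketch assuming a 365-day year):

```python
# Back-of-the-envelope check of the alert figures above (assumes a 365-day year).
alerts_per_year = 56_000_000
per_day = alerts_per_year / 365      # ~153,425 alerts per day
per_hour = per_day / 24              # ~6,393 alerts per hour
per_minute = per_hour / 60           # ~107 alerts per minute

tracked = 133_000
tracked_pct = tracked / alerts_per_year * 100  # ~0.24% tracked and addressed

print(f"{per_day:,.0f}/day, {per_hour:,.0f}/hour, {per_minute:,.0f}/minute")
print(f"{tracked_pct:.2f}% of alerts tracked and addressed")
```

Run it and the numbers line up with the claims: well over 100 alerts a minute, with less than a quarter of one percent ever acted on.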
What if you could prevent the problems that generate alerts in the first place? What if you could get out of reactive mode and actually take control of your virtual environment? And what if, in the process, you could make sure that applications get the resources they need to perform optimally AND use those resources so efficiently that you eliminate unnecessary capital investments in capacity and operational expenses in manpower?
Today’s virtualized data center cannot scale with IT Operations staff monitoring and responding to alerts by hand, especially not at 100 per minute. It doesn’t matter whether you have 50,000 virtual machines or 50. In a dynamic virtual data center, there is likely to be a change or move every few minutes, far more than any one person can manage reliably. There is a better way to approach the problem: let software manage it.