
Data center history and the impact on today’s performance management

John Gentry, Vice President of Marketing and Alliances –

When it comes to the data center, history is important. If you understand the history and evolution of data center storage, you can see why performance management has broken down. Let's take a look at the last 30 years of data center history to put today's performance management situation into context.

The first cloud

In the 1970s and 1980s, there was the mainframe. The mainframe was, in effect, the first cloud, because it was a "shared" system. It was also closed, so performance management was never an issue. However, mainframes were massive and expensive, and few enterprises could afford them. Furthermore, the expertise required to manage and maintain the technology was enormous.

Open systems and the decentralization of IT

Next came the open-systems client/server model. You still had the traditional architecture: a front-end tier, a business-logic tier, and a massive storage array. Thus began the decentralization of IT. The biggest challenge with this setup was that the vendors selling the tiers were also the ones managing performance, and each focused only on the tier it was selling rather than on the system as a whole.

The Internet and throwing more “stuff” at infrastructure

Then came the Internet. The same stack configuration existed, but now it was connected, which gave rise to disaster recovery and business continuity. More network performance management was needed, and the answer to performance issues was usually "blame it on the network." The only way to increase storage performance was to throw more "stuff" at the infrastructure. The challenge today is that virtualization permeates all the way down through the stack, yet it still relies on legacy infrastructure. The business logic depends on systems that cannot easily scale out or sit in a highly virtualized environment.

The question is, how do we manage assets that are tied to a complex infrastructure path? The tools available are still delivered by the same vendors whose systems are being monitored. And how does your performance monitoring solution scale out in the same way you want your new systems to scale out?

Fixing a broken performance management system

Given this history and how it has built up to today, how can you, the IT manager, fix this broken system? First, you have to understand multiple generations of technology and multiple vendors across the entire infrastructure, and manage that infrastructure from the hypervisor layer down to the LUN. You have to move past application performance management and its focus on the end-user experience, and instead get a consolidated view that reaches down through the infrastructure layers.

Because of the history presented here, most data centers today have built up disparate systems, whether through a combination of legacy systems and new technology or through an amalgamation of multiple vendors and arrays. This situation makes performance monitoring challenging, because end-to-end visibility becomes difficult. Individual vendors may provide tools that give visibility into their own components or arrays, but holistic visibility has always been difficult to achieve because of the lack of vendor agnosticism.

There are two things we must do to fix this situation: acknowledge that the lack of visibility is a problem, and find the expertise to solve it. Having a service level agreement (SLA) that encompasses the entire IT stack is more important than ever. Building a team with expertise in every area of the stack, all the way down to the legacy infrastructure, is the best way to handle storage performance and ensure everything is done correctly at every tier.

Get the details of why performance management is broken in this video with John Gentry and Storage Switzerland analyst George Crump.