Kehoe - Futility Computing

By John Kehoe - 15 July 2008

For some time we’ve witnessed the push for utility computing: technologies such as server virtualization, storage virtualization, and grids that shift loads. Then there’s data source virtualization: natural language queries that retrieve a steaming heap of data from a mix of sources without being transparent about how it all got there. Sounds like the tomatoes the FDA can’t track down.

It’s best described as “Futility Computing,” an idea Frank Gens of IDC came up with in 2003.

Here's why utility computing is problematic.

First, the technologies have had a long maturity curve. Remember when a certain RDBMS vendor (who shall remain anonymous because I might need a job someday) promised the first grid capable of dynamically shifting load? We've been in pursuit of heterogeneous storage virtualization for a long, long time. Has there ever been a cluster that wasn’t a cluster-[expletive]?

Second, utility computing “solutions” are money spent on the wrong problem. The argument can be made that there are savings to be had by creating a utility structure. We save rack space, fully utilize storage, cut the electric bill and reduce HVAC requirements. We even get to do a nice little PR piece about how green we are and how we're saving the polar bears because we care. But what is the real cost? Do we have the right hardware for scalability? Can our business solutions exploit virtualization, or will performance degrade under the utility approach? What is the risk of vendor lock-in? Does the utility solution support the mix of technologies we already use, or do we need a separate tool for each? Is virtualization robust? Above all else, how much obscurity do we introduce with the utility model? Not only do we risk distorting our costs, but with all the jet fuel we'll need to burn flying consultants back and forth to keep the virtualization lights lit, we may not be doing the polar bears any great favors after all.

Most shops have a Rube Goldberg feel to them: applications are often pieced together and interconnected to the point where they make as much sense as an Escher drawing. IT doesn’t know where to start, let alone what all the pieces are (which is why SOA and auto-discovery get pushed, but that is another diatribe). Any virtualization effort requires a complete understanding of the application landscape. Without it, a utility foundation can’t be established.

One byproduct of virtualization initiatives is the further stratification, or isolation, of expertise. The storage team is paid to maximize storage efficiency and satisfy a response-time service level agreement (SLA). The Database Administrator (DBA) has to satisfy the response SLA for SQL. The middleware team (e.g., Java or .NET) has to optimize response, apply business logic, and pull data from all sorts of remote databases, applications and web services. The web server and network teams are focused on client response time and network performance. Everybody has a tool. Everyone has accurate data. Nobody sees a problem.
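To put numbers on that, here is a toy sketch in Python (every figure invented for illustration) of how five green dashboards can add up to one red transaction: each tier meets its own SLA, yet the end-to-end response the user actually feels blows the budget.

    # Toy example (all numbers made up): every silo meets its own SLA,
    # yet the composed transaction misses the end-to-end target.
    tiers = {
        "storage":    {"sla_ms": 10,  "measured_ms": 9},
        "database":   {"sla_ms": 50,  "measured_ms": 48},
        "middleware": {"sla_ms": 200, "measured_ms": 190},
        "web":        {"sla_ms": 100, "measured_ms": 95},
        "network":    {"sla_ms": 40,  "measured_ms": 38},
    }
    END_TO_END_SLA_MS = 300  # what was actually promised to the business

    for name, t in tiers.items():
        ok = t["measured_ms"] <= t["sla_ms"]
        print(f"{name:<11} {t['measured_ms']:>4} ms (SLA {t['sla_ms']} ms) {'OK' if ok else 'MISS'}")

    total = sum(t["measured_ms"] for t in tiers.values())
    print(f"end-to-end  {total:>4} ms (SLA {END_TO_END_SLA_MS} ms) "
          f"{'OK' if total <= END_TO_END_SLA_MS else 'MISS'}")

Five OKs, one MISS, and the MISS belongs to nobody.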

Meanwhile, Rome burns.

Unfortunately, nobody is talking to anyone else. The root of the problem is that we have broken our teams into silos. That leads us to overly clever workarounds rather than sorting the nastiness out: servers that automatically shift load, storage that migrates in the background, even virtual machines wandering from host to host. We never stop to ask: does any of this address our real problem, or is it just treating the symptoms?

This brings to mind some metaphors. The first is an episode of the television series MacGyver (Episode 3 ‘Thief of Budapest’) where MacGyver has to rescue a diplomat. During his escape, the villains (the Elbonians) shoot and damage the engine of his getaway car. Mac goes under the hood to fix the engine with the car traveling at highway speed. Outrageous as it may sound, this is not too far from the day-to-day reality of IT. Of course, IT reality is much worse than this: the car is going downhill at 75 MPH on an unlit, twisty road, the Elbonians are still shooting away, and the car is on fire.

The second metaphor that comes to mind is an anti-drug television campaign that ran in the US during the 1980s. It opened with a voiceover, "this is your brain," accompanied by the visual of an egg. This was followed by another voiceover, "this is your brain on drugs," with a visual of the egg in a scalding-hot frying pan. The same formula applies to an application on utility computing: this is your application; this is your application on a stack of virtualization.

As we fiddle, we’re pinching pennies on the next application that our business partners believe will give them a competitive advantage. Because we’ve discounted application performance, and compounded that by having no means of finding where business transactions die, we’ve put functional requirements at significant risk. In fact, we’ve misplaced our priorities: we spend prodigiously on utility enablement, siloing and obscuring IT, while simultaneously ignoring end-to-end performance. We do this because we take performance for granted: vmstat and vendor console tools are all we need, right?
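For the record, a "means of finding where business transactions die" need not be exotic. Here is a minimal Python sketch (hypothetical helper names, invented for this column, not any vendor's API): carry one transaction ID across every silo and leave a timestamped breadcrumb at each hop.

    import time
    import uuid

    def trace(txn_id, tier, event):
        # In real life this feeds a shared log or collector, not stdout.
        print(f"{time.time():.3f} txn={txn_id} tier={tier} event={event}")

    def handle_order(order):
        txn_id = uuid.uuid4().hex[:8]   # one ID that crosses every team's turf
        trace(txn_id, "web", "request received")
        trace(txn_id, "middleware", "business logic start")
        trace(txn_id, "database", "query start")
        # If the transaction dies right here, the trail shows exactly where,
        # instead of five per-silo tools all reporting "fine".
        trace(txn_id, "database", "query done")
        trace(txn_id, "middleware", "response assembled")
        trace(txn_id, "web", "response sent")

    handle_order({"sku": "ACME-1"})

Nothing fancy; the hard part is organizational, which is exactly the point.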

Very few virtualization/utility models succeed. Those that do share common characteristics. There is a clean application landscape. People have a shared understanding of the applications. They have dedicated performance analysis teams staffed with highly capable people. They have very low turnover. Finally, they have a methodology in place to cut through the silos and pinpoint the cause of a performance problem. From personal experience, about 1.5% of all IT shops can say they have all of these things in place. That means the other 98.5% are underperforming in this model.

Without the right capability or environment, a utility approach is going to cause more harm than good. It is possible to get away with some virtualization deployments: one-off VMs are easy, and some degree of server consolidation is possible. But look out for the grand-theft-datacenter utility solution. It’s pretty violent.

About John Kehoe: John is a performance technologist who has been plying his dark craft since the early nineties. John has a penchant for parenthetical editorializing, puns and mixed metaphors (sorry).
