Tuesday, October 27, 2009

Southwest Airlines, a Great Cloud Provider...

Some of your organizations may be very rich and able to afford dedicated executive transportation infrastructure. But I am willing to wager that it isn't as efficient, reliable, or available as Southwest Airlines' highly scalable, multi-tenant environment: their fleet of 737s.

I'm not saying that I like flying on an airline more than on a private jet, but private jets are expensive. I like Southwest Airlines because it is not. What is more fascinating is that ALL of the adjectives traditionally associated with Cloud Computing can be applied to this organization:

On demand, pay as you go, dynamically provisioned, granular, elastic, opaque, transparent, standardized, highly reliable, highly scalable, flexible, agile, risk mitigating, zero cap-ex, reduced barriers to entry, new business model enabling, metered payment, multi-tenant, dynamic, open, interoperable, without vendor lock-in, consumerized, automated, speedy, constant "state of the art", secure, utilizes economies of scale and skill, and above all, cost effective!

Let's just take a few of my favorites: standardized, scalable, available, secure, multi-tenant and interoperable.

SWA, unlike most other airlines, is highly standardized. They have one reservation system and do their best to drive all traffic to the most cost-effective user interface to that system: Southwest.com. The airline operates only one airframe, the 737, so they leverage standard maintenance practices, piloting and servicing skills across their system. And they have standard pricing so that passengers can predict travel costs and budget accordingly.

As a function of standards, their organization scales easily, which leads to cost effectiveness, agility and elasticity. Again, unlike most other airlines, for SWA to introduce new destinations the initial investment is limited, so they can expand and contract their portfolio of services quickly.

Their fleet of standardized 737s is highly redundant, highly available and secure. Not only does the airline take advantage of the FAA's security infrastructure, it extends it with its own procedures. The 737 is also multi-tenant, enabling it to maximize its load efficiency, and because the airline maintains more than one plane, if an individual aircraft goes offline, another is available almost immediately to ensure minimal service interruption.

Lastly, SWA promotes interoperability both internally and externally. It goes without saying that all of their airplanes fly in the same sky as everyone else's, and passengers can fly on one jet just as easily as another, but you can also get on a Southwest jet and connect with a completely different airline. They further guarantee service through interoperability by being able to re-route and reschedule flights based on their needs or those of their passengers.

I could go on and on, but what is most impressive about SWA is that it can fully absorb its systems' maintenance and operating costs and still provide enough value to turn a profit. Let's face it, if IT organizations ran like Southwest Airlines, they wouldn't be considered cost centers.

But they don't. Most IT organizations are still operating the equivalent of the company jet. They run dedicated datacenters with a hodge-podge of hardware, operating systems, databases and applications. All of this complexity leads to higher and higher operating costs and less agility and flexibility for the overall organization. It is justified by the rationale that THEIR data is so important that it warrants special treatment.

In the long term, it's obvious that those shops continuing to "fly their own jets" will ultimately drive their companies out of business as they lose competitiveness. But in the short run, there needs to be a hybrid approach. Yes, the savviest strategy mandates that organizations maintain some of their dedicated resources for critical data and processes and slowly experiment with moving the not-so-crucial stuff to the Cloud, all the while realizing that this compromise has implications and opportunity costs of its own.

So what is this Cloud thing?

Wow, a couple of years ago, I thought I was really on to something with this whole Cloud thing. It was good stuff: on-demand, elastic pricing, virtualization, economies of scale, blah, blah, blah. But now, I have to say, I am not so excited. After spending the last two years talking to CIOs, CEOs, COOs and most importantly CFOs about all the Cloud has to offer, it seems the only ones who are really benefiting from the buzz are the CMOs (of old technology companies). That stands to reason; their job is to spin. What concerns me is the nebulousness that the Cloud term has begun to evoke. So just for the record, let's define what a Cloud is, what a Cloud provider is and what Cloud Computing is.

First, there is ONLY one Cloud, and it encompasses the entire universe outside of our atmosphere. This analogy is very important. The Cloud is the Nebulous, all of it, including the Solar System, the Milky Way, and all of the galaxies that lie beyond what is contained in our natural environment called Earth. For this discussion, it is important that the Cloud be nebulous and outside of our control because it hammers the point home. It is also important that it be seen as a provider; after all, what is out there is ultimately the source of all life and energy within our environment. In that light, the Sun is the simplest example of a Cloud provider.

When we apply the Cloud analogy to business services, there are literally millions of Cloud providers. For the purposes of this discussion, I define a Cloud service provider as an entity which delivers its complete value without the need to modify the client's environment. Think eBay, Bank of America, Southwest Airlines, AT&T, PG&E, etc. These are all Cloud providers, and once connected to their clients, all share some simple characteristics:

1. They all provide access to utilities or services built on a system or network that doesn't require modification to the client's environment.
2. All services can be provisioned dynamically in a completely self-service fashion.
3. All services are available on a metered and granular pay-for-use basis.
4. All services can be highly tailored via self-service or by invoking a client advocate.
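To make the third characteristic concrete, here is a minimal sketch of metered, granular, pay-for-use billing. The service names, rates and usage figures are all invented for illustration; no real provider's pricing is implied.

```python
# Hypothetical rate table: price per unit of each metered service.
RATES = {
    "storage_gb_month": 0.15,
    "compute_hour": 0.10,
    "requests_1k": 0.01,
}

def metered_invoice(usage):
    """Bill only for what was actually consumed, line by line."""
    lines = {svc: qty * RATES[svc] for svc, qty in usage.items()}
    return lines, round(sum(lines.values()), 2)

# A month's consumption produces an itemized, granular bill.
lines, total = metered_invoice(
    {"storage_gb_month": 40, "compute_hour": 120, "requests_1k": 500}
)
```

The point of the sketch is simply that the client pays for measured consumption rather than for owning the infrastructure, just as a Southwest passenger pays per seat rather than per airplane.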

What is Cloud Computing then? From the user's perspective, it is simply the extension of this metaphor to the availability of services traditionally associated with a computer, i.e., data management and processing. Just as I don't have to buy the airplane to fly to San Francisco, I shouldn't have to purchase a data center to automate processes or manage information. In other words, just as the network is the Dial Tone, the Cloud is the Computer. The user doesn't care where the resources are nor to whom they ultimately belong, only that they are reliable and available. End of discussion.

It is my belief that all of the noise around this revolution in computing comes to bear when the critical asset being managed is addressed: data. Data management is complicated stuff, and it has to be to ensure that data management professionals are gainfully employed. But just like financial management, real estate management, and time management, it is an art that can be mastered. Like all of these assets where individuals seek the services of professionals, the key is to look at Information Management as an art and not a technology. It is the same way we think of travel as a destination rather than a trip. In other words, the physical movement of one's body is a service guaranteed by the provider. The provider might employ different technologies to ensure delivery, but that is not the user's concern. The provider benefits directly from centralization, virtualization, and standardization, but the user doesn't, and shouldn't care so long as service is available at his price point. In fact, if delivery is not guaranteed, no payment is expected. Can you imagine if your financial planner insisted that you purchase his calculator before he sat down to planning your retirement?

Monday, October 12, 2009

Thinking Outside the Firewall

There is so much buzz and hype around Cloud Computing that I thought I might need to jump on the bandwagon, so I am announcing that I am now a "Cloud Advocate," whatever that means...

Well first, one must understand what the Cloud is. From my perspective, it is an electrically charged and over-hyped buzzword that can be applied to any technology being produced, marketed and sold by any IT vendor in today's marketplace. It is an adjective, an adverb and a noun all wrapped up in one nebulous (and Cloudy) word that immediately signals that the technology to which it is attached is in vogue.

However, I don't see the Cloud as a technology. It really depends on your perspective, but to me, Cloud is either an architectural strategy or a business strategy. Architecturally, to refer to Cloud Computing simply means that computing resources are being utilized from outside of one's firewall. From a business perspective, Cloud Computing simply means that computing resources are being provisioned from outside of one's firewall. In either case, the cost and complexity are drastically reduced as a function of technical innovations such as multi-tenancy or business innovations like metered usage.

So if Cloud Computing is an architectural and/or business strategy, then obviously the reason it is being pushed by so many technology vendors is to cash in on the hype. But why aren't the traditional sources for strategy and architecture consulting jumping on this bandwagon in droves?

They are starting to. But there are some fundamental business implications of adopting a Cloud strategy that may or may not bring serious consequences to bear on their business models. As a result, in the past six months, I have been retained to provide strategic overviews to the "Cloud" strategists at some pretty large traditional IT vendors, solutions providers and even some within the venture capital community. I guess that gives me the right to coin myself a "Cloud Advocate" and to start charging good money for my perspective (even though I am giving it here for free).

The question that I keep getting is "What is the Killer App for Cloud?"

It depends, but usually my answer to that question is this: "The Cloud means the eventual demise of vendors selling DATABASE DRIVEN APPLICATIONS," and therefore the killer application for the Cloud is no applications at all. Instead, the Cloud needs to be viewed for what it truly is: a business abstraction layer on top of a universal network of computing resources which presents a ubiquitous platform for infinitely mashing up data to address user and organizational needs in specific and situational instances. So the entire internet is the Cloud, and therefore any platform built on it simply cannot exist behind a firewall, because the majority of data is resident outside the firewall.

Am I saying that applications won't be written anymore? No, I am simply saying that the commercial incentive to develop, market and sell applications will diminish to a point at which the return will no longer justify the effort. This is a function of two things: the abundance of existing applications and the immediacy of updates to them. When an application is written once and deployed universally to its users, it enforces processes and data standards, but without either, the application itself is not tremendously valuable to the economy at large; it only has value to its specific users in their specific situation. However, the data it contains does have value outside the immediate user community and, with the proper structural and security questions addressed, has infinite value to a plethora of stakeholders outside the firewall.

We are seeing this trend already in the Biotechnology and Pharmaceutical space, as well as health care and several other industries. Data from outside the firewall, from sources like Google or national and commercial databases, is being mashed up with information from traditional on-premise systems to provide new insight. Research, marketing and sales people all know that to limit your data set is to limit your horizon. In fact, every day I mash data from sources such as Hoovers, LinkedIn, Twitter and endless RSS feeds with my own CRM database to provide more complete profiles of my prospects and clients. But that is a discussion for another blog...
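The kind of mash-up described above can be sketched in a few lines. Everything here is hypothetical: the records, the field names and the join key (an email address) are invented for illustration, and a real version would pull from the providers' feeds or APIs rather than from literals.

```python
# Internal CRM records (invented for the sketch).
crm = [
    {"email": "ann@example.com", "company": "Acme", "stage": "prospect"},
    {"email": "bob@example.com", "company": "Bolt", "stage": "client"},
]

# Data from outside the firewall, e.g. parsed from an RSS feed
# or a directory lookup, keyed by the same email address.
external = {
    "ann@example.com": {"title": "VP Operations", "recent_news": "expanding"},
}

def enrich(crm_rows, outside):
    """Merge outside-the-firewall fields into each CRM profile."""
    return [{**row, **outside.get(row["email"], {})} for row in crm_rows]

profiles = enrich(crm, external)
```

The value is in the join: neither source alone gives the complete profile, which is exactly the argument for letting data cross the firewall.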

This brings me to my own question: if there is value in bringing data from outside the firewall and mashing it with data from inside, then what's the use of firewalls at all? Shouldn't the emphasis be on the data, not the network? Obviously, it's time to start thinking outside the firewall...

Monday, October 5, 2009

Adoption for Orphaned Applications

Every organization has them, every IT director hates them, every manager needs them. What are they? They are the unique, specific and situational applications that make each organization different. They represent the competitive advantage that could distinguish a good company from a great one, a successful division from a loser, profitability versus bankruptcy. They are the enterprise applications that everyone agrees are needed but never got built.

From Sand Hill Road to Wall Street to Bangalore, the waste baskets of venture capital firms large and small are filled to the brim with crumpled-up plans for yet another "killer" application. Sneak into the budgeting meetings of any CIO in corporate America and you will witness an ever-increasing list of applications that users need but that can't be built or bought. Talk to any executive or line-of-business manager and you will hear frustration over a belief that if there were better automation, a more streamlined process or the availability of quicker and better data, their organization would be that much more effective.

Yet the unique applications to address these issues are rarely built. Why...?

Inevitably, the answer always boils down to risk. VCs won't fund an idea that doesn't promise a huge return. You will never see a commercially available application for managing witchcraft potions backed by VC. It's the same reason that most savvy IT professionals will always exhaust their efforts researching branded solutions before they attempt to build something on their own, since a build ultimately robs them of their most expensive resource...time.

It seems custom development is just too risky. So how does one estimate the risk? Obviously it is a function of cost. The money at risk in custom application development falls into three categories: hard funds, soft funds and opportunity. For the hardware, software and infrastructure on which the solution will be built, it is relatively simple to come to a number. In terms of the time and effort of internal, professional or contract resources, the number gets more nebulous. But the real risk lies in the opportunity costs of doing nothing, doing something and doing it wrong, or just buying and delaying the eventual integration and user-adoption problems. This long-range cost is like deferred interest; sooner or later it must be paid.

As the cost of technology continues to fall (Moore's Law) and the availability of basic technology components increases, the risk of failure in terms of hard costs is becoming more tolerable. In fact, the new motto is fail, and fail quickly, since the real barrier is time. We all know time is money, but simply accounting for developers' time isn't sufficient, as that cost can be "off-shored" for close to nothing. Even if the hardware, software and development effort were all free, 75% of all custom development projects would still fail.

The real reason, and thus the high cost associated with the risk, is the difficulty of translating domain expertise into technological solutions. Every executive knows the frustration of taking the time to sit with technical resources, map out a solution, invest in the hardware/software infrastructure and dedicate their own time to describing a solution they eagerly anticipate, only to have it turn out to be something completely different than they expected. At some point in this nose-dive of a process, the app becomes "orphaned," and the later that happens, the more costly it is.

So what is the vehicle for most executives to put some structure around their processes and evolve them into productivity? Well, nine times out of ten, the idea is abandoned altogether. In the unusual case that it isn't, the answer typically starts with a spreadsheet. Spreadsheets are ubiquitous; every manager learned Lotus or Excel in college, and these days they are the default development and BI tool for most business executives. Just about anyone can figure out how to use them, even CEOs.

But spreadsheets have limits. As personal productivity tools they allow individuals to organize and structure data for better analysis, but when one actually wants to work with large volumes of data, automate its capture and collaborate with others, the shortcomings quickly become obvious.

Where spreadsheets fall short, databases shine; they easily handle massive amounts of data, will accept one-at-a-time inputs or bulk uploads, and enforce structure and conformity for collaboration and standardization. They are also excellent for acting as the "single source of truth" for a group or an organization, and a single database can host hundreds of applications, although few organizations adopt this concept.
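Those database strengths, enforced structure, bulk uploads and a single queryable source of truth, can be seen in miniature using Python's built-in sqlite3 module. The schema and rows here are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Structure is enforced at the database level, something a shared
# spreadsheet cannot guarantee: every row must conform.
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        amount   REAL NOT NULL CHECK (amount >= 0)
    )
""")

# Bulk upload: a thousand rows in one call.
rows = [(None, f"customer-{i}", i * 1.5) for i in range(1000)]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# One query against the single source of truth.
(total,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
```

A malformed row (a missing customer, a negative amount) is rejected outright rather than silently corrupting the data set, which is precisely the conformity-for-collaboration point above.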

Their downfall is complexity, expense and maintenance. Further, they take time to implement, a process which becomes more complicated if the database is accessible from the internet. Given the cost, complexity, time and effort needed to implement database-driven solutions, most executives retreat shyly from them unless the expertise and resources are already a sunk cost.

How does the Cloud impact this?

Outside of very large companies where such costs are sunk, databases are now virtually available "in the sky" for next to nothing. As a result, the power of enterprise-class database applications is now available to leadership in small and mid-sized companies. In fact, with cost and complexity no longer an issue, the problem now is TOO many databases. Where most companies used to be faced with the acquisition of one or two large applications to act as a "system of record," now companies have hundreds of SaaS vendors banging down their doors, each running on different databases with different structures, entities and nomenclatures. Navigating this paradigm is becoming a real challenge for the modern CIO.

Witness the rise of literally thousands of situational applications available now as a service for just a few dollars per month. Recently, a Google query of CRM vendors produced over 5,000,000 hits. Spend the hours necessary to whittle that down to real vendors and one must still contend with tens of thousands. Even if the CRM functionality is only a couple of dollars per month, when added to the HR, expense management, reporting and so on, those tiny numbers quickly become huge. Add to that the cost of integration, training and simple configuration, and the platform approach quickly becomes the beacon of sanity.

What, then, is the platform for the orphaned applications? Or for any enterprise or organizational database-driven application, for that matter? What is needed is a set of tools as easy to use as a spreadsheet that sits on top of one of these databases in the sky. And that instance had better be trusted. Give a CEO simple tools and access to such a cloud-based environment, and theoretically the days of the orphaned applications could be over. In fact, the days of vertically dedicated database-driven applications probably are too.

So please, help us find homes for these orphaned applications, a place where they can be loved and used and be of service to the organizations that need them. A warm and happy platform on which to live, somewhere in the Cloud...