Forecast, partly cloudy…
For the last couple of years the concept of clouds has dominated the enterprise IT agenda. Dial up just about any periodical focused on enterprise computing and you will see cloud stuff everywhere. Walk through enterprise computing shops, however, and the picture isn't quite the same. Hype has always preceded adoption when it comes to this kind of thing, so the disparity could simply be that the walk hasn't caught up with the talk just yet. But there is also a pretty well-understood history of enterprise adoption of IT services that can be factored into the adoption equation.
Clouds, like most IT phenomena, arrived not through a single technical innovation but through a confluence of innovations, maturing infrastructure and economic circumstances. Perhaps the single most influential of these had nothing to do with enterprise computing and everything to do with the consumer adoption of mobile applications. Hence, if you look at where merchant-based cloud computing is thriving, you will likely find a hive of consumer-oriented, web-based, mobile applications, but not the bulk, nor the bread and butter, of enterprise computing.
So why is this?
Back in the early '90s Dave Hill, then an analyst at the Yankee Group, published an insightful analysis of the economics of outsourcing. Outsourcing had suddenly become very attractive to entities that operated large-scale computing facilities, and it was the mega-deals that transferred the assets of these shops over to the IBMs and EDSs of the world that headlined tech news. It turns out two things were driving these deals, and neither had much to do with pure technical innovation. First, the economics. The technology involved, the large mainframe, was mature, well understood and arguably expensive compared to the emerging promise of client/server deployments. So if you were a CIO you probably came to the realization that you could put your shop on a significantly lower cost-of-ownership trajectory, but first you had to figure out a way to exit your current platform, and to do it in a way that provided continuity. The second was the technology itself: not a single innovation but a confluence of innovations that produced the promise of a lower-cost client/server computing platform. So outsourcing became an attractive option for lowering switching costs between existing and emerging enterprise-class architectures.
So are the same factors currently in play?
To get close to answering this you first have to answer a couple of other questions. First, does mobile computing in the form of merchant-based clouds, in and of itself, constitute an economically viable enterprise-class architecture? Next, what are the switching costs and, all other things (security, reliability, etc.) being equal, does it make sense to replace current enterprise IT infrastructures with those being hyped in the press?
Let's take things in order. The pundits in the press would jump all over the first question, saying that both SaaS and IaaS are proven enterprise-class options. And they are, but that does not mean they are proven enterprise-class architectures. Mobile, web-based computing derives from multi-tenant, service-oriented, scale-out architectures. Granted, these architectures afford most of the features that enterprise-class users want: agility, security, scalability, resilience, ubiquity and lower TCO. However, the current generation of data-center-class enterprise applications, even web-enabled ones, does not necessarily conform to these techniques. Furthermore, most of the applications that were developed for distributed client/server deployments don't necessarily conform to these techniques either. So unless you are going to consume all of your applications as third-party, proprietary services, chances are you aren't going to undertake the conversion of your current application portfolio independent of the vendors that provide those applications to you. And that's good news for the likes of Microsoft or Google, who can deploy some native cloud business apps that enterprises might like to consume: e-mail, word processing, collaboration, etc. But what about the thousands of other applications that enterprises rely on to conduct business?
So we're back to switching costs: who bears them, and when should they be borne? Most vendors, other than the ones actively building clouds, applets or social networks, are in no hurry to convert their current code base to service-oriented, scale-out architectures. So if the economic justification were strong enough, it would fall to the users to engineer around the vendors' current reluctance to convert their code to cloud-native architectures.
So where do the users' economic incentives lie?
Well, mostly in the deficiencies of the incumbent client/server architecture, which has grown difficult to manage and maintain, is inherently difficult to secure, is increasingly costly when it comes to the integration of the application portfolio, and doesn't meet user expectations for seamless mobility. Security issues alone could push this over the economic goal line, but you don't need a merchant-based cloud to solve that; a data center (aka private cloud) could probably suffice. That alone, though, might not achieve the agility, scalability, resilience, ubiquity and lower TCO that enterprises currently crave.
So do innovations exist that move enterprises in the right direction?
It turns out there are, but they don't actually provide a new architectural platform so much as emulate many of its attributes while producing many of the benefits that enterprises are after. Server-based computing techniques like Citrix XenApp and VMware's VDI are allowing enterprises to bridge the gap that currently exists between the design of their incumbent applications and the benefits available from cloud-like architectures. In this sense they don't constitute a new architecture so much as a means of providing many of the benefits of clouds while sustaining the continuity of existing application portfolios. There are trade-offs in going this direction, but when considering agility, security, scalability, resilience, ubiquity and lower TCO, the payoff is clearly there, and these techniques can be used to successfully create private cloud infrastructures.
But in the context of a comprehensive architectural overhaul, these techniques play a role more like the one that data center outsourcing played back in the '90s. They are a step in a much longer journey that involves the rewrite of an entire generation of applications.
So, until the industry makes the transition to truly cloud-like architectures, the forecast for most enterprise users will remain partly cloudy.
Even without the public cloud we are already seeing examples of patient data being shared across different health care providers. In many, if not most, cases this is exactly what you want to have happen, but patients aren't yet aware that if one provider buys their EHR from another local hospital system, their entire medical record has just been shared with all of the providers (who have a need to know) in that system. A recent example in Seattle involved a young woman who went outside of her insurance plan and paid out of pocket for a confidential GYN procedure at a small community clinic. When she returned to her primary care provider she was asked about what she thought had been a confidential surgery. The community clinic, however, was part of Swedish Medical Center, which sold its EHR to the Polyclinic, so the records were shared. Was there informed consent? Perhaps, but clearly the patient wasn't aware that this would happen, and she felt vulnerable.