Kirk Klasson

Going Native

Recently, I was captivated by a small blurb in VentureBeat about a company that had just left stealth mode after securing its latest round of funding. The whole notion of leaving stealth mode conjures up images of a rogue spaceship lowering its cloaking device because it needs to operate in the visual electromagnetic spectrum.

If people can’t see you, they probably won’t buy your stuff.

This particular vehicle is named Vfunction and its stated value proposition is to help enterprises convert their monolithic legacy applications into well-designed microservices for native cloud deployments using ML techniques. In today's tech speak this amounts to a nearly consummate value proposition. It has all the right elements. Native clouds. Artificial intelligence. Ancient infrastructure bordering on collapse. In fact, it's got so many cliché story lines that one wonders if it is a technology company, a Hollywood script or a genetically engineered acquisition proposition. Right up to and including its focus on a large, well-defined segment of legacy code: Java. If I didn't know better, I'd swear this whole thing was designed to be sold to Google, where, like a jealous lover with a solid gold candelabra, it will bludgeon Oracle to death in order to avoid further licensing fees.
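For the uninitiated, the core idea behind such tools can be sketched in a few lines. The snippet below is purely illustrative and assumes nothing about Vfunction's actual methods: it treats a monolith's call graph as a weighted graph and runs off-the-shelf community detection (via the networkx library) to propose candidate service boundaries, the kind of clustering step an ML-assisted decomposition tool might start from. The class names and call weights are invented for the example.

```python
# Illustrative only: cluster a monolith's call graph into candidate
# microservices using community detection. This is NOT Vfunction's
# algorithm, just a sketch of the general decomposition idea.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical call graph: edges mean "class A calls class B",
# weights approximate call frequency observed at runtime.
calls = [
    ("OrderController", "OrderService", 120),
    ("OrderService", "OrderRepository", 110),
    ("OrderService", "PaymentService", 40),
    ("PaymentService", "PaymentGateway", 45),
    ("CatalogController", "CatalogService", 200),
    ("CatalogService", "CatalogRepository", 180),
    ("OrderService", "CatalogService", 15),   # weak coupling across domains
]

graph = nx.Graph()
for caller, callee, weight in calls:
    graph.add_edge(caller, callee, weight=weight)

# Modularity-based clustering: densely connected classes end up together,
# weakly coupled ones fall into separate candidate services.
for i, community in enumerate(greedy_modularity_communities(graph, weight="weight"), 1):
    print(f"candidate service {i}: {sorted(community)}")
```

A real tool would, presumably, fold in runtime traces, data dependencies and transaction boundaries rather than a toy edge list before proposing any refactoring.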

But then it dawned on me that these guys might be on to something. This wasn't some small niche software idea or a modest, inconsequential transformation tool. Its value proposition was trained on the largest addressable market in enterprise techdom: all the legacy technology that had already been acquired and installed and, like every previous generation of technology, had inevitably been rendered obsolete.

Back in July of 2010, in a post entitled "Forecast, Partly Cloudy", we explored the various incentives surrounding the adoption of cloud computing. The technological features were all very familiar: scalability, reliability, security, etc. But in the scheme of things, back then, and even today, the most compelling dimension wasn't technological; it was economic. And the foremost economic criterion was switching costs or, more precisely, switching costs as a function of total cost of ownership. It was the same frontier that stymied mainframe shops that wanted to move to client/server in the '90s. And for most enterprise technology buyers, not to be confused with enterprise software sellers, they were prohibitive. Sure, back in 2005, I distinctly remember talking to several technology luminaries about the prospects for RESTful HTTP APIs and agreeing that it was a foregone conclusion that we had glimpsed the future. And, sure enough, some fifteen years later, we had.

But more often than not, the future is not what's technologically feasible, but what's most economically practical. And to most business managers, incumbent assets with enduring utility can be stubborn things.

One of the most attractive features of cloud computing, then and now, was the prospect of elasticity: the idea that loosely coupled, discrete services could dynamically scale with demand. Back then, "demand" meant the exponential stress placed on computational services when end-user transaction volumes overwhelmed the fulfillment applications of mobile and internet-based start-ups that couldn't afford to provision applications "at scale", which was the founding value proposition of Amazon Web Services or, as we sometimes say, service-oriented time-share facilities.
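The mechanics of that elasticity are simple enough to sketch. What follows is a deliberately simplified illustration, not any vendor's actual implementation, of the proportional scaling rule used by orchestrators such as Kubernetes' horizontal pod autoscaler: the desired replica count grows and shrinks with the ratio of observed load to a target per-replica utilization. The bounds and example numbers are assumptions.

```python
# Simplified illustration of demand-driven elasticity, loosely modeled on
# the proportional rule used by orchestrators such as Kubernetes' HPA.
# Not a production autoscaler; bounds and example values are assumptions.
import math

def desired_replicas(current_replicas: int,
                     observed_utilization: float,
                     target_utilization: float,
                     min_replicas: int = 1,
                     max_replicas: int = 50) -> int:
    """Scale replicas in proportion to observed vs. target utilization."""
    raw = current_replicas * (observed_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, math.ceil(raw)))

# A traffic spike nearly doubles CPU utilization, so the replica count
# roughly doubles as well.
print(desired_replicas(current_replicas=4,
                       observed_utilization=0.9,
                       target_utilization=0.5))   # -> 8
```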

But enterprise users had a slightly different perspective. To many of them, "demand" came in the form of on-prem resource contention, where "storms" of transactions would cascade from disk to memory to CPU depending upon which applications had been invoked at which point in the business process cycle. Relieving this resource contention was as important to them as selling pet food, mobile games and dating sites was to the latest generation of dot-com start-ups. Loosely coupled, dynamically scalable, uniformly orchestrated microservices employing RESTful HTTP APIs seemed like an economically viable solution. But if you examine conventional containerized, multi-tenant public cloud environments, you'll find that the resource contention problem still persists in the form of "noisy neighbors".

So the question remains: how do you get there?

Well, the best way to get there is to start at the beginning and follow it to the end. In this case the beginning is Software Defined Infrastructure (SDI), a market variously defined by industry analysts as somewhere between $40B and $50B annually and growing at between 24% and 28% CAGR, roughly equivalent to the market for AI. And, like AI, it is increasingly constrained by a lack of technical talent to support all that "lifting and shifting" required to achieve architecturally pure, native cloud deployments. However, in the current scheme of things, SDI is rapidly becoming another aging legacy technology component: virtual machines were introduced in the 1960s and have been gradually refined ever since. VMware was founded in the '90s, the same decade that produced Java, the first code base Vfunction selected to address.

So, consider this.

We are more than 20 years past the initial commercial instances of VMs and Java. More than fifteen years past RESTful HTTP APIs. And more than ten years since "Forecast, Partly Cloudy." My, we have come a long way. And yet we are still chasing native cloud computing in the form of pixelized microservices running on containerized, albeit multi-threaded, von Neumann based computers (see AI: Waking the Neuromorphic – February 2018). In the bargain we are consuming a lot of software and services simply to partition and re-configure the hardware underneath it. Perhaps we don't yet have a solid grasp of what truly native means. Perhaps we don't yet possess a consummate understanding of the economics of native clouds. Perhaps we've been ignoring Bell's Law of Computer Classes and need to get back to paying closer attention to it.

C. Gordon Bell's "Law" suggests that every so often, roughly every ten years or so, there will be a confluence of innovation that rejiggers the computer industry, a thesis he first sketched out in an article entitled "The Mini and Micro Industries" (see Requiem for a Business Model – January 2011). His notion was prescient for a number of reasons, not the least of which was that he anticipated Clayton Christensen's Innovator's Dilemma and disruptive-technology thesis by a couple of decades. Next, he predicted that this confluence of innovation would cause two critical outcomes: the disruption of well-established incumbents and the creation of heretofore untapped opportunities.

For the past several years the enterprise technology industry has been flirting with just such a moment, first with memristor techniques (see HPE Builds a Boat in It's Basement – December 2016) and then, more recently, with biomimetic techniques such as neuromorphic computing. Both are tantalizing technologies but neither seems to be pointed at an opportunity that will cause them to ignite. Then, towards the end of 2020, Ampere Computing announced a line of cloud native processors, a glimpse of what might lie ahead. But apart from being very Docker friendly, it's hard to see how this is anything more than a reiteration of multi-core, von Neumann architecture. Meanwhile, memristor-based Computation-In-Memory and Memory-In-Processor have been around for at least the last five years.

Part of Bell's original thesis centered on the idea that general purpose architectures were advantaged over special purpose architectures because it would always be easier to create specific special purpose applications using software; hence general purpose architectures could pursue far more market opportunities in order to thrive. It's how Intel's x86 came to take over the world. Bear in mind that when Bell first wrote "The Mini and Micro Industries" the entire computer industry, all up, all in, was about $150B worldwide. Today's cloud market is north of $300B and roaring like a Force Five tornado. So special purpose architectures that can dominate a specific segment will do quite well. And it appears that we are headed in just that direction as more and more segment-specific architectures emerge and succeed.

The question then becomes: is there an architecture encompassing data, memory and CPU that's ready to go native? Cloud native? Now would seem to be its moment.

Graphic courtesy of SuZQ Art and Images; all other images, statistics, illustrations, citations, etc. derived and included under fair use/royalty free provisions.

