HPE Builds a Boat in its Basement
But will it still float if they ever get it out?
Recently, Hewlett Packard Enterprise (HPE) has been losing ballast like a leaky freighter in a raging storm. Coming off the much-anticipated separation of HP into HP and HPE, HPE has executed a number of additional strategic transactions, including the sale of its services and software units, creating what its management refers to as a much slimmer, more nimble, better-focused competitor. Back in May, HPE sold its services group to CSC for what appeared to be little more than an asset-for-equity swap, declaring somewhat facetiously that the transaction had “turned around” its services business when “turned over” would have been a much more apt description. Sanguine about its deal-making prowess, HPE then went on in September to sell its software units to Micro Focus, an operation best known as a place where software shops go to die, for terms almost identical to those of the prior transaction.
Obviously, when a company makes changes of this magnitude, there is usually a guiding philosophy behind them, a well-thought-out and compelling strategic rationale along with the requisite amount of elite consulting advice to ward off any shareholder lawsuits. But what these deals lacked in imagination they more than made up for in public relations, at least according to what HPE was telling its shareholders. Pronouncements touting HPE’s new focus and agility may present a credible façade; however, marketing messages about “accelerating next” and “the hybrid cloud” aren’t exactly whetting anybody’s appetite, unless, of course, as some in the press have already suggested, this is beginning to look like some serious chumming of the private equity community.
Bon Voyage…
Growing and pruning portfolios of technology businesses is not at all unusual inside the tech industry. Generally, when a technology firm approaches a billion dollars in revenue it begins to look at promising adjacencies, assuming, of course, that it hasn’t already been acquired. Often these opportunities are obvious in that they can leverage current skills and common customers. Frequently, they exploit a familiar economic model, so the basis of generating returns on deployed assets and capabilities is well understood. This phenomenon is so common in the technology industry that most companies don’t even bother bringing it up when discussing their own strategic rationale, deeming it an a priori assumption of all things strategic. This is often when words like synergy and accretive, things that never actually appear in nature, get thrown around boardrooms like Frisbees in an intramural college tournament. At a certain scale, the portfolio concept expands beyond the scope of mere immediate adjacencies and enters the land of industry segments (see Value Based Strategy Formulation – February 2011). This is where the notion that a shared, common business model can continue to grow while serving unique, discrete markets usually ends, and with it go most of the management teams and boards of directors that brought us to this threshold in the first place. It is no coincidence that most management teams cannot survive this transition: the cultural prerequisites that brought them here are not likely to sustain them once they’ve arrived.
Portfolio management at an industry-wide scale is not for the faint of heart. It works well when the tide is rising and looks foolhardy, if not downright dangerous, the first time the tide runs out. In this case, let’s assume the tide consists of realizable returns, profitability and productivity, from innovation, both technological and economic, for both suppliers and buyers. And there is a history of what happens when you’re standing at the helm when the tide begins to change. For firms of a given size, building a portfolio across industry segments usually begins to be attractive once the players in your own space have consolidated and share lock-up of your preferred market segment ensues. Check your Herfindahl and get back to me. Suddenly share of revenue and income by industry segment becomes more than casual cocktail chatter. Likewise, divestiture of business units in adjacent industry segments, à la HPE, is a signal that things in one or more segments are about to change or already have. (see Sis, Boom, Bah – December 2015)
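For anyone who hasn’t checked their Herfindahl lately, here is a minimal sketch of the Herfindahl-Hirschman Index calculation, the standard concentration measure behind that quip; the market shares used are purely hypothetical, not actual segment data.

```python
# Minimal sketch: Herfindahl-Hirschman Index (HHI) for a market segment.
# The shares below are hypothetical, purely for illustration.

def hhi(market_shares_percent):
    """Sum of squared market shares, with shares expressed as percentages (0-100)."""
    return sum(s ** 2 for s in market_shares_percent)

# A hypothetical post-consolidation segment with four players.
shares = [40, 30, 20, 10]
print(f"HHI = {hhi(shares)}")       # 3000 -- "highly concentrated" by DOJ/FTC guidelines (> 2500)

# By comparison, ten equal players of 10% each.
print(f"HHI = {hhi([10] * 10)}")    # 1000 -- unconcentrated territory by the same guidelines
```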
For simplicity’s sake, let’s assume that, historically, the information technology industry, serving enterprise customers, can be segmented by the accumulation of integration that produces value (productivity), and that the value in the form of operating income (profitability) can serve as a proxy for the degree to which enterprise customers esteem specific instances of accumulated integration. Let’s further assume that, in general, integration compounds or accumulates the closer you get to the end customer.
So with this model in mind we can lay out the various segments of the IT industry as follows:
Accumulation of Integration →
Microprocessors | Components & Sub-systems | Systems | Applications | Large Scale Systems Integration
With this simple depiction we can already see that HPE is pulling up stakes from the right-hand side with its recent divestitures and putting a whole bunch of chips on “the Machine”, the revolutionary memristor boat HPE is building in its basement in Fort Collins, CO, over on the left-hand side.
So what does HPE know that we don’t?
Let’s go back and have a closer look at what tides can do when put in motion.
Déjà vu all over again
Back in computing’s adolescence, otherwise known as the 1980s, the IT industry looked much like the figure outlined above, albeit occupied by a slightly different set of players. Sure, IBM was around, questioning its own survival; so was HP, a shop that had introduced its first computers in the ’60s. But so were DEC, Sun, Compaq, Wang, Data General and a host of others whose survival is no longer in doubt.
The ’80s were marked by two key factors. The first was enormous price/performance gains for midrange machines, generally attributable to the introduction of wide-word, or 32-bit, computing. Seen at the time as the logical extension of existing 16-bit computing architectures, this single innovation opened up new markets that hadn’t previously been tapped. The next was the advent of the PC. At the time, few surmised how quickly the Wintel architecture would come to dominate the fate of midrange system vendors relying on component integration strategies during the ’90s. (see Requiem for a Business Model – January 2011) For the players scattered up and down the integration value chain, Wintel wasn’t merely a tide; it was more of an industry-wide tsunami that changed computing altogether.
Stepping back, the average operating income for players scattered across the accumulation-of-integration spectrum in the ’80s looked a bit like this.
Bear in mind that in 1986 IBM, at $73.1B, was the largest hardware, software and services vendor in the industry, over five times the size of its nearest rival. If you added up the revenues of all the other players, from all the other points of integration, across the entire IT enterprise value chain, they might barely equal those of IBM. But that was part of the beauty of IBM back then: a Coasean colossus whose economies of scale and scope straddled the entire technology value chain (see The Invisible Hand – November 2013).
Riding this wave were the system vendors, whose ability to marshal components into full-fledged, user-manageable computing facilities under proprietary operating systems was robustly rewarded. Never mind that successful, independent business application software vendors were few and far between; outside of Lotus, Ashton-Tate and WordPerfect, practically nonexistent. Never mind that there was a recession that took the legs out from under the semiconductor players after they had doubled down on DRAM capacity. Never mind that there was a RISC vs. CISC architecture argument raging between the workstation and midrange acolytes. Never mind that there were really only a handful of independent large-scale system integration vendors. Never mind that a 1956 consent decree was about to accidentally anoint the biggest juggernaut the industry had ever imagined.
These guys were printing money.
On the crest of this wave was Digital Equipment Corporation (DEC). By the mid-to-late ’80s the VAX clustered architecture had propelled DEC to the position of number two worldwide system vendor behind IBM. At the time IBM was a firm so self-infatuated it couldn’t remember what business it was actually in, and many assumed, including some IBM employees, that it wasn’t a tech company at all so much as a bank that leased water-cooled, raised flooring.
By the time the ’90s rolled around the industry was facing a different set of issues. The PC had become an accepted business platform and the shift to client/server computing began in earnest, as did the outsourcing of big iron (see Forecast Partly Cloudy – July 2010), along with the rise of independent application software and system integration vendors. Operating systems became less relevant and began to coalesce, thanks in part to Microsoft’s dominance as well as RISC vendors convincing users that they would never have to port their applications again once they ported them to UNIX. That’s right, you had to port your application so you wouldn’t have to port your application. The economics of this argument were simply confounding but, as we shall see, they are still a factor today. And, one minor footnote, the Internet had made its debut.
Suddenly, everybody was making money except the systems vendors, whose value proposition was rapidly evaporating in the face of the Wintel x86 juggernaut. DEC, the darling of the ’80s system shops, found itself caught between the need to attack and defend the microprocessor end, the systems middle, and the large-scale systems integration end of the value integration continuum. And DEC was not without its weapons. The 64-bit Alpha architecture DEC had on its drawing board could have put it back in the price/performance lead. Large-scale VAX clusters could have been the basis of a thriving systems integration business. But to pursue either option would have meant letting go of other important, existing lines of business. In the end, DEC found that it couldn’t pursue both ends of the integration spectrum at the same time. And it was this dilemma, along with management’s inability to focus on its customers, that ultimately pulled DEC apart.
One foot on the boat, one foot on the dock, yeah, we got this…
First conceived in 1971 and prototyped by HP in 2008, the memristor has become a kind of holy grail for HP. The concept, currently dubbed “the Machine”, is based on an almost limitless pool of non-volatile memory merged with multiple concurrent states of non-linear processing, combined with a silicon photonics bus. The performance is expected to be nothing less than mind-numbing, that is, of course, once they have actually finished the operating system, dubbed “Carbon”, that’s meant to run it. Until then we’ll just have to take HPE’s word for it, which all seems faintly reminiscent of the hype DEC’s Alpha architecture generated in the early ’90s. As recently as 2012-2013, HP began to mention a timetable for commercialization of this tantalizing technology. But then, in the summer of 2015, HP acknowledged that there was no way to firmly establish exactly when this breakthrough, however engineered, might be commercially available. It might make its debut somewhere between now and 2018 or 2020 or who knows when. And chances are, once it does arrive, you’ll have to port your applications to it.
The notion of memristor computing holds enormous promise, but then again, so does cold fusion. Researchers in academia and commercial labs are beginning to suggest that it might prove to be the first “brain-like” computation resource mankind has ever produced, a field of research called neuromorphic, synaptic emulation. A curious conclusion, since life-based computation is primarily analog in nature. However, just as in the case of DEC’s Alpha, whether “the Machine” becomes successful or not might have less to do with its practical instantiation and more to do with the context that accompanies its eventual introduction.
Wintel didn’t succeed because it was hands-down a better computation solution; it succeeded because it was hands-down the most ubiquitous computation solution, desktop, server or big iron, of its day. There were simply more instances of Wintel than of any other competing design. And it was these instances that created the network effect across the value integration continuum that propelled Wintel to the top of the enterprise computing heap. According to IDC, in 1990 there were 72,900 large and midrange units shipped worldwide. In that same year there were 15,142,800 small and personal computing units shipped, nearly all of them x86 based. So one has to wonder, all other things being equal, what will be the most ubiquitous computational solution come 2020? Will they be general purpose, pooled and partitioned in data centers, or special purpose and spread all over the Internet?
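Taking those IDC figures at face value, a quick back-of-envelope sketch (using nothing beyond the two numbers already cited) shows just how lopsided ubiquity had become:

```python
# Back-of-envelope on the 1990 IDC shipment figures cited above.
large_midrange_units = 72_900
small_personal_units = 15_142_800

ratio = small_personal_units / large_midrange_units
print(f"Small/personal units shipped per large/midrange unit: {ratio:.0f} to 1")
# Roughly 208 to 1 -- ubiquity, not raw capability, is what fed the Wintel network effect.
```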
Some in the tech and VC community are pulling on this same thread, and the conclusion they have come to is that just as mobile devices propelled the rise of cloud computing for hosting common services, sophisticated IoT will become the next major wave, and the sheer volume of data will necessitate a move of computational intensity back to the remote device at the end, not the center, of the network; a thesis that some have dubbed edge computing. Their reasoning is that the sheer amount of data consumed, processed and adjudicated by highly sophisticated, remote, autonomous devices to achieve real-time results cannot afford the latency of cloud computing architectures. These devices will have to sense and understand their environment and circumstances, employ autonomous machine learning, and reach conclusions and make decisions in real time (see The Next Cambrian Explosion – February 2016). A car traveling at 70 miles an hour can ill afford weak radio links when coming to an instantaneous decision about an approaching 18-wheeler; a machine performing vascular surgery can’t afford to equivocate during a procedure, in the face of ambiguous blood pressure readings, while waiting for Watson to respond.
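A rough sketch makes the latency point concrete; the 100-millisecond cloud round trip used here is an assumed, illustrative figure, not a measurement:

```python
# Hypothetical back-of-envelope: how far does a car travel while waiting on the cloud?
# The 100 ms round trip is an assumed, illustrative figure, not a measured one.

speed_mph = 70
speed_m_per_s = speed_mph * 1609.34 / 3600      # ~31.3 meters per second
cloud_round_trip_s = 0.100                      # assumed 100 ms to a distant data center and back

distance_traveled_m = speed_m_per_s * cloud_round_trip_s
print(f"Distance covered during one round trip: {distance_traveled_m:.1f} m")
# ~3.1 m -- most of a car length gone before the first answer comes back,
# which is the argument for keeping the decision at the edge.
```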
The critical element in these examples is that the importance and volume of sensory data involved in these decisions will be enormous, and the transmission of that data to and from a centralized resource for purposes of real-time decisions would be unacceptable. This happens to be one of the oldest problems in computing. Most of the sensory data in these examples is best expressed and assimilated as analog phenomena that lend themselves to simultaneous analog assessments. However, for the last 40 years, analog data has been voluminously “tokenized” into binary representation in order to be processed by assertory, transistor-based calculations. Simply sensing and projecting the flight of a baseball requires a computer to push enormous amounts of tokenized data, and yet five-year-olds do it all the time without even thinking.
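To make the “tokenization” point concrete, here is a minimal sketch of what digitizing a single analog sensor channel entails; the sample rate and bit depth are arbitrary, illustrative choices:

```python
# Minimal sketch: "tokenizing" an analog signal into binary samples.
# Sample rate and bit depth are arbitrary, chosen only for illustration.
import math

sample_rate_hz = 1000          # 1,000 discrete samples per second of a continuous signal
bit_depth = 12                 # each sample quantized to one of 2**12 levels
levels = 2 ** bit_depth

def quantize(analog_value):
    """Map an analog value in [-1.0, 1.0] onto a 12-bit integer token."""
    clipped = max(-1.0, min(1.0, analog_value))
    return round((clipped + 1.0) / 2.0 * (levels - 1))

# One second of a slow 5 Hz "sensor" waveform becomes 1,000 twelve-bit tokens.
tokens = [quantize(math.sin(2 * math.pi * 5 * t / sample_rate_hz))
          for t in range(sample_rate_hz)]

bits_per_second = len(tokens) * bit_depth
print(f"{len(tokens)} tokens, {bits_per_second} bits for one second of one sensor channel")
# 12,000 bits/s for a single, slow channel; multiply by hundreds of channels and
# kilohertz sample rates and the transmission problem described above takes shape.
```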
So it might be reasonable to ask whether the problem pushing computation out to the edge is one of data transmission between sensory and computational resources or one of data assimilation by intelligent, autonomous analog devices. And if it is an issue of assimilation, can memristor computing play catch like a five-year-old?
Any port in a storm
It has been suggested that memristor computing might be the only technique that could hold the current worldwide state of the Internet in memory for the purposes of instantaneous analysis and inquiry. That feat alone would be worth the price of admission. But in the pedestrian world of “how many widgets might the world want today?” the cost of the answer might be more than the question is worth. So exactly how many instances of memristor computing does the world need? Put another way, how many problems is “the Machine” uniquely qualified to solve? A couple? A dozen? Or one really hard question that appears, even in nature, and possibly in board rooms, several million times a day?
Today, web services, AWS in particular, look like a Wintel juggernaut. At $13B and growing, it seems an unstoppable Open Compute Kraken that will own the IT industry. In truth, AWS is smaller in real dollars than DEC was in the mid-to-late ’80s once you factor in inflation; smaller still if you allow that the IT industry has exploded in value since the 1980s. And there is more than a grain of truth to the idea that cloud computing has less to do with technology and more to do with the arbitrage of value across the traditional enterprise integration continuum (see Something Weird This Way Comes – June 2015). AWS looks more and more like the IBM of the early ’80s, a one-stop leasing powerhouse whose roots stretch back to the timesharing operations of the ’60s. So, if operating income is the proxy by which the tide is determined, who then rides the crest in the 2020s? Ever larger, cheaper instances of sharable, parsable networked resources? Or ever smaller, concentrated instances of autonomous, intelligent analog agents? The outcome may determine whether and when HPE ever gets “the Machine” to market.
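A rough sketch of that real-dollar comparison; DEC’s late-’80s revenue figure and the inflation multiplier below are approximations supplied for illustration, not figures taken from the paragraph above:

```python
# Rough sketch of the real-dollar comparison between AWS and late-'80s DEC.
# Both the DEC revenue figure and the CPI multiplier are approximations added for illustration.

aws_revenue_2016_usd_b = 13.0       # the ~$13B figure cited above
dec_revenue_1989_usd_b = 12.7       # DEC's fiscal 1989 revenue, approximately
cpi_factor_1989_to_2016 = 1.9       # rough CPI inflation multiplier, 1989 -> 2016

dec_in_2016_dollars = dec_revenue_1989_usd_b * cpi_factor_1989_to_2016
print(f"DEC (1989) in 2016 dollars: ~${dec_in_2016_dollars:.0f}B vs. AWS ~${aws_revenue_2016_usd_b:.0f}B")
# ~$24B vs. ~$13B -- in real terms the Kraken is still well short of the darling of the '80s.
```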
The former is an obvious and immediate opportunity, but one that might be ill-suited to the technology. It would assume we are back to using massive computing capabilities to solve thousands of instances of mundane, containerized problems. That would mean partitioning “the Machine” to handle pedestrian problems, assuming that non-volatile, non-linear computing is a suitable candidate for parsing and partitioning. More suitable than, say, what a couple of thousand Open Compute boxes or a Nutanix farm of virtualized hyperconverged servers can already provide? HPE is already a leader in high-performance computing and already a player in hyperconverged computing. Extending this value proposition would seem like a museum-quality decision; don’t bother moving the boat, it floats just fine in the basement.
The alternative would be to re-conceive the memristor as an autonomous, self-contained, analog computing device capable of independently assimilating and assessing analog data at near-instantaneous speeds. This might postpone its introduction even further, as the work in neuromorphic computing is still in the fundamental research stage. But that might not be an issue, given that the requisite sophistication of autonomous devices that might require this approach is probably several generations away. Even HPE acknowledges that one of today’s hottest IoT players measures fill levels in trash containers.
So, given this, we may never see a memristor and never even know we missed it, or we may catch our first glimpse by the mid-2020s and wonder how we ever lived without it.
The wreck of the Antikythera
Somewhere between 200 and 100 BC, a bronze mechanical device emerged that, with simple inputs, could confirm or project the location of celestial bodies and thereby infer your approximate location. We know this because we found one submerged in the Aegean Sea. It was, inarguably, our earliest instance of a hand-cranked, analog computer. Even today we don’t fully understand who invented it or exactly how it worked, but we do know that it was found on a boat, an autonomous conveyance that could use all the inertial intelligence it could muster.
High-tech orthodoxy insists that the tide of computing moves from centralized to decentralized and back again, and with it move the fortunes of its players. Often these tides coincide with significant innovation, but they can equally be occasioned by the economic consequences, profits and productivity, that both sellers and buyers seek to enjoy.
It could be argued, and some have argued, that we have reached the end of the era of continuous gains in productivity from existing technology, as evidenced by the rise of cloud computing; it succeeds not so much as a technological innovation but rather as an economic innovation of service-oriented timesharing. In this capacity, cloud computing promotes a consolidation or flattening of the traditional enterprise integration continuum. The value it provides has less to do with a shifting computing paradigm and more to do with the arbitrage of discrete upstream and downstream integration opportunities. Couple that with no-cost, lease-based financing and buyers begin to take note, at least until interest rates swing the other way.
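A toy illustration of that financing point, with entirely hypothetical numbers: the monthly charge a lessor, or a cloud provider acting like one, must set to recover the cost of an asset rises with its cost of capital.

```python
# Toy illustration of the financing argument, with entirely hypothetical numbers:
# the monthly charge needed to recover a $10,000 asset over 36 months at two
# different costs of capital.

def required_monthly_payment(principal, months, annual_rate):
    """Standard amortizing-lease payment for a given cost of capital."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

principal = 10_000      # hypothetical asset cost
term = 36               # months

for rate in (0.01, 0.08):   # near-free money vs. an 8% cost of capital
    pmt = required_monthly_payment(principal, term, rate)
    print(f"Cost of capital {rate:.0%}: required monthly charge ~ ${pmt:,.0f}")
# ~$282/month at 1% vs. ~$313/month at 8%, roughly an 11% price hike just to cover
# the financing, which is why "no cost" leasing gets less charming when interest
# rates swing the other way.
```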
With respect to HPE and the memristor, it looks like they are doubling down on what they suspect is a major innovation; something that holds discontinuous potential up against the incumbents, a cohort that just happens to include HPE. What we know of discontinuous innovation is that it succeeds best when it eschews established value propositions in favor of attacking those that don’t currently exist (see Requiem for a Business Model – January 2011).
Assuming that HPE isn’t coyly planning to strike its colors to the richest privateer, it might want to steer a slightly different course. The world is not breathlessly waiting for the next marginal improvement in hyperconverged computing. In fact, if you looked at the share-shift potential of even a discontinuous innovation in the high-performance server segment, you might recommend that HPE look to acquire some software companies. Wait a minute, didn’t they just, well, never mind.
Or, they could leave the memristor in the lab and continue to prove out the potential of neuromorphic, synaptic emulation. The risk is that it may never pan out. But if it does, HPE could introduce the world to a truly unique, discontinuous and potentially ubiquitous innovation: autonomous, intelligent, agent-based, analog computing.
For a more recent take on the status of HPE’s strategy check out the Epilogues tab and look for Wither, Dither and Die – September 2017 and HPE: Getting the Boat Out of the Basement – July 2018.
Cover graphic courtesy of the Viking Ship Museum, Oslo, Norway; all other images, statistics, illustrations, citations, etc. derived and included under fair use/royalty-free provisions.