Apple Intelligence: Better Late Than Never or Better Never Late
Apple’s prolonged absence from the AI tech melt-up has been the biggest elephant ever to dance on a conference room table since the transistor was invented. Having basically created the category of natural language assistants with the introduction of Siri over a decade ago (see Prophets on the SILK Road – October 2012), they soon realized that promoting a bespoke ontological agent wasn’t a sustainable value proposition and concentrated instead on building a replica of the mothership deep in the bosom of Cupertino in the shape of a giant areola (see The Dawn of Agency – May 2018).
So a couple of weeks back at their WWDC, Apple set the twitterazzi ablaze with the big reveal of their pending AI announcement. Which was what, exactly? A ChatGPT option? Architectural partitioning for handheld and cloud processing? Apple’s vaunted privacy provisions? A board observer to check out the snack selection at OpenAI? It was probably the most vapid announcement in Apple’s history, but the stock jumped 7% at the mere mention of it, no small number given Apple’s market cap.
The investment thesis behind this jump seems to be that Apple’s share of the handheld market is so massive that it can bend space-time to create a mind-altering, Market Value Added warping of economic phenomena. And that might just be the case, but if you look really closely there’s a mounting wave of skepticism about the inherent value, tech market caps notwithstanding, of genAI chatbots in both the consumer and enterprise markets.
For starters, this stuff doesn’t seem to be very sticky: traction in addressable markets, rates of adoption, and the “stays on the wall” variety all look thin. Sure, cloud players are hoovering up GPUs for data center build-outs as if there were no tomorrow and no viable alternatives, even in the face of a litany of TPU, LPU, IPU, and NPU announcements, up to and including the first generation of authentic stem-cell wetware of human brain tissue for enhanced AI performance.
Next, the learning curve of “as-built” solutions doesn’t seem to be improving; delivery isn’t getting cheaper with every n + 1 deployment. Apart from the LLM model makers themselves and their attention mechanisms, the cost to deliver and maintain these solutions doesn’t appear to fall with each additional instance. A lot of this cost stems from the bespoke nature of data classification and ingestion. With static data it was easier to arrive at an agreed-to schema that was relatively long-lived and easily modified and maintained. That doesn’t appear to be the case now. The easiest way to increase an addressable market is to lower the time and cost to value. So, until Scale AI or Oracle or some yet unidentified player comes along and simplifies the LLM schema and data ingestion problems, this is going to be a long, slow slog to value for most enterprise buyers.
Next, the rate of innovation in this space seems to be accelerating while the value of that innovation seems to be declining. This is in part an artifact of how early we are in the life cycle of these innovations, as well as the open-sourced manner in which rivals have chosen to pursue their commercialization (see AI: The Next Cambrian Explosion – February 2016). Recently, there have been several reports detailing the patent activity surrounding AI, and it has been astounding. Curiously, IBM seems to be maintaining its leadership in cognitive technology patents but still hasn’t been able to cash in. As we noted back in 2017 in a post entitled “AI: What’s Reality but a Collective Hunch”, IBM expected revenue from its cognitive computing offerings to reach about $10B by 2024. Time marches on, and in April of this year, as part of its Q1 press release, IBM mentioned, “Our book of business for watsonx and generative AI again showed strong momentum, growing quarter over quarter, and has now eclipsed one billion dollars since we launched watsonx in mid-2023…” That same 2017 post cited Forrester Research’s suggestion that AI would produce $1.2 trillion in “insight value” by 2020, whatever that might mean. Meanwhile, the global economy went flat on its back due to Covid and is barely back to what it used to be.
Recently, The Economist mounted a search for AI’s global economic impact in an article entitled “What happened to the artificial intelligence revolution?” and couldn’t find it. On just about every dimension you can think of (revenue, margin, productivity), evidence of AI’s impact on likely enterprise consumers either doesn’t exist or can’t be discerned from the rest of the usual number noise. Likewise, Goldman Sachs just went on the record walking back last year’s speculation that genAI would increase global GDP by 7%, saying instead that they can’t find any “killer apps” and doubt they will find one for at least another decade.
So, is Apple’s move to join the AI gold rush too late or maybe, given recent economic analysis, a tad too early? Given the lack of traction attributable to intellectual property, it probably doesn’t matter.
We seem to have reached a curious moment in the adoption of genAI: provable economic impact has yet to emerge, but chumming the markets has produced a veritable feeding frenzy. But what exactly are they feasting on? Given the open-source pedigrees of genAI and the labor-intensive data management protocols of current-generation solutions, it seems that value instantiation is more a matter of technique than technology.
It’s a little bit like watching a cooking competition hosted by Guy Fieri. The competitors don’t own the ingredients, they don’t own the recipes, they don’t own the pots and pans, they don’t own the stoves.
And the only reason anybody is watching is because there is nothing else on.
Graphic courtesy of SuZQ Art and Images; all other images, statistics, illustrations, citations, etc. derived and included under fair use/royalty-free provisions.