Kirk Klasson

The Quantum of Quan: A Hitchhiker’s Guide to Quantum Neuromorphology

So, in my free time I’ve been doing a lot of thinking, you know the usual stuff, where I left my car keys, will the Fed destroy the economy, how will the usual group of numbskulls start World War III and, when they do, will prepping really make a difference? Then I stumbled onto this article in MIT Technology Review by Sankar Das Sarma entitled “Quantum computing has a hype problem” and I thought, “Damn! That’s what I was thinking!” I mean how many qubits does it take to resolve tensor field theory into something practical? C’mon man, everybody talks about gravity but nobody does anything about it.

But then I read that quantum practicality was just around the corner and not a moment too soon thanks to promising new techniques like topological quantum computing or NISQ or trapped ions or quipus. Then there’s my personal favorite, AI qubit error correction, where we impart the fundamental bias of AI to prevent the decoherence of in-flight computational qubits, creating a completely new field of computing called ISC, or Idiot Savant Computing, where you can achieve instantaneous deciphering solutions to specific problems but you’ll never know on which date they may actually work. Recent tests have shown that ISC can achieve sub-second Linear A deciphering sometime in the next two million years. You know the thing, mass but not momentum, position but not trajectory, cause but not causation, at least not simultaneously.

Well, that’s quantum for you.

By now everyone knows that somewhere deep in a basement at IBM or Google there’s this negative 270 degree “zero energy” freezer where they keep all the meatballs, err, qubits, and every now and then they make them dance and issue a press release. The only question is exactly when all these meatballs, err, qubits, will be ready for prime time. And, once perfected, exactly what problem will they be able to solve that other types of conventional computing haven’t been able to master? Fluid dynamics? Nope. Protein folding? Nope. Deep encryption cracking? Maybe, but not if entangled quantum encryption becomes widely available in the interim. With each passing day the list gets shorter and shorter. Pretty soon all that’s gonna be left is quantum field dynamics, multiverse transponders and Mr. Peabody’s Wayback Machine. Put another way, what daunting trillion dollar problem is going to command multiple billion dollar quantum IPOs?

Show me the money!

So just when it seems that quantum computing hype is going to take a breather the geniuses in the white lab coats start spitballing and out springs, wait for it, quantum neuromorphic computing, the perfect combination of peanut butter and Cheetos, or peanut butter and asphalt or peanut butter and name your own orthogonal ingredient.

At first blush this appears to be a two-brick solution to what is essentially a non-existent problem. The theory behind all two-brick solutions is that if you can’t get one brick to float, find another brick and bind them both together; once securely fixed, toss them back in the water and watch what happens. But that might not be the case in this instance. A great deal of the interest and value in proposed neuromorphic solutions arises from the fact that they are not necessarily assertory: they are neither 1 nor 0, on nor off, lit nor dark, and both their memory and computation can be analog approximations. Instead of rigid, static digital tokenizations, neuromorphic computing relies more on AI techniques that involve pooled iterative stochastic convergence of weighted neural nodes to resolve specific questions. (see “AI: What’s Reality but a Collective Hunch?” – November 2017.) This is where the multi-state probabilistic nature of qubits could become a desirable neuromorphic characteristic; it is precisely this squishiness that makes quantum computing so appealing to neuromorphic applications.
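To make “pooled iterative stochastic convergence” a little less abstract, here is a toy sketch (my own illustration, not any particular platform or paper): a pool of analog nodes, each holding a continuous value rather than a crisp 0 or 1, gets nudged toward a target with a bit of noise on every step, and the pooled average converges approximately, never exactly — the squishiness in question.

```python
import random

def converge(target, n_nodes=8, lr=0.1, noise=0.05, steps=200):
    """Toy pooled iterative stochastic convergence: each 'analog' node
    holds a continuous value (not a binary token) and is repeatedly
    nudged toward the target with Gaussian noise; the pooled average
    is the answer -- an approximation, never an exact digital result."""
    nodes = [random.random() for _ in range(n_nodes)]
    for _ in range(steps):
        nodes = [w + lr * (target - w) + random.gauss(0, noise)
                 for w in nodes]
    return sum(nodes) / n_nodes  # pooled (averaged) estimate

random.seed(42)
estimate = converge(0.7)
print(estimate)  # lands near 0.7, with residual jitter
```

The point of the sketch: the answer is a distribution that settles near the right value, which is exactly the kind of behavior a probabilistic, multi-state qubit could supply natively.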

Source: “Quantum Neuromorphic Computing,” Danijela Marković and Julie Grollier. Top: a schematic rendering of AI deployed over conventional computing platforms. Bottom: the same approach rendered with quantum computing techniques.


So, it seems that quantum computing has significant protean, mimetic potential encompassing even neuromorphic techniques and technology, but it really won’t matter if it’s confined to a freezer in a basement. The problem with meatballs, err, qubits, is that they are easily distracted. Left on their own they tend to meander, lose their coherence, start humming Doo-wop off key, which is why you need to keep the suckers near absolute zero, which is neither cheap nor convenient.

However, in the last several months, several initiatives have emerged nearly simultaneously from universities and private enterprises promising room-temperature quantum computing. Nearly all of these solutions involve photonics to manage and manipulate qubit behavior at an atomic level. Many of them employ nitrogen-vacancy (NV) centers in a diamond lattice, and an Australian enterprise called Quantum Brilliance hopes to market a desktop version of such a solution within a year.

Meanwhile, slightly less ambitious efforts are pushing the boundaries of existing neuromorphic platforms, those that employ already proven low-power, memristor and analog capabilities. It is easy to forget that almost all current iterations of biomimetic neuromorphic computing are classified as experimental and as such have only been “in the wild” for but a few years. Researchers are still trying to figure out how best they can be employed. Recently, a group using a BrainScaleS-2 analog chip and in-the-loop surrogate gradient training has achieved some notable results, specifically when it comes to time-to-classification and time-to-inference parameters, measures that have yet to be applied to any proposed quantum neuromorphic platform. What’s equally impressive is that they are achieving these results using less than 200 mW for a fully configured instance of the platform, a parameter that could probably be improved by reducing specific analog-to-digital component incompatibilities.
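For the curious, the surrogate-gradient trick at the heart of that in-the-loop training can be sketched in a few lines (a toy scalar example of the general idea, not the actual BrainScaleS-2 toolchain): a spiking neuron’s hard threshold has zero gradient almost everywhere, so the backward pass substitutes a smooth stand-in derivative so that ordinary gradient descent can still train the chip.

```python
def spike(v, threshold=1.0):
    """Forward pass: a hard threshold -- the neuron either fires (1) or not (0)."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: the step function's true derivative is zero almost
    everywhere, so substitute the smooth derivative of a fast sigmoid."""
    x = beta * (v - threshold)
    return beta / (1.0 + abs(x)) ** 2

# One gradient step on a single (hypothetical) weight: the neuron should
# fire (target = 1.0) but doesn't, so the surrogate pushes the weight up.
w, inp, target = 0.5, 1.0, 1.0
v = w * inp                        # membrane potential
out = spike(v)                     # 0.0 -- below threshold, no spike
loss_grad = out - target           # d(loss)/d(out) for squared error
w -= 0.5 * loss_grad * surrogate_grad(v) * inp  # surrogate in the chain rule
```

The same substitution scales from one weight to a whole network; the analog hardware runs the noisy forward pass “in the loop” while a host computer does the surrogate backward pass.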

Since no one has actually built and benchmarked a quantum neuromorphic platform, it would be difficult to project exactly what its performance characteristics might be, but the prospects are tantalizing. Time-to-inference for tensor field dynamics would likely be off the charts, assuming there were any charts to begin with. (see “Stalking the Snark” – May 2019). In the meantime, however, it would appear that the best way to improve the near-term prospects for both existing analog and proposed quantum neuromorphic platforms lies in the incorporation of sophisticated photonic componentry, resolving analog-to-digital conversion in the former and managing the in-flight meatballs in the latter.

The original promise of neuromorphic computing (see “AI: Waking the Neuromorphic” – February 2018) was low power, ubiquity and mobility, just like the rest of the wetware that populates and roams this planet. Instead of chasing a one trillion dollar investable problem buried in a basement, it would need to chase one million, trillion pedestrian problems out in the wild, every single day – a value proposition that should be more than familiar to the computer industry, especially the platform portion of it. And if it isn’t the size, weight and cost of a package of playing cards powered by a couple of CR2032 batteries then, all things being equal, it just won’t float.

No matter how many other bricks you might want to tie it to.


Cover graphic courtesy of Wikipedia, a depiction of Linear A, a Minoan script in use around 1800 BC that has yet to be deciphered by any advanced computer technology. All other images, statistics, illustrations and citations, etc. derived and included under fair use/royalty free provisions.


Insights on Technology and Strategy