Kirk Klasson

Stalking the Snark: Invariant Counterfactuals, Transcendental Deductions and the Future of AI

I’d like to start off by acknowledging that this is the kind of post that if it were a drug, a machine tool or a firework, the government would insist that it be accompanied by a warning. Something along the lines of… “toying with this crap while inebriated could be bad for you. Especially if you’re fond of fingers, pets, toddlers, prized possessions, significant others, etc. Use by persons living in places like Key West, Orleans and Belize is strictly prohibited.”

Now that we’ve got that out of the way, here goes.

By now, thanks in no small measure to folks like Judea Pearl, observers of all things AI are slowly coming to the conclusion that, for genuine progress to be made beyond refining existing techniques of marshaled nodes and cascading vectors for purposes of stochastic convergence, the entire domain will need to deal with the daunting challenge of causality. This has come about due to a number of factors, not the least of which is the introduction of biases through the propagation of models for supervised or unsupervised learning. The fly in AI’s big jar of ointment is that the conclusions it produces, while often reliable and useful, are more often than not impenetrable and unprovable. (see AI’s Inconvenient Truth – July 2018) This might not amount to much if all you’re doing is trying to figure out whether proximity to beer really does improve diaper sales; controlling a speeding, autonomous 18-wheeler in a snowbound multi-car pile-up, however, might be another matter.

For several years, maybe even generations, academia has relegated causality to the application of Bayesian networks. These can be useful in well-bounded circumstances where the variables are known and largely static, conditions that normal life and normal people rarely subscribe to for much of anything. As problems become more complex and context dependent, causing variables and data sets to expand exponentially, it has become harder and harder to stuff real-world problems into Bayesian-sized boxes.
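To make that caveat concrete, here is a minimal sketch of what Bayesian-network inference looks like when the variables really are known and static. The two-node flu model and its probabilities below are invented for illustration; nothing about them comes from this post or from any real data.

```python
# A toy two-node Bayesian network (Flu -> Fever) with made-up probabilities,
# illustrating the "well-bounded" setting where every variable and its
# conditional table is known in advance.

P_FLU = 0.05                          # prior: P(Flu)
P_FEVER_GIVEN_FLU = {True: 0.90,      # P(Fever | Flu)
                     False: 0.10}     # P(Fever | no Flu)

def p_flu_given_fever() -> float:
    """Posterior P(Flu | Fever) by direct enumeration (Bayes' rule)."""
    joint_flu = P_FLU * P_FEVER_GIVEN_FLU[True]          # P(Flu, Fever)
    joint_not = (1 - P_FLU) * P_FEVER_GIVEN_FLU[False]   # P(no Flu, Fever)
    return joint_flu / (joint_flu + joint_not)

print(f"P(Flu | Fever) = {p_flu_given_fever():.3f}")     # ~0.321
```

The catch is everything the sketch assumes: the nodes, the edge and the conditional table all had to be handed to it in advance, which is precisely the Bayesian-sized box described above.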

The challenge of causality is that it often arises contextually and from variables or data that are not present in the information submitted for ML or DL analysis. For instance, contagion, such as that associated with influenza, need not proceed from established vectors. So, no matter how detailed the disease data might be for a given population and strain, the chances of your flu shot actually being effective are dismally low. And why is that? Strains of influenza are promiscuously social, moving from host to host and species to species, swapping “howdies” and DNA as they go. Along the way they mutate. Currently, no amount of captured data can accurately predict the future mutation and contagion of an influenza virus. And the government won’t put that warning on your flu shot.

But what if existing disease data were combined with knowledge of viral mutation and contagion to produce a hypothetical set of variables that might result in a reliable predictor of future infections? And what if that hypothetical set of variables could be tested with real and hypothetical data to determine the relative strength of each contributing variable in the determination of future infections? If the hypothetical variables survive rigorous testing they might be considered counterfactuals, or potentially reliable variables in the construction of causal relationships. If such an arrangement of variables always produces an accurate causal relationship it might be considered an invariant counterfactual, one that incontrovertibly describes a specific cause and effect: invariant in that, regardless of what context, circumstances or data are applied to the proposed relationship, it always correctly predicts the expected outcome. Of course, you still couldn’t get an effective flu shot, because that might necessitate the man-made creation of a mutant strain of flu, a dystopian monstrosity that just might kill us all.
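What such testing might look like in code: a minimal sketch, loosely in the spirit of invariant causal prediction, in which a candidate variable survives only if its relationship to the outcome holds steady across every context it is thrown into. The environments, variables and numbers below are all synthetic and hypothetical, not a model of influenza or anything else.

```python
# Sketch of an invariance test: x1 causes y by a stable mechanism, while x2
# merely co-varies with y in an environment-dependent way. A variable whose
# fitted relationship to y is the same in every environment is a candidate
# invariant (causal) predictor; one whose relationship drifts is not.
import numpy as np

rng = np.random.default_rng(0)

def make_env(shift: float, n: int = 2000):
    """One hypothetical context/environment."""
    x1 = rng.normal(0, 1, n)
    y = 2.0 * x1 + rng.normal(0, 1, n)      # stable causal mechanism
    x2 = shift * y + rng.normal(0, 1, n)    # environment-dependent proxy
    return x1, x2, y

def slope(x, y):
    """Least-squares slope of y on a single regressor x."""
    return float(np.linalg.lstsq(x[:, None], y, rcond=None)[0][0])

for name, col in [("x1 (candidate cause)", 0), ("x2 (spurious proxy)", 1)]:
    slopes = []
    for shift in (0.5, 2.0, 5.0):           # three hypothetical contexts
        x1, x2, y = make_env(shift)
        slopes.append(slope((x1, x2)[col], y))
    print(name, "slopes across environments:", [round(s, 2) for s in slopes])
```

Run as written, the slope on x1 stays pinned near 2.0 in every environment while the slope on x2 drifts with each shift, which is the signature that separates a candidate invariant counterfactual from a context-bound correlation.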

A counterfactual is a possible, potential or hypothetical causal relationship that satisfies the realization of a specific outcome or effect, one whose actions and influence may not be explicitly observed but that nonetheless does not contradict what is already known about a particular phenomenon. In this regard, counterfactuals are practically indistinguishable from the programmatic biases that inform the learning techniques incorporated in existing AI models, a kind of inexplicable intuition, or vector amplification, aka backward propagation for purposes of curve fitting, that informs the current recognition of sounds and shapes, objects and artifacts. Pretty slippery stuff. The difference, if there is one, is that the latter is a codification of known, experienced behaviors of feature, sound and symbol discriminators, an a posteriori supposition, while the former is a synthetic deduction, an a priori construct derived from the essence of variable interactions not necessarily captured by data. If this seems to be the realm of mystics, shamans and grifters as well as the province of noted philosophers the likes of Descartes, Hume and Kant, you are on the right track.

Ego cogito, ergo sum

Most of humankind’s progress in fields like physics, genetics and haiku has come from a type of counterfactual, or thought experiment. For instance, what if temporal perceptions were predicated on the speed of the observer and not the motion of the phenomena being observed? The arrow of time still points in the same direction. So this hypothetical doesn’t violate any known data or intuition of time, yet it explicates other factors that were heretofore hidden. This simple counterfactual is the basis of the special theory of relativity. Similar conundrums abound in physics: entanglement, dark matter, accelerating expansion, quantum gravity. All great candidates for hypothetical counterfactual propositions. What they lack is a synthetic a priori construct that binds the knowable effect to the inexplicable cause. This has been both a bane and a boon to our species since its inception. And we have been clever enough to conceive of some very effective work-arounds.

The very notion of causality comes from temporal apprehension, a perception of the precedence of circumstance and outcome. For instance, a cause always precedes and never follows an effect. Why should that be? Simple: because if it did, it wouldn’t be a cause; it would be something else. But why is so much certainty afforded this temporal construct? Is it because the world was ordered thusly? Turns out the more we drill into the cosmic and the quantum, the more that may not be the case.

Or was it because we made it up?

Turns out, some pretty clever people believe we simply made it up, not for want of alternatives, but for some pretty compelling, if not always effable, reasons. For a brief while western philosophy was consumed with the origin and nature of thought and its role as a framer of experience and reality. It was during this period that the idea took hold that our perception of the world was not entirely objective but rather subtly imbued and informed by the faculties of the observer. If the behavior of objects and phenomena seemed to conform to trusted and familiar patterns, perhaps the common denominator of their interplay belonged to us and not them, and our very observation of their behavior lent them meaning they may not by themselves possess.

For instance, temporal intuition is a function of memory and persistence. As living beings we persist, and memory allows us to discriminate between moments of past and present. No inanimate object possesses this ability. Similarly, binocular, stereoscopic vision creates spatial intuition. And these two elements, time and space, are the basis for the acquisition, assimilation and expression of nearly all human experience. Try making a sentence without them. However, these two intuitions are synthetic; they do not in and of themselves belong to outside objects or phenomena. We made them up and, finding them useful, kept them around. Kant referred to the application of synthetic intuition, a priori knowledge, in the assimilation of external experience, a posteriori knowledge, as transcendental deduction. He liked the notion so much that he organized the categories of such deduction under four distinct headings, not the least of which covers causality.

Cum hoc ergo propter hoc

Consciousness, the act of awareness of self and surroundings, is part synthetic, predicated beyond available data, and part analytic, predicated on available data. Thus far AI applications are primarily analytic in composition and execution: mountains of data that substantiate stochastic conclusions for purposes of potential recognition. A word, a sound, an object. If we were to time-stamp all the data fed into an ML model meant to recognize hot dogs, it wouldn’t make a bit of difference as to whether or not hot dogs are recognized, as no amount of temporal discrimination would serve to abet that purpose. (see AI: “What’s Reality But a Collective Hunch?” – November 2017) The time stamps wouldn’t amount to anything more than a spurious correlation; all the hot dog models would have them, but having them is of absolutely no significance or consequence.
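A minimal sketch of that point, using invented hot-dog records: a feature that never varies, like an omnipresent time stamp, carries exactly zero empirical mutual information about the label, while a genuinely discriminating feature does not. The data and feature names are made up for illustration.

```python
# Why an always-present time stamp is informationally inert: the empirical
# mutual information between a feature and a label is zero when the feature
# never varies across examples.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical I(X; Y) in bits from paired observations."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels         = [1, 1, 0, 0, 1, 0, 1, 0]   # hot dog / not hot dog
has_timestamp  = [1, 1, 1, 1, 1, 1, 1, 1]   # every record is stamped
is_cylindrical = [1, 1, 0, 0, 1, 0, 1, 1]   # an actually varying feature

print(mutual_information(has_timestamp, labels))   # 0.0 bits: pure inertia
print(mutual_information(is_cylindrical, labels))  # > 0 bits: carries signal
```

The time stamp isn’t wrong, it’s just inert: present everywhere, predictive of nothing.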

So, just as data can be spurious, so too can counterfactuals. If an invariant counterfactual is one that always satisfies all criteria of context, circumstances and data, would it also always be provably correct, or might there be spurious invariant counterfactuals that satisfy all these criteria and yet have absolutely no significance? The answer is more than likely yes. But perhaps a better question, given our own frame of conscious reference, is: would we even be able to tell? Let’s face it, human consciousness, the one that we subscribe to, is invariably anthropomorphic. Would we even recognize a valid, novel frame of consciousness if we encountered one? The answer is more than likely maybe not.

But the prospect is beyond tantalizing.

What if all the data collected by the Large Hadron Collider could be marshaled into a single, queryable quantum corpus and subjected to just one line of questioning: What manner of being made this? What presuppositions possessed it? What synthetic intuitions inhabit it? And what invariant counterfactuals support it? One believable answer would obviate millennia of speculation and doubt and open a door to unlimited human possibility; an unbounded singularity, a truly transcendental deduction.

But what if such an exercise also produced infinite, spurious, invariant counterfactuals, leaving mere mortals no option but to individually examine and confirm or dismiss each one? Even Dante couldn’t conceive of an occupation of such crushing recompense.

Totum quod splendet ut aurum

Lewis Carroll was a master of nonsense, purveyor of dangling syllogisms and errant guide of lost sequiturs, leading nowhere but perhaps to his own bemusement. In his “The Hunting of the Snark, an Agony, in Eight Fits” we are certain what the hunt is for, to find the Snark, but despite an abundance of information concerning the quarry we never really find out why. One might presume from its description that a Snark is a delicacy, which accounts for the desire to hunt it “with forks”, but of all its fabled attributes, flavor isn’t one. The members of the hunting party are all known by their occupations, all save one, and they all begin with the letter “B”, making them “B-ings”, but there is no peg for that hole either. The use of “Agony” in the title bears specific religious connotations, but nothing in the text explicates how that might apply. All of this has inspired far more analysis than the length of the original poem, as scholar after scholar has proposed the true meaning of the work, each a counterfactual argument with little or nothing to show for it. The Snark is a cause without knowable causation.

We stand before a golden door whose lock is ours to open.

Whether current computational endeavors will lead us there is yet to be determined. We may yet need more philosophers, more Bellmen and Barristers, more Boojums and Snarks to set us on the proper course. Recently, the current godfather of all things AI, Geoffrey Hinton, in an interview with Wired, casually declared that synthetic, machine-based consciousness was a misplaced expectation. The “essence” of things could and would be exposed, and once exposed, antiquated notions such as human consciousness could be disposed of as essentially unnecessary to machine-based reasoning. Human consciousness, after all, has its limits. We are corporeally confined, and losing this mortal coil doesn’t seem to offer many transcendental benefits, at least none that we’re aware of. On the other hand, machine-made perspectives of invariant counterfactuals could prove enormously useful if they provide us a different way to look at things.

For instance, under certain conditions anthropomorphic consciousness can be uniquely disadvantageous. Let’s take the case of entanglement. Two photons launched from a common source race to opposite ends of the universe. Along the way, a crafty physicist measures the polarization of one of the photons and, Bazinga!, instantly, across millions of light years, the polarization of the opposite photon is determined. This creates a cognitive dissonance in humans known as “spooky action at a distance”. Our spatial intuition insists that such a transposition isn’t possible, as information can’t traverse that great a distance instantaneously.

Perhaps the problem lies in our spatial apprehension. What if in quantum reality the photons occupy the same space-time no matter how far apart they travel?

Seems like the province of shamans and grifters is closer than we suspect.

 

Cover image, fragment of Henry Holiday’s original illustration from The Hunting of the Snark, where the Snark pleads the pig’s case in the Barrister’s dream, courtesy of Macmillan & Co’s original manuscript; all other images, statistics, illustrations and citations, etc. derived and included under fair use/royalty free provisions.

 
