Kirk Klasson

Work in the Time of Covid…Well, that didn’t take very long

I was speaking with a high school teacher recently and she was explaining some of the frustrations that arise from conducting classes remotely. Apparently, during a regularly scheduled Zoom class with multiple participants, she noticed one young man’s demeanor seemed somewhat static. As the class progressed she could see him periodically nod and go “um hum” in apparent assimilation of the proceedings, not at all unlike the linguistic idioms incorporated in Google’s Duplex agent (see The Dawn of Agency – May 2018).

As her suspicions mounted the teacher decided to put a question to this participant to see if he was indeed following along. That’s when she discovered that there was really no one there, just an endless video loop of the student plugged into a Zoom session.

Now it would be easy to see how this kind of fraudulent participation would draw the ire of a teacher. No one likes having their time wasted. But, in a way, he might well deserve some extra credit, if not a shot at a Ferris Bueller Scholarship for Excellence in Indolence, for anticipating a growing trend in remote relationships: sculpting and buffering outbound visual interactions using AI.

Back in March of this year, in Work in the Time of Covid, it was suggested that video platforms would begin to incorporate AI to augment human interpretation of non-verbal communication: subtle expressions that might convey unexpressed emotions requiring more empathy or explanation, those moments when management needs to lean in and listen harder. What we've seen thus far, however, is AI employed to manipulate images for the purpose of rendering them more "life-like", turning human interaction into highly compressed, artificial visual facsimiles, all in an effort to craft and project a better you.

Recently, Nvidia announced Maxine, a cloud-native, AI-based platform for remote work. Using some very clever GAN techniques, Nvidia was able to reduce facial expressions to a series of algorithms. This, in turn, enables significant bandwidth conservation through asymmetric compression, where a small amount of transmitted data can be used to render high-fidelity representations of synthetic facial expressions. These and other AI techniques can also provide gaze and eye-alignment correction, noise cancellation, background adjustment and a number of other valuable features.
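To get a feel for why this kind of compression is such a big win, consider a back-of-the-envelope sketch. This is not Nvidia's actual Maxine code; it assumes, for illustration, a keypoint-driven scheme in which the sender transmits only a few dozen facial landmarks per frame and a generator network on the receiving end reconstructs the face from a previously shared reference image. The frame dimensions and landmark count below are illustrative assumptions, not Maxine's specifications.

```python
# Illustrative sketch only -- not Nvidia Maxine's implementation.
# Compares the size of one uncompressed video frame against a
# hypothetical per-frame keypoint payload.

import struct

FRAME_W, FRAME_H = 1280, 720   # assumed 720p video frame
BYTES_PER_PIXEL = 3            # uncompressed RGB
NUM_KEYPOINTS = 68             # a common facial-landmark count (assumption)

def raw_frame_bytes(width: int, height: int, bpp: int = BYTES_PER_PIXEL) -> int:
    """Bytes the sender would ship per frame without any compression."""
    return width * height * bpp

def keypoint_payload(keypoints):
    """Pack (x, y) landmarks as little-endian float32 pairs --
    the only per-frame data sent in this hypothetical scheme."""
    return b"".join(struct.pack("<ff", x, y) for x, y in keypoints)

# Sender side: pretend these landmarks came from a face detector.
landmarks = [(float(i), float(i) * 0.5) for i in range(NUM_KEYPOINTS)]
payload = keypoint_payload(landmarks)

frame_size = raw_frame_bytes(FRAME_W, FRAME_H)
print(f"raw frame: {frame_size:,} bytes")            # 2,764,800 bytes
print(f"keypoints: {len(payload)} bytes")            # 544 bytes
print(f"reduction: ~{frame_size // len(payload):,}x")
```

Even before conventional video codecs enter the picture, the asymmetry is stark: a few hundred bytes of landmarks versus megabytes of pixels, with the heavy lifting pushed to the receiver's generator.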

The only question then is why stop there?

While you’re at it, why not adjust a participant’s limbal ring and pupil dilation, taking them from an ordinary pedestrian caffeinated state to that of an oxytocin-laced Japanese anime? Or correct annoying linguistic affectations like up-talking, a condition that seems to afflict much of female management? Or supply situationally correct physical gestures: chin on fist for listening, an eyeglass adjustment for conveying amplified attentiveness, palms out for a “hold on a moment” moment. Heck, why not a control-alt-command for a this-couldn’t-be-more-sincere-if-I-were-there-in-person expression? Why not turn your video conferencing from a joyless chore into a hypnotic seance that only Natasha Romanoff could provide?

Because we’re gonna need a second GAN to sort out all the inauthentic crap that the first one has devised.

Now go back to your room and put on some pants so we can get this session started.

 

Cover image courtesy of CDC; all other images, statistics, illustrations, citations, etc. derived and included under fair use/royalty-free provisions.

