Kirk Klasson

The more I buy gadgets, the better I like furniture….

Affordances and the Luddites who love them…

Sad week, this week, with the passing of Steve Jobs, a true and uniquely American genius. His genius lay not particularly in computing, although he obviously had his bona fides when it came to bits and bytes, but especially in understanding what it means to be human.

He recognized more than anyone else that it didn’t matter a whit how awesome a processor was unless an average person could do something magical with it. Which makes this week even more ironic, because the announcement of the iPhone 4S showcased the processor and the OS, which caused most of the twitterati to deem it a dud. That’s how you market enterprise servers, not consumer devices. Where was the magic? Where was the jaw-dropping, mind-blowing ability to facilely manipulate computing capabilities to achieve outcomes we hadn’t even considered possible? Where was the human factor in the mastery of the machine? Some would say that was what Siri was all about, but I’m not convinced that will exactly pan out.

Which brings me to affordances.

Back in the day, if you spent any time at PARC, you’d stumble into conversations about affordances. Nine times out of ten you wouldn’t even know you were talking about them until one of the veterans pointed it out to you. The concept of affordances was originally introduced by psychologist James J. Gibson in 1977, who defined them as all the “action possibilities” latent in the environment or the objects with which we, as actors, may interact, whether or not we are expressly aware of those possibilities. Later on, in 1988, Donald Norman, arguably the father of cognitive engineering, extended the concept to refer to the human-machine interaction possibilities that are readily perceivable by the actor, ahem, user. No surprise then that Mr. Norman did a stint at Apple beginning in 1995. Basically, an affordance is a clue that informs us how to interact with the objects that surround us and, thanks to cognitive engineering, clues are increasingly and intentionally designed into our environment and the technology we use. That is why, when we approach a door, a handle mounted vertically inclines us to pull it, a handle mounted horizontally inclines us to push it, and a knob inclines us to twist it. It is the same reason we inherently know that cans can be kicked, buttons can be pushed and chains can be pulled, even if we don’t remember the origin of these clichés or why they are now considered the affordances of other actors and not necessarily the environment they occupy.

Whether we know it or not, affordances are a big and important part of the magic that Apple brought to market: the innate ability to symbolically manipulate computational capabilities to do extraordinary things.

But to be useful these actionable metaphors need to remain consistent and congruent. Which is why switching between metaphors can be frustrating, such as when a vendor decides to alter the user interface or a device manufacturer changes the way options are presented to the user. When Microsoft decided to go to the “ribbon” metaphor for its office applications, users balked and Microsoft slowly walked it back. When Caller ID was first introduced, you had to view a terminal that displayed characters to see who was calling. So here was a device whose activation was presented aurally to the user, yet it required you to get up and run to another room to determine, visually, the source of the call. The metaphor wasn’t congruent and the service wasn’t used. Caller ID didn’t become useful until the caller could be announced, aurally, and the user, ahem, actor, could hear it from another room.

Which brings us back to Siri.

A spoken interface to computational resources has long been the holy grail of computing techdom. Language is slippery stuff, loaded with homonyms, synonyms and colloquialisms. Contextual disambiguation of spoken ideas is one of the most complex problems out there. And one of the earliest frontal assaults on this problem also occurred at PARC, in a project dubbed MURAX. But if we assume for a second that Siri has solved all of this, no mean feat, not to be confused with feet, by any measure, then to be considered effective it would also have to “know” when spoken commands are less effective than graphic, visual metaphors in completing a given assignment. The reason this is important is that not being able to differentiate which is the appropriate affordance will cause the “actor” to become frustrated and mistrust the application, because the user was presented with a “false affordance”.
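To make the point concrete, the "knowing which affordance fits the task" problem can be sketched as a toy heuristic. Everything below is invented for illustration, not anything Siri actually does: the task names, categories and fallback rule are assumptions, standing in for whatever real signal an assistant would use.

```python
# Toy sketch: choosing the interaction modality (spoken vs. visual)
# least likely to present a false affordance. The task categories and
# the fallback rule are invented for this illustration.

# Tasks whose outcomes are spatial or list-like tend to suit a visual
# affordance; short, unambiguous commands tend to suit speech.
VISUAL_TASKS = {"browse_photos", "compare_flights", "edit_spreadsheet"}
SPOKEN_TASKS = {"set_timer", "send_short_message", "ask_fact"}

def pick_modality(task: str) -> str:
    """Return the affordance modality for a task, defaulting to visual."""
    if task in VISUAL_TASKS:
        return "visual"
    if task in SPOKEN_TASKS:
        return "spoken"
    # Unknown task: fall back to the visual channel, where the user can
    # at least see the available actions rather than guess at them.
    return "visual"

print(pick_modality("set_timer"))        # spoken
print(pick_modality("compare_flights"))  # visual
```

The design point, not the code, is what matters: an assistant that cannot make some version of this choice will sooner or later hand the actor the wrong clue.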

There is a story on Wikipedia that elucidates the difference between Gibson’s and Norman’s notion of false and effective affordances. Here you have it:

If an actor steps into a room with an armchair and a softball, Gibson’s original definition of affordances allows that the actor may throw the armchair and sit on the softball, because that is objectively possible. Norman’s definition of (perceived) affordances captures the likelihood that the actor will sit on the armchair and throw the softball.

I think Norman’s got it roughly right. Furniture, due to centuries of refinement, has very efficient affordances. You intuitively know where to plant your ass. Handheld devices have also developed very familiar affordances; given their size, weight and grip, they could easily replace skipping stones. So if you come upon a chair and don’t know what to do with it, I’d suggest you throw it out.

‘Cause…the more I buy gadgets, the better I like furniture.


Insights on Technology and Strategy