Artificial intelligence (AI) has huge potential to change our world and how we interact with it. You’ll find examples everywhere, and in nearly every industry.
For example, in the world of finance, AI can keep track of a stock portfolio, automatically adjusting positions, executing smart trades and making other strategic decisions.
At the sophisticated heart of the smart car, AI calibrates how the vehicle runs and even figures out which service station in the area offers the best service.
Manufacturers and industrial firms use AI to automate operations such as scheduling, property management and factory-floor processes.
But while the potential is easy to see, it is difficult for many to understand how AI really works. That difficulty is compounded by a persistent cultural resistance to AI among some people, rooted in common fears and concerns about the role of technology in everyday life.
Cultural resistance
It’s not difficult to find examples of cultural resistance to AI adoption, as I described in my previous blog entry, ‘AI: Utility – and even a little fear – in the eye of the beholder.’
Our visceral reactions come from deep-seated sources, including the visionary thoughts and imaginations of science fiction writers.
I believe that fiction-fueled misunderstanding and the resulting fear are in some ways influencing the very development of AI, placing a large and distracting focus on what is termed ‘anthropomorphism’ (making AI platforms look, sound and act like humans). Eliezer Yudkowsky of the Machine Intelligence Research Institute writes about this in his 2008 paper, ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk.’
Human behavior is sprinkled with examples of subconscious resistance to things we logically know are good for us, like when a parent makes us eat our vegetables or the dentist makes us floss our teeth. But with some coaxing – and a stern lecture from the dentist – we come to realize their full value.
I believe adoption will be faster and our approach more successful if we understand why those around us – especially non-AI specialists – think about AI the way they do.
AI’s Ethos and Pathos
To start, let’s look at AI’s ethos and pathos, which really began to develop in the theoretical mathematics circles of the 1950s, only to be further shaped by a heavy dose of science fiction.
For half a century, the potential for AI to solve real-world problems has been in the minds of visionaries, coming to light when the term ‘artificial intelligence’ was coined in the 1950s and the mathematical dialog around it began.
I am thinking now of the example of John McCarthy, of Dartmouth, MIT and Stanford fame, referred to by some as ‘the father of AI.’ For decades the concept of AI remained closely held among the great minds in society, especially the theoretical mathematicians and physicists shaping the cutting edge of our understanding of how computers could be used to manipulate the world around us.
This is largely where the AI ethos was cultivated and germinated, successfully laying out a strong theoretical foundation for AI. By ethos, I am referring to the cultural and ethical phenomenon that develops around a system of understanding – in this case, an evolving understanding of what AI is.
These early days of AI technology, in the infancy of modern computing, were thrilling. Great minds, including Turing, began to postulate what it would be like if a machine could think. By the mid-1950s one could submit a simple question to a very large and expensive computer and receive a simple answer. Remarkable for the time, a research scientist might play a game of checkers with a computer or use the computer’s resources to prove mathematical theorems. The potential to use computer technology as a tool to magnify the thinking power of a human was, for the first time, really tangible.
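To make that era’s notion of ‘machine thinking’ a little more concrete: game-playing programs of the 1950s worked by searching ahead through possible moves and choosing the one with the best guaranteed outcome. The sketch below is only a toy illustration of that minimax idea, applied to the much simpler game of Nim rather than checkers – it is not any historical program, just the core search principle those pioneers were exploring.

```python
# Toy minimax search - the core idea behind early game-playing programs.
# The game here is simple Nim: players alternately take 1-3 sticks from a
# pile; whoever takes the last stick wins. (Illustrative only.)

def minimax(pile, maximizing):
    """Return the best achievable score (+1 = machine wins, -1 = machine loses)."""
    if pile == 0:
        # The previous player took the last stick, so whoever moves now has lost.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2, 3) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move that leads to the best guaranteed outcome for the machine."""
    return max((t for t in (1, 2, 3) if t <= pile),
               key=lambda t: minimax(pile - t, maximizing=False))

print(best_move(10))  # the machine "thinks ahead" and takes 2, leaving 8
```

However crude, this is recognizably ‘thinking ahead’ – and it needs no face or voice to do it.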
However, because the AI ethos germinated in such a confined space, it evolved far out of reach of even the tech-savvy IT crowd, let alone the average consumer – resulting in a fascinating AI pathos largely based in fear and negative connotations.
By pathos, I am referring to the use of dialogue and knowledge to awaken, as Aristotle would say, underlying emotions or feelings – many of which, in AI’s case, are not positive.
Rather than learning about AI from McCarthy and the other visionaries, we have the science fiction community to thank for much of our sensational public perception.
Science Fiction and the AI Pathos
Remember HAL 9000 in 2001: A Space Odyssey? Even millennials recognize the famous lines, ‘Open the pod bay doors, HAL.’ ‘I’m sorry, Dave. I’m afraid I can’t do that.’ HAL left Dave floating in space. The underlying theme is that intelligent machines will someday control us, even in life and death situations. This image has ingrained itself into the pathos of our contemporary culture.
Fast forward to the Matrix Trilogy and the depiction of Agent Smith, who is essentially an AI code persona that is the nemesis to Neo, a real human prophesied to save his whole race from the computer-generated world called the Matrix. No wonder people are afraid of AI!
Many depictions of AI in film and literature are machines that take on human shape and behavior – some favorable, others very unfavorable. HAL talks with Dave in a calm voice, but deals Dave a potential death sentence. Add this to the AI pathos. Agent Smith is deceptively human-like, but we learn he is part of the Matrix, always attempting to hide the truth and squash the rebellion of real humans. Add this to the AI pathos.
We instinctively give ‘smart’ machines human characteristics that make them less enigmatic and threatening.
If they are animated, likable, and look like us, then they are good, even though they may not be able to solve any real problems. This is a big part of the AI pathos. We attempt to transform that which is somewhat unapproachable and scary into something more easily understood. We anthropomorphize them so that we can deal with them.
Irony of anthropomorphism
However, there is irony in attempting to make AI machines more human-like. We take what is hard to understand about powerful machines that show intelligence and portray them as humans instead of as machines – and those human-like, AI-injected machines end up scaring us anyway. The resulting pathos says that an intelligent machine must be human-like to be useful in our world.
I believe the pursuit of creating artificially intelligent, human-like robots to make AI less scary has in some ways distracted the AI world. AI has the potential to solve many enterprise problems today without any need for anthropomorphism. Much AI research is devoted to perfectly mimicking a human being, but much of the underlying AI framework could just as easily be applied directly to, for instance, the urgent business problems now surfacing from a glut of data.
Solving the fundamental problem
I don’t think robots are bad. Rather, I think there is a more fundamental problem to solve first. It makes more sense to make AI useful to organizations – public, private, educational – in much the same way the growing number of analytics and decision management solutions are used today, as in the sketch below.
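To ground what ‘decision management’ means in practice, here is a minimal, hypothetical sketch – the field names, weights and thresholds are all invented for illustration, and this is not DM’s platform or any particular product – showing the kind of faceless, data-in/decision-out logic I have in mind.

```python
# A hypothetical decision-management sketch: score a record, return an action.
# No face, no voice - just data in, decision out. All fields, weights and
# thresholds below are invented for illustration.

from dataclasses import dataclass

@dataclass
class CustomerRecord:
    days_since_last_order: int
    lifetime_value: float
    open_support_tickets: int

def churn_risk_score(r: CustomerRecord) -> float:
    """Combine simple signals into a 0-1 risk score (weights are illustrative)."""
    score = min(r.days_since_last_order / 180, 1.0) * 0.5
    score += 0.3 if r.open_support_tickets >= 2 else 0.0
    score += 0.2 if r.lifetime_value < 100 else 0.0
    return score

def decide(r: CustomerRecord) -> str:
    """Map the risk score to a concrete business action."""
    risk = churn_risk_score(r)
    if risk >= 0.6:
        return "escalate to retention team"
    if risk >= 0.3:
        return "send re-engagement offer"
    return "no action"

print(decide(CustomerRecord(days_since_last_order=200,
                            lifetime_value=80.0,
                            open_support_tickets=3)))
# -> "escalate to retention team"
```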
At DM, our platform is not a he or a she; it doesn’t have a face or a voice, because it doesn’t need those attributes to solve real-world data problems.
Other developers are following this path as well – and we hope more will. We believe that by not shrouding a user interface in the likeness of a human, we can actually make AI more appealing to those who might benefit most: enterprise users and other decision-makers facing mountains of data and content. I’ll explore those possibilities in my next installment. Stay tuned!