In the first blog of our series, I briefly introduced how AI works and some of its limitations. This blog will cover common fears surrounding AI and why I believe there is no inherent reason to fear AI itself.
Arguably the most common fear is that AI will eventually take over, rendering humans irrelevant. What particularly unnerves people is the prospect of machines replacing white-collar jobs that are often perceived as requiring intellectual prowess.
It is a fact that AI models sometimes outperform humans at certain tasks. An appropriately trained AI might be better equipped to identify defects in a car tire than a human. AI could excel in analyzing or drafting contracts, proofreading documents, or enhancing research efforts by offering a more intuitive interface to search engines. And undeniably, this might lead to the obsolescence of some jobs, as has been witnessed with various forms of automation in the past.
A counter-argument, however, suggests that other, more valuable roles will supplant many of the jobs lost. Throughout history, technological revolutions have replaced jobs and also created new opportunities, often with better pay. This isn’t to downplay the potential societal upheaval. We can address this challenge with gradual change, retraining programs, more flexible societal structures for job transition such as improved social welfare and healthcare systems, and an emphasis on solid educational foundations that ease transitions into new professions.
While AI could displace certain jobs, it could also bring societal benefits by creating new opportunities, given we equip people with the tools to manage this disruption.
Geoffrey Hinton recently discussed the potential dangers of AI, expressing concern about AI surpassing human intelligence. Such an eventuality may be a concern in the distant future, but worrying about it today is like a 16th-century individual worrying about a weapon capable of obliterating a whole city. We don’t fully comprehend even a single neuron’s workings, how the brain processes information, or what sentience might entail. We haven’t yet created an AI with the intelligence level of an ant, which is why I’m not losing sleep over the idea that AI will render humans irrelevant.
Hinton also acknowledged the misuse of AI by malevolent actors (an inherent risk with all new technology), which brings us to the second common fear surrounding AI: the potential of misusing AI for nefarious purposes.
The most concerning potential misuse of AI is the development of fully autonomous weapons. The issue isn’t that we’ll inadvertently create a Skynet-like entity that chooses to eliminate us out of self-preservation. The concerns are twofold: first, poor programming could lead the AI to target friendly forces erroneously; and second, until now, every act of aggression has ultimately been executed by a human, whether physically pulling a trigger or launching a missile, and autonomous weapons would remove the human from that loop.
Who bears responsibility if an AI system takes over the decision-making process of identifying a target and initiating an attack? If guilt or regret no longer factor into the act of killing, human life could be reduced to the equivalent of a non-player character (NPC) in a video game.
It’s still too early to predict precisely how AI might be misused. Repressive governments could leverage AI to detect dissent and curb freedoms, privacy could be further invaded, weapons could become autonomous, and AI could spread confusion and distrust, or even aid the creation of new weapons, toxins, or pathogens. Deep fakes and machine-generated images could be exploited by political activists or hostile governments to fuel discontent and distrust.
Counteracting this threat requires heightened public awareness and more vigilant intelligence and law enforcement agencies. The use of AI in criminal activities or terrorism can be mitigated — not necessarily by regulating AI, but by more effectively targeting the criminals and terrorists who exploit it.
The third and final fear pertains to explainability and lack of transparency. The reasoning behind an AI’s decision-making is often unclear, with the process resembling a black box. Fortunately, this problem is gradually being addressed through techniques that ‘open the box’ and reveal what influences the decisions an AI makes. For instance, an AI with explainability can not only answer whether an image depicts a dog, but also explain why it thinks it is a dog.
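As a toy illustration of one such ‘box-opening’ technique (permutation importance — an assumption for the sake of example, not necessarily what any particular product uses), the sketch below measures how much a model relies on each input feature by shuffling that feature and watching how far accuracy falls:

```python
import random

# Toy "model": predicts 1 when the first feature is positive.
# Feature 0 genuinely drives the label; feature 1 is pure noise.
def model(x):
    return 1 if x[0] > 0 else 0

random.seed(0)
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(1000)]
labels = [1 if x[0] > 0 else 0 for x in data]

def accuracy(rows):
    return sum(model(x) == y for x, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)

def permutation_importance(feature):
    # Shuffle one feature's column across the dataset; the resulting
    # accuracy drop estimates how much the model relies on that feature.
    column = [x[feature] for x in data]
    random.shuffle(column)
    shuffled = [row[:] for row in data]
    for row, value in zip(shuffled, column):
        row[feature] = value
    return baseline - accuracy(shuffled)

print(permutation_importance(0))  # large drop: the model depends on feature 0
print(permutation_importance(1))  # no drop: feature 1 never affects decisions
```

The same idea scales to real models: features whose shuffling barely hurts performance are, by this measure, not driving the decision.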
This is important because the pitfalls of relying too heavily on statistics, and hence on AI, are fourfold.
- Erroneous assumptions or flawed data can lead to inaccurate conclusions.
- While statistics (and AI) can predict trends for a population, they may not accurately represent individual cases.
- Correlation doesn’t imply causation – just because two variables appear related doesn’t mean one causes the other.
- Misinterpretation of results can lead to detrimental decisions.
AI models are probabilistic in nature, and users must be careful not to mistake their predictions as definitive outcomes. The current lack of explainability and transparency may lead to costly errors, such as incorrect medical diagnoses, stock purchases, or loan issuances. However, technological advancements and increasing awareness might help overcome this challenge. By introducing more transparency into AI’s decision-making processes, we can foster a greater level of trust in the technology.
There are no straightforward solutions to these issues and no single response to these fears that will put everyone’s minds at ease. Every technological advancement brings fresh challenges. For instance, the second industrial revolution dramatically reshaped our civilization by empowering the middle class. Still, it also enabled the mass production of devastating weapons like bombs and chemical warfare agents used in World War I. Similarly, while profoundly beneficial in so many ways, the Internet has also exacerbated repression and privacy loss and facilitated criminals’ communication and interaction.
AI is revolutionizing various fields, from manufacturing, where it enables automated defect detection, to healthcare, where it assists in diagnosis. It’s reshaping the legal profession by aiding contract creation and propelling the development of self-driving cars. Current projections suggest that AI could add more than $15 trillion to the global economy by 2030, transforming lives, reducing costs, and facilitating better decision-making.
Calls to halt or slow down AI research, often presented as the solution to these common fears, rest on a naive belief rooted in a lack of understanding of AI; a pause would only aid our competitors and adversaries. We’re at an intriguing juncture in human history. We’ve created an exceptionally powerful tool, and it’s up to us to determine its use. We can let these irrational fears dictate our path, or we can seize this opportunity to create a better world.
Contact us today to learn more about what AI can do for your business.