The fear of AI is as profound as it is ancient.

In Greek mythology, Hephaestus created a giant bronze automaton named Talos to defend the island of Crete by hurling large boulders at enemy ships. To my knowledge, Talos strictly adhered to his programming. Nevertheless, Talos’ existence ended when the sorceress Medea, sailing with Jason and his Argonauts, discovered how to destroy him by draining the ichor, the blood of the gods that granted Talos life and possibly sentience.

Yet not all autonomous guardians of legend were so obedient. According to a 16th-century legend, Rabbi Judah Loew ben Bezalel created the Golem to protect the Jewish community of Prague from persecution. Sadly, the Golem’s programming proved less reliable than that of Talos, and the Rabbi was forced to destroy it.

Fast-forward to the twentieth century, and the narrative around AI hasn’t shifted dramatically from that of the Golem: we create intelligent machines to assist us, but inevitably we lose control of them. This theme echoes through modern storytelling, from Arthur C. Clarke’s ‘2001: A Space Odyssey’ to ‘The Terminator’ and ‘The Matrix’ series.

Since autonomous devices first appeared in literature and mythology, they have inspired a latent fear: a form of xenophobia, the fear of the unknown.

In the coming weeks, we will roll out a “Fear of AI” series that will address common concerns about AI, such as potential misuse, lack of transparency and widespread misunderstandings about its nature, and explain why there is no inherent reason to fear AI itself. Our first introductory post in the series will give a basic overview explaining how modern AI works.

Most of today’s AI grows out of machine learning, a process in which a machine learns to perform a task by recognizing patterns across a multitude of examples. At its core, this amounts to sophisticated statistical inference.
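
To make that concrete, here is a minimal sketch of ‘learning from examples’ in Python. The data and the straight-line model are invented for illustration; the point is that the machine is never given a rule, only examples, and the ‘learning’ step is literally a statistical estimate.

```python
# A minimal sketch of machine learning as statistical inference.
# The "training data" is invented: noisy examples of y = 3x + 1.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)                  # inputs
y = 3 * x + 1 + rng.normal(0, 0.5, size=100)      # noisy outputs

# "Learning" = estimating the parameters that best explain the examples,
# here via ordinary least squares on the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned model: y ~ {slope:.2f}*x + {intercept:.2f}")  # ~ 3x + 1
```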

While impressive, programs like ChatGPT and DALL-E operate on these same principles. GPT-3.5, the model behind the original ChatGPT, builds on GPT-3’s 96-layer deep neural network with roughly 175 billion parameters, and it predicts responses from statistical relationships between words and phrases. Despite its seeming conversational ability, it is important to note that the underlying engine performs nothing but statistical inference. ChatGPT is essentially a ‘stochastic parrot’: incapable of creating anything genuinely new, it generates statistically probable responses derived from its training data, with no understanding or conscious appreciation of the input it processes.
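
The toy bigram model below illustrates the principle on a tiny, made-up corpus. Real systems like GPT-3.5 use deep transformer networks over subword tokens rather than word-pair counts, but the core operation is the same: emit the statistically most probable continuation of the input.

```python
# A toy bigram "language model": it predicts the next word purely from
# word-pair frequencies observed in its (made-up) training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most probable next word, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat': the most frequent successor of 'the'
```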

However, today’s deep learning models drastically simplify the complexity of biological neurons, overlooking important aspects such as temporal dynamics, neurochemical effects, adaptation processes, and structural plasticity. The term ‘neural network’ is somewhat misleading; ‘adaptive layered weighted graph’ would be the more accurate name.
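
A single artificial ‘neuron’ makes the simplification obvious. It is one node of such a graph, computing a weighted sum followed by a fixed nonlinearity (ReLU is used here as one common choice), with none of the temporal or chemical machinery of its biological namesake:

```python
# One artificial "neuron": a weighted sum of its inputs passed through a
# fixed nonlinearity (ReLU). No spike timing, no neurotransmitters, no
# structural change: just an edge-weighted sum in a layered graph.
import numpy as np

def neuron(inputs, weights, bias):
    return np.maximum(0.0, np.dot(weights, inputs) + bias)

x = np.array([0.5, -1.2, 3.0])   # incoming signals
w = np.array([0.8, 0.1, -0.4])   # learned edge weights
print(neuron(x, w, bias=0.2))    # the neuron's entire output: one number
```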

Such networks also miss many aspects of genuine intelligence: creativity, common sense, reasoning, emotions, general adaptability, logic, temporal associativity and extrapolation. Consequently, the prospect of sentient machines emerging from current technology is virtually nonexistent. Our AI solutions remain narrow and task-specific, incapable of performing outside their training domain. For instance, an advanced language model might generate text, images, or sound, but it cannot recognize tire defects, identify tumors, predict protein structures, or drive cars. Hence, fears surrounding AI’s potential dangers are largely unwarranted.

Nevertheless, AI excels at making decisions based on unstructured data such as images, text and audio. It converts such data into vectors (ordered arrays of numbers) and learns the relationships between inputs and outputs. This is particularly valuable because much of the world’s data is unstructured. For instance, an appropriately trained model can associate a picture of a dog with the word ‘dog’, a deceptively simple yet immensely useful capability.
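
The sketch below shows the mechanics of that association. The vectors are made up for illustration; in a real system they would be embeddings produced by trained image and text encoders, but the matching step, finding the word vector closest to the image vector, works the same way.

```python
# Matching unstructured data via vectors. These vectors are invented for
# illustration; real systems would use learned embeddings instead.
import numpy as np

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction (1.0 = identical)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

image_vector = np.array([0.9, 0.1, 0.8])  # hypothetical encoding of a dog photo
word_vectors = {
    "dog": np.array([0.85, 0.15, 0.75]),
    "cat": np.array([0.20, 0.90, 0.10]),
    "car": np.array([0.10, 0.20, 0.95]),
}

# The predicted label is the word whose vector lies closest to the image's.
best = max(word_vectors, key=lambda w: cosine_similarity(image_vector, word_vectors[w]))
print(best)  # 'dog'
```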

Currently, AI can be best understood as a highly advanced form of statistical methodology. It’s revolutionary and poses real challenges, but these don’t include creating sentient machines or Artificial General Intelligence (AGI) — that reality is likely centuries away.

Mistakes occur when we misunderstand AI’s capabilities and limitations, or when we delegate decisions about AI to those who may not prioritize our best interests. Even so, it is premature to regulate the AI domain extensively: such regulations would be difficult to enforce, and overregulation could stifle innovation. Promoting awareness and understanding of AI’s impact and technical constraints, by contrast, can lead to better decision-making and allow us to harness the benefits of the AI revolution.

Stay tuned next week to learn the three most common fears of AI and why they may be false. In the meantime, contact us today to learn more about unlocking the undisputed power of AI.
