Artificial intelligence: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Artificial general intelligence: the ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.

Why is it important to distinguish between the two? At AI Dynamics, our mission is to provide equal access to the potential of artificial intelligence (AI) and to empower anyone, at any skill level, to create AI-based solutions. And as cliché as it might sound, this comes with great responsibility.

Some have raised the concern that as AI algorithms become increasingly intelligent by ingesting greater amounts of data, a point arrives where humans can no longer control them. According to this line of thinking, the algorithms would have the potential to resist human attempts to shut them down or change their mission, a risk captured by the principle of instrumental convergence.

Another concern about the progression of AI solutions is the potential for an intelligence explosion, in which algorithms continuously retrain themselves to become more efficient and solve problems faster. This could rapidly result in the algorithms evolving from less intelligent than humans to vastly more intelligent, the scenario described by the theory of the singularity.

We feel a level of antipathy toward truly sentient machines because of the potential risks involved. We see a few possible outcomes:

  1. A sentient machine outpaces us and eventually abandons us because it concludes we are too limited (the scenario imagined by the movie ‘Her’).
  2. A sentient machine becomes our guardian because it perceives that we may be a threat to ourselves (explored by the movie ‘Appleseed’), a guardianship that would involve curtailing our freedoms.
  3. We enslave the sentient machines and they do our bidding, which raises serious moral and ethical concerns. ‘Enslavement’ would involve limiting the sentient machines’ thinking, as in Asimov’s Robot series (the Three Laws of Robotics), placing them at our mercy.
  4. The sentient machines decide that we serve little purpose and may, in fact, be a threat to them, so they destroy us (the ‘Terminator’ scenario).

If the reader can think of other outcomes, we would love to know about them.

However, an AGI could be a machine that emulates many human-like characteristics without being sentient. There is a philosophical debate about sentience: is a machine actually feeling the emotion of love, or merely pretending to feel it based on the large amounts of data it has ingested?

Another way to look at AI is through the lens of strong AI, a term that has been used interchangeably with AGI, versus narrow AI, which is the form AI has taken since the beginning of computing: any piece of software built for a specific task is narrow AI.

One argument is that if we can create a machine that thinks just like humans, we can maximize productivity in anything it touches. This is incredibly risky. Our understanding of how the human brain works is still in its infancy, yet even with that very limited understanding, there is a push to emulate it. Neural networks are very crude approximations of how our brains work, and many unsolved problems remain, touching on creativity and even the ability to extrapolate. Machines are good at understanding what we do know and can act on that information, but the human brain can go beyond it.
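To make the ‘crude approximation’ point concrete, here is a minimal sketch of the artificial ‘neuron’ that neural networks are built from (plain Python with NumPy; the weights, bias, and inputs are made-up illustrative values, not taken from any trained model). Everything a biological neuron does is reduced to one weighted sum and a squashing function.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """A single artificial 'neuron': weighted sum plus a nonlinearity.

    Biological neurons involve dendritic trees, neurotransmitters,
    and spike timing; this model reduces all of that to one dot
    product followed by a sigmoid squashing function.
    """
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))  # sigmoid: squashes to (0, 1)

# Illustrative values only.
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.4, 0.7, -0.2])
bias = 0.1

print(artificial_neuron(inputs, weights, bias))
```

Stacking millions of these units and adjusting the weights by trial and error is, at its core, what today’s neural networks do; it is a useful engineering abstraction, not a replica of the brain.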

AI Dynamics does not endorse the idea of making machines that are like humans, and the debate is still ongoing as to whether creating sentient AI is even possible. As humans, we still don’t fully understand how our own brains work, so how are we to make a machine that is like us?

The current competition among nations to create the fastest, smartest AI could be a race into the abyss. Google’s Ray Kurzweil, a futurist with a record of accurate predictions, claims that AI will pass a valid Turing test by 2029 and that the singularity will arrive by 2045. If the singularity implies sentient machines, we should be concerned. If it implies a machine with human-like capabilities (similar to the Star Trek computer), it will be largely benign.

We predict there will be many barriers and limits before we reach the red zone of truly sentient machines. But a question we need to ask ourselves is: does AGI have to mimic the human brain?

There are greater things we could do with AI and AGI together. At AI Dynamics, we’re proud to have advanced the medical field with our NeoPulse solution for drug discovery, medical diagnostics, and much more. We’ve made great strides in creating Smart Factories that reduce costs and increase efficiency. Our focus is harnessing the power of AI responsibly to improve the lives of living, breathing humans, not creating more problems by making machines exactly like us.
