The limits and future of AI

Chris Dare🔥 · Published in Sigma AI · 8 min read · Oct 24, 2019

Illustration of a neural network. Source: https://tech.kartenmacherei.de/classifying-events-using-a-neural-network-488acb50de87

“Any sufficiently advanced algorithm is indistinguishable from intelligence” — Yours truly

Many would be right to say that this article belongs in an opinion column. Nonetheless, it’s grounded in known facts and in insights from some of the best professors in the world. It’s also a complex subject, so I’ve tried to keep the discussion at a high level of abstraction.

Let’s begin.

These days the news is full of concerns, shrieks, and complaints like the following:

“AI is taking over the world! AI is dangerous! Machines are becoming more intelligent than us…” (sighs)

Hold your horses. We’re not there yet. In this article, I will juxtapose human and machine intelligence to reveal the gaps between the two. I will also predict what will be required to make the huge leap to the next stage of AI’s evolution. Consider this a primer, not a deep dive into the subject matter.

What comes to your mind when you think about AI? A robot that can converse with you and cook your lunch? Sure, that counts. What also counts is Gmail: it can recommend what you should type next as you compose an email. Or consider tagging on Facebook: you upload a photo of you and some friends, and Facebook automatically tells you that your friend John is also in the photo. (There are plenty of examples.) Interesting, right? Well, maybe that’s AI, and maybe it’s not. The thing is, terms like artificial intelligence (AI), machine learning (ML), deep learning (DL), and even deep neural networks (DNNs) have become buzzwords today. Moreover, their definitions keep changing over time. The public and scientific opinion of AI in the 1950s is different from what it is today. Once AI meant “search”; then it shifted to expert systems; now it’s centered on neural networks (and deep learning).
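To make the Gmail example concrete, here’s a toy next-word suggester in Python. It’s a minimal sketch only: real systems like Smart Compose rely on neural language models, whereas this one just counts which word tends to follow which (a bigram model), and the tiny corpus is made up.

```python
from collections import Counter, defaultdict

# Toy next-word suggester: count which word tends to follow which.
# (A real product uses a neural language model; this bigram counter
# only illustrates the idea of "predict the next word".)
corpus = "thanks for the update . thanks for the help . see the update"
words = corpus.split()

bigrams = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def suggest(word):
    """Return the most frequent follower of `word`, if any."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("thanks"))  # -> "for"
print(suggest("the"))     # -> "update"
```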

So can we establish a sound definition of AI? Probably. Scientific peers will agree that we can define an AI entity as an agent that is capable of reasoning, perceiving, and acting on its environment. Then we can also throw in the requirement that it’s not a biological agent; otherwise, a mimosa plant would also qualify as AI. For simplicity, you can think of AI as anything that is intelligent but not biological.
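As a rough illustration of that definition, here’s a deliberately trivial “agent” sketched in Python. The thermostat, its target temperature, and the dictionary environment are all hypothetical; the only point is the perceive/reason/act loop.

```python
# A minimal sketch of the "agent" definition above: an entity that
# perceives its environment, reasons about it, and acts on it.

class Thermostat:
    def __init__(self, target_temp):
        self.target_temp = target_temp

    def perceive(self, environment):
        return environment["temperature"]          # perception

    def reason(self, temperature):
        # reasoning: decide whether to heat or stay idle
        return "heat" if temperature < self.target_temp else "idle"

    def act(self, action, environment):
        if action == "heat":
            environment["temperature"] += 0.5      # acting on the environment

room = {"temperature": 17.0}
agent = Thermostat(target_temp=20.0)
for _ in range(10):
    agent.act(agent.reason(agent.perceive(room)), room)

print(room["temperature"])  # the agent has nudged the room up to 20.0
```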

And then comes the first fundamental issue. Take a look at these words below:

Artificial Intelligence
Machine Intelligence
Computational Intelligence

We can even go on to reference terms such as:

Machine Learning
Knowledge-based systems

I think that’s interesting. You know why? Because intelligence, learning, and knowledge are all human attributes! They weren’t meant to describe machines. A machine that functions exactly like a mimosa plant would today be considered intelligent by many. But do we really identify an actual mimosa plant as intelligent?

It’s hard to define intelligence in the context of machines. But that’s what we’re trying to do after all, right? We want to give machines some or all of the qualities of humans. So, has this been successful? Let’s find out.

In order to give machines the level of intelligence humans possess, we first need to understand how human intelligence, or cognition, works. What comes to mind when you think about human intelligence? What’s one important component? The brain? Yes, the brain. Well, it turns out that we don’t know a lot about the brain and how it really works. The more we try to study the brain, the more we realize how little we understand it. (Hopefully, that changes in the coming years.)

Side note: For the purpose of comparison and generalization, I may use “brain”, “human intelligence”, and “biological neural networks (BNNs)” interchangeably. I may do the same with “AI” and one of its subfields, artificial neural networks (ANNs). An artificial neural network is simply a model inspired by the structure and function of a biological system, built to perform certain tasks that require intelligence. Not sure what a model means? In the context of tech and AI, it’s basically a representation of a system, and it’s often mathematical. (We normally use math for such representations because, unlike other languages, math is unambiguous.) Now, let’s get back to where we left off.
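To make “model” concrete, here’s a single artificial neuron sketched in Python. The weights, bias, and inputs are invented for illustration; the point is that the “neuron” is nothing more than a small mathematical function.

```python
import math

# A single artificial neuron: a mathematical model loosely inspired by a
# biological one. Inputs are weighted, summed, and squashed by an
# activation function into a "firing rate" between 0 and 1.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid activation

print(neuron(inputs=[0.5, 0.9], weights=[0.4, -0.2], bias=0.1))
```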

So I was telling you about the brain. The subject of the brain is very important because in its architecture lies a fundamental difference between biological neural networks (humans in this case) and artificial neural networks (machines).

Computers are based on a type of architecture called the Von Neumann or Princeton model. Here’s the diagrammatic representation:

Source: Wikipedia. It’s a 74-year-old architecture; Wikipedia was sure to get it right and they had a great diagram ;-)

The interesting thing is that this model was invented as far back as 1945, and computers today still have this architecture. They basically load a sequence of instructions from memory and execute them in the CPU. That hasn’t changed since 1945; we’ve only made computers more sophisticated so that they fetch, decode, and execute programs faster. Even many emerging quantum computers are being built this way. The point is that the architecture is still the same.
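Here is what that fetch-decode-execute cycle looks like as a toy Python simulation. The three-instruction machine below is invented purely for illustration; real CPUs perform the same loop in silicon, just billions of times per second.

```python
# A toy Von Neumann machine: program and data share one memory, and the
# CPU repeatedly fetches an instruction, decodes it, and executes it.

memory = [
    ("LOAD", 5),    # put 5 in the accumulator
    ("ADD", 7),     # add 7 to it
    ("HALT", None), # stop
]
accumulator, program_counter = 0, 0

while True:
    opcode, operand = memory[program_counter]   # fetch
    program_counter += 1
    if opcode == "LOAD":                        # decode + execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # -> 12
```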

With the brain, on the other hand… well, we don’t even know how the brain really works! What we do know is that memory works fundamentally differently in brains and in computers. When I want to recall something like a friend’s full name, I just need to think about a part of what I’m looking for, a first name perhaps, and boom, I remember. It’s not the same with computers. You have to reference an object in computer memory by an address. Imagine you wanted to recall what you ate last Sunday and you had to think of its address code, something like 114562, in order to remember it. :-(
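The contrast is easy to sketch in Python. Address-based lookup needs the exact key, while content-addressable recall, closer to how memory seems to work for us, retrieves a whole memory from a fragment. The addresses, names, and meals below are made up.

```python
# Address-based recall (how computers work): you need the exact address.
memory_by_address = {114562: "jollof rice last Sunday"}
print(memory_by_address[114562])   # works only if you know the address

# Content-addressable recall (closer to the brain): a fragment of the
# content is enough to retrieve the whole memory.
memories = ["John Kofi Mensah", "jollof rice last Sunday", "trip to Accra"]

def recall(fragment):
    return next((m for m in memories if fragment in m), None)

print(recall("John"))    # -> "John Kofi Mensah"
print(recall("Sunday"))  # -> "jollof rice last Sunday"
```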

Well, that’s how computers work right now. This leads us to a limitation of the Von Neumann model:

You can’t do processing in memory

So far, we know that what makes biological neural networks advanced is that neurons are both memory and processing, unlike the Von Neumann machines we use today. Think of the brain as a computer whose CPU, RAM, and all other kinds of storage are the same component. We can call that a processing-in-memory (PiM) model. When genuine PiM architectures are successfully implemented, they will change the way the computer industry operates. Chip, phone, desktop, and laptop manufacturers, even AI companies and researchers, software developers, and other professionals in the industry will need to adapt or die. A lot of research has been put into realizing a PiM architecture, but it’s hard to tell when a breakthrough will occur.

The thing is, the latest advance in artificial intelligence is pretty much artificial neural networks. However, using today’s machines to implement the computational model of a biological neural network leaves you with parallel distributed processing. And one of the reasons for this is that fundamental difference between the memory systems of biological architectures and Von Neumann architectures.
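Concretely, on a Von Neumann machine a neural-network layer is emulated as matrix arithmetic: the weights sit in RAM and are shuttled to the processor for every pass. A minimal sketch, with arbitrary sizes and values:

```python
import numpy as np

# An ANN layer on a Von Neumann machine is just matrix math: weights are
# stored in memory and must be fetched to the CPU/GPU for each computation.

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))   # stored in RAM
inputs = rng.standard_normal(4)         # incoming activations

outputs = np.maximum(0, inputs @ weights)  # fetch, multiply, apply ReLU
print(outputs)
```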

For us to build a true PiM computer that can cause an AI evolution, we may need to deeply answer questions like: “Why does the brain work the way it works?” and “Why is the brain built the way it is?” These are important to understand because one needs to be sure that an imitation of the brain, in modeling such an architecture for AI, will be contextually relevant. It’s something I learned from Marr’s levels of abstraction: they help you know how, and what, you should really be modeling.

Take, for example, the invention of aircraft. For centuries people had attempted to build aircraft by copying birds. It looked easy; all you had to do was glue on some feathers and flap your wings like an eagle, until you kept falling off the cliff. The problem was that people at the time were observing birds from the wrong level of abstraction. It turns out that a bird’s wings conform to the laws of fluid dynamics, and what enabled the first airplane to fly was that it followed the same principles. However, rather than flapping wings, planes are propelled by engines. We took the good parts and ignored inefficient features that couldn’t take us to other domains like outer space.

Now we’re trying this again, this time modeling AI and computers after the human brain. Thus, it’s important to understand that observing the brain from the neural-network level of abstraction is not enough. If we also unlock the secrets of the brain at the memory level of abstraction, perhaps we can model computers and neural networks in a manner that is fundamentally the same as the brain: different in implementation, yet realizing a genuinely convincing form of human intelligence. IBM seems to understand this all too well. They even have a paper on this.

Probably the last thing I will mention is that the brain doesn’t exist on its own; it’s part of a body. As a result, one can argue that merely copying how the brain works, without understanding how the rest of the body functions and how it’s connected to the brain, can lead to a colossal fundamental failure. These are concerns researchers are addressing today in order to overcome the challenges in the evolution of artificial intelligence.

One final distinction about today’s AI I need to mention: while humans require little data to perform numerous tasks, the AI tools of today require massive amounts of data to perform even domain-specific tasks.

This is where I’ll stop. There is far more to say, but this has been a good enough primer to address the concerns many people have about AI. The heights attained by AI are truly remarkable. However, there’s a huge gap between human intelligence and machine intelligence. AI is currently just a tool that’s smart enough to execute a domain-specific task. We’re fortunate to still be in the early stages of its development, and as we make breakthroughs it’s important that we model AI in a manner that augments our capabilities rather than shooting us in the foot.

“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” — Arthur C. Clarke

I'm on Twitter. If you loved this article, follow me for updates on new releases.

Cheers.
