What’s the Difference Between Strong AI and Weak AI?



Artificial Intelligence (AI) is a very common term today, but what present-day AI actually is and what most people think it is can be very different. The AI you know is “weak” AI, but the AI many fear is “strong” AI.

What Is Artificial Intelligence Actually?

It’s easy to throw around a term like “AI”, but that doesn’t make it clear what we’re really talking about. In general, “artificial intelligence” refers to a whole field in computer science. The goal of AI is to get computers to replicate what natural intelligence can accomplish. That includes human intelligence, the intelligence of other animals, and the intelligence of non-animal life such as plants, single-celled organisms, and anything else that has some form of intelligence.

There’s a deeper question under this topic, and that’s what “intelligence” is in the first place. The truth is that even the scientists who study intelligence can’t agree on a universal definition of what is or isn’t intelligence.

Broadly, it’s the ability to learn from experience, make decisions, and accomplish goals. Intelligence allows for adaptation to new situations, so it’s distinct from pre-programming or instinct. The more complex the problems you can solve, the more intelligence you have.

We still have a lot to learn about intelligence in humans, despite having many different ways of measuring it. We aren’t even sure how human intelligence works under the hood. Some theories, such as Gardner’s Theory of Multiple Intelligences, have been thoroughly debunked, while there is lots of evidence to support a general intelligence factor in humans (referred to as the “g factor”).

In other words, the details of intelligence, both natural and artificial, are still evolving. While we might feel like we intuitively know intelligence when we see it, it turns out that drawing a neat circle around the idea of intelligence is tricky!

The Age of Weak AI Is Here

The AI we have today is commonly referred to as “weak” or “narrow” AI. This means that a specific AI system is very good at doing just one task or a narrow set of related tasks. The first computer to beat a human being at chess, Deep Blue, was totally useless at anything else. Fast forward to the first computer to beat a human at Go, AlphaGo, and it’s orders of magnitude smarter, but still only good at one thing.
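Narrowness is easy to see in miniature. Here’s a toy sketch (not from the article, and vastly simpler than Deep Blue’s actual chess search): a plain minimax search that plays perfect tic-tac-toe. It is unbeatable at exactly this one game and can do absolutely nothing else.

```python
# Toy "narrow AI": exhaustive minimax for tic-tac-toe.
# Perfect at one tiny task, useless at every other one.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # opponent's best reply
        board[m] = " "
        if -score > best_score:              # negate: their loss is our win
            best_score, best_move = -score, m
    return best_score, best_move

# With perfect play on both sides, tic-tac-toe is always a draw.
score, move = minimax([" "] * 9, "X")
print(score)  # 0
```

Deep Blue’s alpha-beta search over chess positions was enormously deeper and ran on custom hardware, but the underlying idea is the same: a search procedure specialized to one game, with no ability to generalize beyond it.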

All of the AI you encounter, use, or see today is weak. Sometimes different narrow AI systems are combined to form a more complex system, but the result is still effectively narrow AI. While these systems, especially ones that focus on machine learning, can produce unpredictable results, they aren’t at all like human intelligence.

Strong AI Doesn’t Exist


AI that’s equivalent or superior to human intelligence doesn’t exist outside of fiction. Movie AIs such as HAL 9000, the T-800, Data from Star Trek, or Robby the Robot are seemingly conscious intelligences. They can learn to do anything, function in any situation, and generally do anything a human can, often better. This is “strong” AI or AGI (Artificial General Intelligence), essentially an artificial entity that is at least our equal and would most likely surpass us.

As far as anyone knows, there is no real-world example of “strong” AI. Unless it’s hiding in a secret laboratory, that is. The fact is, we wouldn’t even know where to start building an AGI. We have no idea what gives rise to human consciousness, which would presumably be a core emergent feature of an AGI, something referred to as the hard problem of consciousness.

Is Strong AI Possible?

No one knows how to make an AGI, and no one knows if it’s even possible to create one. That’s the long and short of it. However, we are proof that strong general intelligence exists. Assuming that human consciousness and intelligence are the results of material processes under the laws of physics, there’s no reason in principle that an AGI couldn’t be created.

The real question is whether we’re smart enough to figure out how it can be done. Humans may never advance enough to give birth to AGIs, and there’s no way to put a timeline on this technology the way we can say that 16K displays will be available in a few years.

Then again, our narrow AI technologies and other branches of science, such as genetic engineering, exotic computing with quantum mechanics or DNA, and advanced materials science, might help us bridge the gap. It’s all pure speculation until it either suddenly happens by accident or we have some sort of roadmap.

Then there’s the question of whether we should strive to create AGIs. Some very smart people, such as the late Professor Stephen Hawking and Elon Musk, are of the opinion that AGIs will lead to apocalyptic ends.

Considering how far-fetched AGIs seem, those worries might be a little overblown, but maybe be nice to your Roomba, just to be safe.
