Can AI truly "reason," as OpenAI and DeepSeek claim?
Can AI Really Think Like Us?
The world of AI is evolving at a breakneck pace, with companies like OpenAI and DeepSeek making bold claims about their models' ability to "reason." But what does that really mean, and should we believe it?
The Buzz Around AI Reasoning
Lately, AI companies have been touting their new models as capable of reasoning, similar to how humans solve problems. This isn't just about spitting out quick answers, the way ChatGPT does. Advanced models such as OpenAI's o1 and DeepSeek's R1 aim to tackle a problem by breaking it into smaller parts and working through them step by step, a process known as "chain-of-thought reasoning."
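To make that concrete, here is a minimal sketch of chain-of-thought prompting using the official OpenAI Python client. The model name and word problem are illustrative assumptions, and dedicated reasoning models like o1 and R1 do this step-by-step work on their own rather than needing to be asked:

```python
# A minimal sketch of chain-of-thought prompting, assuming the official
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY set in
# the environment. The model name and word problem are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

direct_prompt = "A train leaves at 3:40 pm and arrives at 6:15 pm. How long is the trip?"

# Asking for intermediate steps is the core of the technique: the model
# writes out its working before committing to a final answer.
cot_prompt = direct_prompt + " Think step by step, then state the answer."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model accepts this prompt
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```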
These models have shown impressive results in some areas, like solving logic puzzles and acing math tests. However, they still stumble on simpler tasks, leaving experts divided on whether these models truly "reason."
What Is Reasoning, Anyway?
When AI companies talk about reasoning, they refer to a model's ability to deconstruct a problem and work through it methodically. Yet, this definition is more limited than what most people think of as reasoning. Human reasoning involves various types, such as deductive, inductive, and analogical reasoning. AI's current capabilities might resemble just one slice of this complex pie.
A key feature of human reasoning is the ability to learn and apply rules from limited experiences. For instance, even young children can generalize from a few examples. Can AI do the same? Skeptics argue that AI's ability to generalize is limited, while others believe these models are on the path to achieving it.
The Skeptics' View
Some experts, like Shannon Vallor, argue that AI models aren't genuinely reasoning. Instead, they mimic human processes seen in their training data. This imitation might make them appear intelligent, but when these models fail at simple tasks, it suggests they aren't truly reasoning.
Melanie Mitchell, another expert, highlights that while models like OpenAI's o3 perform well on tests, they use enormous computational power without transparency on how they achieve results. This lack of clarity raises doubts about whether these models are genuinely reasoning or just using shortcuts, known as heuristics.
The Believers' Perspective
On the other hand, some experts, like Ryan Greenblatt, see signs of reasoning in these models. Although they lean on memorization more than humans do, they still solve complex problems that go beyond their training examples. This indicates some level of reasoning, albeit different from human reasoning.
Ajeya Cotra suggests that AI models combine memorization with reasoning, similar to a diligent student who memorizes numerous equations and figures out which to apply. This approach allows them to tackle a wide range of tasks, even if they occasionally miss the mark.
The Concept of "Jagged Intelligence"
AI's reasoning abilities have been described as "jagged intelligence." This means they can excel at complex tasks while struggling with simpler ones. Unlike humans, whose abilities tend to rise and fall together across tasks, AI can be exceptional in one area and lacking in another.
Using AI Wisely
For now, it's crucial to understand AI's strengths and limitations. AI is best suited for tasks where solutions can be easily verified, like coding or website creation. In areas without clear answers or where stakes are high, it's wise to use AI as a brainstorming partner rather than relying on it for definitive solutions.
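To see why verifiability matters, here is a minimal sketch in Python. The slugify function is hypothetical, standing in for code a model might write for you; the assertions are the cheap, mechanical check that makes coding a good fit for AI assistance:

```python
# A minimal sketch of verifying AI-generated code before trusting it.
# slugify stands in for any function a model wrote; its implementation
# and the test cases below are illustrative.
def slugify(title: str) -> str:
    # Suppose a model produced this: keep alphanumerics, lowercase
    # everything, and join words with hyphens.
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

# Because the task has a checkable answer, a few assertions catch
# mistakes immediately, no trust in the model required.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Can AI Really Think?  ") == "can-ai-really-think"
assert slugify("100% true") == "100-true"
print("all checks passed")
```

If an assertion fails, you know right away, which is exactly the kind of feedback loop that high-stakes, open-ended questions lack.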
As AI continues to evolve, its role in our lives will grow. For now, knowing when and how to use it effectively is key to harnessing its potential without overestimating its capabilities.