Apparently not … or rather: some say yes, some say no. Artificial General Intelligence (AGI) is defined as AI that can perform any intellectual or cognitive task a human can, and perform it better, faster, and cheaper; by that definition, it has not arrived yet. Some experts predict AGI could be 5–10 years away; others believe it is still decades away, or potentially unreachable. Still others argue that scaling up current approaches will not lead to AGI at all, and that a new, revolutionary breakthrough is required.
Therefore, current AI is categorized by many experts as Artificial Narrow Intelligence (ANI) rather than general, human-level intelligence.
(General intelligence should not be confused with superintelligence: Artificial Super Intelligence describes systems that are smarter than humans at everything.)
Large Language Models (LLMs) can mimic human communication, but the key technical, physical, and conceptual hurdles hindering the arrival of AGI (a non-exhaustive list) include:
- LLMs are spectacularly powerful, but they still struggle to reason reliably about new, complex problems, especially those involving long-term, multi-step tasks. True understanding, common sense, and independent human-like reasoning are often absent. It is important to note that intelligence (processing information) is not the same as intellect, which involves higher-order thinking and abstract reasoning.
- Current models also lack a comprehensive understanding of the world we live in: they have no personal experience and struggle to grasp cause and effect. They still lack flexibility and contextual understanding.
- They still cannot adapt as fluidly as humans do by, for example, updating their worldview in real time, or reliably plan beyond their training data. An AI as flexible as a human would learn from experience and adapt to new problems. One reason seems to be that the transformer architecture powering most modern AI treats data as a static "snapshot" rather than continuously updating its understanding in real time, as a human does.
- There is a difference between the human mind and the human intellect; although they act as one system, the distinction is not easy to understand, much less duplicate. According to Google: the mind is the "operating system" experiencing the world, whereas intelligence is a tool for navigating it efficiently.
- AI systems also display no human-like autonomous learning or long-horizon planning.
- True AGI will require the ability to learn new, unfamiliar tasks without extensive retraining, a feat current systems cannot accomplish autonomously; they still rely on heavy training. They also do not yet display recursive self-improvement without human intervention.
- Although some models have convincingly passed the Turing Test (where a machine can pass as human in text-based conversations with humans), and although AI shows rapid advances in specialised tasks and generative AI, current AI models cannot match human intelligence across the full spectrum of cognitive abilities and domains: mathematics, language, science, practical reasoning, creative tasks, inventing new concepts.
- An AI language model cannot perform complex financial analyses and at the same time compose music or analyse medical scans. It can generate text, write code, and create images, but all within clearly defined boundaries. Current narrow AI models cannot match general human intelligence across the full spectrum of cognitive abilities because all of these skills are not combined, let alone integrated, in one system; a truly general system would have to train and perform across all of these domains simultaneously.
- Many AI models operate as "black boxes," meaning they cannot explain how they reached a specific decision.
- Current AI technology excels at interpolating within its training data but fails at out-of-distribution generalisation (handling novel situations).
- Although current AI is not perfect, no human is perfect either.
- On the question of hallucinations: even advanced models hallucinate (fabricate information), but human error does not preclude intelligence, and it should therefore not disqualify general intelligence in machines. On the question of 'jagged' intelligence: AI can perform expert-level tasks in some areas while failing at basic, commonsense tasks, but humans do that too.
- Despite the hype, there is no universally agreed-upon path to AGI. Many researchers argue that scaling current technology will not suffice; fundamental breakthroughs in our understanding of intelligence still seem to be required.
- Another legitimate hurdle to AGI is whether AI development is hitting a wall on high-quality data. Models have already consumed most of the public internet's text, and scaling further requires massive energy infrastructure, calling into question the sustainability of current AI scaling laws.
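The interpolation-versus-extrapolation point above can be sketched with a toy example (a hypothetical stand-in, not a claim about how LLMs work internally): a flexible function approximator fitted only on inputs from a limited range tracks the target closely inside that range, but diverges badly outside it.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Toy stand-in for "a model": fit only on inputs x in [0, 10]
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, 200)
y_train = np.sin(x_train)

# Degree-9 least-squares polynomial as the function approximator
model = Polynomial.fit(x_train, y_train, deg=9)

# In-distribution: inputs inside the training range
x_in = np.linspace(0.5, 9.5, 50)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Out-of-distribution: inputs outside the training range
x_out = np.linspace(12, 15, 50)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(f"max error inside training range:  {err_in:.4f}")
print(f"max error outside training range: {err_out:.4f}")
```

Inside the training range the fit is accurate; just beyond it, the error grows by orders of magnitude. The analogy is loose, but it captures why "handling novel situations" is a different, harder problem than performing well on familiar ones.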
In conclusion, two questions remain:
- Despite the lack of true AGI, artificial systems are rapidly evolving in leaps, not baby steps. Is AGI a single event, or rather a gradual transition?
- It is undeniable that entire industries are being disrupted and that boundaries are expanding all around us. Will it ever be possible to go back?