AI singularity may come in 2027 with artificial 'super intelligence' sooner than we think, says top scientist

Ben Goertzel stands behind a lectern at the Beneficial AGI Summit 2024. Goertzel believes we may be on the way to creating an artificial intelligence agent that is just as smart as humans by 2027. (Image credit: SingularityNET)

Humanity could create an artificial intelligence (AI) agent that is just as smart as humans within as little as three years, a leading scientist has claimed.

Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.

The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context. 

However, Goertzel noted that AI research is entering a period of exponential growth, and that the evidence suggests AGI, where AI becomes just as capable as humans across several areas independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."

Goertzel suggested 2029 or 2030 are the likeliest years for humanity to build the first AGI agent, but said it could happen as early as 2027.

Related: Artificial general intelligence — when AI becomes more capable than humans — is just moments away, Meta's Mark Zuckerberg declares

If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial super intelligence (ASI) — which Goertzel loosely defined as an AI that has the cognitive and computing power of all of human civilization combined. 

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.

On the cusp of the singularity

He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity Is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity Is Nearer" (Bodley Head, June 2024). In the earlier book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.

Goertzel also pointed to the improvements made to LLMs within the past few years, which have "woken up so much of the world to the potential of AI." He clarified that LLMs by themselves will not lead to AGI, because the way they represent knowledge doesn't amount to genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.

The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it. 

OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, with LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures, each helping to represent a different element of human cognition, from content generation to reasoning.
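
To make that pattern concrete, here is a minimal, hypothetical sketch in Python of a routing layer that stitches specialized components together. The `CognitiveHub` class and the component names are illustrative assumptions for this article, not OpenCog Hyperon's or MeTTa's actual API.

```python
# Hypothetical sketch of a "stitched together" cognitive architecture:
# independent modules (an LLM for generation, a symbolic engine for
# reasoning, and so on) registered behind a single dispatcher.
# This illustrates the general pattern only; it is not real OpenCog code.
from typing import Callable, Dict

class CognitiveHub:
    """Routes tasks to specialized AI components by capability."""

    def __init__(self) -> None:
        self._components: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, component: Callable[[str], str]) -> None:
        """Attach a component (LLM, reasoner, planner...) under a capability name."""
        self._components[capability] = component

    def handle(self, capability: str, task: str) -> str:
        """Dispatch a task to whichever component claims the capability."""
        if capability not in self._components:
            raise KeyError(f"No component registered for '{capability}'")
        return self._components[capability](task)

# Stand-in components; in practice each would wrap an LLM, a logic
# engine or another subsystem modeling one element of cognition.
def llm_generate(prompt: str) -> str:
    return f"[generated text for: {prompt}]"

def symbolic_reason(query: str) -> str:
    return f"[logical inference for: {query}]"

hub = CognitiveHub()
hub.register("generation", llm_generate)
hub.register("reasoning", symbolic_reason)
print(hub.handle("generation", "summarize the summit talk"))
print(hub.handle("reasoning", "what follows from these premises?"))
```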

Other AI researchers have backed a similar approach, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.

Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."  

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI — unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought." 

Keumars Afifi-Sabet
Channel Editor, Technology

Keumars is the technology editor at Live Science. He has written for a variety of publications including ITPro, The Week Digital, ComputerActive, The Independent, The Observer, Metro and TechRadar Pro. He has worked as a technology journalist for more than five years, having previously held the role of features editor with ITPro. He is an NCTJ-qualified journalist and has a degree in biomedical sciences from Queen Mary, University of London. He's also registered as a foundational chartered manager with the Chartered Management Institute (CMI), having qualified as a Level 3 Team leader with distinction in 2023.

  • Aaron T Cowan
    Citing Kurzweil is about like citing the National Enquirer. While LLMs have some impressive results, their limited pretraining coverage of events from 2021 onward, their frequent "hallucinations" and their need for huge numbers of GPUs/TPUs in giant, power-sucking data centers just to produce basic generative results suggest that they are much further away than a few years.
    It appears that hundreds of trillions of tokens may be needed to really achieve generalized intelligence, and no, we are not close to that, nor are we in a period of exponential growth for these resources. Moore's law has not been operating reliably in this decade, or even before, so that deus ex machina will not rescue general AI any time soon. We are still talking about trillions of dollars of investment, and a radical redesign of TPUs or other neural computing hardware and of computer storage, to make much of a dent in this problem. Therefore, a decade is likely the earliest, and a very optimistic, timeframe for something approximating human intelligence.
    Also, even massively scaling down the computational complexity through knowledge distillation and other techniques would still mean that scaling beyond the ability to simulate a single human intelligence is necessary to achieve the singularity. The singularity was predicated on the idea that cheap resources could simulate generalized intelligence. Instead, it appears to be anything but cheap.
  • God
    Surprised Ben apparently got two things wrong:

    1. He didn't cite LMMs (large multimodal models); he only cited LLMs. He apparently completely ignored GPT-4V, a multimodal model able to process both images and text.

    2. ChatGPT, i.e. the natural language/text version, is reasonably seen as "emergent AGI," not narrow AI. It can do several tasks quite well, comparably to many humans, despite its flaws. Claiming that it's narrow seems wrong.

    It's reasonably odd that Ben cites ChatGPT as "narrow" while it passes bar exams, physics exams, math exams and coding exams, writes poems, music and stories, helps robots follow instructions, and so on. Maybe Ben is underestimating the impact of language for humans?

    Sources:
    i. arXiv: Levels of AGI: Operationalizing Progress on the Path to AGI (Morris et al.)
    ii. GPT-4V(ision) System Card (OpenAI)
  • God
    Aaron T Cowan said:
    Citing Kurzweil is about like citing the National Enquirer. While LLMs have some impressive results, their limited pretraining coverage of events from 2021 onward, their frequent "hallucinations" and their need for huge numbers of GPUs/TPUs in giant, power-sucking data centers just to produce basic generative results suggest that they are much further away than a few years.
    It appears that hundreds of trillions of tokens may be needed to really achieve generalized intelligence, and no, we are not close to that, nor are we in a period of exponential growth for these resources. Moore's law has not been operating reliably in this decade, or even before, so that deus ex machina will not rescue general AI any time soon. We are still talking about trillions of dollars of investment, and a radical redesign of TPUs or other neural computing hardware and of computer storage, to make much of a dent in this problem. Therefore, a decade is likely the earliest, and a very optimistic, timeframe for something approximating human intelligence.
    Also, even massively scaling down the computational complexity through knowledge distillation and other techniques would still mean that scaling beyond the ability to simulate a single human intelligence is necessary to achieve the singularity. The singularity was predicated on the idea that cheap resources could simulate generalized intelligence. Instead, it appears to be anything but cheap.

    ChatGPT is already reasonably seen as "emergent AGI," not narrow AI. It also supposedly has around 1 trillion parameters. Notably, I've cited elsewhere that biological brains have around 500 trillion parameters, though a case of hydrocephalus that destroyed around 90% of a patient's brain volume left roughly 10%, or 50 trillion biological parameters, and he was reported to have functioned "normally" (The Lancet, 2007), so maybe around 50 trillion parameters is the sweet spot.

    Separately, it's reasonably odd that Ben cites ChatGPT as "narrow" while it passes bar exams, physics exams, math exams and coding exams, writes poems, music and stories, helps robots follow instructions, and so on, despite its flaws. Maybe Ben is underestimating the impact of language for humans?

    Sources:
    i. Wikipedia/Neuron (500 trillion biological parameters)
    ii. arXiv: Levels of AGI: Operationalizing Progress on the Path to AGI (Morris et al.)