Author Topic: Artificial Superintelligence Could Arrive by 2027, Scientist Predicts

We may not have reached artificial general intelligence (AGI) yet, but according to one of the field's leading theorists, it may arrive sooner rather than later.

During his closing remarks at this year's Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won't build human-level or superhuman AI until 2029 or 2030, there's a chance it could happen as soon as 2027.

After that, the SingularityNET founder said, AGI could evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there," Goertzel told the conference audience. "I mean, there are known unknowns and probably unknown unknowns."

"On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," he added.

To be fair, Goertzel is far from alone in attempting to predict when AGI will be achieved.

Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there's a 50/50 chance that humans invent AGI by the year 2028. In a tweet from May of last year, "AI godfather" and ex-Googler Geoffrey Hinton said he now predicts, "without much confidence," that AGI is five to 20 years away.

Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called "singularity," or the point at which AI reaches human-level intelligence and subsequently surpasses it.

Until the past few years, AGI, as Goertzel and his cohort describe it, seemed like a pipe dream. But with the large language model (LLM) advances OpenAI has made since it thrust ChatGPT upon the world in late 2022, that possibility seems ever closer, although he's quick to point out that LLMs by themselves are not what will lead to AGI.

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI — unless the AGI threatens to throttle its own development out of its own conservatism," the AI pioneer added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level."

"It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion," he added, presumably referring to the singularity.

Naturally, there are plenty of caveats to what Goertzel is preaching, not the least of which is that even a superhuman AI would not have a "mind" the way humans do. Then there's the assumption that the technology would continue to evolve along a linear path, as if insulated from the rest of human society and the harms we inflict on the planet.

All the same, it's a compelling theory, and given how rapidly AI has progressed in the past few years alone, his comments shouldn't be dismissed out of hand.

source