
If the World Economic Forum in Davos was any indication, AI safety and security will be this year's top priority for AI developers and enterprises alike. But first, we must overcome hype-driven distractions that siphon attention, research, and investment away from today's most pressing AI challenges.

In Davos, leaders from across the technology industry gathered to preview innovations and prophesy what's to come. The excitement was impossible to ignore, and whether it is deserved or not, the annual meeting has built a reputation for amplifying technology hype cycles and serving as an echo chamber for technology optimists. But from my perspective, there was a lot more to it. Amidst all the Davos buzz, many conversations took on the task of assessing critical AI challenges across development and security, and of outlining a path forward. Sam Altman and Satya Nadella addressed the real and present threats of LLM-generated misinformation and deep fakes -- both serious threats as nearly half of the world's population braces for elections this year. I joined a panel session alongside Yann LeCun, Max Tegmark, and Seraphina Goldfarb-Tarrant, where we discussed the need to overcome durable adoption challenges like cost and accessibility, the path to artificial general intelligence (AGI), and how we understand the utility and security of today's AI systems.

With talk of AGI and AI-powered economies continuing beyond Davos, it's easy to lose sight of the challenges looming ahead. But to bring these long-promised AI systems and their impact to life, we must first solve the challenges of the Large Language Models (LLMs) of today and the autonomous AI systems of tomorrow. LLMs have drastically changed the makeup of enterprise technology across industries, and there is no shortage of excitement. However, some have begun to feel disillusioned, questioning which AI prospects are real and which are merely hype.
After all, the benefits of LLMs are matched equally by new and familiar safety and security challenges. The threats of bias and toxicity come to mind. Misinformation and security breaches threaten to disrupt elections and compromise privacy. Deep fakes are set to run rampant this year, claiming victims like Taylor Swift and President Biden with explicit content and impersonations. This is just the tip of a very large iceberg that has yet to surface.

As we forge ahead toward AGI, more challenges will be uncovered, and the solutions to today's challenges will undoubtedly translate to future AI systems. Solutions to combat LLM-generated misinformation today might become the underpinnings of the controls used on AGI systems. Preventative measures to thwart prompt injection and data poisoning will extend far beyond LLMs, too. Putting off the questions and challenges of today ignores the reality that these AI systems are the foundations of future intelligent AI and AGI systems.

Between now and an AGI future, a lot of development remains. In the quest for greater AI-driven productivity, humans remain the limiting factor. That will change in the next evolution of AI. Today's human-to-AI systems will be phased out in favor of AI-to-AI systems as LLMs are refined and become more capable and accurate. Human-in-the-loop approaches will be replaced by light human supervision that merely ensures AI agents are operating as expected. The Internet of Agents (IoA), an interconnected system of intelligent agents with specific assignments, is the natural next step for AI. Imagine a scenario where an AI agent detects a bug within an enterprise application's code, assigns a patch to a coding agent powered by an LLM, and pushes it live through an agent tasked with managing enterprise production environments. This could take several minutes, whereas human intervention could stretch that timeline to hours or even days.
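To make the hand-off concrete, here is a minimal sketch of the bug-to-patch-to-production flow described above. Every agent name and method here is a hypothetical illustration (with the LLM and deployment steps stubbed out), not a real framework or the author's implementation:

```python
from dataclasses import dataclass


@dataclass
class BugReport:
    """What the monitoring agent hands to the coding agent."""
    file: str
    description: str


class MonitorAgent:
    """Detects a bug in an application (stubbed with a fixed finding)."""
    def detect(self) -> BugReport:
        return BugReport(file="billing.py", description="off-by-one in invoice loop")


class CodingAgent:
    """Stands in for an LLM-backed agent that drafts a patch for the report."""
    def write_patch(self, report: BugReport) -> str:
        return f"--- {report.file}\n+ fix: {report.description}"


class DeployAgent:
    """Stands in for the agent managing the production environment."""
    def push_live(self, patch: str) -> str:
        return f"deployed:\n{patch}"


def pipeline() -> str:
    # Agent-to-agent hand-off: detect -> patch -> deploy, no human in the loop.
    report = MonitorAgent().detect()
    patch = CodingAgent().write_patch(report)
    return DeployAgent().push_live(patch)


if __name__ == "__main__":
    print(pipeline())
```

The "light human supervision" described above would slot in as a reviewer that inspects the patch between `write_patch` and `push_live` rather than driving each step.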
Whether we like it or not, the “invisible hand” of the market will push this vision forward. As trust in AI systems builds, enterprise executives and development teams will cede control over these systems in the name of efficiency, productivity, and profitability. More in-depth details are posted on OUR FORUM.