The big AI companies promised us that 2025 would be "the year of AI agents." It turned out to be the year of talking about AI agents, and of kicking the can for that transformational moment to 2026 or maybe later. But what if the answer to the question "When will our lives be fully automated by generative AI bots that perform our tasks for us and basically run the world?" is, like that New Yorker cartoon, "How about never?"
That was basically the message of a paper published without much fanfare a few months ago, smack in the middle of the overhyped year of "agentic AI." Entitled "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models," it purports to show mathematically that "LLMs are incapable of carrying out computational and agentic tasks beyond a certain complexity." Though the science is beyond me, the authors (a former SAP CTO who studied AI under one of the field's founding figures, John McCarthy, and his teenage prodigy son) punctured the vision of agentic paradise with the certainty of mathematics. Even reasoning models that go beyond the pure word-prediction approach of LLMs, they say, won't fix the problem.
"There is no way they can be reliable," Vishal Sikka, the father, tells me. After a career that, in addition to SAP, included a stint as Infosys CEO and a seat on Oracle's board, he currently heads an AI services startup called Vianai. "So we should forget about AI agents running nuclear power plants?" I ask. "Exactly," he says. Maybe you can get one to file some papers or something to save time, but you may have to resign yourself to some errors.
The AI industry begs to differ. For one thing, a huge success in agentic AI has been coding, which took off last year. Just this week at Davos, Google's Nobel-winning head of AI, Demis Hassabis, reported breakthroughs in minimizing hallucinations, and hyperscalers and startups alike are pushing the agent narrative. Now they have some backup. A startup called Harmonic is reporting a breakthrough in AI coding that also hinges on mathematics, and tops benchmarks on reliability.
Harmonic, which was cofounded by Robinhood CEO Vlad Tenev and Tudor Achim, a Stanford-trained mathematician, claims this recent improvement to its product, called Aristotle (no hubris there!), is a sign that there are ways to guarantee the trustworthiness of AI systems. "Are we doomed to be in a world where AI just generates slop and humans can't really check it? That would be a crazy world," says Achim. Harmonic's solution is to use formal methods of mathematical reasoning to verify an LLM's output. Specifically, it encodes outputs in the Lean programming language, which is known for its ability to formally verify code. To be sure, Harmonic's focus so far has been narrow: its core mission is the pursuit of "mathematical superintelligence," and coding is a somewhat natural extension. Things like history essays, which can't be mathematically verified, are beyond its boundaries. For now.
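If "formal verification" sounds abstract, here is roughly what the idea looks like in practice. The snippet below is a minimal sketch of machine-checked reasoning in Lean 4, not Harmonic's actual pipeline, and the theorem name is invented for illustration: the claim and its proof are both code, and the compiler refuses to accept the file unless the proof actually goes through.

```lean
-- Minimal sketch (not Harmonic's pipeline): a statement and its proof,
-- both written as code. If the proof were wrong or missing, the Lean
-- compiler would reject the file instead of letting it slide.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- appeals to the library lemma that addition commutes
```

The appeal, as Harmonic describes it, is that output expressed in this checkable form swaps "looks plausible" for "either it compiles or it doesn't."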
Still, Achim doesn't seem to think that reliable agentic behavior is as much of a challenge as some critics believe. "I would say that most models at this point have the level of raw intelligence required to reason through booking a travel itinerary," he says.
Both sides are right, or maybe they are even on the same side. On one hand, everyone agrees that hallucinations will continue to be a vexing reality. In a paper published last September, OpenAI scientists wrote, "Despite significant progress, hallucinations continue to plague the field, and are still present in the latest models." They proved that sad claim by asking three models, including ChatGPT, to provide the title of the lead author's dissertation. All three made up fake titles, and all misreported the year of publication. In a blog post about the paper, OpenAI glumly stated that in AI models, "accuracy will never reach 100 percent."