The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.
“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves; they will have it done to them. So it is very much in their interest to have the countries that are going to build it talk to each other.”
The countries thought most likely to build AGI are, of course, the US and China, and yet those nations seem more intent on outmaneuvering each other than on working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.
Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.
“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.
The development of increasingly capable AI models, some of which have surprising abilities, has prompted researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes called “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.
The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.