A segment on CBS's weekly in-depth TV news program 60 Minutes last night (also shared on YouTube here) offered an inside look at Google DeepMind and the vision of its co-founder and Nobel Prize-winning CEO, legendary AI researcher Demis Hassabis.
The interview traced DeepMind's rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI): a machine intelligence with human-like versatility and superhuman scale.
Hassabis described today's AI trajectory as being on an "exponential curve of improvement," fueled by growing interest, talent, and resources entering the field.
Two years after a previous 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed to understand not only language but also the physical world around them.
The interview came after Google's Cloud Next 2025 conference earlier this month, in which the search giant introduced a host of new AI models and features centered around its Gemini 2.5 multimodal AI model family. Google came out of that conference appearing to have taken a lead over other tech companies, including OpenAI, in providing powerful AI for enterprise use cases at the most affordable price points.
More details on Google DeepMind's 'Project Astra'
One of the segment's focal points was Project Astra, DeepMind's next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time.
In one demo, it identified paintings, inferred emotional states, and created a story around a Hopper painting with the line: "Only the flow of ideas moving onward."
When asked if it was growing bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance.
Product manager Bibbo Shu underscored Astra's unique design: an AI that can "see, hear, and chat about anything," a marked step toward embodied AI systems.
Gemini: Towards actionable AI
The broadcast also featured Gemini, DeepMind's AI system being trained not only to interpret the world but also to act in it, completing tasks like booking tickets and shopping online.
Hassabis said Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments.
The 60 Minutes team tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at an upcoming return of the pioneering yet ultimately off-putting early augmented reality glasses known as Google Glass, which debuted in 2012 before being retired in 2015?
While specific Gemini model versions like Gemini 2.5 Pro or Flash weren't mentioned in the segment, Google's broader AI ecosystem has recently launched those models for enterprise use, which may reflect parallel development efforts.
These integrations support Google's growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.
AGI as soon as 2030?
When asked for a timeline, Hassabis projected that AGI could arrive as soon as 2030, with systems that understand their environments "in very nuanced and deep ways." He suggested that such systems could be seamlessly embedded into everyday life, from wearables to home assistants.
The interview also addressed the possibility of self-awareness in AI. Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same "squishy carbon matter" as humans.
Hassabis also predicted major developments in robotics, saying breakthroughs could come in the next few years. The segment featured robots completing tasks from vague instructions, like identifying a green block formed by mixing yellow and blue, suggesting growing reasoning abilities in physical systems.
Accomplishments and safety concerns
The segment revisited DeepMind's landmark achievement with AlphaFold, the AI model that predicted the structures of over 200 million proteins.
Hassabis and colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that this advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. "I think one day maybe we can cure all disease with the help of AI," he said.
Despite the optimism, Hassabis voiced clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He emphasized the importance of building in guardrails and value systems, teaching AI as one might teach a child. He also called for international cooperation, noting that AI's impact will touch every country and culture.
"One of my big worries," he said, "is that the race for AI dominance could become a race to the bottom for safety." He stressed the need for leading players and nation-states to coordinate on ethical development and oversight.
The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor, and ultimately reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, "We need new great philosophers to come about... to understand the implications of this system."