3 things we learned from this interview with Google DeepMind's CEO, and why Astra could be the most exciting AI smart glasses
Date:
Mon, 21 Apr 2025 20:30:00 +0000
Description:
Project Astra and Google DeepMind preview.
FULL STORY ======================================================================
Google has been hyping up its Project Astra as the next generation of AI for months. That set some high expectations when 60 Minutes sent Scott Pelley to experiment with Project Astra tools provided by Google DeepMind.
He was impressed with how articulate, observant, and insightful the AI turned out to be throughout his testing, particularly when it not only
recognized Edward Hopper's moody painting "Automat," but also read into the woman's body language and spun a fictional vignette about her life.
All this through a pair of smart glasses that barely seemed different from a pair without AI built in. The glasses serve as a delivery system for an AI that sees, hears, and can understand the world around you. That could set the stage for a new smart wearables race, but that's just one of many things we learned during the segment about Project Astra and Google's plans for AI.
Astra's understanding
Of course, we have to begin with what we now know about Astra. First, the
AI assistant continuously processes video and audio from connected cameras
and microphones in its surroundings. The AI doesn't just identify objects or transcribe text; it also purports to spot and explain emotional tone, extrapolate context, and carry on a conversation about the topic, even when you pause for thought or talk to someone else.
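To make "continuously processes" concrete, here is a minimal sketch (in Python) of the kind of rolling-context loop such an assistant implies. The MultimodalAssistant class, its methods, and the capture loop are hypothetical illustrations built on our own assumptions, not Google's actual Astra code.

    import collections
    import time

    class MultimodalAssistant:
        """Hypothetical stand-in for an Astra-style assistant.

        It keeps a rolling buffer of recent camera frames and transcribed
        speech, so it can answer questions about things it saw moments
        ago, even if the user paused or spoke to someone else in between.
        """

        def __init__(self, memory_seconds: float = 30.0, fps: float = 1.0):
            # Rolling short-term memory: old observations fall off the end.
            self.frames = collections.deque(maxlen=int(memory_seconds * fps))

        def observe(self, frame: bytes, transcript: str) -> None:
            # Continuously ingest timestamped sensor data from the glasses.
            self.frames.append((time.time(), frame, transcript))

        def answer(self, question: str) -> str:
            # A real system would feed the buffered context plus the question
            # into a multimodal model; this stub just reports how much
            # context it would draw on.
            context = [t for _, _, t in self.frames if t]
            return f"(model response to {question!r} using {len(context)} context items)"

    assistant = MultimodalAssistant()
    for i in range(5):  # simulated capture loop; real frames come from the hardware
        assistant.observe(frame=b"<jpeg bytes>", transcript=f"speech chunk {i}")
    print(assistant.answer("What am I looking at?"))

The point is the shape: perception runs all the time, and the conversation draws on a short-term memory of what the camera saw rather than on a single snapshot.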
During the demo, Pelley asked Astra what he was looking at. It instantly identified Coal Drops Yard, a retail complex in King's Cross, and offered background information without missing a beat. When shown a painting, it
didn't stop at "that's a woman in a cafe." It said she looked "contemplative." And when nudged, it gave her a name and a backstory.
According to DeepMind CEO Demis Hassabis, the assistant's real-world understanding is advancing even faster than he expected; he noted it is better at making sense of the physical world than the engineers thought it would be at this stage.
Veo 2 views
But Astra isn't just passively watching. DeepMind has also been busy teaching AI how to generate photorealistic imagery and video. The engineers described how, two years ago, their video models struggled to understand that legs are attached to dogs. Now, they showcased how Veo 2 can conjure a flying dog with flapping wings.
The implications for visual storytelling, filmmaking, advertising, and yes, augmented reality glasses, are profound. Imagine your glasses not only
telling you what building you're looking at, but also visualizing what it looked like a century ago, rendered in high definition and seamlessly integrated into the present view.
Genie 2
And then there's Genie 2, DeepMind's new world-modeling system. If Astra understands the world as it exists, Genie builds worlds that don't. It takes a still image and turns it into an explorable environment visible through the smart glasses.
Walk forward, and Genie invents what lies around the corner. Turn left, and
it populates the unseen walls. During the demo, a waterfall photo turned into a playable video game level, dynamically generated as Pelley explored.
DeepMind is already using Genie-generated spaces to train other AIs: one agent learns to navigate a world made up by another AI, and in real time, too. One system dreams, another learns. That kind of simulation loop has huge implications for robotics.
In the real world, robots have to fumble their way through trial and error. But in a synthetic world, they can train endlessly without breaking furniture or risking lawsuits.
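That "one system dreams, another learns" loop is easy to picture in code. Here is a rough sketch, again in Python; DreamWorld and NavigationAgent are hypothetical stand-ins for a Genie-style world model and a learner, not DeepMind's actual systems.

    class DreamWorld:
        """Hypothetical Genie-style world model: seeded by one still image,
        it invents the next state on demand as the agent acts."""

        def __init__(self, seed_image: str):
            self.seed_image = seed_image  # the single photo that seeds the level
            self.position = 0

        def step(self, action: str):
            # Generate the unseen next state plus a reward signal.
            self.position += 1 if action == "forward" else -1
            reward = 1.0 if self.position > 0 else -1.0
            return self.position, reward

    class NavigationAgent:
        """Hypothetical learner that trains entirely inside the dream."""

        def __init__(self):
            self.preference = 0.0  # crude learned bias toward moving forward

        def act(self) -> str:
            return "forward" if self.preference >= 0 else "back"

        def learn(self, reward: float) -> None:
            self.preference += 0.1 * reward  # reinforce rewarded actions

    world = DreamWorld(seed_image="waterfall.jpg")
    agent = NavigationAgent()
    for episode in range(100):
        state, reward = world.step(agent.act())  # the world model dreams
        agent.learn(reward)                      # the agent learns from it
    print(f"learned forward preference: {agent.preference:.2f}")

A real setup would swap the toy reward for generated imagery and a proper reinforcement-learning algorithm, but the shape of the loop, a generator dreaming states and a learner consuming them, is the part that matters, and no furniture gets broken along the way.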
Astra eyes
Google is trying to get Astra-style perception into your hands (or onto your face) as fast as possible, even if it means giving it away.
Just weeks after launching Gemini's screen-sharing and live camera features as a premium perk, Google reversed course and made them free for all Android users. That wasn't a random act of generosity. By getting as many people as possible to point their cameras at the world and chat with Gemini, Google gets a flood of training data and real-time user feedback.
There is already a small group of people wearing Astra-powered glasses out in the world. The hardware reportedly uses micro-LED displays to project
captions into one eye and delivers audio through tiny directional speakers near the temples. Compared to the awkward sci-fi visor of the original Glass, this feels like a step forward.
Sure, there are issues with privacy, latency, and battery life, and the not-so-small question of whether society is ready to see people walking around in semi-omniscient glasses without mocking them mercilessly.
Whether or not Google can make that magic feel ethical, non-invasive, and stylish enough to go mainstream is still up in the air. But the idea that 2025 is the year of smart glasses seems more accurate than ever.
======================================================================
Link to news story:
https://www.techradar.com/computing/artificial-intelligence/3-things-we-learned-from-this-interview-with-google-deepminds-ceo-and-why-astra-could-be-the-most-exciting-ai-smart-glasses
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)