Google Launches Real-Time AI Video Features for Gemini

Google has begun rolling out new AI capabilities for Gemini Live that enable it to “see” what’s on your screen or through your smartphone camera and respond to questions in real time, Google spokesperson Alex Joseph confirmed in an email to The Verge. These features, which have been in the works for almost a year, were first previewed under the name “Project Astra.”

A Reddit user reported that the new features appeared on their Xiaomi phone, as noted by 9to5Google. In a follow-up post, the user showcased Gemini’s new screen-reading function in a video. This is one of two new features Google said in early March would become available later this month to Gemini Advanced subscribers on the Google One AI Premium plan.

The second major feature now rolling out is live video processing, which lets Gemini interpret the feed from your smartphone camera and answer questions about it in real time. In a demo video released this month, a user asks Gemini for advice on choosing a paint color for their freshly glazed pottery, illustrating a practical use of the live video feature.

Google’s rollout of these features underscores its lead in the AI assistant race, especially as Amazon prepares a limited early release of its Alexa Plus upgrade and Apple delays the launch of its upgraded Siri. Both services are expected to offer capabilities similar to those introduced with Astra. Meanwhile, Samsung continues to maintain its Bixby assistant, though Gemini remains the default assistant on Samsung devices.

Source

Control F5 Team
Blog Editor