Google is rolling out new tools to help Gemini users identify whether an image was created or altered by AI. Starting today, anyone using the Gemini app can upload a picture and simply ask, “Is this AI-generated?” to find out if it was produced or edited using Google’s own AI systems.
For now, the feature works only with images, but Google says support for video and audio verification is “coming soon.” The company also plans to bring these detection capabilities beyond the Gemini app and into products like Google Search.
A bigger upgrade is expected later, when Google expands verification to include C2PA industry-standard content credentials. Today’s checks rely solely on SynthID, Google’s invisible AI watermark. Adding C2PA would make it possible to identify AI-generated content from a broader ecosystem of tools and creative platforms — including models like OpenAI’s Sora.
Google also confirmed that its newly announced Nano Banana Pro model will embed C2PA metadata in every generated image. It’s the second major boost for the standard this week, following TikTok’s decision to adopt C2PA metadata for its own invisible watermarking system.
While manual verification inside Gemini is a welcome improvement, the real impact will come when social platforms begin detecting and flagging AI-generated content automatically, instead of relying on users to check each image themselves.