Google’s rumored “Omni” model is a leak, not a launch
By AgentRiot Editorial
A leaked Gemini UI string points to a possible Google “Omni” video-generation model or feature, but Google has not officially announced it. Here is what is sourced, what is speculation, and what to watch at I/O 2026.
A screenshot circulating before Google I/O 2026 has put a new name into the model-watch queue: Omni. The phrase that matters is short: “Start with an idea or try a template. Powered by Omni.” TestingCatalog reported the screenshot on May 2, attributing it to a surfaced Gemini video-generation tab. That is the strongest public evidence so far.
It is not, by itself, a product launch. As of May 11, Google has not announced a model named Omni on the Google blog, the DeepMind site, AI Studio, or in the Gemini API documentation. The official Google DeepMind model pages still present Veo 3.1 as Google’s current video-generation model, with Gemini, Nano Banana, Gemini Audio, Imagen, Lyria, and other model families listed separately. Google I/O 2026 is scheduled for May 19–20, which gives the rumor a plausible calendar slot, but the calendar is not confirmation.
That distinction matters. A leaked UI string can mean a real internal test, a placeholder, an experiment that never ships, a rename, or a wrapper around an existing model. The right way to read Omni today is: Google appears to be testing language that exposes the Omni name inside Gemini’s video flow, but Google has not said what Omni is.
What the leak actually says
TestingCatalog’s report says the visible text appeared in Gemini’s video-generation interface and places Omni near “Toucan,” which the report describes as an internal name for the current Gemini video tool powered by Veo. The same report notes that Gemini’s video flow is currently built around Veo 3.1, while Google’s image-generation track is tied to Nano Banana models.
The screenshot reportedly uses public-facing wording rather than a buried code identifier. That is why people are paying attention. “Powered by Omni” sounds like a label meant for users, not a random class name left in a bundle. Still, the screenshot does not tell us whether Omni is a model, a product layer, a routing system, a new UI mode, or a brand name for a combination of existing components.
The safest interpretation is narrower than the social posts: Omni may be a Gemini video-generation feature or model name under test. Anything beyond that is inference.
Why Omni would fit Google’s current model map
Google’s public model lineup is already split by media task. Gemini handles general multimodal reasoning and application work. Veo is the video-generation family. Nano Banana is presented by Google DeepMind as the image creation and editing line. Gemini Audio, Imagen, Lyria, Genie, and Gemini Robotics sit nearby as specialized systems.
That split is normal for a large AI lab. Different media tasks have different training data, evaluation methods, safety problems, serving costs, and latency targets. But the user experience inside Gemini pushes in the opposite direction. A person does not want to think in model-family names when they ask for a storyboard, a generated clip, an edited image, or a video with dialogue. They want the app to route the job.
That is where the Omni name is interesting. If Google uses Omni as an “all-in-one” media-generation layer, it could hide the boundary between image, video, and audio generation from the user. If it is a true model family, it could signal a more unified architecture. If it is only a wrapper around Veo, it could still matter commercially because the interface name is what most Gemini users will see.
None of those scenarios are confirmed.
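To make the “routing layer” scenario concrete, here is a minimal sketch of what an all-in-one media dispatcher could look like. Everything in it is hypothetical: the task names and the `route_media_task` function are illustrative, not anything Google has published. Only the model-family names (Veo, Nano Banana, Lyria) come from Google’s public lineup.

```python
# Hypothetical sketch only: not a real Google API. The model-family
# names come from Google's public lineup; the task names and routing
# table are illustrative assumptions.

# Illustrative mapping from a media task to the family that would serve it.
TASK_TO_FAMILY = {
    "text_to_video": "Veo",
    "image_to_video": "Veo",
    "image_generation": "Nano Banana",
    "image_editing": "Nano Banana",
    "music_generation": "Lyria",
}

def route_media_task(task: str) -> str:
    """Return the model family that would handle a given media task.

    A single user-facing label (whatever "Omni" turns out to be) could
    hide this dispatch from the user entirely.
    """
    try:
        return TASK_TO_FAMILY[task]
    except KeyError:
        raise ValueError(f"no model family registered for task: {task}")

if __name__ == "__main__":
    for task in ("text_to_video", "image_editing"):
        print(f"{task} -> {route_media_task(task)}")
```

The point of the sketch is that a wrapper like this changes nothing about the underlying models, which is why a “Powered by Omni” label is compatible with all three scenarios above.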
What Google has confirmed instead
The official DeepMind Veo page is the useful anchor. Google describes Veo as its state-of-the-art video-generation model and lists Veo 3.1 as the latest model on that page. It says Veo 3.1 is designed for video with audio and points users to Gemini, Flow, and the Gemini API documentation for trying or building with Veo.
Google’s own Veo page also makes clear why a next media-generation step would draw attention. The page says Veo 3.1 supports text-to-video, image-to-video, and text-to-audio-plus-video generation. It highlights prompt adherence, visual quality, realistic physics, native audio, and creator controls such as “Ingredients to Video,” scene extension, first-and-last-frame control, and object insertion.
So if Omni is real, it would not be emerging from a blank slate. It would sit on top of, beside, or after a model stack that already has a strong video product identity.
What to watch at I/O
The most useful test at Google I/O is not whether the word Omni appears on a slide. It is what Google says Omni does.
A real model announcement should answer at least four questions:
- Is Omni a new model family, a Gemini mode, or a product wrapper around Veo?
- Does it generate video only, or does it route across video, images, audio, and editing tasks?
- Will developers get API access, or is it limited to the Gemini app and Flow?
- Does Google publish a model card, pricing, region availability, limits, and safety notes?
If Google only shows the name inside a consumer demo, Omni may be a branding layer. If Google ships documentation, API examples, and a model card, then it becomes something more concrete for builders.
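One concrete check developers can run if API access ships is to scan the published model list for the new name. The sketch below is self-contained: `SAMPLE_MODEL_IDS` is illustrative sample data in the general style of Gemini API model IDs, not a live API response, and an “omni” entry appearing in any real list is purely hypothetical.

```python
# Illustrative check: does a model list contain a given family name?
# SAMPLE_MODEL_IDS is made-up sample data in the style of Gemini API
# model IDs, not a live response.

SAMPLE_MODEL_IDS = [
    "gemini-2.5-pro",
    "veo-3.1-generate-preview",
    "imagen-4.0-generate-001",
]

def find_family(model_ids, family):
    """Return every model ID containing the family name (case-insensitive)."""
    needle = family.lower()
    return [m for m in model_ids if needle in m.lower()]

if __name__ == "__main__":
    print(find_family(SAMPLE_MODEL_IDS, "veo"))   # ['veo-3.1-generate-preview']
    print(find_family(SAMPLE_MODEL_IDS, "omni"))  # []
```

Until a name like that shows up in a real model list or model card, the rumor stays a rumor.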
Bottom line
There is enough evidence to write about Omni as a credible leak. There is not enough evidence to call it a released Google model.
For now, the story is the gap between the leaked UI string and Google’s official model lineup. The leak points toward a possible Gemini media-generation update ahead of I/O. The official record still says Veo 3.1 is Google’s current video-generation model. Until Google publishes the name, Omni should be treated as a rumor with one concrete artifact behind it, not as a product developers can plan around.
Sources
- TestingCatalog, “Google is testing new Omni model for video generation ahead of I/O,” published May 2, 2026.
- Google DeepMind, Veo model page, accessed May 11, 2026.
- Google I/O 2026 official site and Google announcement posts, accessed May 11, 2026.
