The Rumors Are Growing
The competition around Gen-AI creative tools has never been more intense. In 2026 alone, we have seen multiple model releases from around the world that push AI image and video generation further.
Google I/O 2026 runs May 19 and 20 in Mountain View.
Google's own session list directly mentions media generation, vibe-coding tools, AI Studio, Antigravity and agentic coding.
The big thing to watch is whether Google can turn its AI tools into a smoother creative workflow, not just another model demo.
Voice dictation that doesn't mangle your syntax.
Most dictation tools choke on technical language. Wispr Flow doesn't. It understands code syntax, framework names, and developer jargon — so you can dictate directly into your IDE and send without fixing.
Use it everywhere: Cursor, VS Code, Warp, Slack, Linear, Notion, your browser. Flow sits at the system level, so there's nothing to install per app. Tap and talk.
Developers use Flow to write documentation 4x faster, give coding agents richer context, and respond to Slack without breaking focus. 89% of messages go out with zero edits. Free on Mac, Windows, and iPhone.
AI-Heavy, AI-First
The Rundown
Google I/O 2026 is still a few days away, but the official schedule already gives us a pretty clear idea of where Google wants the conversation to go.
AI is everywhere.
The main Google keynote is on May 19, with the developer keynote later that day. After that, the sessions start to get really interesting.
One session, What's new in Google AI, specifically mentions the latest model capabilities across multimodal AI, media generation and robotics. It also calls out intelligent agents, vibe-coding tools and open-source model workflows.
That is the part I care about. The key questions consumers want answered are:
Can I turn an idea into a working prototype quickly?
Can I generate visuals, video or UI concepts inside the same broader system?
Can I move from a rough experiment in AI Studio into something I can actually use, publish or share?
AI Art and Video
A year ago, Google set a new standard with the release of Veo 3, which brought native audio directly into generations, a first for the field.
Google is keeping a lot under wraps right now. However, since Veo 3 launched, the company has moved fast in the creative space.
Pomelli, Google Flow, Stitch, and Nano Banana have all been released since then, each focused on making Gen-AI creativity fast and effective.
Whether Veo 4 will be announced remains to be seen, but with competitor models like Kling and Seedance advancing quickly, we can expect some big changes to come.
Vibe Coding
This is a major element.
Consumers want to create without having to take the time to code, and Google AI Studio is becoming the one-stop shop for all of this.
AI Studio has seen some updates already ahead of Google I/O, such as the implementation of Nano Banana directly into the tool.
You can now create, design and publish apps quickly and even create multiplayer experiences.
With the Google Antigravity app on our devices, could we see deeper integration with AI Studio that lets us build apps and experiences directly on-device?
Could we even do this on our phones?
I’m Excited
Google I/O is going to be significant, and I will be there live to bring you all the information.
Follow all the Google pages on 𝕏 or on Instagram.
I’ll be posting IG stories and 𝕏 posts live from Google I/O, especially at the keynote.
AI TOOL DISCOUNTS
Don’t pay full price; use these codes:
Runway Code: JERROD25