The way Google discovers, understands, and ranks content has never changed this fast. For years, if you published a podcast episode, a YouTube video series, or audio-first content, you were essentially flying blind when it came to organic search. Google could read a transcript if you provided one. It could scan your metadata if you optimized it. But actually understanding the content itself — the tone, the topic depth, the context — was largely out of reach.
That's no longer the case.
Google's VP of Search, Liz Reid, recently sat down for a detailed podcast conversation where she laid out, in plain terms, how large language models (LLMs) have unlocked two capabilities that are quietly reshaping search as we know it. First, multimodal AI is allowing Google to comprehend audio and video content at a level that simply wasn't achievable a few years ago. Second, and perhaps even more exciting for publishers and brands, Google is actively building toward a search experience that adapts to what individual users actually pay for — surfacing content from the subscriptions they already have instead of burying it beneath results locked behind paywalls they can't access.
At IcyPluto, where we live at the intersection of AI and marketing strategy, these developments aren't just industry news. They are signals. And if you care about visibility, content strategy, or where search is heading in 2025 and beyond, you need to understand what's changing and why it matters right now.
For most of search's history, Google has been a fundamentally text-based engine. Even as video content exploded on the web, Google's ability to evaluate it was largely limited to surrounding text, captions, structured data, and descriptions that creators manually provided. If you didn't write it down for Google, Google mostly didn't know it existed in the content itself.
Liz Reid's recent comments change that picture significantly. She explained that because today's LLMs are multimodal — meaning they can process text, images, audio, and video simultaneously — Google now has the ability to understand what's actually happening inside an audio file or a video clip. Not just the transcription. Not just the metadata. The actual substance: what a video is really about, what style it uses, what level of depth it goes into, and how relevant it is to a searcher's query.
This is a genuinely different kind of capability. Think about what it means in practice. A podcast episode that deep-dives into AI marketing strategy can now be evaluated on the richness of its actual conversation, not just on whether the host remembered to write a good show description. A video tutorial on running Google Ads campaigns can be surfaced for a searcher who needs exactly that — even if the video thumbnail and title aren't perfectly keyword-optimized — because Google can now assess what the video covers internally.
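To make that concrete for anyone who wants to experiment with this class of capability on their own content: multimodal models that accept raw audio are already available to developers. The sketch below assumes the google-generativeai Python SDK and a valid API key; the file name, model choice, and prompt are illustrative placeholders. It demonstrates the kind of understanding Reid described, not Google's internal indexing pipeline.

```python
# A minimal sketch, assuming the google-generativeai Python SDK and an API key.
# File name, model choice, and prompt are placeholders; this illustrates the
# capability class Reid described, not how Google actually indexes content.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the raw audio; no transcript or show notes required.
episode = genai.upload_file(path="episode_042_ai_marketing.mp3")

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    "Describe this podcast episode: its main topics, the depth of expertise "
    "shown, the tone of the conversation, and the audience it best serves.",
    episode,
])

print(response.text)
```

Run against a real episode, a prompt like this returns a characterization of the conversation itself — topics, depth, tone — which is exactly the kind of signal that used to exist only if someone wrote it into the show notes.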
For creators and brands that have invested heavily in non-text content formats, this is the moment things start to shift in your favor. The audience moved to video and audio years ago. Google is now, finally, catching up to where the content actually is.
Reid also tied this multimodal capability to a much bigger opportunity: multilingual search. This is where the story gets especially compelling.
She pointed to users in countries like India, where hundreds of millions of people search in Hindi, Bengali, Tamil, Telugu, and dozens of other languages. The challenge has always been that the web itself is still overwhelmingly English-centric. So if someone in Lucknow searches for detailed information about a health condition, a legal topic, or a financial concept in Hindi, the information they need might exist — just not in their language.
Previously, the solution to this required brute-force translation of vast amounts of web content into every major language, which simply wasn't scalable. LLMs change the equation entirely. Because a modern language model can take information written in English, genuinely understand it at a conceptual level, and then produce an accurate, natural-language response in Hindi — or any other language — the knowledge gap starts to close.
For global brands and marketers, this has significant implications. Content you've created in English may now be surfaced to users in other languages in ways it never was before. Conversely, content that exists in other languages — content you may have never even had translated — could become relevant to your competitive landscape in entirely new markets.
This is exactly the kind of shift that IcyPluto tracks closely. AI isn't just changing how we create content. It's changing who gets to access information, and how search engines bridge the gap between what exists and what people need.
The second major development Reid discussed is arguably even more transformative for the publishing, media, and content business. And it's one that most people in the SEO and marketing world haven't fully processed yet.
Reid described a direction Google is actively moving toward: search results that are aware of what a specific user already subscribes to, and that prioritize surfacing content from those outlets accordingly.
To understand why this matters, consider the current reality. Right now, if you search for coverage of a major financial story, Google's results page might return six or eight articles — most of them sitting behind paywalls you don't have access to. You click, you hit a paywall, you bounce back. That's a frustrating experience for the user, and it's a wasted opportunity for the one publisher whose content you actually could read because you pay for it.
Reid's vision flips that experience on its head. If you're a subscriber to a particular news outlet, Google should know that — and it should make your subscriber content rise to the top of your personal results. The one article you can read should surface above the six you can't. That's not just better UX. That's a fundamentally different model of what personalized search can look like.
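As a thought experiment, the mechanics are easy to picture. The toy sketch below is purely illustrative: the outlets, relevance scores, and flat "subscriber boost" are invented for the example, and this is in no way Google's actual ranking logic. It simply shows how knowledge of a user's subscriptions could reorder otherwise similar results.

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    domain: str
    relevance: float  # base topical relevance, 0 to 1 (illustrative)

def personalized_rank(results, subscriptions, subscriber_boost=0.25):
    """Toy re-ranker: lift results the user can actually read.

    Purely illustrative. A real system would weigh access alongside many
    other signals rather than apply a flat additive boost.
    """
    def score(r):
        return r.relevance + (subscriber_boost if r.domain in subscriptions else 0.0)
    return sorted(results, key=score, reverse=True)

# Hypothetical results for a major financial story.
results = [
    Result("Inside the earnings surprise", "wsj.com", 0.82),
    Result("What the numbers really show", "ft.com", 0.74),
    Result("Five takeaways for investors", "bloomberg.com", 0.71),
]

# Outlets this particular user already pays for.
my_subscriptions = {"ft.com"}

for r in personalized_rank(results, my_subscriptions):
    print(f"{r.domain:15s} {r.title}")
```

In this toy example, the one article the user can actually read rises to the top of their personal results, which is the experience Reid sketched out.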
She acknowledged that the technology is in its early stages and that "small steps" have been taken so far, with more on the way. Google has already begun expanding a Preferred Sources feature for English-language users globally, and has announced functionality that highlights links from users' paid subscriptions — initially in the Gemini app, with AI Overviews and AI Mode planned to follow. Early data from that rollout suggested users who select a preferred source click through to that site at twice the average rate, which is a compelling signal that the model works.
She also touched on micropayments — the idea that a user might pay a small amount to access an individual article from a publisher they don't subscribe to. Historically, micropayment models for content have struggled to gain mainstream adoption, and Reid acknowledged as much. But the fact that it's on Google's radar as a direction worth exploring says something about how seriously the company is thinking about the relationship between audiences, publishers, and search.
If you're building content strategy today — whether for a brand, a media company, or a creator economy platform — these two developments from Google should be reshaping how you think about investment and visibility.
For audio and video content creators, the immediate takeaway is to lean in harder, not wait. If Google is getting better at understanding the substance of audio and video content, then the quality and depth of what you produce matters more than ever. Surface-level content that's optimized purely for keywords isn't going to win when Google can evaluate whether your podcast actually delivers on the expertise it claims. The playing field is shifting toward genuine, in-depth content — which is exactly where strong brands want to compete.
For SEOs and digital marketers, the multimodal shift means your content audit needs to expand. It's no longer sufficient to evaluate only text-based content for search performance. Podcasts, webinars, YouTube series, video tutorials, and even audio overviews need to be considered as indexable, rankable assets. If you haven't started thinking about how your audio and video content is structured, what it covers, and how it signals authority to Google — that work needs to start now.
For publishers with subscription models, the subscription-aware search direction is both an opportunity and a strategic prompt. If Google is going to prioritize surfacing your content to your own subscribers, then subscriber relationships become more valuable — not just for direct revenue, but for search visibility. Growing your subscriber base starts to have an indirect effect on your organic reach for the users who matter most to your business.
For international brands, the multilingual implications of LLM-powered indexing suggest that your content may already be reaching new audiences in new languages without you even knowing it — and that proactively creating or optimizing for non-English markets could yield significant search dividends as Google's multilingual capabilities continue to mature.
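On the practical side of that content audit, one well-established place to start is making sure your video and podcast assets carry the structured data Google already documents for them. The sketch below generates schema.org VideoObject markup for a hypothetical tutorial video; every URL, date, and value is a placeholder to adapt to your own assets, and the schema.org vocabulary also includes a PodcastEpisode type for audio.

```python
# A minimal sketch: emit schema.org VideoObject markup as JSON-LD. All field
# values are placeholders for a hypothetical tutorial video; adapt them to
# your own assets and Google's structured data documentation.
import json

video_markup = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Setting up your first Google Ads campaign",
    "description": "Step-by-step walkthrough of campaign setup, bidding, and reporting.",
    "uploadDate": "2025-01-15",
    "duration": "PT14M30S",  # ISO 8601 duration: 14 minutes 30 seconds
    "thumbnailUrl": "https://example.com/thumbs/ads-tutorial.jpg",
    "contentUrl": "https://example.com/videos/ads-tutorial.mp4",
}

# Embed this in the page that hosts the video.
print('<script type="application/ld+json">')
print(json.dumps(video_markup, indent=2))
print("</script>")
```

Markup like this doesn't replace the deeper multimodal understanding described above, but it gives Google a clean, machine-readable baseline for what the asset is while that understanding matures.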
At IcyPluto, we see this as a defining inflection point. The brands that treat AI-driven search evolution as a reason to double down on quality, authenticity, and genuine audience relationships will be the ones that benefit most from where Google is heading.
It's worth pausing to appreciate how much engineering complexity sits behind the capabilities Reid described. Understanding audio and video content is genuinely hard. For years, speech-to-text systems simply weren't accurate enough to make audio reliably searchable — particularly for proper nouns, regional accents, niche terminology, and fast-paced dialogue. Early experiments in audio search failed often enough that the technology was impractical at scale.

What's changed is the underlying architecture of the models themselves. Multimodal LLMs aren't just transcribing audio — they're building semantic representations of what's being communicated: the topic, the structure, the reasoning, the emotional register, the expertise level. That's a fundamentally richer signal than a keyword-stuffed transcript.
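The difference is easy to demonstrate with off-the-shelf tooling. The sketch below uses the open-source sentence-transformers library — nothing Google-specific, and the model name and example sentences are arbitrary choices for the demonstration — to show how a semantic representation can match a query to a transcript passage that shares almost no keywords with it, while scoring an off-topic passage much lower.

```python
# Illustrative only: semantic similarity vs. literal keyword overlap, using
# the open-source sentence-transformers library. This is not Google's stack;
# the model and example text are arbitrary choices for the demonstration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "how to structure a B2B lead nurture email sequence"
relevant_segment = (
    "We walk through building a drip campaign for enterprise prospects, "
    "spacing out the touches and deciding what each message should teach."
)
off_topic_segment = "Our guest shares her favorite productivity apps for travel."

q_vec, rel_vec, off_vec = model.encode([query, relevant_segment, off_topic_segment])

# The related passage typically scores noticeably higher than the off-topic
# one, despite sharing almost no words with the query.
print("relevant segment:", float(util.cos_sim(q_vec, rel_vec)))
print("off-topic segment:", float(util.cos_sim(q_vec, off_vec)))
```

A keyword match would see nothing in common between "lead nurture email sequence" and "drip campaign for enterprise prospects"; a semantic representation sees that they are about the same thing, which is the richer signal the new models bring to audio and video.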
For video, the capability extends further still. Google can now assess not just what is being said in a video, but what is being shown — the visual context, the presentation style, the relationship between spoken content and on-screen visuals. A video that explains a complex concept using clear diagrams may now be weighted differently from one that delivers the same script without visual reinforcement, because Google can actually see the difference.
This multimodal understanding also connects to Google's broader push into AI Overviews and AI-generated audio summaries. When Google can understand audio and video content deeply enough to evaluate its quality and relevance, it can also synthesize that content into the AI-generated answers it's increasingly serving to users. Your podcast might not just rank in traditional search results — it might become a source that Google draws on when generating an Audio Overview for a related query.
Reid was careful not to commit to specific timelines for most of what she described. The multimodal indexing capabilities she outlined appear to be largely operational already — they're reflecting real changes in how Google processes and evaluates content. The subscription-aware personalization features are directional, with some building blocks already deployed and more in development.
What she did confirm is that Google I/O — scheduled for May 19-20 — is likely to be a significant moment for announcements related to AI and search. She noted that the pace of AI development is fast enough that features being built right now could still make it to the stage, even if they're only finalized in April.
For the broader marketing and SEO community, that means the next 60 to 90 days could bring a wave of new search capabilities that further accelerate everything Reid described. Brands and creators who are already adapting their content strategies for multimodal search will be better positioned to take advantage of those announcements quickly. Those who are still treating search optimization as a purely text-based discipline may find themselves scrambling to catch up.
At IcyPluto, staying ahead of these shifts is exactly what we're built for. The convergence of AI and marketing strategy isn't a future trend — it's the present reality. And the brands that understand it earliest are the ones that will define the next era of digital visibility.
Google's approach to search is evolving faster than at any point in the past decade, and the changes Liz Reid described represent some of the most consequential shifts in how content gets discovered online. Multimodal LLMs are opening up audio and video to meaningful indexing for the first time. Subscription-aware search is beginning to redefine what personalization in search can look like. And AI-powered multilingual understanding is expanding who can access information across language barriers.
These aren't incremental updates. They're structural changes to the architecture of how search works — and they have real implications for every brand, creator, publisher, and marketer competing for visibility in 2025 and beyond.
The question isn't whether to adapt. The question is how fast you move.