Google is rolling out a series of updates to its Search experience, introducing a new layout for sponsored results and expanding its AI-powered image generation capabilities across Search, NotebookLM, and Photos. The updates, announced on October 13, are part of Google’s broader strategy to integrate generative AI into its core products while offering users more control over how ads and content are displayed.
The most notable change affects how text ads appear in Google Search results. Sponsored listings will now be grouped together under a single “Sponsored results” label, which remains visible as users scroll through the page. The redesign aims to make it clearer which results are paid placements while improving navigation at the top of search results.
Google is also adding a new feature that lets users hide sponsored results entirely. With a single click, users can collapse ad sections to view only organic results. The new format maintains the same ad size and limits the number of text ads to four per grouping. The “Sponsored” label will also extend to other ad units, including Shopping ads, with the update rolling out globally across both desktop and mobile devices.
According to Google, internal testing showed that the new layout helped users navigate search results more easily. However, the change could affect how advertisers value top-of-page placements, particularly for branded keywords where visibility is often a key metric. With clearer labeling and the ability to hide ads, click-through rates may shift as user behavior adapts to the new interface.
In addition to the ad redesign, Google is bringing its Nano Banana AI image model — part of the Gemini 2.5 Flash suite — directly into Search. The model allows users to generate and modify images as part of the search process. Through the Google app, users can open Lens, upload or capture a photo, and use a new “Create” mode to transform images with AI. These generated visuals can then serve as the basis for further searches, helping users find visually similar items or refine creative queries.
Originally launched in August, Nano Banana has already been used to create more than 5 billion images in the Gemini app. The model is now also embedded in NotebookLM, where it supports new illustration styles, such as watercolor and anime, and generates contextual visuals for summaries and video overviews. Google confirmed that Nano Banana will also be added to Google Photos in the coming weeks, expanding its reach within the company's ecosystem of visual tools.
Google is further integrating AI into Discover, its personalized content feed, by providing short AI-generated summaries of trending topics. The feature offers brief previews that can be expanded for more details and includes direct links to relevant sources. It is currently available in the U.S., South Korea, and India.