Google To Begin Rolling Out Generative AI-Powered Search Functions

Sundar Pichai, CEO of Google and Alphabet (Image: Google Newsroom)

In a bid to stay ahead of the competition, particularly OpenAI, Google announced this week that it will roll out a “fully revamped” search experience powered by Alphabet’s powerful AI model, Gemini.

Initially, the function will be rolled out to all users in the US, though it will be available in other countries “soon”.

The release was announced by Sundar Pichai, CEO of Google and Alphabet, at its annual developer conference in Mountain View, California, on Tuesday.

“Google search is generative AI at the scale of human curiosity,” Pichai said while announcing the new features at the I/O summit.

Some searches will now come with "AI overviews": narrative responses that save people the task of clicking through multiple links.

For those who will be able to access Google’s new search experience in the US, an AI-powered panel will appear right underneath the Google search bar and will present summarised information sourced from Google search results across the web.

Google added that it would also roll out an AI-organized page that groups results by theme or presents, say, a day-by-day plan for people turning to Google for specific tasks, such as putting together a meal plan for the week or finding a restaurant to celebrate an anniversary, reported Bloomberg.

In a bid to keep the platform and the technology safe, Google will not trigger AI-powered overviews for sensitive queries, such as those about medical information or self-harm.

The rush to roll out the new feature comes as Google comes under increasing pressure from the likes of Anthropic and OpenAI. The latter’s ChatGPT has been a runaway success. On Monday OpenAI announced GPT-4o, a faster and cheaper AI model that will power its chatbot. The new AI model will now let people speak to ChatGPT or show it an image, and receive a response within milliseconds.

Google’s attempts to match the development pace of its peers in the generative AI search space are crucial to maintaining its search advertising revenues: its core search business delivered more than A$264.19 billion in search advertising last year.

The future of Google’s AI models looks promising though. A new “visual search” feature coming soon to Google’s opt-in Search Labs experiment will, for example, allow people to take a video of a malfunctioning gadget and ask Google for an AI overview to help them troubleshoot the problem.

Google also demonstrated Project Astra, a prototype of an AI assistant that can process video and respond in real time. In a prerecorded video demo, an employee walked through an office as the assistant used the phone’s camera to “see,” responding to questions about what was in the scene. The program correctly answered a question about which London neighbourhood the office was located in, based on the view from the window, and also told the employee where she had left her glasses.

Google has updated its suite of AI models. On Tuesday, it announced Gemini 1.5 Flash, which it says is the fastest AI model available through its application programming interface, or API, used by programmers to automate high-frequency tasks like summarizing text, captioning images or video, or extracting data from tables.

It also unveiled updates to Gemini Nano, Google’s smallest AI model, expanding it beyond text inputs to include images, and introduced Gemma 2, a new version of its family of open models. It noted that its Gemini 1.5 Pro AI model had also achieved better benchmark results.

Developers can use Gemini 1.5 Pro to process more text, video and audio at a time: up to 2 million “tokens,” or pieces of content, which equates to around two hours of video, 22 hours of audio or over 1.4 million words. Google says that capacity outpaces competitors including OpenAI.

Google also announced a new video generation model it is calling Veo, which generates high-quality videos longer than a minute. Google will bring some of Veo’s capabilities to YouTube Shorts and other video products “in the future.”

To ensure its hardware can keep pace with these AI functions, Google revealed a new version of its in-house-designed chip for data centres: the sixth generation of its TPU, or Tensor Processing Unit, which it says will be 4.7 times faster than its predecessor.

Google wants to position Gemini as an everyday assistant in users’ daily lives. Users who pay US$20 a month for Google’s AI premium subscription plan will gain access to a version of Gemini that can process 1 million tokens, or about 700,000 words, at once; that is enough for the AI model to summarize around 100 emails. A new feature called Gemini Live will let Google’s premium subscribers speak naturally with the company’s AI software on their mobile devices.