Google introduces Google-Extended to let you block Bard, Vertex AI via robots.txt

You can now prevent Bard, Vertex AI generative APIs, and future generations of models from accessing your website, or parts of it.

Google today announced a new “standalone product token,” Google-Extended, that lets you control whether Bard and Vertex AI can access the content on your site.

This seems to be the end result of a “public discussion” Google initiated in July, when the company promised to gather “voices from across web publishers, civil society, academia and more fields” to talk about choice and control over web content.

Bard is Google’s conversational AI tool. Vertex AI is Google’s machine learning platform for building and deploying generative AI-powered search and chat applications.

The announcement. In a blog post, Google said:

“Today we’re announcing Google-Extended, a new control that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including future generations of models that power those products. By using Google-Extended to control access to content on a site, a website administrator can choose whether to help these AI models become more accurate and capable over time.”

– Danielle Romain, Google’s VP of Trust, in “An update on web publisher controls”

What is Google-Extended. Google calls it “A standalone product token that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including future generations of models that power those products.”

The new token has been added to the Google Search Central documentation on web crawlers.

What Google is saying. The company said Google-Extended gives publishers “choice and control”:

  • “Making simple and scalable controls, like Google-Extended, available through robots.txt is an important step in providing transparency and control that we believe all providers of AI models should make available. However, as AI applications expand, web publishers will face the increasing complexity of managing different uses at scale.”

Robots.txt. You can use robots.txt to block Google-Extended from accessing your content, or parts of it. To fully block Google-Extended, add the following to your site’s robots.txt:

User-agent: Google-Extended
Disallow: /
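
You can also limit the block to part of a site by scoping the Disallow rule. A minimal sketch (the /members/ path is just a placeholder for whatever section you want to exclude):

User-agent: Google-Extended
# Placeholder path; substitute the directory you want to exclude
Disallow: /members/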

Why we care. We know that 242 of the 1,000 most popular websites have already blocked GPTBot, OpenAI’s web crawler, since it launched in August. Now you can decide whether your website should opt out of helping Google improve its AI products.
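
If your site already blocks GPTBot, Google-Extended can sit alongside it as another group in the same robots.txt. A minimal sketch blocking both:

# Block OpenAI's GPTBot crawler
User-agent: GPTBot
Disallow: /

# Block Google's Bard / Vertex AI training token
User-agent: Google-Extended
Disallow: /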

Is this the right answer? In “Robots.txt is not the answer: Proposing a new meta tag for LLM/AI,” a Search Engine Land contributor argued that using robots.txt to manage data usage in LLMs is the wrong approach. It seems Google didn’t agree.

Dig deeper. Crawlers, search engines and the sleaze of generative AI companies


About the author

Danny Goodwin
Danny Goodwin has been Managing Editor of Search Engine Land & Search Marketing Expo - SMX since 2022. He joined Search Engine Land in 2022 as Senior Editor. In addition to reporting on the latest search marketing news, he manages Search Engine Land’s SME (Subject Matter Expert) program. He also helps program U.S. SMX events.

Goodwin has been editing and writing about the latest developments and trends in search and digital marketing since 2007. He was previously Executive Editor of Search Engine Journal (from 2017 to 2022), Managing Editor of Momentology (from 2014 to 2016) and editor of Search Engine Watch (from 2007 to 2014). He has spoken at many major search conferences and virtual events, and has been sourced for his expertise by a wide range of publications and podcasts.
