Google recently made a small but important change to its robots.txt documentation. The update, which added just a few words, clarifies how Google interacts with websites.
Specifically, Google stated that certain fields in robots.txt files, like “crawl-delay,” are not supported. If you’ve been using this directive to control how Google crawls your site, this change directly affects you.
But what exactly does this mean for your website, and why did Google make this clarification now? Let’s break it down.
What Did Google Change?
Google’s update is short but significant. In their robots.txt documentation, they added the following clarification: “(other fields such as crawl-delay aren’t supported).”
These few words make it clear that if you’re using unsupported fields like “crawl-delay,” Google simply ignores them.
https://x.com/glenngabe/status/1843336321110311219?s=46
For years, many website owners and SEO experts have included the “crawl-delay” directive in their robots.txt files to manage how quickly search engines crawl their sites. The idea was to prevent search engines from overloading servers by pacing the number of requests they made.
However, with this change, Google is setting the record straight. They do not recognize or use the “crawl-delay” field, meaning it has had no effect on Google’s crawling behavior.
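For illustration, here is a hypothetical robots.txt snippet of the kind this change is aimed at (the paths and delay value are made up). Some other crawlers honor the “crawl-delay” line, but Googlebot simply skips it and only acts on the fields Google documents as supported:

```
# Hypothetical example: a robots.txt that relies on crawl-delay
User-agent: *
Disallow: /private/
Crawl-delay: 10   # Not a supported field for Google; Googlebot ignores this line
```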
Why Did Google Make This Clarification?
Google made this update because many users were still asking questions about unsupported fields in robots.txt files. Google wants to ensure everyone is on the same page: if a field isn’t explicitly listed in their documentation as supported, it won’t be used by Google.
Jaskaran Singh, a Product Expert at Google, shared this information on LinkedIn, emphasizing that unsupported fields are ignored. He also provided some advice for website owners, recommending the use of comments (indicated by a “#” symbol) for better readability and ensuring paths in the robots.txt file start with a forward slash (“/”) to avoid errors.
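Put together, that advice looks something like the hypothetical snippet below (example paths and domain): comments start with “#”, and every path in a rule begins with a forward slash:

```
# Block internal search result pages (example paths, not a real site)
User-agent: *
Disallow: /search/
Allow: /search/help   # The more specific rule wins; note the leading "/"
Sitemap: https://www.example.com/sitemap.xml
```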
Why Should You Care?
If you’ve been using “crawl-delay” or any other unsupported field to control how Google crawls your site, it hasn’t been doing anything. Google has been crawling your site at its own pace, regardless of what your robots.txt file says.
For some websites, this may not be a big deal. But for others—especially those with limited server resources—it could mean Google’s crawlers are overwhelming your site, causing slowdowns or even crashes.
If Google’s bots are using up your server’s resources, legitimate users might experience longer load times, hurting user experience and potentially damaging your search rankings.
Slower websites tend to rank lower in search results, so this clarification from Google could indirectly impact your SEO if your site is struggling to handle the load from Google’s crawlers.
What About Other Search Engines?
While Google doesn’t recognize “crawl-delay,” other search engines do. Bing, for instance, still supports the “crawl-delay” directive.
If a significant share of your search traffic comes from Bing or other search engines that honor it, “crawl-delay” might still serve a purpose for those crawlers.
However, this introduces a consistency issue. If you want to control how all search engines crawl your site, you must find a different solution for Google.
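One way to handle that inconsistency, sketched below with placeholder paths and values, is to keep “crawl-delay” in a group aimed at crawlers that honor it (such as Bingbot), while the group Googlebot reads uses only supported directives:

```
# Group for Bing, which documents support for crawl-delay
User-agent: Bingbot
Crawl-delay: 5
Disallow: /tmp/

# Group for Google, which ignores crawl-delay entirely
User-agent: Googlebot
Disallow: /tmp/
```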
This issue with varying crawler behaviors recently made headlines when Reddit blocked Bing’s crawlers. Reddit took action to prevent Bing from crawling its content, highlighting how different platforms might choose to manage or block specific crawlers depending on their own concerns and policies.
What Should You Do Now?
If you’ve been using unsupported fields like “crawl-delay” in your robots.txt file, now is the time to act. Here’s what you can do:
1. Check Your robots.txt File: Open your robots.txt file and review it for unsupported fields such as “crawl-delay.” If you find any, remove them.
2. Use Supported Directives Only: Google supports a limited set of fields in robots.txt, such as “Disallow” (to block access to specific parts of your site), “Allow” (to permit access to specific areas), and “Sitemap” (to point bots to your sitemap). Stick to these supported directives; see the example after this list.
3. Optimize Your Server: If your main concern was preventing Google from overloading your server, focus on server-side improvements such as caching and extra capacity. Googlebot also slows its crawl rate automatically when your server responds slowly or returns server errors.
4. Test Your robots.txt File: After making changes, confirm the file still works as intended. Google Search Console’s robots.txt report shows which robots.txt files Google has found and flags any errors or warnings.
5. Monitor Google’s Crawling Activity: Keep an eye on how Google is crawling your site. Use Google Search Console’s Crawl Stats report to see how often Googlebot visits your site and whether it’s causing any issues.
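As a reference for steps 1 and 2, here is a minimal, hypothetical robots.txt (example paths and domain) that sticks to the fields Google’s documentation lists as supported (user-agent, allow, disallow, and sitemap), with any “crawl-delay” lines removed:

```
# After cleanup: only fields Google supports (example paths and domain)
User-agent: *
Disallow: /admin/
Disallow: /cart/
Allow: /admin/public-docs/
Sitemap: https://www.example.com/sitemap.xml
```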
Key Takeaways
- Google updated its robots.txt documentation to clarify that unsupported fields, like “crawl-delay,” are ignored.
- Unsupported fields like “crawl-delay” have never had any effect on Google’s crawlers. Other search engines, such as Bing, still recognize “crawl-delay,” so those lines in your robots.txt file may still matter for them.
- To ensure your site is crawled correctly, stick to supported directives like “Disallow,” “Allow,” and “Sitemap” in your robots.txt file.