Discovered – Currently Not Indexed Status: How You Can Fix It
By: Dileep Thekkethil | Updated On: August 28, 2024
If you’ve spent time crafting a great piece of content, nothing’s more frustrating than seeing it get stuck in the “Discovered – currently not indexed” status in Google Search Console.
It’s like having an invitation that never gets delivered. I’ve been there too, staring at that status wondering what’s causing the delay and how to fix it.
Luckily, I’ve dug deep into the issue, combining insights from Google’s Martin Splitt and SEO best practices, and now I’m here to help you navigate this problem.
Let’s break down what’s really going on when Google discovers your page but doesn’t bother crawling or indexing it, and what you can do to change that.
What Does ‘Discovered – Currently Not Indexed’ Even Mean?
Imagine Google as a busy office with a huge to-do list. Your URL is on that list, but it hasn’t made its way to the top yet. That’s essentially what “Discovered – currently not indexed” means.
Google knows about your page, but it hasn’t crawled or indexed it yet. This can happen for various reasons, from crawl budget limitations to content quality concerns.
You might feel like it’s a big problem, but sometimes it’s just a matter of time. However, if your page stays in this limbo for too long, it’s a sign that there’s more going on under the hood that needs your attention.
Why Does This Happen? Let’s Dig Into the Causes
After diving into the details shared by Martin Splitt, it’s clear that several factors could be causing this issue.
In an ideal world, as soon as Googlebot discovers your URL—whether through a sitemap, an internal link, or an external reference—it would immediately fetch the page, crawl it, and decide whether to index it.
Your content would seamlessly flow into Google’s index with no delays, and within hours, it would be ready to show up in search results.
But as Martin Splitt pointed out, that’s not always how things work.
Much like that never-ending to-do list, Googlebot often has a queue of URLs waiting to be crawled, which means your page could be waiting in line even if it’s already on the radar.
The reality is that Googlebot has to prioritize its crawling based on factors like crawl budget, server response times, and content quality, which can delay or even prevent your page from being indexed.
Let’s look at what could be holding your content back:
Crawl Budget Constraints
For many websites, the crawl budget isn’t a problem.
But if you’re running a large site with thousands of pages—or even a smaller site with technical SEO hiccups—Googlebot might be too busy elsewhere to get to your new content.
This isn’t just about massive sites; even smaller sites can face this if things aren’t set up right.
Server Performance Issues
Here’s something I didn’t realize initially—if your server is sluggish or frequently returns errors (like 500 server errors), Googlebot might put your site on the back burner.
Martin Splitt highlighted this in one of his recent videos, pointing out that Google wants to avoid overwhelming weak servers.
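To make this concrete, here’s a minimal sketch (in Python, using made-up log samples) of how you might scan your server logs for the two signals mentioned above—5xx errors and slow responses—before they start throttling Googlebot’s crawl rate. The function name and the 1,500 ms threshold are my own assumptions, not anything Google publishes:

```python
def flag_crawl_risks(samples, max_latency_ms=1500):
    """Classify (status_code, latency_ms) samples from server logs.

    Returns a list of human-readable warnings; an empty list means
    nothing in the sample looks likely to slow Googlebot down.
    """
    warnings = []
    for status, latency_ms in samples:
        if 500 <= status <= 599:
            warnings.append(
                f"server error {status} -- repeated 5xx responses can reduce crawling"
            )
        elif latency_ms > max_latency_ms:
            warnings.append(
                f"slow response ({latency_ms} ms) -- sluggish servers get crawled less"
            )
    return warnings

# Hypothetical log samples: (HTTP status, response time in ms)
samples = [(200, 320), (500, 120), (200, 2400)]
for warning in flag_crawl_risks(samples):
    print(warning)
```

In practice you’d feed this from your real access logs (or your host’s monitoring dashboard); the point is simply to catch error spikes and latency creep before Google does.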
Content Quality Signals
Google doesn’t have to crawl a page to get a sense of whether it’s worthwhile.
If your site has a history of low-quality content, Google might deprioritize new pages before it even gives them a full look.
This is one of the more subtle but critical reasons pages get stuck in this status, says Martin.
URL Discovery Without Context
If your page is only found through a sitemap, Google might see it as less important because it’s not well-linked within your site.
Martin mentioned that without proper internal linking, these pages often end up sidelined.
My First Fix: Manually Request Indexing
Whenever I encounter this issue, my first move is straightforward—manually request indexing in Google Search Console.
It’s a simple fix that works if your issue is minor:
- Enter your URL in the “URL Inspection” tool.
- If it’s not indexed, hit the “Request Indexing” button.
If this solves your problem, you’re lucky! But more often than not, the issue runs deeper, and we’ll need to dig into other areas.
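One caveat worth knowing: the “Request Indexing” button itself has no general-purpose public API, but Search Console’s URL Inspection API does let you check a URL’s index status programmatically, which is handy when you have many stuck pages. Here’s a hedged sketch of the request payload; the URLs are hypothetical, and the commented-out call assumes you’ve set up `google-api-python-client` with authorized credentials:

```python
def build_inspection_request(page_url, property_url):
    """Build the request body for Search Console's URL Inspection API
    (searchconsole v1, urlInspection.index.inspect)."""
    return {"inspectionUrl": page_url, "siteUrl": property_url}

body = build_inspection_request(
    "https://www.example.com/new-post/",  # hypothetical stuck page
    "https://www.example.com/",           # your verified property
)

# With authenticated credentials, the call looks roughly like this
# (requires google-api-python-client and OAuth/service-account setup):
#
#   from googleapiclient.discovery import build
#   service = build("searchconsole", "v1", credentials=creds)
#   result = service.urlInspection().index().inspect(body=body).execute()
#   print(result["inspectionResult"]["indexStatusResult"]["coverageState"])

print(body)
```

That `coverageState` field is where a stuck page reports something like “Discovered – currently not indexed,” so a small loop over your URL list gives you a status report without clicking through the UI page by page.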
Dealing with Crawl Budget Issues (Even for Smaller Sites)
Crawl budget might seem like a concern only for massive websites, but even smaller sites can run into crawl issues if they’re not set up correctly. One often overlooked factor is where your assets (like images and scripts) are hosted:
Subdomains and Crawl Budget
If your assets are hosted on subdomains (e.g., images on cdn.yoursite.com), they could be eating into the same crawl budget as your main site content. This means Googlebot might spend time crawling these resources instead of focusing on your important pages.
Solution: Offload your assets to a dedicated CDN that handles them separately from your main domain.
For instance, using a service like Cloudflare or Amazon CloudFront can help you distribute your assets independently, ensuring your crawl budget is reserved for high-priority pages.
These CDNs host your static files globally, reducing the load on your main domain while improving site performance and crawl efficiency.
Unnecessary Redirects
Redirects can cause crawl delays. If you have pages that don’t really need to be redirected, letting them 404 might be the better option, unless they have significant backlinks or traffic.
Solution: Using tools like SEMRush, Sitebulb, or Screaming Frog, you can quickly identify where your crawl budget might be wasted and clean things up.
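If you’d rather script the check yourself, here’s a small sketch that walks a `{source: target}` redirect map—the kind of export you’d get from Screaming Frog or a similar crawler—and surfaces multi-hop redirect chains (the sample URLs are hypothetical):

```python
def find_redirect_chains(redirects, min_hops=2):
    """Given a {source: target} redirect map exported from a crawler,
    return chains of `min_hops` or more consecutive redirects."""
    chains = []
    for start in redirects:
        hops = [start]
        current = start
        seen = {start}
        while current in redirects:
            current = redirects[current]
            if current in seen:  # redirect loop -- also worth fixing
                break
            seen.add(current)
            hops.append(current)
        if len(hops) - 1 >= min_hops:
            chains.append(hops)
    return chains

# Hypothetical export: /old -> /older -> /final is a 2-hop chain
redirects = {"/old": "/older", "/older": "/final", "/promo": "/final"}
print(find_redirect_chains(redirects))  # -> [['/old', '/older', '/final']]
```

Each chain it reports is a spot where Googlebot burns extra requests; pointing `/old` straight at `/final` collapses the chain to a single hop.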
Improving Content Quality: Your Best Bet for Long-Term Success
Martin Splitt and other Google experts are clear on one thing—quality matters.
Even if Google hasn’t fully crawled your page, it makes assumptions based on your site’s history.
If your site has thin, duplicate, or poorly structured content, it’s likely being deprioritized.
Here’s how I approach this:
Thin Content: Combine thin pages or build them out with unique, valuable insights.
Automated or Spun Content: If you’re using automated content, it’s probably not going to cut it. Google wants depth and relevance, which these methods rarely deliver.
Tools like SEMRush can help you analyze content quality and suggest improvements, making sure your content aligns with what Google values.
Get Serious About Internal Linking
Pages buried in your sitemap with no internal links? Google doesn’t see them as important, and that’s a problem.
Here’s what I do to make sure my pages get noticed:
Strategic Internal Linking: Make sure every new page is connected to relevant content within your site. This strengthens the overall structure and tells Google what’s valuable.
Use an XML/HTML Sitemap: Yes, it’s old-school, but it still works, especially for bigger sites. If your site is complex, a well-structured XML/HTML sitemap can help Google discover deeper pages.
Using tools like Screaming Frog, I regularly audit my internal links to ensure no important pages are left in the dark.
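The core of that audit—finding sitemap pages nothing links to—can be sketched in a few lines of Python. The link graph and page list below are hypothetical stand-ins for what a crawler export would give you:

```python
def find_orphan_pages(link_graph, all_pages):
    """Return pages from `all_pages` (e.g. your sitemap) that receive
    no internal links in `link_graph` ({page: [pages it links to]})."""
    linked_to = {target for targets in link_graph.values() for target in targets}
    return sorted(page for page in all_pages if page not in linked_to)

# Hypothetical crawl data: /orphan is in the sitemap but nothing links to it
link_graph = {
    "/": ["/blog/", "/about/"],
    "/blog/": ["/blog/post-1/", "/"],
}
sitemap_pages = ["/", "/blog/", "/about/", "/blog/post-1/", "/orphan"]
print(find_orphan_pages(link_graph, sitemap_pages))  # -> ['/orphan']
```

Any page this flags is exactly the “sitemap-only discovery” case Martin described: Google knows the URL exists but has no internal-link context telling it the page matters.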
Don’t Forget About Backlinks
Lastly, backlinks remain one of the strongest signals for page value. If Google sees that a page has few or no quality backlinks, it’s less likely to prioritize crawling it. Getting more backlinks can be tough, but even one solid link can make a difference in getting Google to take notice.
Tools like SEMRush or Ahrefs can help you track backlink profiles and find gaps where more links are needed.
The “Discovered – currently not indexed” status isn’t the end of the world, but it is a nudge that something needs fixing.
Whether it’s a crawl budget issue, content quality, internal linking, or backlinks, there’s usually a clear path forward.
I’ve found that by following these steps and applying the insights from Google’s own experts like Martin Splitt, you can improve your chances of getting your content indexed—and ultimately ranked.