8 Common Errors in Google Search Console and Their Solutions
Each error reported in Google Search Console can affect how your website appears on Google and how easily it is discovered. Below are the causes of, and solutions for, the most common ones:
1. Blocked by Robots.txt
Cause: The robots.txt file may be preventing Googlebot from crawling certain sections of your site. The Disallow directives in your robots.txt file can block Googlebot from crawling a single URL, a directory, or multiple URLs. In that case, Search Console shows a “Blocked by Robots.txt” warning in the Index report. By clicking the warning, you can see which URLs could not be crawled because of the block. For example, TE Bilişim news sites use robots.txt to block every link on a page except the main one, including share links, search query URLs, and other links that could cause duplicate indexing.
Solution: This message is not always an error that needs fixing; Google is simply informing you that some URLs cannot be crawled because of your robots.txt rules. Check whether any important URL among those affected is blocked unintentionally. Mistyped Disallow directives are a common cause and can keep important URLs from being crawled. After making changes, click the “Validate Fix” button to ask Google to re-evaluate the affected URLs.
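You can test your Disallow rules offline with Python's built-in `urllib.robotparser` before deploying them; the rules and URLs below are hypothetical examples, not taken from any real site.

```python
from urllib import robotparser

# Hypothetical robots.txt content for illustration only.
rules = """\
User-agent: *
Disallow: /search/
Disallow: /share/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# /search/ URLs are blocked for Googlebot, but the homepage stays crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/search/"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/"))         # True
```

Running a check like this against the exact URLs flagged in the Index report quickly shows whether a block is intentional or the result of a mistyped directive.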
2. Alternate page with proper canonical tag
Cause: The “Alternate page with proper canonical tag” warning message seen in the Index report is not an error and there is no issue to fix. This message informs you that the URLs listed use the correct Canonical URL.
Solution: No action is required. Still, verify that each listed page carries the correct canonical tag, pointing Google to the primary version of the page. If none of the URLs listed in this report use an incorrect canonical URL, there is nothing to fix.
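For a quick audit, a page's canonical target can be extracted with Python's standard-library HTML parser; the sample HTML below is illustrative, and a real check would feed in each page's fetched source.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Hypothetical page source for illustration.
html = '<head><link rel="canonical" href="https://example.com/page/"></head>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://example.com/page/
```

Comparing the extracted value against the URL you expect to be primary confirms the report's listing is correct.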
3. Not found (404)
Cause: This error stems from broken links or pages that no longer exist. When Googlebot requests a URL and receives a 404 response, it understands the page is no longer available, and the crawl fails. The URLs listed in this section of the Index report were either discovered by Googlebot through an external link or previously existed on your site but can no longer be found. Even without any action on your part, Googlebot will periodically re-crawl a discovered 404 URL to check its status.
Solution: Fix or remove pages that return a 404 error. If you have moved a page to a different URL, apply a 301 redirect from the old URL to its new address; if a page is only temporarily unavailable, restore it or use a temporary (302) redirect rather than a permanent one.
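How you configure a 301 redirect depends on your server, but the logic is the same everywhere: map the old path to the new one and answer with a permanent redirect. Below is a minimal sketch using Python's built-in WSGI interface; the path mapping is hypothetical.

```python
# Hypothetical mapping of moved pages: old path -> new path.
MOVED = {
    "/old-blog-post": "/blog/new-post",
}

def app(environ, start_response):
    """Tiny WSGI app: 301-redirect moved paths, 404 everything else."""
    path = environ.get("PATH_INFO", "/")
    if path in MOVED:
        # A permanent (301) redirect tells Googlebot to drop the old URL
        # from the index and index the new address instead.
        start_response("301 Moved Permanently", [("Location", MOVED[path])])
        return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not found"]
```

In production you would express the same mapping in your web server or CMS configuration rather than in application code, but the status code and Location header are what Googlebot actually sees.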
4. Redirected page
Cause: This warning is caused by redirect chains or improperly configured redirects. When a requested URL redirects to another URL, which in turn redirects again, a chain forms before the final URL is reached; the longer the chain, the harder the final URL is to reach, which complicates Googlebot’s crawl process.
Solution: Ensure that redirects are correctly configured and eliminate unnecessary redirect chains by pointing each source URL directly at its final destination.
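Chains are easy to spot once you have your redirect rules in one place. The sketch below assumes a hypothetical site-wide redirect map (source path to destination path) and follows each entry to its end, flagging anything that takes more than one hop.

```python
# Hypothetical redirect map for illustration: source -> destination.
redirects = {
    "/a": "/b",
    "/b": "/c",
    "/c": "/final",
    "/x": "/final",
}

def chain_for(path):
    """Follow redirects from `path`, returning every hop visited."""
    hops = [path]
    seen = {path}
    while hops[-1] in redirects:
        nxt = redirects[hops[-1]]
        hops.append(nxt)
        if nxt in seen:  # redirect loop: stop before cycling forever
            break
        seen.add(nxt)
    return hops

# /a needs three hops to reach /final; collapsing it to a single
# /a -> /final rule makes Googlebot's crawl cheaper.
for src in redirects:
    hops = chain_for(src)
    if len(hops) > 2:
        print("chain:", " -> ".join(hops))
```

The fix the loop suggests is mechanical: rewrite every chained rule so its destination is the final URL.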
5. Excluded by ‘noindex’ tag
Cause: The noindex tag prevents Google from indexing these pages. This message in the Index report appears not due to an error but to inform the site owner. As you know, the noindex tag tells Googlebot not to list the page in search results and prevents it from being indexed.
Solution: Remove the noindex tag from pages you want to be indexed.
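A stray noindex is easy to miss in a template. The standard-library sketch below detects a robots meta tag containing "noindex"; the sample page source is hypothetical, and a real audit would feed in the fetched HTML of each page you expect to be indexed.

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags a <meta name="robots"> tag whose content includes 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            if "noindex" in a.get("content", "").lower():
                self.noindex = True

# Hypothetical page source: this page would be kept out of the index.
page = '<head><meta name="robots" content="noindex, follow"></head>'
d = NoindexDetector()
d.feed(page)
print(d.noindex)  # True
```

Run against pages you want indexed, any `True` result points at a template or plugin that is injecting the tag.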
6. Redirect error
Cause: This error can result from incorrect or broken redirects. For instance, if you redirected page A to page B and the target URL page B returns a 404 status, then page A will be listed in the “Redirect error” section in the Index report with a warning.
Solution: Check and fix redirects. If the target URL in your redirects is not working, remove the redirects and point the starting URLs to functioning, crawlable URLs. After taking this action, you can inform Google that you have resolved the issue by clicking the “Validate Fix” button in the Index report.
7. Crawled – currently not indexed
Cause: Google crawled these pages but has not yet indexed them. Just as discovered URLs are not always crawled immediately, crawled URLs are not always indexed immediately. This message in the Index report indicates that your URLs have been crawled but not yet indexed by Google. Possible reasons include site-wide crawl budget problems, thin or insufficient content on the affected URLs, duplicate content, or Googlebot judging the page not valuable enough to index.
Solution: Improve the quality and depth of the affected pages and their overall SEO, then request review. Apply crawl budget management so that Googlebot spends its time on pages that matter for SEO rather than on inefficient ones; if some unimportant pages remain uncrawled and unindexed after these actions, that is acceptable. A content audit of the affected URLs is also recommended: strengthen and enrich weak or thin content, and revise any pages that rely on duplicate content or content written only for Googlebot.
8. Indexed despite being blocked by Robots.txt
Cause: This warning message in the Index report indicates that a URL blocked by a Disallow rule in the robots.txt file has nevertheless been indexed. How can a URL that Googlebot is not allowed to crawl end up in the index? Google explains that even when robots.txt blocks crawling, if an external site links to the URL, Google can still discover and index it, without crawling its content, based on signals such as the linking anchor text.
Solution: To keep these URLs out of the index, add a “noindex” tag to them and remove the robots.txt block, so that Googlebot can actually crawl the pages and see the directive. Note that a noindex tag on a page Googlebot is not allowed to crawl will never be read, so combining the block with the tag does not work.