Page indexing report: Page Fetch Failed: Redirect Error.
Page crawled, but not indexed.
The dreaded errors Blogger blog users see when testing or inspecting their blog page URLs at Google's Search Console these days.
What's happening here?
What do I need to do?
How to fix this!
These are among the most common questions in the Blogger Help forums, and have been for several months or so, usually from users who feel there is something wrong with their blog & that Google is punishing their content & hard work.
Frankly speaking, it may not be an issue at all; it is just how the error reporting works. So let's dive deeper into why it may happen on Blogger blog pages.
Understanding Blogger Blog URLs
Blogger blog pages use a unique parameter in their URLs to indicate that the page is being viewed on mobile or smaller screen sizes.
Each blog page URL is appended with a ?m=1 parameter to indicate that the page is served/viewed on mobile.
This is set at the server end, by Blogger's developers, with a 302 Moved Temporarily response header, meaning the blog page URLs with ?m=1 are temporary pages served, in this case, on mobile. The original pages are the URLs without ?m=1, also known as the canonical URLs.
This setup is deliberate, so that only one URL, the canonical page URL, is used, crawled & indexed. It helps prevent duplicate content issues, which search engines can penalize a blog for if detected.
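To make the relationship concrete, here is a minimal sketch (not Blogger's actual code; the blog address is a placeholder) of how the canonical URL relates to its mobile variant -- it is simply the same URL with the ?m=1 marker stripped:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def canonical_url(url):
    """Strip Blogger's mobile marker (?m=1) to recover the canonical URL."""
    parts = urlparse(url)
    # Keep every query parameter except the mobile marker "m".
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "m"]
    return urlunparse(parts._replace(query=urlencode(query)))

print(canonical_url("https://example.blogspot.com/2024/01/post.html?m=1"))
# https://example.blogspot.com/2024/01/post.html
```

Both URLs show the same content; only the canonical one is meant to end up in Google's index.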
What Do I Need to Do?
If you've changed or added custom robots.txt directives, make sure you know what those directives do!
If you are not 100% sure, just disable the Custom robots.txt setting (toggle greyed out) in your blog's Settings panel.
By doing so, your blog will use the default setup given to all blogs, no changes needed -- one less thing for you to worry about.
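For reference, the default robots.txt that Blogger serves (at yourblog.blogspot.com/robots.txt -- the blog address here is a placeholder) typically looks something like this:

```
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.blogspot.com/sitemap.xml
```

It allows crawlers everywhere except the /search result pages (labels, archives), and already points them at the blog's sitemap.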
Resubmit Sitemap at Search Console
If you've used the Atom feed URL format as your blog sitemap URL at Search Console, I suggest you stop using it.
Only submit these sitemap URLs at Google Search Console...
- .../sitemap.xml for Posts
- .../sitemap-pages.xml for Pages.
You'll get a lot more pages/URLs crawled when using the original Blogger blog sitemap, as it can list thousands of pages automatically for you.
If the sitemap file format ever needs to change, for whatever reason required by search crawlers, the Blogger developers will update the file content format for you -- no worries here.
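If it helps, the two sitemap URLs to submit can be derived from your blog's address like this (a trivial sketch; example.blogspot.com is a placeholder):

```python
def sitemap_urls(blog_url):
    """Build the two Blogger sitemap URLs worth submitting to Search Console."""
    base = blog_url.rstrip("/")
    # Posts sitemap, then static Pages sitemap.
    return [base + "/sitemap.xml", base + "/sitemap-pages.xml"]

print(sitemap_urls("https://example.blogspot.com/"))
# ['https://example.blogspot.com/sitemap.xml', 'https://example.blogspot.com/sitemap-pages.xml']
```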
Use the Correct Googlebot for Crawling
When requesting Google to inspect your blog pages, choose the correct Googlebot for the crawl.
If you inspect the URL with the ?m=1 parameter, choose Googlebot Smartphone.
If you inspect the URL without the ?m=1 parameter, choose Googlebot Desktop instead.
If the correct Googlebot is not assigned to check your blog page URLs, the horrible Failed with Redirect Issues error report will be displayed.
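The pairing rule above is simple enough to express as a small helper (an illustration only, not part of any Google tool): the presence of ?m=1 tells you which crawler to pick.

```python
from urllib.parse import urlparse, parse_qs

def pick_googlebot(url):
    """Pick the Googlebot matching a Blogger URL's intended screen size."""
    query = parse_qs(urlparse(url).query)
    # ?m=1 marks the mobile version of the page.
    if query.get("m") == ["1"]:
        return "Googlebot Smartphone"
    return "Googlebot Desktop"

print(pick_googlebot("https://example.blogspot.com/2024/01/post.html?m=1"))
# Googlebot Smartphone
```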
Why Page Fetch Failed Redirect Error?
If you choose Googlebot Smartphone to crawl & inspect the desktop version of your blog page (the URL without ?m=1), that crawler expects the blog URL with the ?m=1 parameter (remember what was explained earlier about Blogger blog URLs on mobile), as you've chosen the mobile web crawler to make the check.
In turn, the Googlebot Smartphone crawler feels "unhappy", as it could not reach the correct page as viewed on mobile; the URL given to inspect is the one for desktop or larger screen sizes.
This means the error reporting by Google Search Console is actually correct in identifying the problem & reporting it to you.
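The mismatch can be sketched as a toy model (an assumption for illustration, not Blogger's actual server code): a mobile visitor requesting a URL without ?m=1 gets a 302 redirect to the mobile version, and a redirect during a fetch is exactly what the inspection tool then reports.

```python
def blogger_server(url, user_agent):
    """Toy model of Blogger's mobile redirect behaviour (illustration only)."""
    if "Mobile" in user_agent and "m=1" not in url:
        # Mobile visitors on a canonical URL are bounced to the mobile view.
        return 302, url + "?m=1"
    return 200, url

# Googlebot Smartphone fetching a desktop (canonical) URL hits the redirect:
print(blogger_server("https://example.blogspot.com/post.html", "Mozilla/5.0 (Mobile)"))
# (302, 'https://example.blogspot.com/post.html?m=1')
```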
How Long Until I See Any Changes Once Fixed?
If the correct blog setup is used, and you have an active blog publishing original content consistently, it'll be (a bit) quicker.
If your content is linked and can be found elsewhere on the interweb -- socials, forums, etc. -- that will also help web crawlers find it (a bit) quicker.
The thing is, you can't just sit back & relax after publishing your content, hoping web crawlers will find it these days.
There are millions of web pages, and Google does not guarantee that it will index every page on the interweb either.
You have to get your content noticed, get more 'eyes' to read & engage, which in turn may signal to web crawlers that there might be something interesting here at your blog... 🤔
Understanding how the Google Search Console tool & its reporting work helps you judge whether there is anything you can do to rectify a problem.
Basically, if you stick with the default Blogger blog settings, your blog pages should be fine -- provided Google's web crawlers feel the content is up to date & valuable to the users searching for it.