noindex – WordPress / Yoast: How do I solve the problem with pagination pages?

Before switching to Yoast, I was using All in One SEO, but I had huge problems with it. There were incompatibilities that made it impossible to keep using that plugin.

In any case, I am facing a pagination problem on my website. Here is an example:

This /page/2 URL causes SEO problems: duplicate content, duplicate h1, duplicate meta description, etc.

With All in One SEO, I was able to set the paginated pages to noindex / nofollow. In Yoast, I can't find anything similar.

I guess it's a general WordPress thing.
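For reference, the fix such plugins apply is to emit a robots meta tag in the head of paginated archive pages only. A sketch of the tag (the exact output depends on the plugin and theme):

```html
<!-- Emitted only on /page/2, /page/3, ... -- not on the first page.
     "follow" lets crawlers still reach the posts linked from these pages. -->
<meta name="robots" content="noindex, follow">
```
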

I did find a "solution" for this, but it produces another problem.

If I change the permalink settings from "Post name" to "Plain" (the default), /page/2 disappears. But then the links to my pages are not pleasant to read (?p=123). See the attached examples:

[screenshot: "Post name" selected in the permalink settings]
[screenshot: the ugly links after switching to the default permalinks]

Can you tell me how to fix it?

Thanks in advance and greetings,

Can noindex tags on individual articles affect global SEO or ranking on Google?

Some SEO gurus claim that noindex directives are good for devaluing unimportant pages. But when I asked in the Moz Q&A, people there were quite sure that having many noindexed articles on a website can hurt the overall SEO ranking. What is your opinion on this?

Why is my noindex tag not working?

I added a noindex tag in the head of a page on my website because I realized it was being indexed. But when I run this page through Screaming Frog, or through this tool here, to verify that the noindex is present and working, it reports that it is not.

But if you view the page source, the tag is present in the head. And unfortunately, we've seen cases where Google indexes pages that we have marked noindex. Do you have any thoughts on the example above, or on why this happens with Google?
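When a checker tool disagrees with what you see in the page source, it can help to test the raw HTML yourself rather than trust either side. A minimal sketch using only the Python standard library (it looks only for a robots meta tag, not for an X-Robots-Tag response header):

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots" ...> tag."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag != "meta":
            return
        a = dict(attrs)
        if (a.get("name") or "").lower() == "robots":
            self.directives.append((a.get("content") or "").lower())


def has_noindex(html: str) -> bool:
    """True if any robots meta tag in the markup contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)
```

Feed it the exact bytes you downloaded (e.g. via `curl` or `urllib`); if it reports the tag but the crawler does not, the crawler is likely seeing a different response (caching, user-agent cloaking, or JavaScript-injected markup).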

seo – Will a meta robots noindex tag completely prevent search engines from indexing a page?

I wonder whether the meta robots noindex tag will completely prevent a page of a website from being indexed.

If the owner has registered the website in Google Search Console, will noindex still prevent Google from indexing the page?

If the website has a sitemap that was submitted to Google through Search Console, will noindex still prevent Google from indexing the pages it lists?

I would like some clarification on which directive takes priority in these situations.

And if the tag were different, what would happen in that case?

Why would a noindex page appear in SERPs?

Some possibilities:

1. It was indexed before the noindex tag was added to the page.
2. A rogue robot ignored the noindex tag, and Google hasn't caught up with it yet.

The noindex tag works on the HONOR SYSTEM. Most crawlers, BUT NOT ALL, will respect a noindex tag. If a page is too sensitive to be publicly indexed, it must be password protected, which locks all crawlers out regardless of whether they honor the tag.
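As a sketch of the password-protection route on Apache (the realm name and user-file path are placeholders; nginx has an equivalent `auth_basic` directive):

```apache
# .htaccess for the directory to protect -- crawlers cannot fetch the
# content at all, so the honor system no longer matters.
AuthType Basic
AuthName "Private area"
AuthUserFile /path/to/.htpasswd
Require valid-user
```

The `.htpasswd` file is created with the `htpasswd` utility that ships with Apache.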

seo – Using `noindex` for the newly purchased domain we are migrating to

We recently bought the .com version of our domain, so we are moving over to oursite.com, but the migration will take several weeks.

The site was previously indexed, and a Google search still displays the old content.

Should we block indexing of the site with noindex until it is operational? If that would have a lasting negative impact on SEO, I would rather not.

How can we test if our noindex pages still appear in search results?

I have a number of pages on a client's site that are not meant for the public. I have therefore marked them noindex so that search engines can index the site more efficiently. (I've heard search engines like that.)

I've discovered, though, that some of these pages still appear in search engine results.

In most cases, I have made sure that my robots.txt directives do not interfere with my robots meta tags; my question is not about that.

My question is about the isolated cases: the conflicts I missed. It is a vast, sprawling website, and what I need is an efficient way to check whether a search engine is showing pages that I do not want it to show.

I know about the search engines' "site:" query, but I'd like something more automated, perhaps.

What tools do you use to easily determine whether a noindexed page is nevertheless indexed and displayed in search results?
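As one automated approach, you can audit a list of URLs yourself and report which noindex signal, if any, each one carries. A sketch using only the Python standard library; the URL list is a placeholder, and the meta-tag check is a deliberately naive substring test:

```python
import urllib.request


def noindex_reasons(headers, body):
    """Return which noindex signals a response carries: the
    X-Robots-Tag header, a robots meta tag, or neither."""
    reasons = []
    header_value = ""
    for name, value in headers.items():
        if name.lower() == "x-robots-tag":
            header_value = value
    if "noindex" in header_value.lower():
        reasons.append("x-robots-tag header")
    # Naive substring check; a thorough audit should parse the HTML.
    lowered = body.lower()
    if 'name="robots"' in lowered and "noindex" in lowered:
        reasons.append("robots meta tag")
    return reasons


if __name__ == "__main__":
    # Hypothetical URL list -- replace with the pages you care about.
    for url in ["https://example.com/private-page"]:
        with urllib.request.urlopen(url) as resp:
            found = noindex_reasons(dict(resp.headers),
                                    resp.read().decode("utf-8", "replace"))
        print(url, "->", found or "no noindex signal found")
```

Pages that print "no noindex signal found" are the conflicts to investigate; this checks what your server sends, while the "site:" query checks what the engine kept, so the two complement each other.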

I have the feeling there is something simple I have missed in the, ahem, short while I have worked in this industry.

Google Search Console: noindex errors and property problems

I'm having a series of problems with Google Search Console and I find it difficult to solve.

My site is set up as a domain property and everything works fairly well, but about a month and a half ago I marked a new page as "noindex"; Google has crawled it and now displays an error.

I've tried to validate the fix, but it has been pending for more than a month, and I don't know if there is a problem or if I should just keep waiting.

I then tried to set up the Google Analytics / Search Console integration. However, if I understand correctly, this only works if I create a new URL-prefix property in Search Console for my site.

Once I did that, the URL-prefix property showed a large number of noindex errors for pages that look fine in the domain property. Yet all of these pages are indexed and searchable on Google.

What should I do in this case? I would use the URL Removal tool, but it is not available for domain properties either.

PDF files are still indexed even though X-Robots-Tag noindex is set in .htaccess

I'm trying to analyze the .htaccess file of a website that I manage, in particular the following directive:

Header set X-Robots-Tag "noindex, noarchive, nosnippet"

It is supposed to noindex all the PDF files on the site.

However, the PDF files are still indexed, and I'm sure of that, because:

  • they still appear in the SERPs
  • they show as valid (green) in Search Console
  • a header check shows no noindex directive at all

How is this possible? My hypothesis is that there is a conflict somewhere in the .htaccess code.
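One common cause, assuming Apache with mod_headers enabled: a bare `Header set` line applies to every response and can be overwritten by a later `Header` directive elsewhere in the configuration. Scoping it to PDFs is the usual pattern (a sketch):

```apache
# Send the noindex header for PDF responses only.
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, noarchive, nosnippet"
</FilesMatch>
```

Note also that Google only sees this header if it is allowed to fetch the PDFs: a robots.txt Disallow covering them would hide the header entirely.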

The robots.txt file:

User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /web_service/
Disallow: /wp-admin/
Disallow: /xmlrpc.php


robots.txt – Does Google TRULY respect noindex and nofollow?

It's a bit confusing, but since noindex is a meta tag on the page itself, the page must be crawlable (that is, not blocked in robots.txt) for the tag to take effect.

Disallowing a page in robots.txt still allows Google to index it if Google finds it via a link from another page. The page just won't be fetched (because of robots.txt), so its content, including the meta description, won't appear in the search results; the bare URL may still appear when Google believes it is relevant, for example when the URL contains the keywords searched for.

So, if I understand correctly, for anything you don't want indexed you should let Google crawl it, but add the noindex meta tag to it.
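The interaction described above can be condensed into a small decision function. This is a toy model of the behavior as commonly documented, not Google's actual logic:

```python
def indexing_outcome(blocked_by_robots_txt: bool,
                     has_noindex_meta: bool,
                     linked_from_elsewhere: bool) -> str:
    """Toy model of the robots.txt / noindex interaction."""
    if blocked_by_robots_txt:
        # The page is never fetched, so any noindex meta tag is invisible.
        # The bare URL can still be indexed if other pages link to it.
        if linked_from_elsewhere:
            return "url-only indexing possible"
        return "not indexed"
    if has_noindex_meta:
        # Crawled, tag seen, page kept out of the index.
        return "not indexed"
    return "indexed"
```

The counter-intuitive case is the first branch: blocking a page in robots.txt can make its URL *more* likely to linger in the index, because the noindex tag is never seen.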

You can find discussions of this interaction at