indexing – Will removing a property from Google Search Console remove the site from Google's index?

Removing a property from Google Search Console only removes the property from Search Console; it does not remove the site from Google's index.

I am not sure of your goal. However, you can use the robots.txt file to remove your website from Google, for example by using …

User-agent: Googlebot
Disallow: /

… or from all search engines using

User-agent: *
Disallow: /

Each search engine has its own bot name; for example, Bing's is bingbot.

User-agent: bingbot
Disallow: /

Robots.txt is a simple text file placed at the root of your website; it must be available at the path /robots.txt on your domain.
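As a sketch of how a crawler applies these rules, Python's standard urllib.robotparser module can evaluate the directives above (the paths used here are placeholders):

```python
import urllib.robotparser

# The example rules from above: block only Googlebot.
rules = [
    "User-agent: Googlebot",
    "Disallow: /",
]

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

# Googlebot is blocked from every path on the site...
print(parser.can_fetch("Googlebot", "/any/page.html"))  # False
# ...while other bots, e.g. bingbot, remain allowed.
print(parser.can_fetch("bingbot", "/any/page.html"))    # True
```

The same parser is what a polite crawler would consult before each fetch.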

There is plenty of documentation about the robots.txt file format available online.

Lists of the most important search engine bot/spider names are easy to find online.

Using the robots.txt file with the appropriate bot name is usually the fastest way to remove a website from a search engine. Once the search engine has read the robots.txt file, the website will be removed from the index within about 2 days, unless things have changed recently. Google has had the habit of removing sites within 1-2 days. Each search engine is different and its responsiveness can vary, but be aware that the major search engines are quite responsive.

In reply to the comments:

Robots.txt is indeed used by search engines to determine which pages to index. This is well known and understood, and has been a de facto standard since 1994.

How Google works

Google indexes links, domains, URLs and page content among other data.

The link table is used to discover new sites and pages and to rank pages using the PageRank algorithm, which is based on a trust network model.
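As an aside, the PageRank idea itself can be sketched in a few lines of Python. This toy power iteration over an invented three-page link graph only illustrates the algorithm, not Google's actual implementation:

```python
# A toy power iteration for PageRank over an invented three-page link graph.
links = {
    "a": ["b", "c"],  # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

damping = 0.85
ranks = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until the ranks stabilize
    ranks = {
        page: (1 - damping) / len(links)
        + damping * sum(ranks[p] / len(outs) for p, outs in links.items() if page in outs)
        for page in links
    }

print(max(ranks, key=ranks.get))  # "c": it receives the most link weight
```

Pages gathering many incoming links from well-ranked pages accumulate rank; that is the "trust" being propagated through the link table.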

The URL table is used as a join table between links and pages.

If you think of it in terms of an SQL database schema, there would be a link table, a domain table, a URL table, and a page table.

The page table would hold columns something like:
Page ID
Page title
Page description

The URL table is a join table between domains, links, and pages.

The page index is used to understand content and index individual pages. The actual indexing is much more complicated than a simple SQL table, but the illustration still holds.
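Continuing the illustration, the tables might be sketched like this in SQLite. Every table and column name here is my own invention for the sake of the example, not Google's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE domain (domain_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE page   (page_id   INTEGER PRIMARY KEY, title TEXT, description TEXT);
-- The URL table joins domains, links, and pages.
CREATE TABLE url    (url_id    INTEGER PRIMARY KEY,
                     domain_id INTEGER REFERENCES domain(domain_id),
                     page_id   INTEGER REFERENCES page(page_id),
                     path      TEXT);
-- The link table records which URL points to which.
CREATE TABLE link   (link_id   INTEGER PRIMARY KEY,
                     from_url  INTEGER REFERENCES url(url_id),
                     to_url    INTEGER REFERENCES url(url_id));
""")

# A robots.txt removal would delete pages via the URL id
# while keeping the URL row itself for housekeeping:
con.execute("INSERT INTO domain VALUES (1, 'example.com')")
con.execute("INSERT INTO page VALUES (10, 'Home', 'Example home page')")
con.execute("INSERT INTO url VALUES (100, 1, 10, '/')")
con.execute("DELETE FROM page WHERE page_id IN (SELECT page_id FROM url WHERE url_id = 100)")
print(con.execute("SELECT COUNT(*) FROM page").fetchone()[0])  # 0: page gone
print(con.execute("SELECT COUNT(*) FROM url").fetchone()[0])   # 1: URL kept
```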

When Google follows a link, the link is recorded in the link table. If the URL is not in the URL table, it is added to the URL table and submitted to the fetch queue.

When Google fetches a page, it checks whether the robots.txt file has been read and, if so, whether it was read within the last 24 hours. If the cached robots.txt data is more than 24 hours old, Google fetches the robots.txt file again. If a page is blocked by the robots.txt file, Google will not index the page, and will remove it from the index if it already exists.
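That freshness check can be sketched as follows; the 24-hour window is the figure claimed above, and needs_refetch is a hypothetical helper name, not a real API:

```python
import time
import urllib.robotparser

MAX_AGE_SECONDS = 24 * 60 * 60  # re-fetch robots.txt once the cached copy is a day old

def needs_refetch(parser, now=None):
    """True if robots.txt was never read, or the cached copy is older than 24 hours."""
    now = time.time() if now is None else now
    return parser.mtime() == 0 or (now - parser.mtime()) > MAX_AGE_SECONDS

parser = urllib.robotparser.RobotFileParser()
print(needs_refetch(parser))  # True: never fetched yet

parser.parse(["User-agent: *", "Disallow: /private/"])  # parse() records the read time
print(needs_refetch(parser))  # False: freshly read
```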

When Google sees a restriction in robots.txt, the pattern is submitted to a queue for processing. Processing runs each night as a batch job. The pattern is matched against all URLs, and all matching pages are removed from the page table via the URL ID. The URL itself is kept for housekeeping.

Once a page is fetched, it is placed in the page table.

Any link in the link table whose target has not been fetched, is restricted by the robots.txt file, or is broken with a 4xx error is called a dangling link. And while PageRank can be computed for the target pages of dangling links using trust network theory, PageRank cannot be passed on through these pages.

About 6 years ago, Google decided it was wise to include dangling links in the SERPs. This was done when Google redesigned its index and systems to aggressively capture the entire Web. The underlying idea was to present users with valid search results even if the page was restricted from the search engine.

URLs have very little or no semantic value.

Links have some semantic value; however, that value remains small, because semantic indexing prefers longer text and a link cannot function properly as a standalone element. Ordinarily, the semantic value of a link is measured together with the semantic value of the source page (the page containing the link) and the semantic value of the target page.

As a result, a URL pointing to a dangling link's target page cannot rank well. The exception is recently discovered links and pages. As a strategy, Google likes to "taste" recently discovered links and pages in the SERPs by defaulting their PR values high enough for them to be found and tested. Over time, PR and CTR are measured and adjusted to place links and pages where they should be.

See ROBOTS.TXT DISALLOW: 20 years of mistakes to avoid, where the ranking behavior I have described is also discussed.

Listing these links in the SERPs is a mistake, and many have complained about it: it pollutes the SERPs with broken links and with links behind logins or paywalls, for example. Google has not changed this practice. However, the ranking mechanisms filter such links out of the SERPs, which removes them completely.

Do not forget that the indexing engine and the query engine are two different things.

Google recommends using noindex instead, but that is not always possible or practical. I do use noindex; however, for very large automated websites it may be impossible, or at least cumbersome.

I have had a website with millions of pages that I removed from Google's index using the robots.txt file within a few days.

And while Google opposes using the robots.txt file for this and recommends noindex, noindex is a much slower process. Why? Because Google's index uses a TTL-style metric that determines how often Google revisits each page. That can be a long time, up to a year or more.

Using noindex does not remove the URL from the SERPs in the same way the robots.txt file does, but the end result is the same. Noindex is actually no better than the robots.txt file: both produce the same effect, while the robots.txt file delivers results faster and in bulk.

And that is, in part, the point of the robots.txt file. It is common for people to block entire sections of their website, or the whole site, using robots.txt. This is a more widespread practice than adding noindex to pages.

Removing an entire site using the robots.txt file remains the fastest way, even if Google does not like it. Google is not God, nor is its website the New New Testament. As hard as Google tries, it still does not rule the world. Damn close, but not yet.

The assertion that blocking a search engine with robots.txt actually prevents it from seeing a meta noindex tag is utter nonsense and defies logic. You see this argument everywhere. In reality, the two mechanisms are exactly the same, except that one is much faster because of batch processing.

Do not forget that the robots.txt standard was adopted in 1994, while the noindex meta tag had not been adopted, even by Google, until 1997. Originally, removing a page from a search engine meant using the robots.txt file: the page would drop out and stay out for a while. Noindex is only an addition to the already existing process.

Robots.txt remains the number 1 mechanism for restricting what a search engine indexes, and it will probably stay that way for as long as I am alive. (I'd better cross the street with caution; no more skydiving for me!)

Web forms indexing

I am creating wireframes for a web application that includes many forms. I want to create an index page that links to all of the pages, so kindly suggest a suitable template for such an index.

Why does my Drupal 8 view for Solr indexing not show results?

I installed Solr 8.2.0 for Drupal 8.7.6.

My server and index are enabled and the content is indexed. But when I try to search through Views, no results are displayed. Please help me solve this problem.


This is my view:


bot – I run a web crawler on my localhost computer; can my ISP detect it?

I have a 100 GB per month bandwidth package from my ISP. I created a simple web crawler for fun and I run it on my personal computer 24 hours a day, 7 days a week. The crawler uses all of the bandwidth, and I have configured it not to download media files (images, videos, audio files), so that it can make as many requests and collect as many HTML pages as possible.
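The media-skipping behaviour described above can be sketched with a simple URL filter. The extension list is my own illustrative choice, and a real crawler would also check the Content-Type response header:

```python
from urllib.parse import urlparse

# Hypothetical skip list; a real crawler would also check the Content-Type header.
MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".mp4", ".webm", ".mp3", ".wav"}

def should_fetch(url):
    """Fetch only URLs that do not look like media files."""
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in MEDIA_EXTENSIONS)

print(should_fetch("http://example.com/article.html"))  # True: HTML page
print(should_fetch("http://example.com/photo.JPG"))     # False: media file
```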

I do not know how ISPs work. Do they record the number of requests, or just the bandwidth?

Can my ISP detect this? Or will they just see me as a normal user who makes an above-average number of requests?

Indexing – How can I submit my site to other search engines?

This question shows how to force AOL to index your site once it is submitted to Google Webmaster Tools, with all pings and sitemaps configured.

As for DuckDuckGo, it automatically indexes the Web:

No need to; DDG automatically indexes the web, so your website should be listed soon. (DDG also uses Bing/Yahoo results, so you can try submitting your site to them.)


indexing – How to test multiple Sitemap.xml files?

Is there a way to test multiple sitemap.xml files?
Validation works fine and Google accepts all the sub-files, but Yandex's "Server response check" returns "The document does not contain any text".

Judging by the crawl rate and the overall indexing progress, I have the impression that both search engines fail to read the contents of the sitemap files: a large share of the content, about 2/3, sits in "Discovered – currently not indexed" and has never been crawled, and the indexing ratio in Yandex is similarly low.

The website has about 750,000 links in its sitemap files. When I generate 50,000 links per file (about 11 MB), the crawl graph goes up and then falls. With 10,000 links per file, the graph drops much faster and stays at about the same level.

We have run various checks and, technically, everything seems fine, but judging by the performance it is pretty dubious.
Robots.txt gives full access, and so do the robots meta tags.

  • Can anyone suggest a way to find out why "Server response check" returns an error when the file exists?
  • Is there a way to check whether the whole sitemap file structure actually works – that is, whether it is actually read correctly by the search engines?
  • Could this problem be related to settings in the .htaccess file?
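One local sanity check, independent of either search engine, is to parse each sitemap file yourself and count its <loc> entries. This sketch uses Python's standard xml.etree; the sample XML is a placeholder, not the site's real sitemap:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def count_urls(xml_text):
    """Count <loc> entries in a sitemap file; raises ParseError on malformed XML."""
    root = ET.fromstring(xml_text)
    return len(root.findall(f"{SITEMAP_NS}url/{SITEMAP_NS}loc"))

# Placeholder sitemap content; in practice, read each generated file instead.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/page-1</loc></url>
  <url><loc>https://example.com/page-2</loc></url>
</urlset>"""

print(count_urls(sample))  # 2
```

If this parses cleanly and the counts match what the generator produced, the problem is more likely in how the files are served (headers, .htaccess) than in the files themselves.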

Please see the screenshots below.
Location of the Sitemap file:
Link Yandex Server Check:

Thanks in advance

Indexing an external data source with Search API Solr

Is there a way, or good documentation, for indexing an external data source using Search API Solr?

When the Solr module is enabled, adding a server offers several different backends.


What is the difference between the Any Schema Solr backend and the standard Solr backend servers?
And would it be useful to use one of them to index an external (non-Drupal) data source using only these modules?

Selecting one of the backends brings up the configuration shown in the screenshot below:


Will configuring an external Solr host help solve the above problem (indexing an external data source)?

Thank you.

seo – Limit search engines to indexing a single domain when multiple domains share the same web server and document root

I have multiple domains that listen on the same ports and share the same document root. This is a limitation of my hosting provider, which lets me have multiple domains for my GSuite mail routing without using up my limited add-on domains.

The problem is that Google is indexing content under one of these domains: the URL shown in some Google results is not the domain I want.

What options do I have to control this? A noindex directive in the HTML meta tags is not an option, because the document root is shared. Apparently, Google advises against using the robots.txt file when the goal is to hide results from Google. And I do not have shell access to modify the HTTP response headers either.

Edit: I should add that I have deleted the DNS A record. Although this prevents browsing my website via that domain, I am not sure it will ultimately improve the situation in Google.

number theory – Intuition behind the ramification index of a map between smooth curves.

I'm studying the arithmetic of elliptic curves (Silverman) and I am having a hard time understanding the intuition behind the concept of the $\textit{ramification index}$.

In the book, we let $\phi: C_1 \longrightarrow C_2$ be a non-constant map of smooth curves and we let $P \in C_1$. Then we define the ramification index of $\phi$ at $P$, denoted $e_\phi(P)$, as
$$ e_\phi(P) = \text{ord}_P(\phi^{*} t_{\phi(P)}) \text{,} $$
where $t_{\phi(P)} \in K(C_2)$ is a uniformizer at $\phi(P)$.

I am trying to break it down into parts. First of all, $\phi^{*} t_{\phi(P)} = t_{\phi(P)} \circ \phi$.

Now, $\text{ord}_P(\phi^{*} t_{\phi(P)})$ is the maximal $d$ for which $\phi^{*} t_{\phi(P)} \in M_P^d$, i.e. the maximal $d$ for which we can express $t_{\phi(P)} \circ \phi$ as a product of maps $f_1 \cdots f_d$ such that $f_i(P) = 0$ for every $i$, which means $P$ would be a zero of multiplicity $d$ of $t_{\phi(P)} \circ \phi$. It is easy to see that $(t_{\phi(P)} \circ \phi)(P) = 0$, and so necessarily $(t_{\phi(P)} \circ \phi) \in M_P^d$ for some $d \geq 1$.

The book defines $\phi: C_1 \longrightarrow C_2$ to be unramified at $P$ if $e_\phi(P) = 1$, which means that $(t_{\phi(P)} \circ \phi) \in M_P^1$ only; in other words, we cannot express $(t_{\phi(P)} \circ \phi)$ as a product of maps with $P$ being a zero of each of them. In all, what I understand from this is that $P$ is a zero of multiplicity $1$ of $(t_{\phi(P)} \circ \phi)$.

If $e_\phi(P) > 1$, we say that $\phi$ ramifies at $P$, which means we can split $(t_{\phi(P)} \circ \phi)$ into a product of maps, each of them having $P$ as a zero. My question is: what does all this mean? Is there a geometric intuition behind this definition? I "understand" the technical details of the definition, but I do not understand why we define this concept, and why we define it in this way.

The book follows with an example: considering the map $\phi: \mathbb{P}^1 \longrightarrow \mathbb{P}^1$, $\phi((X,Y)) = (X^3(X-Y)^2, Y^5)$, it says that $\phi$ is ramified at the points $(0,1)$ and $(1,1)$, but I'm not sure what that really means.
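For what it is worth, the ramification at those two points can be computed directly in an affine chart; the following computation is my own, not the book's:

```latex
In the affine chart $t = X/Y$ on the source (and $s = X/Y$ on the target),
the map becomes
$$ \phi(t) = t^3 (t-1)^2 . $$
Since $\phi(0) = \phi(1) = 0$, both $(0,1)$ and $(1,1)$ map to $(0,1)$,
where $s$ itself is a uniformizer. Hence
$$ \phi^{*} s = t^3 (t-1)^2 , $$
which has a zero of order $3$ at $t = 0$ and (writing $u = t-1$, so that
$\phi^{*} s = (1+u)^3 u^2$) a zero of order $2$ at $u = 0$. Therefore
$$ e_\phi((0,1)) = 3 , \qquad e_\phi((1,1)) = 2 , $$
both greater than $1$, which is what "ramified at $(0,1)$ and $(1,1)$" means:
near those points $\phi$ is locally $d$-to-one with $d > 1$, like $z \mapsto z^d$.
```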

If anyone could explain the intuition behind this definition, I would be very grateful.

google – Index coverage status says "Submitted and indexed", except that it is not

A few months ago, I created a new property in Google Search Console. We all know how it goes: Google takes its time, and I waited patiently, checking from time to time, until the status of my sites was updated to:

Submitted and indexed

Great, I thought, and double-checked by typing a site: query into Google. However, no search entries appeared. I then triple-checked by typing the URL without site: and by typing in various keywords. No entries appeared.

Then I checked whether the pages were blocked by robots.txt, which of course makes no sense anyway, since the pages are crawled and "indexed" (at least their status says so).

To be 100% sure, I also looked at the URL inspection tool and here is what it revealed:

Well, it also said that the page was indexed, mobile friendly, and so on. But one thing was off: no sitemap was listed. That makes it even stranger, since I did submit a sitemap and all pages are valid:

On the same page, it also confirmed again that my pages should be indexed:

(Do not worry about the excluded pages; they have nothing to do with this.)

So how is it that all my subpages, as well as my main page (that is, all of the pages), are reported as indexed, yet do not actually appear in the index?