Removing a property from Google Search Console only removes the website from Search Console; it does not remove the site from Google's index.
I am not sure what your goal is. However, you can use the robots.txt file to remove your website from Google only, for example by using …
… or from all search engines.
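A minimal sketch of both cases, with Googlebot being Google's crawler name and Disallow: / blocking the entire site:

    # Remove the site from Google only
    User-agent: Googlebot
    Disallow: /

Or, to remove it from all compliant search engines:

    # Remove the site from every crawler that honors robots.txt
    User-agent: *
    Disallow: /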
Each search engine has its own bot name; Bing's, for example, is bingbot.
Robots.txt is a simple text file at the root of your website. It should be available at example.com/robots.txt or www.example.com/robots.txt.
You can read about the robots.txt file at robotstxt.org.
You can find a list of the most important search engine bot / spider names by searching for the top search engine bot names.
Using the robots.txt file with the appropriate bot name is usually the fastest way to remove a website from a search engine. Once the search engine has read the robots.txt file, the website will be removed within about 2 days or so, unless things have changed recently. Google has had the habit of removing sites within 1-2 days. Each search engine is different and its responsiveness can vary, but be aware that the major search engines are quite responsive.
In reply to the comments:
Robots.txt is indeed used by search engines to find out which pages to index. This is well known and understood and has been a de facto standard since 1994.
How Google works
Google indexes links, domains, URLs and page content among other data.
The link table is used to discover new sites and pages and to rank pages using the PageRank algorithm, which is based on the trusted network model.
The URL table is used as a join table between links and pages.
If you think of it in terms of a SQL database schema, it would look roughly like the sketches below. The column names are purely illustrative, mine rather than Google's actual schema.
The link table would be something like:
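    -- Illustrative sketch only; these column names are mine, not Google's.
    CREATE TABLE link (
        link_id        INTEGER PRIMARY KEY,
        source_url_id  INTEGER,    -- URL of the page the link was found on
        target_url_id  INTEGER,    -- URL the link points to
        anchor_text    TEXT
    );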
The domain table would be something like:
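    -- Illustrative sketch only.
    CREATE TABLE domain (
        domain_id  INTEGER PRIMARY KEY,
        domain     TEXT             -- e.g. example.com
    );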
The URL table would be something like:
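    -- Illustrative sketch only; this is the join point between domains, links, and pages.
    CREATE TABLE url (
        url_id     INTEGER PRIMARY KEY,
        domain_id  INTEGER,         -- references domain.domain_id
        url        TEXT
    );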
The page table would be something like the following, including the title of the page and the description of the page:
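    -- Illustrative sketch only.
    CREATE TABLE page (
        page_id      INTEGER PRIMARY KEY,
        url_id       INTEGER,       -- references url.url_id
        title        TEXT,          -- title of the page
        description  TEXT,          -- description of the page
        content      TEXT
    );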
The URL table is a join table between domains, links, and pages.
The page index is used to understand the content of individual pages and to index them. Indexing is much more complicated than a simple SQL table, but the illustration still holds.
When Google follows a link, the link is placed in the link table. If the URL is not already in the URL table, it is added to the URL table and submitted to the fetch queue.
When Google fetches a page, it checks whether the robots.txt file has been read and, if so, whether it was read within the last 24 hours. If the cached robots.txt data is more than 24 hours old, Google fetches the robots.txt file again. If a page is restricted by the robots.txt file, Google will not index the page and will remove it from the index if it already exists.
When Google sees a restriction in robots.txt, the URL is submitted to a queue for processing. That processing runs each night as a batch job. The pattern is matched against all of the URLs, and the matching pages are removed from the page table using the URL ID. The URL itself is kept for housekeeping.
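Continuing the illustrative schema above, that nightly batch step would amount to something like this (a sketch only, with a placeholder pattern, not Google's actual implementation):

    -- Remove indexed pages whose URLs match a disallowed pattern,
    -- while keeping the url rows themselves for housekeeping.
    DELETE FROM page
    WHERE url_id IN (
        SELECT url_id
        FROM url
        WHERE url LIKE 'http://example.com/private/%'   -- pattern taken from the robots.txt Disallow rule
    );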
Once the page is fetched, it is placed in the page table.
Any link in the link table whose target has not been fetched, or is restricted by the robots.txt file, or is broken with a 4xx error, is called a dangling link. And while PageRank (PR) can be computed for the target pages of dangling links using trusted network theory, PR cannot be passed on through these pages.
About 6 years ago, Google decided it was wise to include dangling links in the SERPs. This was done when Google redesigned its index and systems to aggressively capture the entire web. The underlying idea was to present users with valid search results even if the page itself was restricted from the search engine.
URLs have very little or no semantic value.
Links have some semantic value; however, that value remains small because semantic indexing prefers more text and cannot work properly on a link as a standalone element. Ordinarily, the semantic value of a link is measured together with the semantic value of the source page (the page with the link) and the semantic value of the target page.
As a result, no URL pointing to a dangling link's target page can rank well. The exception is newly discovered links and pages. As a strategy, Google likes to "taste" newly discovered links and pages in the SERPs by defaulting their PR values high enough to be found and tested. Over time, PR and CTR are measured and adjusted to place the links and pages where they should sit.
See ROBOTS.TXT DISALLOW: 20 years of mistakes to avoid, where ranking as I have described it is also discussed.
Listing these links in the SERPs is wrong, and many have complained about it. It pollutes the SERPs with broken links and links behind logins or paywalls, for example. Google has not changed this practice. However, the ranking mechanisms filter these links within the SERPs, which effectively removes them.
Do not forget that the indexing engine and the query engine are two different things.
Google recommends using noindex on pages, which is not always possible or practical. I do use noindex; however, for very large websites built using automation, this may be impossible or at least cumbersome.
I have had a website with millions of pages that I removed from Google's index using the robots.txt file within a few days.
And while Google opposes the use of the robots.txt file for this and prefers the use of noindex, noindex is a much slower process. Why? Because Google uses a TTL-style metric in its index that determines how often Google revisits a page. That can be a long time, up to a year or more.
Using noindex does not remove the URL from the SERPs in the same way as the robots.txt file, but the end result remains the same. It turns out that noindex is actually no better than using the robots.txt file. Both produce the same effect, while the robots.txt file delivers the result faster and in bulk.
And this is, in part, the point of the robots.txt file. It is generally accepted that people block entire sections of their website using robots.txt, or block robots from the site completely. This is a far more common practice than adding noindex to pages.
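For illustration, blocking just one section of a site from every crawler looks like this (the /private/ path is only a placeholder); blocking the whole site uses Disallow: / as shown earlier:

    User-agent: *
    Disallow: /private/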
Deleting an entire site using the robots.txt file remains the fastest way, even if Google does not like it. Google is not God, nor is its website the New New Testament. As hard as Google tries, it still does not rule the world. Damn close, but not yet.
The assertion that blocking a search engine with robots.txt actually prevents it from seeing a meta noindex tag is utter nonsense and defies logic. You see this argument everywhere. In reality, the two mechanisms end up doing exactly the same thing, except that one is much faster because of batch processing.
Do not forget that the robots.txt standard was adopted in 1994, while the noindex meta tag had not yet been adopted, even by Google, in 1997. In the beginning, removing a page from Google or any other search engine meant using the robots.txt file, and that remained the case for quite a while. Noindex is only an addition to the already existing process.
Robots.txt remains the number one mechanism for restricting what a search engine indexes, and it will probably remain so for as long as I am alive. (I had better cross the street with caution; no more skydiving for me!)