8 – Drush or Drupal console command to list modules with security updates?

Is there a command for Drupal 8.x that lists the available updates?

I've added forms_steps 1.1, which has an SA (security advisory), and the web user interface reports that a security update is required, telling me to update to 1.2.

(Screenshot: the web user interface indicating that a security update is required.)

I have tried drush pm:security, but it reports that there are no pending security updates.

root@d568732a8640:/app# drush pm:security
 [success] There are no outstanding security updates for Drupal projects.

I may have overestimated what this command can do, so I'm looking for alternatives to list both regular updates and security updates from the console. My goal is to add a scheduled job to our CI that reports on them.
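
As a sketch of what the CI job could run (assuming a Composer-managed Drupal 8 site; the --format option and the composer check are what I would expect to work, not a confirmed recipe):

# Security advisories known to Drush, in a machine-readable form for CI
drush pm:security --format=json

# All outdated drupal/* packages as Composer sees them, security or not
composer outdated "drupal/*"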

schema.org – Google Search Console: a missing identifier for breadcrumbs?

This morning I've had a lot of errors in Search Console telling me that none of my breadcrumbs contain an id. That is true, but it has never been required before. If I add an id property and try to validate it with the structured data testing tool, it tells me that id is not a supported field.

I've looked at Google's documentation for breadcrumbs and, if you check the markup examples they give, none of them contain an id either. The schema.org specification does not call for an id, Google's documentation does not mention support for id, and Google's own structured data testing tool reports id as simultaneously required and unsupported. What is the problem here? Does anyone have an example of how Google wants us to implement this?
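
For reference, a minimal schema.org BreadcrumbList in JSON-LD, in the shape Google's breadcrumb documentation shows (the example.com URLs and names are placeholders):

{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "name": "Books",
      "item": "https://example.com/books"
    },
    {
      "@type": "ListItem",
      "position": 2,
      "name": "Science Fiction",
      "item": "https://example.com/books/sciencefiction"
    }
  ]
}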

Clear the command history of the Firefox JavaScript console?

How can I clear the command history of the Firefox JavaScript console? Clearing all history, cookies, cache, etc. does not clear the JavaScript console's command history.

json – how to create an AWS IAM role with console access and SAML

I've read the AWS documentation here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_saml.html

I am able to create a role with console access from the AWS console. How can I do the same thing using AWS CloudFormation?

What I want to reproduce in the CloudFormation template:

I've created the template below, but the role does not work; it does not seem to grant console access.

{
  "Parameters": {
    "SAMLID": {
      "Type": "String",
      "Description": "SAML IDENTITY PROVIDER ARN"
    }
  },
  "Resources": {
    "FullAdminXME": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "Description": "SAML Role for Azure AD SSO",
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Federated": { "Ref": "SAMLID" }
              },
              "Action": "sts:AssumeRoleWithSAML",
              "Condition": {
                "StringEquals": {
                  "SAML:aud": "https://signin.aws.amazon.com/saml"
                }
              }
            }
          ]
        },
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/AdministratorAccess"
        ]
      }
    }
  }
}
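
A sketch of how such a template could be deployed once it parses (the stack name, file name, and provider ARN below are placeholders; creating IAM resources requires the CAPABILITY_IAM acknowledgement):

aws cloudformation deploy \
  --template-file saml-admin-role.json \
  --stack-name saml-admin-role \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides SAMLID=arn:aws:iam::123456789012:saml-provider/AzureAD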

google search console – Getting the errors "Text too small to read", "Clickable elements too close together", "Content wider than the screen"

I receive the following errors in Google Search Console in the mobile usability section:
1. Text too small to read
2. Clickable elements too close together
3. Content wider than the screen

I've had the same errors during the last month (for another page) and they were corrected automatically. Has anyone encountered a similar problem?

indexing – Will removing a property from Google Search Console remove the site from Google's index?

Removing a property from Google Search Console only removes the website from Search Console; it does not remove it from Google's index.

I am not sure of your goal. However, you can use the robots.txt file to remove your website from Google, for example by using…

User-agent: Googlebot
Disallow: /

… or all search engines using

User-agent: *
Disallow: /

Each search engine has its own bot name, for example, Bing is bingbot.

User-agent: bingbot
Disallow: /

Robots.txt is a simple text file at the root of your website. It should be available at example.com/robots.txt or www.example.com/robots.txt.

You can read about the robots.txt file at robotstxt.org.

You will find the most important search engine bot/spider names in any list of top search engine bot names.

Using the robots.txt file with the appropriate bot name is usually the fastest way to remove a website from a search engine. Once the search engine has read the robots.txt file, the website will be removed within about 2 days or so, unless things have changed recently. Google used to remove sites within 1-2 days. Each search engine is different and their responsiveness varies, but be aware that the major search engines are quite responsive.

In reply to the comments:

Robots.txt is indeed used by search engines to find out which pages to index. This is well known and understood and has been a de facto standard since 1994.

How Google works

Google indexes links, domains, URLs and page content among other data.

The link table is used to discover new sites and pages and to categorize pages using the PageRank algorithm based on the trusted network model.

The URL table is used as a join table between links and pages.

If you are familiar with SQL database schemas, the tables would look something like this.

The link table would be something like:
linkID
linkText
linkSourceURLID
linkTargetURLID

The domain table would be something like:
domainID
urlID
domainName
domainIP
domainRegistrar
domainRegistrantName

The URL table would be something like:
urlID
urlURL

The page table would be something like:
pageID
urlID
pageTitle
pageDescription
pageHTML

The URL table is a join table between domains, links, and pages.

The page index is used to understand the content and index individual pages. The indexing is much more complicated than a simple SQL table, but the illustration is still valid.
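
To make the illustration concrete, a rough SQL sketch of those tables might look like this (purely illustrative, mirroring the column lists above; it is not Google's actual schema):

-- Illustrative only: the tables described above as a toy SQL schema.
CREATE TABLE url (
    urlID   INTEGER PRIMARY KEY,
    urlURL  TEXT
);

CREATE TABLE link (
    linkID          INTEGER PRIMARY KEY,
    linkText        TEXT,
    linkSourceURLID INTEGER REFERENCES url(urlID),
    linkTargetURLID INTEGER REFERENCES url(urlID)
);

CREATE TABLE domain (
    domainID             INTEGER PRIMARY KEY,
    urlID                INTEGER REFERENCES url(urlID),
    domainName           TEXT,
    domainIP             TEXT,
    domainRegistrar      TEXT,
    domainRegistrantName TEXT
);

CREATE TABLE page (
    pageID          INTEGER PRIMARY KEY,
    urlID           INTEGER REFERENCES url(urlID),
    pageTitle       TEXT,
    pageDescription TEXT,
    pageHTML        TEXT
);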

When Google follows a link, it is placed in the link table. If the URL is not in the URL table, it is added to the URL table and submitted to the fetch queue.

When Google fetches a page, it checks whether the robots.txt file has been read and, if so, whether it was read within the last 24 hours. If the cached robots.txt data is more than 24 hours old, Google fetches the robots.txt file again. If a page is restricted by robots.txt, Google will not index the page, nor will it immediately remove the page from the index if it already exists there.

When Google sees a restriction in robots.txt, the restriction is submitted to a queue for processing. Processing begins each night as a batch job. The pattern is matched against all URLs, and the matching pages are removed from the page table using the URL ID. The URL itself is kept for housekeeping.

Once the page is retrieved, the page is placed in the page table.

Any link in the link table whose target has not been fetched, is restricted by robots.txt, or is broken with a 4xx error is called a dangling link. And while PageRank (PR) can be computed for the target pages of dangling links using trust network theory, PR cannot be passed on through those pages.

About 6 years ago, Google decided it was wise to include dangling links in the SERPs. This happened when Google redesigned its index and systems to aggressively capture the entire web. The underlying idea was to present users with valid search results even if a page was restricted from the search engine.

URLs have very little or no semantic value.

Links have some semantic value; however, that value remains small because semantic indexing prefers more text, and a link cannot function properly as a standalone element. Ordinarily, the semantic value of a link is measured together with the semantic value of the source page (the page containing the link) and the semantic value of the target page.

As a result, no URL pointing to a dangling-link target page can rank well. The exception is recently discovered links and pages. As a strategy, Google likes to "taste" recently discovered links and pages within the SERPs by defaulting their PR values high enough for them to be found and tested in the SERPs. Over time, PR and CTR are measured and adjusted to place links and pages where they should sit.

See "Robots.txt Disallow: 20 Years of Mistakes to Avoid", where the ranking process I described is also discussed.

Listing these links in the SERPs is wrong, and many have complained about it. It pollutes the SERPs with broken links and with links behind logins or paywalls, for example. Google has not changed this practice. However, the ranking mechanisms filter these links out of the SERPs, which removes them completely.

Do not forget that the indexing engine and the query engine are two different things.

Google recommends using noindex for pages, which is not always possible or practical. I use noindex myself; however, for very large websites built with automation, this may be impossible or at least cumbersome.

I've had a website with millions of pages that I removed from Google's index using the robots.txt file within a few days.

And while Google argues against using the robots.txt file and in favor of noindex, noindex is a much slower process. Why? Because Google keeps a TTL-style metric in its index that determines how often Google revisits a page. That can be a long time, up to a year or more.

Using noindex does not remove a URL from the SERPs any differently than the robots.txt file does; the end result is the same. It turns out that noindex is actually no better than using the robots.txt file. Both produce the same effect, while the robots.txt file delivers the result faster and in bulk.

And that is, in part, the point of the robots.txt file. It is generally accepted that people block entire sections of their website using robots.txt, or block robots from the entire site. This is a more common practice than adding noindex to pages.

Removing an entire site with the robots.txt file remains the fastest way, even if Google does not like it. Google is not God, nor is its website the new New Testament. As hard as Google tries, it still does not rule the world. Damn close, but not yet.

The assertion that blocking a search engine with robots.txt actually prevents it from seeing a meta noindex tag is utter nonsense and defies logic. You see this argument everywhere. In reality, the two mechanisms end up exactly the same, except that one is much faster because of batch processing.

Do not forget that the robots.txt standard was adopted in 1994, while the noindex meta tag had not yet been adopted, even by Google, in 1997. In the beginning, removing a page from a search engine meant using the robots.txt file, full stop, and it stayed that way for a while. Noindex is only an addition to the already existing process.

Robots.txt remains the number one mechanism for restricting what a search engine indexes, and it will probably stay that way for as long as I'm alive. (I'd better cross the street with caution; no more skydiving for me!)

How to change Google's crawl rate in the new Google Search Console?

Google has now completely removed the old Search Console (https://searchengineland.com/the-old-google-search-console-is-no-longer-available-321650), where you could request a custom crawl rate. In the new Search Console there does not seem to be any way to change the crawl rate. You can click "Crawl stats" under "Legacy tools and reports", but I do not get any option to change the crawl rate. Since Google removed the old Search Console 3 days ago, has anyone been able to request a custom crawl rate?

Quiz game using the C# console

For this iteration of my game engine class, I have to use a class to create a quiz game. For example, I use arrays to store my questions and answers. Here's what I have so far in QuestionAnswer.cs:

using System;

public class QuestionAnswer
{
    // Parallel arrays: answer[i] is the answer to question[i].
    private string[] question = new string[] { "What is 9-8?", "What is 1+1?", "What is 0+3?", "What is 10-6?", "What is 5+0?", "What is 7-1?", "What is 10-3?", "What is 42-34?", "What is 3*3?", "What is 20/2?" };
    private int[] answer = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

    // Parameterless constructor keeps the hard-coded arrays above.
    public QuestionAnswer() { }

    // Optional constructor for supplying a different question/answer set.
    public QuestionAnswer(string[] question, int[] answer)
    {
        this.question = question;
        this.answer = answer;
    }

    public string getQuestion(int index)
    {
        return question[index];
    }

    public int getAnswer(int index)
    {
        return answer[index];
    }
}

And I have to reference that in my main Program.cs file:

using System;

namespace ConsoleApp2
{
    class Program
    {
        static void Main(string[] args)
        {
            Random rndQA = new Random();
            // Upper bound is exclusive, so Next(0, 10) covers all ten indexes 0-9.
            int rndInt = rndQA.Next(0, 10);

            QuestionAnswer myQA = new QuestionAnswer();

            Console.WriteLine(myQA.getQuestion(rndInt));
            Console.WriteLine(myQA.getAnswer(rndInt));
            Console.ReadLine();
        }
    }
}

The main thing I want to do is use a random array index for the question and answer strings, then parse my Console.ReadLine input with Int32.Parse, compare it to the answer array, and give a point for each correct answer. My main problem at the moment is that I do not know how to reference the array index while it lives in its own separate class, so I'm stuck in this perpetual code hell. How can I use a random index number from my main program while using the arrays in my QuestionAnswer class?

I've tried to find a video on this specific issue, but every video dealing with it in C# uses forms and visuals, while our teacher is trying to teach us how to do it as a console application.
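
A minimal sketch of the loop described above, assuming the indexed getQuestion/getAnswer accessors shown earlier (it uses Int32.TryParse instead of Int32.Parse so that non-numeric input does not throw; the round count and variable names are placeholders):

using System;

namespace ConsoleApp2
{
    class QuizSketch
    {
        static void RunQuiz()
        {
            QuestionAnswer qa = new QuestionAnswer();
            Random rng = new Random();
            int score = 0;

            for (int round = 0; round < 5; round++)
            {
                // Random question index 0-9 (upper bound is exclusive).
                int index = rng.Next(0, 10);
                Console.WriteLine(qa.getQuestion(index));

                // TryParse avoids an exception when the input is not a number.
                if (Int32.TryParse(Console.ReadLine(), out int guess) &&
                    guess == qa.getAnswer(index))
                {
                    score++;  // one point per correct answer
                }
            }

            Console.WriteLine("Score: " + score);
        }
    }
}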

.net – Modern methods to run a console application on a schedule

I have a console application that prepares data in a SQL Server database, exports a table, encodes and gzips it, and uploads it to an AWS S3 bucket. The application is complete and works as expected, but I need to schedule it to run daily (weekdays) at night…

Can you give me an overview of the current "trend" in scheduling console application runs? Obviously there are many ways, and the Windows Task Scheduler seems to be the most obvious one. However, I would like to know whether that is still the current trend or whether there is a "new", "better" way of scheduling applications to run.
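
For context, the Task Scheduler route I'm weighing would look roughly like this (a sketch; the task name and executable path are placeholders):

schtasks /create /tn "NightlyS3Export" ^
         /tr "C:\apps\MyExporter\MyExporter.exe" ^
         /sc weekly /d MON,TUE,WED,THU,FRI /st 02:00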

In the end it does not really matter, but I just want an overview of the current trends in 2019 and of current industry practice.

Thanks in advance 🙂

backlinks – Why do external links on the Google Search Console vary so much from one month to the next?

I keep an eye on the external links to my blog (technical tips) that Google reports in its Search Console dashboard for webmasters.

According to this dashboard, the number of external links to my website varies wildly. I normally publish articles every week; nothing that would attract a lot of spam.

Statistics:

  • May: 2347 external links
  • June: 1789
  • July: 1708
  • August: 1185

How should I read this report? Or what should I do with this information? Why would the number of external links decrease continually?