security – A more efficient way to block the constant barrage of XHR ad-tracking requests?

I currently have a set of dynamic rules configured in uBlock Origin to block various ad-tracking sites, and they deal with a steady stream of requests, as observed in the logger. My question is: is there a more efficient way to go about this?

From what I've understood, the dynamic rules take precedence over the "My filters" rules, but they both amount to the same thing. Is there an earlier point at which I could cut these requests off, or a trick to let a request through once and then block it for good? Or maybe even kill it on arrival, so to speak?
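For reference, this is the sort of thing I currently have. A dynamic rule in uBlock Origin's "My rules" pane takes the form "source destination type action"; the hostnames below are just illustrative examples, not my actual rules:

* doubleclick.net * block
* google-analytics.com * block

The rough static-filter equivalents in "My filters" would be:

||doubleclick.net^
||google-analytics.com^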

My apologies for the poorly worded question. English is my mother tongue, so I do not really have a valid excuse.

Design Guide – The Minimum Steps Needed for an Effective and Efficient Website Rebuilding Process?

I just started working for a web development agency. We work mainly with the government. We want to position ourselves in this market with a professional, creative and functional website. We have an excellent blog and wish to highlight our content and white papers, as well as our identity, vision and services.

We have just started the project discovery phase and are mapping out user journeys. I'm working on it in a project-support role and partly as a junior UX designer, alongside the contract designer, who is more of a UI designer than a UX designer and is learning UX as the project goes along. BUT we have a fairly strict Christmas deadline, by which we would like to have the final designs signed off so that we can start developing the site in January.

Our current plan / schedule is as follows:

  • initial planning, including issues with and wishes for our existing website (completed)
  • persona planning (done)
  • user journey mapping and creation of a value journey canvas for our 4 personas (in progress)
  • content analysis
  • IA card sorting, hacking, revising and iterating
  • component mapping and content architecture
  • sketching session for new components
  • wireframing all vanilla pages
  • wireframing custom pages
  • prototyping key user journeys for client testing
  • client test script
  • client testing sessions
  • preparing the client-insights deck for presentation
  • client-insights presentation
  • refining prototypes / wireframes
  • UI: theme and brand
  • UI review

Which is a lot, and it certainly will not be done by Christmas this year. So my question is: what is the absolute minimum process required to redesign a website?

Time complexity – What is the most efficient algorithm for computing a polynomial's coefficients from its roots?

Given $n$ roots $x_1, x_2, \dotsc, x_n$, the corresponding monic polynomial is $y = (x-x_1)(x-x_2) \dotsm (x-x_n) = \prod_{i=1}^{n} (x - x_i)$. To obtain the coefficients (that is to say, to write $y = \sum_{i=0}^{n} a_i x^i$), a naive expansion requires $O(n^2)$ steps.
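For concreteness, here is a minimal sketch of that quadratic expansion, maintaining the coefficients of the partial product and multiplying in one linear factor at a time. The function name is mine, and arithmetic is done modulo a small prime $q$ so that intermediate products fit in 64 bits:

#include <stdio.h>

/* Naive O(n^2) expansion over F_q: after step i, coef[0..i] holds the
   coefficients of (x - x_1)...(x - x_i). Multiplying by (x - r) maps
   c_j -> c_{j-1} - r * c_j  (and c_0 -> -r * c_0). */
void poly_from_roots(const long long *roots, int n, long long *coef, long long q) {
    coef[0] = 1;                                 /* empty product: P(x) = 1 */
    for (int i = 0; i < n; i++) {
        long long r = ((roots[i] % q) + q) % q;  /* reduce the root mod q */
        coef[i + 1] = 0;                         /* slot for the new leading term */
        for (int j = i + 1; j > 0; j--)          /* highest degree first */
            coef[j] = (coef[j - 1] + (q - r) * coef[j]) % q;
        coef[0] = (coef[0] * (q - r)) % q;
    }
}

int main(void) {
    long long roots[] = {1, 2, 3}, coef[4], q = 1000003;
    poly_from_roots(roots, 3, coef, q);
    /* (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6; negative values print as q - |a_i| */
    for (int i = 0; i <= 3; i++)
        printf("a_%d = %lld\n", i, coef[i]);
    return 0;
}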

Alternatively, if $x_1, x_2, \dotsc, x_n$ are pairwise distinct, the problem is equivalent to a polynomial interpolation through the $n$ points $(x_1, 0), (x_2, 0), \dotsc, (x_n, 0)$. Fast polynomial interpolation algorithms run in $O(n \log^2 n)$ time.
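For reference, the same bound can also be reached without any distinctness assumption by the standard divide-and-conquer product tree: split the roots in half, expand each half recursively, and multiply the two halves with an FFT-based polynomial multiplication (this is textbook material, not something claimed in the question itself):

\prod_{i=1}^{n} (x - x_i) = \Bigl( \prod_{i=1}^{\lfloor n/2 \rfloor} (x - x_i) \Bigr) \cdot \Bigl( \prod_{i=\lfloor n/2 \rfloor + 1}^{n} (x - x_i) \Bigr),
\qquad
T(n) = 2\,T(n/2) + O(n \log n) = O(n \log^2 n).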

I want to ask: is there an algorithm more efficient than $O(n^2)$, even if there are duplicate values among $\{x_i\}$? If it helps, we can assume the polynomial is over a prime finite field, that is to say $x_i \in \mathbf{F}_q$.

ai – The most efficient way to recalculate an enemy's A* path on the fly?

I'm working on implementing a more robust pathfinding algorithm for enemies in my top-down shooter. A* works, but I now have to decide when to recalculate the path.

There are some situations where the path will already be calculated:

  1. The enemy sees the player.
  2. The enemy hears a sound.

I suppose the enemy will have to recalculate continually, to make sure it is heading for the player rather than wandering toward the goal of its last path even after the player has moved. I also anticipate that enemies will initially head only toward the last sound they heard, or the last place they saw the player if the player is no longer visible, and keep moving until they re-spot the player or hear a new sound.

However, while they actively engage the player in line of sight and head toward him, how do I determine when to recalculate their path? I already have some ideas, but I'm not sure which would be most efficient, or whether there is a better way.

  1. Using the grid coordinates corresponding to the map tiles: recalculate if the player has moved one or more grid squares away from the position the current path was computed for, determined by the squared distance (x2 - x1)^2 + (y2 - y1)^2 (see the sketch after this list). If this method is efficient enough, should each enemy perform this check during its update phase? Or should there be some kind of "notify" implementation? (That would be a fair amount of work, because I do not have a signal / event system set up.)
  2. Recalculate if the sum of the X and Y offsets between the enemy's grid coordinates and the player's grid coordinates grows beyond some threshold.
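Here is a minimal sketch of option 1, assuming a tile-based grid; the struct and function names are placeholders of mine, not from my actual code:

#include <stdbool.h>
#include <stdio.h>

typedef struct { int x, y; } GridPos;

/* Recalculate only when the player has drifted at least `threshold`
   tiles away from the position the current path was computed toward.
   Comparing squared distances avoids a sqrt() per enemy per update. */
bool should_repath(GridPos path_target, GridPos player, int threshold) {
    int dx = player.x - path_target.x;
    int dy = player.y - path_target.y;
    return dx * dx + dy * dy >= threshold * threshold;
}

int main(void) {
    GridPos target = {10, 10}, player = {12, 11};
    printf("repath: %d\n", should_repath(target, player, 2)); /* prints 1 */
    return 0;
}

Each enemy would run this cheap check in its update phase against the player position its current path was computed toward; a short cooldown on top of it would cap the worst-case number of A* calls per frame when many enemies are active.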

Does anyone have ideas? A* is not exactly a cheap operation, so I want to limit the number of calculations I make, especially when there are a lot of enemies on the map.

matrices – Does the multiplication of an integer matrix with its transpose, (A^T)A, admit a parallel algorithm more efficient than just exploiting symmetry?

Multiplying an integer matrix with its transpose, $(A^T)A$, gives a symmetric matrix, so only half of it needs to be computed. Moreover, the formula for a resulting element, $r_{ik} = \sum_j (A^T)_{ij} \, a_{jk}$, reduces to a simple dot product of two columns of $A$: $r_{ik} = \sum_j a_{ji} \, a_{jk}$. If I understand correctly, this last observation buys no efficiency; the formulas are a little nicer, nothing more. So there are $n(n+1)/2$ distinct entries to compute, each a dot product costing $n$ multiplications and $n-1$ additions. Parallelization can improve the speed by simple division of labour, but I think there is no further sophistication available to improve on this naive algorithm. Or is there a more sophisticated algorithm after all?
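As a baseline, here is a minimal sketch of the symmetry-exploiting computation described above (square matrix, C99 variable-length array parameters; the function name is mine):

/* R = (A^T)A for a square n x n integer matrix, computing only the upper
   triangle (k >= i) and mirroring it: R[i][k] = sum_j A[j][i] * A[j][k],
   i.e. the dot product of columns i and k of A. The outer loop has no
   cross-iteration dependencies, so it parallelizes by simple division
   of labour (e.g. one OpenMP pragma). */
void ata(int n, long long A[n][n], long long R[n][n]) {
    for (int i = 0; i < n; i++)
        for (int k = i; k < n; k++) {
            long long s = 0;
            for (int j = 0; j < n; j++)
                s += A[j][i] * A[j][k];
            R[i][k] = s;
            R[k][i] = s;   /* mirror into the lower triangle */
        }
}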

I am aware of https://math.stackexchange.com/questions/158219/is-a-matrix-multiplied-with-its-transpose-something-special but this is not a discussion about the algorithms.

There is also https://en.wikipedia.org/wiki/Strassen_algorithm – so maybe the sophistication I'm looking for amounts to applying Strassen's algorithm to compute only half of the resulting matrix? That would be using symmetry again. But can I take advantage of the fact that A enters on both sides of the multiplication?

javascript – How to make this block of code more attractive / is there a more efficient way to do it? (React)

I was just wondering how I could make this more efficient / more visually appealing to the user. Any help is really appreciated!

switch (this.state.selected) {
  case "all":
    styled = this.state.allKeywords.map(keyword =>
      // the JSX element here was lost when the question was formatted;
      // <Keyword> is a placeholder for whatever was originally rendered
      <Keyword key={keyword} text={keyword} />
    );
    break;
  case "positive":
    styled = this.state.positive.map(keyword =>
      <Keyword key={keyword} text={keyword} />
    );
    break;
  case "negative":
    styled = this.state.negative.map(keyword =>
      <Keyword key={keyword} text={keyword} />
    );
    break;
  default:
    console.log("No keyword list mapped");
}

security – How to make my web application, coded in pure PHP, as efficient as an application developed with a PHP framework?

I am on the last steps of a small web project, in which I used PHP for the back end. The database used is MySQL, but I plan to make the application compatible with Oracle and SQL Server as well (I might also need advice on this).

I've heard about PHP frameworks that ship with ready-made components to improve security, database query management, routing, and so on.

Now, instead of restarting my application from scratch, I have decided to adapt my existing code so that my application works as well as a web application developed with a framework such as Symfony or Laravel.

I am convinced that these frameworks do things better than me and I would like to know what exactly, to be able to improve certain aspects of my application such as:

  • security
  • authentication
  • database management and queries
  • routing
  • etc.

My web application needs to be deployed in a corporate environment. It must therefore meet the most stringent requirements for web standards.

Might a blockchain that merges mining forks be more energy efficient?

I'm new to blockchain technology, so I'm sorry if I use the wrong terminology. This is not a Bitcoin-specific question, but a general blockchain question.

Imagine a theoretical blockchain that uses progressive proof of work: instead of looking for a single nonce that gives a hash smaller than the target, you have to find 10 nonces that each give a hash smaller than target * 10. (Edit: a more complicated scheme can be used, for example the first hash covers the block header and the first nonce, the second hash covers the block header, the second nonce and the previous hash, and so on, so that the hashes cannot be computed in parallel.)
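To make the scheme concrete, here is a minimal sketch of that chained search. The structure is the point, not the hash: a toy 64-bit FNV-1a stands in for a real cryptographic hash such as SHA-256, and all names and sizes are illustrative assumptions of mine:

#include <stdint.h>
#include <stdio.h>

/* Toy FNV-1a hash -- a placeholder for a real cryptographic hash. */
static uint64_t fnv1a(const void *data, size_t len) {
    const uint8_t *p = data;
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

#define ROUNDS 10

/* Progressive PoW: find ROUNDS nonces, each hash chained to the previous
   one (so the rounds cannot be searched in parallel), each required to be
   below the loosened target (target * 10, per the description above). */
void mine(uint64_t header, uint64_t target, uint64_t nonces[ROUNDS]) {
    uint64_t prev = 0, loosened = target * ROUNDS;
    for (int r = 0; r < ROUNDS; r++) {
        uint64_t buf[3] = { header, 0, prev };
        for (uint64_t nonce = 0; ; nonce++) {   /* brute-force search */
            buf[1] = nonce;
            uint64_t h = fnv1a(buf, sizeof buf);
            if (h < loosened) { nonces[r] = nonce; prev = h; break; }
        }
    }
}

int main(void) {
    uint64_t nonces[ROUNDS];
    mine(0x0123456789abcdefULL, UINT64_MAX / 1000000, nonces);
    for (int r = 0; r < ROUNDS; r++)
        printf("nonce %d: %llu\n", r, (unsigned long long)nonces[r]);
    return 0;
}

A miner who has already found 9 of the 10 nonces has sunk useful work into `prev`, which is exactly the property the rest of the question relies on.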

If you have already found 9 of the 10 good nonces, you will keep trying to find the last nonce even if another miner has already published a complete block. This type of blockchain will therefore encourage deliberate mining forks, where two miners publish a mined block with the same previous-block hash (that is, the same index in the chain), which looks like this:

B1  ←-  B2  ←-  B3  ←-  B4
                ↑
                +-----  B4'

The way the current blockchains (that I know of) solve this problem is to let miners choose which of the forks to continue, and since only the longest chain is the true blockchain, all miners will eventually work on one fork, and all the work done on the other forks goes to waste.

Now imagine that each block can contain more than one previous-block hash. For example, a new miner might try to mine a block such as:

B1  ←-  B2  ←-  B3  ←-  B4  ←-  B5
                ↑               |
                +-----  B4' ←---+

With one important condition: transactions in B4' must not conflict with transactions in B4 (no double spending, etc.). Assuming there are a lot of unrelated transactions in the mempool, this will be a very common scenario.

Miners may even be encouraged to include more previous-block hashes in the next block. For example, if the miner of B4' notices that B4 has already been published, he may offer to pay a fee to the next miner (the one mining B5) for including B4'.

This algorithm would be much more energy efficient (in the progressive proof-of-work case), because miners can keep mining the same block even if another miner has already found a good block. It can also mine more transactions in parallel without increasing the block size. This differs from simply choosing an easier hash target (for plain proof of work), because it has no effect on the length of the blockchain or the speed at which it grows.

Do you know of a proof-of-work blockchain / cryptocurrency that does something similar, i.e. a blockchain that can split and merge? I know that IOTA does not use a blockchain at all, but I have never delved into the details.

Thank you

c – How to use preprocessor macros to make code faster, more efficient and more compact

Recently, I was reviewing some solutions from the best competitive programmers in the world. I discovered that these people use a template to write programs, preferably in C++.

I have some difficulty understanding these programs, because they first create their own keywords with the help of #define and typedef. This is done to make the code more compact and faster to write.

Now, there are many statements that we repeat many times when writing code, like for() loops. So, instead of writing a long statement each time, I can create a macro. The same can be done for other statements to make the code more compact.

#include <stdio.h>   /* the original header names were lost in formatting; */
#include <stdlib.h>  /* these are a plausible reconstruction -- the macros  */
#include <string.h>  /* below need at least stdio.h and string.h            */
#include <math.h>
#include <limits.h>

#define SC1(x)          scanf("%lld",&x)
#define SC2(x,y)        scanf("%lld%lld",&x,&y)
#define SC3(x,y,z)      scanf("%lld%lld%lld",&x,&y,&z)
#define PF1(x)          printf("%lld\n",x)
#define PF2(x,y)        printf("%lld %lld\n",x,y)
#define PF3(x,y,z)      printf("%lld %lld %lld\n",x,y,z)
#define REP(i,n)        for(long long i=0;i<(n);i++)
#define FOR(i,a,b)      for(long long i=(a);i<=(b);i++)
#define FORD(i,a,b)     for(long long i=(a);i>=(b);i--)
#define WHILE(n)        while(n--)
#define MEM(a, b)       memset(a, (b), sizeof(a))
#define ITOC(c)         ((char)(((int)'0')+c))
#define MID(s,e)        (s+(e-s)/2)
#define SZ(a)           strlen(a)
#define MOD             1000000007
#define MAX             10000000005
#define MIN             -10000000005
#define PI              3.1415926535897932384626433832795
#define TEST(x)         printf("The value of \"%s\" is: %d\n",#x,x)

const int INF = 1<<29;

typedef long long ll;
typedef unsigned long long ull;

#define FILEIO(name)   freopen(name".in", "r", stdin);    freopen(name".out", "w", stdout);

I am interested in how the C compiler handles all these macros. I understand the constants defined with macros, such as:

#define MOD    1000000007
#define MAX    10000000005
#define MIN    -10000000005
#define PI     3.1415926535897932384626433832795

But how does a function-like macro that stands for a whole statement work, such as:

#define SC1(x)          scanf("%lld",&x)
#define SC2(x,y)        scanf("%lld%lld",&x,&y)
#define SC3(x,y,z)      scanf("%lld%lld%lld",&x,&y,&z)
#define PF1(x)          printf("%lld\n",x)
#define PF2(x,y)        printf("%lld %lld\n",x,y)
#define PF3(x,y,z)      printf("%lld %lld %lld\n",x,y,z)
#define REP(i,n)        for(long long i=0;i<(n);i++)
#define FOR(i,a,b)      for(long long i=(a);i<=(b);i++)
#define FORD(i,a,b)     for(long long i=(a);i>=(b);i--)
#define WHILE(n)        while(n--)
#define MEM(a, b)       memset(a, (b), sizeof(a))
#define ITOC(c)         ((char)(((int)'0')+c))
#define MID(s,e)        (s+(e-s)/2)
#define SZ(a)           strlen(a)
#define TEST(x)         printf("The value of \"%s\" is: %d\n",#x,x)

By defining macros as above, it is possible to increase the programmer's productivity, efficiency and speed.

Now my questions are:

  1. When I define a macro as:
#define SC1(x)          scanf("%lld",&x)

How does the compiler handle this macro? More generally, how is this statement understood by the computer during compilation / execution of the program?
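For what it's worth, macro replacement is plain text substitution performed by the preprocessor before compilation; here is a minimal sketch of what the compiler actually sees:

#include <stdio.h>

#define SC1(x)          scanf("%lld",&x)

int main(void) {
    long long n;
    /* Before compilation, the preprocessor rewrites the next line to:
           scanf("%lld",&n);
       The compiler never sees the name SC1 at all, so the macro has
       no runtime cost whatsoever. */
    SC1(n);
    printf("%lld\n", n);
    return 0;
}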

  2. There are three macros used as shorthand for three different versions of the for loop:
#define REP(i,n)        for(long long i=0;i<(n);i++)
#define FOR(i,a,b)      for(long long i=(a);i<=(b);i++)
#define FORD(i,a,b)     for(long long i=(a);i>=(b);i--)

In these definitions, every macro parameter is wrapped in parentheses at the point where it is substituted. What is the purpose of the parentheses when writing macros?
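A minimal illustration of why those parentheses are there (the macro names below are mine): expansion is textual, so an argument that contains an operator can interact with the surrounding expression unless every use of the parameter is parenthesized.

#include <stdio.h>

#define BAD_DOUBLE(x)   x * 2        /* unparenthesized parameter */
#define GOOD_DOUBLE(x)  ((x) * 2)    /* parenthesized, as in the template */

int main(void) {
    /* BAD_DOUBLE(1 + 2) expands to 1 + 2 * 2, which is 5, not 6,
       because * binds tighter than +. */
    printf("%d\n", BAD_DOUBLE(1 + 2));    /* prints 5 */
    /* GOOD_DOUBLE(1 + 2) expands to ((1 + 2) * 2), which is 6. */
    printf("%d\n", GOOD_DOUBLE(1 + 2));   /* prints 6 */
    return 0;
}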

  3. What does the name".in" part mean in the following statement:
#define FILEIO(name)   freopen(name".in", "r", stdin);    freopen(name".out", "w", stdout);
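For question 3: name".in" relies on C's adjacent string-literal concatenation. The macro argument must itself be a string literal; the preprocessor pastes the argument next to ".in", and the compiler then merges the two literals into one. A minimal sketch (assuming a file test.in exists to read from):

#include <stdio.h>

#define FILEIO(name)   freopen(name".in", "r", stdin);    freopen(name".out", "w", stdout);

int main(void) {
    /* FILEIO("test") expands to:
           freopen("test"".in", "r", stdin);    freopen("test"".out", "w", stdout);
       and "test"".in" is merged into "test.in" (likewise "test.out"). */
    FILEIO("test");
    long long n;
    if (scanf("%lld", &n) == 1)   /* reads from test.in */
        printf("%lld\n", n);      /* writes to test.out */
    return 0;
}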

I know this is a broad question about the use of macros in C / C++. I would really appreciate it if someone could point me in the right direction for interpreting / understanding these macro statements whenever they appear in source code, so that I can modify them and write my own macros for fast and efficient coding.

captcha – Is Google reCaptcha v3 more efficient than v2 for identifying human users?

I understand the difference between reCaptcha v2 and v3 from the end user's and the developer's point of view, but I wonder whether Google has stated in any way that the logic used to make the final decision was improved in v3.

Is the underlying engine that makes the human-versus-robot determination better in v3 than in v2? Should I expect to catch more robots, or challenge fewer humans, using v3 rather than v2?

I have searched for an answer for a long time, but I have not found anything definitive.