Google Sheets: QUERY counts based on multiple conditions?

I have a spreadsheet that I use for my World of Warcraft guild and I ran into a wall while trying to add new features to it.

For reference, when our guild runs dungeons, we distribute loot drops fairly according to a "countdown" system. That is, everyone starts at 0 and can roll dice against the other 0s to try to win an item. Whoever wins something becomes a +1 and can then only roll against other +1s, so a person doesn't keep winning simply because she's luckier.
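In case the rule is easier to read as code, here is a minimal sketch of how I think of the bracket system (names are made up, and I'm assuming priority always goes to the lowest count):

```python
# Everyone starts at 0. Only the people tied at the lowest count
# roll against each other for a drop; a win moves you up a bracket.
def eligible_rollers(counts):
    lowest = min(counts.values())
    return [name for name, count in counts.items() if count == lowest]

counts = {"Jim": 1, "Bob": 0, "Liam": 0}
# Bob and Liam are both at 0, so only they roll for the next item.
```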

So, we have a sheet that records each item that drops, who won it, where it was won, and whether it counts toward the +1 system (some items are not worth counting). Next to the item log there is a query that automatically shows where everyone stands, which looks like this:

Loot Tracker
--------------
Name     Count
--------------
Jim      2
Bob      1
Liam     1
Dave     1
Luke     1

The query for this is as follows:

=QUERY(A:I,"select D, count(E) where C = 'Victrix' and G = true and A >= date '"&TEXT(IF(WEEKDAY(J1,1)=3,J1,J1-WEEKDAY(J1+4)),"yyyy-mm-dd")&"' and A <= date '"&TEXT(J1+8-WEEKDAY(J1+5),"yyyy-mm-dd")&"' group by D order by count(E) desc label D 'Name', count(E) 'Count'")
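The TEXT/WEEKDAY part just pins the count to the current raid week. As I read the formula, the lower bound is roughly the following, with weeks resetting on Tuesday (this Python translation is my own sketch, not part of the sheet):

```python
from datetime import date, timedelta

def raid_week_start(today: date) -> date:
    # Roll back to the most recent Tuesday, inclusive
    # (Python's weekday(): Monday = 0, Tuesday = 1).
    return today - timedelta(days=(today.weekday() - 1) % 7)

raid_week_start(date(2024, 1, 10))  # a Wednesday -> Tuesday 2024-01-09
```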

We are now at the point where people don't just roll for their own main spec, but also for their other specializations (gear they won't use right away, but that nobody else needs for their main character; this is called an "off-spec" roll), and these rolls must also be tracked on the loot tracker.

Say I have this dataset:

 |  A         B              C      D
----------------------------------------------
1|  Name      Item           +1     Spec
----------------------------------------------
2|  Jim       Lorem          (✓)    MS
3|  Bob       Ipsum          (✓)    MS
4|  Jim       Dolor          (✓)    OS
5|  Liam      Sit            (✓)    MS
6|  Dave      Amet           (✓)    MS
7|  Luke      Consectetur    (✓)    OS

I need a query, or a single formula, that will make the loot tracker work as follows:

Loot Tracker
-------------------
Name     MS+   OS+
-------------------
Jim      1     1
Bob      1     0
Liam     1     0
Dave     1     0
Luke     0     1

I hope that makes sense; I have no idea how to proceed. I've tried separate QUERY formulas for the third column here, but the results were not sorted and aligned correctly with the names. It needs to be sorted first by MS count, then by OS count.
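To show exactly what I mean, here is the grouping I'm after, sketched in Python with the sample dataset above (treating column C as a boolean "counts" flag):

```python
from collections import defaultdict

rows = [
    ("Jim", "Lorem", True, "MS"),
    ("Bob", "Ipsum", True, "MS"),
    ("Jim", "Dolor", True, "OS"),
    ("Liam", "Sit", True, "MS"),
    ("Dave", "Amet", True, "MS"),
    ("Luke", "Consectetur", True, "OS"),
]

# Pivot: one row per name, one column per spec.
counts = defaultdict(lambda: {"MS": 0, "OS": 0})
for name, item, counted, spec in rows:
    if counted:
        counts[name][spec] += 1

# Sort by MS count descending, then OS count descending.
tracker = sorted(counts.items(), key=lambda kv: (-kv[1]["MS"], -kv[1]["OS"]))
```

That produces exactly the Jim / Bob / Liam / Dave / Luke ordering shown above; I just don't know how to express this pivot in a single Sheets formula.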

Why do Google, Amazon, and other search (and search-related) platforms always seem to have difficulty with apparently simple queries?

I've noticed for some time that Google (as incredible as it is as a search engine), Amazon, and other major search platforms still struggle to produce relevant results when searching for version-specific information. Why does it always seem to be such a difficult feat?

For example, suppose I want to know how to install rvm on Ubuntu. Here is the search I might do: "installing rvm ubuntu".

I find it strange that, by default, the first results are all Ubuntu 16 or 14 articles clearly written several years ago; especially if I'm searching from an Ubuntu 19.10 machine, you'd think that would be taken into account.

I've also noticed anomalies when looking for version-specific issues. Especially with Apache, MySQL, Ruby, and PHP (PHP searches are horrible on Google), and Python is pretty bad too.

A few times I even searched a non-version-specific issue and received a fairly dated result set; I then decided to narrow things down by adding version numbers and got an even more dated result set.

Add to this the fact that Google should know my search history pretty well by now. After years of development-related searches, shouldn't it be familiar enough with the versions I am currently using and the kinds of things I look for?

Amazon also has a particularly frustrating quirk: returning the exact model numbers searched for seems to be a difficult task. For example, in an Amazon search for "256gb ssd", 90% of the results on the first page do NOT match. If this is something they deliberately implemented, then it doesn't do what it was made for. The only other thing I can think of is a technical hurdle that has not been fully overcome yet.

I am curious why this still seems to be a problem, if it is not in fact a planned "feature".

I understand that Google is complex and that I probably can't know exactly what it's doing behind the scenes. I am in no way knocking it. However, since such strange results seem so abundant and so similar across these megalithic search and query systems, I suppose it is probably a technical hurdle that we have not yet collectively solved.

What exactly is this hurdle? Why do these search technologies, from the user's point of view, still seem to have such a hard time with seemingly basic search requirements?

Azure Application Insights: What's the difference between pageViews and requests?

Both have similar fields, but neither matches what Cloudflare gives us in the request logs (not even slightly). What is the difference between them, and which one corresponds to the equivalent of Apache access logs?

In pageViews, the client's IP address is always empty. Is this normal?

security – A more efficient way to block the constant barrage of XHR ad-tracking requests?

I currently have a set of dynamic rules configured in uBlock Origin to block various ad-tracking sites, which fire a steady stream of requests, as observed in the logger. My question is: is there a more efficient way to do this?

From what I understand, dynamic rules take precedence over the "My filters" rules, but they both match the same things. Is there an earlier point where I could cut these requests off, or a potential trick to let a request through once and then block it thereafter? Or maybe even kill it on arrival, so to speak?

My apologies for the poorly worded question. English is my mother tongue, so I do not really have a valid excuse.

performance – Queries showing NULL get stuck and use up connections

Running SHOW PROCESSLIST in phpMyAdmin often shows queries with NULL in the Info column and a Time of seemingly only a few seconds, but which are actually stuck there for a long time (every time I run SHOW PROCESSLIST, the same Id may show a different Time). I have no idea what these queries are or why they are stuck, but they can often end up using all of my available connections, leaving Apache in a waiting state and preventing sites from loading. Is there a setting I can change to kill queries that have been stuck for a long time? Is the php.ini max_execution_time setting responsible for this situation? (I've raised it to 3000 from the default value of 90.)
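To be clear about what I mean by "killing stuck queries": roughly the following logic, which I currently apply by hand against the SHOW PROCESSLIST output (the row shape here is my assumption of what phpMyAdmin displays):

```python
def stuck_thread_ids(processlist, min_seconds=300):
    # A thread with Info = NULL (no running statement) that has been
    # around for a long time is a candidate for KILL <Id>.
    return [row["Id"] for row in processlist
            if row["Info"] is None and row["Time"] >= min_seconds]

sample = [
    {"Id": 101, "Time": 4,    "Info": None},        # looks short-lived
    {"Id": 102, "Time": 900,  "Info": None},        # actually stuck
    {"Id": 103, "Time": 1200, "Info": "SELECT 1"},  # a real running query
]
# Only thread 102 would be killed here.
```

Is there a server-side setting that does this automatically, instead of me eyeballing the list?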

I've attached a screenshot of the running SHOW PROCESSLIST.


tls – How can I use Python requests over SSL with a Chrome client cert?

I am writing a spider for nike.com, but the site's SSL setup seems to check SSL fingerprints.

I downloaded the client certificate from Chrome, converted it, and used it:

openssl x509 -inform der -in nike.cer -out nike_certificate.pem

import requests

response = requests.post(url, cert="nike_certificate.pem")  # url = the page being scraped

Exception:

(Caused by SSLError(SSLError(336265225, '(SSL) PEM lib (_ssl.c:3845)')))

apache – Which browsers handle client-side compression of requests, like POSTed form data?

Which browsers support client-side compression of requests, such as multipart/form-data sent by the client to the server via HTTP POST?

The HTTP server I'm trying to work with is Apache with the mod_deflate and mod_gzip modules, to handle Content-Encoding: deflate and Content-Encoding: gzip headers on POST requests.
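For reference, this is the kind of request I mean, sketched in Python (the endpoint URL is a placeholder; whether the server accepts the compressed body is exactly what depends on mod_deflate/mod_gzip):

```python
import gzip

# Compress the request body client-side and declare the encoding;
# a browser would have to do the equivalent of this automatically.
body = gzip.compress(b"field1=value1&field2=value2")
headers = {"Content-Encoding": "gzip",
           "Content-Type": "application/x-www-form-urlencoded"}
# e.g. requests.post("https://example.com/upload", data=body, headers=headers)
```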

Event Sourcing – CQRS – How can a command be properly validated when queries are required?

I know this question has been asked several times, but I am not satisfied with how write-side queries are addressed in the existing questions, and more particularly with eventual consistency in the command model.

I have a simple CQRS + ES architecture for an application. Customers can buy items on my site, but there is a business rule: a customer cannot buy more than $500 worth of products from our store. If they try, the purchase must not be accepted.

So here is what my command handler looks like (in Python, with concerns such as currencies and dependency injection stripped out for simplicity):

from typing import List

class NewPurchaseCommand:
    customer_id: int
    product_ids: List[int]

class PurchasesCommandHandler:
    purchase_repository: PurchaseRepository
    product_repository: ProductRepository
    customer_query_service: CustomerQueryService

    def handle(self, cmd: NewPurchaseCommand):
        current_amount_purchased = self.customer_query_service.get_total(cmd.customer_id)

        purchase_amount = 0
        for product_id in cmd.product_ids:
            product = self.product_repository.get(product_id)
            purchase_amount += product.amount

        if current_amount_purchased + purchase_amount > 500:
            raise Exception('You cannot purchase over 500$')

        new_purchase = Purchase.create(cmd.customer_id, cmd.product_ids)
        self.purchase_repository.save(new_purchase)

        # Then, after the purchase is saved, a PurchaseCreated event is persisted
        # and sent to a queue, which then updates several read projections, one of
        # which is the underlying table that the customer_query_service uses.

The CustomerQueryService uses an underlying table to quickly retrieve the total amount the user has purchased so far; this table is used exclusively by the write side and is updated after the fact:

CustomerPurchasedAmount table
CustomerId | Amount
10         | 480

Although my command handler works in simple scenarios, I want to know how to handle the edge cases that eventual consistency can introduce:

  • A malicious user 10 performs two simultaneous $20 purchases. Since the CustomerPurchasedAmount table is only eventually updated, both commands will pass validation (this is the case that worries me the most).
  • Some product prices might change while the command is being processed (unlikely, but again, it can happen).
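To make the first bullet concrete, here is the interleaving I am worried about, using the numbers from the table above (both handlers read the projection before either write has landed):

```python
LIMIT = 500
projection_total = 480        # CustomerPurchasedAmount for customer 10

# Two concurrent handlers each validate a $20 purchase against the
# same stale read of the projection:
purchase_a = purchase_b = 20
ok_a = projection_total + purchase_a <= LIMIT   # passes
ok_b = projection_total + purchase_b <= LIMIT   # also passes

# After both events are applied, the invariant is broken:
final_total = projection_total + purchase_a + purchase_b  # 520 > LIMIT
```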

My questions are:

  • How can I prevent and protect against the concurrency case described above?
  • How should read models designed specifically for the write side be updated? Synchronously? Asynchronously, as I do now?
  • And in general, how should command validation work when the information you query to validate it might be out of date?

innodb – After upgrading from MySQL 5.5 to 5.7, queries are more likely to deadlock

Recently, we migrated our production database to Amazon RDS, with a version upgrade from 5.5 to 5.7, using the AWS DMS service. Since then, we frequently run into deadlock issues with our INSERT ... ON DUPLICATE KEY UPDATE and UPDATE queries, whereas on MySQL 5.5 this was very rare.

For example, suppose one of our table structures is the following:

CREATE TABLE `job_notification` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `uid` int(11) NOT NULL,
  `job_id` int(11) NOT NULL,
  `created_time` int(11) NOT NULL,
  `updated_time` int(11) NOT NULL,
  `notify_status` tinyint(3) DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uid` (`uid`,`job_id`)
) ENGINE=InnoDB AUTO_INCREMENT=58303732 DEFAULT CHARSET=utf8 COLLATE=utf8_bin

Our insert query is as follows:

    INSERT INTO job_notification (uid, notify_status, updated_time, created_time, job_id) VALUES
('24832194',1,1571900253,1571900253,'734749'),
('24832194',1,1571900254,1571900254,'729161'),
('24832194',1,1571900255,1571900255,'713225'),
('24832194',1,1571900256,1571900256,'701897'),
('24832194',1,1571900257,1571900257,'682155'),
('24832194',1,1571900258,1571900258,'730817'),
('24832194',1,1571900259,1571900259,'717162'),
('24832194',1,1571900260,1571900260,'712884'),
('24832194',1,1571900261,1571900261,'708267'),
('24832194',1,1571900262,1571900262,'701855'),
('24832194',1,1571900263,1571900263,'702129'),
('24832194',1,1571900264,1571900264,'726738'),
('24832194',1,1571900265,1571900265,'725105'),
('24832194',1,1571900266,1571900266,'709306'),
('24832194',1,1571900267,1571900267,'702218'),
('24832194',1,1571900268,1571900268,'700966'),
('24832194',1,1571900269,1571900269,'693848'),
('24832194',1,1571900270,1571900270,'730793'),
('24832194',1,1571900271,1571900271,'729352'),
('24832194',1,1571900272,1571900272,'729043'),
('24832194',1,1571900273,1571900273,'724631'),
('24832194',1,1571900274,1571900274,'718394'),
('24832194',1,1571900275,1571900275,'711702'),
('24832194',1,1571900276,1571900276,'707765'),
('24832194',1,1571900277,1571900277,'692288'),
('24832194',1,1571900278,1571900278,'735549'),
('24832194',1,1571900279,1571900279,'730786'),
('24832194',1,1571900280,1571900280,'706814'),
('24832194',1,1571900281,1571900281,'688999'),
('24832194',1,1571900282,1571900282,'685079'),
('24832194',1,1571900283,1571900283,'686661'),
('24832194',1,1571900284,1571900284,'722110'),
('24832194',1,1571900285,1571900285,'715277'),
('24832194',1,1571900286,1571900286,'701846'),
('24832194',1,1571900287,1571900287,'730105'),
('24832194',1,1571900288,1571900288,'725579')
 ON DUPLICATE KEY UPDATE notify_counter=VALUES(notify_counter), updated_time=VALUES(updated_time)

Our update query is as follows:

update job_notification set notify_status = 3 where uid = 51032194 and job_id in (616661, 656221, 386760, 189461, 944509, 591552, 154153, 538703, 971923, 125080, 722110, 715277, 701846, 725579, 686661, 685079)

These queries worked fine in MySQL 5.5 with the same data and the same indexes, but after the migration, deadlocks occur frequently for this type of query.

NB: Our system is highly concurrent.
innodb_deadlock_detect is disabled. innodb_lock_wait_timeout is 50.

When we EXPLAIN the queries, they show a good execution plan. Nevertheless, we see frequent deadlocks, and other queries are also taking longer.

Documentation on using parentheses () in MySQL queries

I have an application that lets me write queries with multiple conditions, and I use several other keywords such as BETWEEN, AND, OR, etc.

I have these two queries:

select * from countries where date between (x and y) and filter = filter and column choosen like '%col%';

and

select * from countries where date between (x and y) and filter = filter

I have not actually run the queries, but I want the date condition using BETWEEN to be evaluated first, and I thought the parentheses would help.

What exactly do parentheses () do in MySQL, and where is this documented?