python – Monotone priority queue problem (Radix Heap?)

I am studying algorithms, and to practice I do exercises from sites like LeetCode and HackerRank. There is an exercise that I don't know how to start:

A new disease starts spreading on day i = 0, such that on the i-th day there are i new cases. Fortunately, there is a treatment, and a hospital can provide it. However, the hospital has a maximum capacity of C beds, all of which are available on day i = 0. The hospital must have a free bed in order to admit a new patient. The hospital can treat up to k patients per day, and the treatment takes effect after one day, meaning that the hospital can cure up to k patients during day i and therefore discharge them, making their beds available again on day i + 1. Write an algorithm to find out when the hospital will no longer be able to accept all new patients, that is, the first day on which one or more patients cannot be admitted. Write your solution as a Python function hospital_overflow(C, k). The time complexity of your solution should be O(log C).

For example, hospital_overflow(5, 2) should return 5. Indeed, on day 1 there is 1 patient in the hospital; this patient is cured and released on day 2, when two other patients are admitted; those two are cured on day 3, when three patients are admitted; on day 4, only 2 of the 3 hospitalized patients are cured and 4 new patients are admitted, filling the 5 beds. Therefore, on day 5 the hospital will not be able to admit all new cases.

My approach would be to build a heap, adding new patients to the queue and removing those who have been cured, but that has O(n) complexity. I read that queues of integers with this monotone property can be handled in O(log n), but I couldn't figure out how; I'm looking for help understanding how to solve the problem within the required complexity.
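
For what it's worth, this particular exercise may not need a priority queue at all. Writing occ(d) for the number of occupied beds at the end of day d, the statement gives the recurrence occ(d) = max(occ(d-1) - k, 0) + d (up to k of the previous day's patients are discharged, and d new ones arrive). For d <= k this means occ(d) = d, and for d > k only k patients leave per day, so occ(d) = k + (d - k)(d - k + 1)/2. Since occ is non-decreasing, the answer is the first day d with occ(d) > C, which binary search finds in logarithmic time. Below is a minimal sketch under those assumptions; the search bound C + k + 1 is my own choice, which makes the search O(log(C + k)) rather than strictly O(log C) when k is much larger than C.

def hospital_overflow(C, k):
    # Sketch only: occ(d) is the closed-form bed count derived above, and
    # the upper bound C + k + 1 is an assumption (occ(C + k + 1) > C always holds).
    def occ(d):
        if d <= k:
            return d                     # all earlier patients have been discharged
        m = d - k
        return k + m * (m + 1) // 2      # only k discharges per day once d > k

    lo, hi = 1, C + k + 1
    while lo < hi:                       # binary search for the first overflow day
        mid = (lo + hi) // 2
        if occ(mid) > C:
            hi = mid
        else:
            lo = mid + 1
    return lo

With the worked example above, hospital_overflow(5, 2) returns 5.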

rabbitmq – Why control minimum and maximum concurrent consumers on an AMQP queue?

I am trying to understand why some frameworks introduce control over the number of consumers on an AMQP queue.

For example, Spring AMQP introduced this functionality in version 1.3.0, with the maxConcurrentConsumers property:

Since version 1.3.0, the concurrentConsumers property can be adjusted dynamically. If it is changed while the container is running, consumers are added or removed as necessary to adapt to the new setting.

In addition, a new property called maxConcurrentConsumers has been added and the container dynamically adjusts the concurrency based on the workload.

But I can't see the benefit of using it when I could just set concurrentConsumers directly to my maximum value.

The only benefit I can see is reducing the average number of connections to the RabbitMQ server. But considering that most applications are very far from a connection count that would cause problems for a RabbitMQ instance, these consumer-control features seem to be rarely worth using.
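
For context, here is a rough sketch of what I understand such a feature automates, written in Python with pika purely for illustration (Spring AMQP does all of this inside its listener container; the queue name, thresholds and timings below are placeholders I made up): keep a small baseline of consumers, add more up to a ceiling only while the backlog grows, and let idle ones stop again.

import threading
import time

import pika

QUEUE = "work"            # placeholder queue name
MIN_CONSUMERS = 1         # roughly what concurrentConsumers sets
MAX_CONSUMERS = 8         # roughly what maxConcurrentConsumers sets
SCALE_UP_BACKLOG = 100    # add a consumer while the backlog exceeds this
IDLE_SECONDS = 30         # a consumer stops after being idle this long

def handle(body):
    time.sleep(0.1)       # stand-in for real message processing

def consumer():
    # Each consumer thread gets its own connection; pika connections are not thread-safe.
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.basic_qos(prefetch_count=1)
    for method, properties, body in ch.consume(QUEUE, inactivity_timeout=IDLE_SECONDS):
        if method is None:          # idle long enough: let this consumer go away
            break
        handle(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    ch.cancel()
    conn.close()

def backlog():
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    count = conn.channel().queue_declare(queue=QUEUE, passive=True).method.message_count
    conn.close()
    return count

def main():
    workers = []
    while True:
        workers = [t for t in workers if t.is_alive()]
        busy = backlog() > SCALE_UP_BACKLOG
        if (busy or len(workers) < MIN_CONSUMERS) and len(workers) < MAX_CONSUMERS:
            t = threading.Thread(target=consumer, daemon=True)
            t.start()
            workers.append(t)
        time.sleep(5)

if __name__ == "__main__":
    main()

Is the only point of the maximum that concurrency follows the workload instead of always sitting at the ceiling?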

Message Queue – The price is not updated on the frontend after updating the product via the API

I have a problem where every time I update a product price using the asynchronous endpoint

POST http://base.test/rest/all/async/V1/products

(which goes through RabbitMQ), the price does not change on the product page or the list page.

If I clear the BLOCK_HTML cache, then it displays OK.

This only happens with the asynchronous endpoint. The regular endpoint

 POST http://base.test/rest/all/V1/products

updates the price immediately.

Oddly enough, I notice that it updates immediately on my local machine, but not on our hosting provider's server.

Anyone have any experience or suggestions on what it could be?

php fpm – the php-fpm status page shows the listen queue length as 0

I have configured php-fpm with listen.backlog = 128, but when I look at the FPM status page I see listen queue len: 0

It is an AWS EC2 server running Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-1058-aws x86_64).
I use the default /etc/php/7.3/fpm/pool.d/www.conf, but I added an additional configuration file (/etc/php/7.3/fpm/pool.d/x-www-local.conf) with the following content:

[www]

listen.backlog = 128
pm = static
pm.max_children = 10
pm.max_requests = 500
pm.status_path = /status

I don't understand why the listen queue length is always shown as 0. Any ideas?

Message Queue – Can front-end applications (mobile, web, etc.) write directly to MQ (Kafka or RabbitMQ)? Or do you need adapters / proxies / gateways?

Found this:

At Uber, we use Apache Kafka as a message bus to connect different parts of the ecosystem. We collect system and application logs as well as rider and driver app event data. Then we make this data available to a variety of downstream consumers via Kafka.

Can someone describe how this part works?

(architecture diagram from the quoted article)

Can front-end applications (mobile, React, Angular, etc.) write directly to Kafka (or RabbitMQ)? Or do you need adapters / proxies / gateways?
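
To make the question concrete: is the usual pattern that browsers and mobile apps talk HTTP(S) to a thin gateway service, and that gateway (or an off-the-shelf bridge such as Confluent's REST Proxy) is the actual Kafka producer? Something like this hypothetical sketch in Python with flask and kafka-python, where the route, topic name and payload shape are entirely made up:

import json

from flask import Flask, request, jsonify
from kafka import KafkaProducer

app = Flask(__name__)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.route("/events", methods=["POST"])      # hypothetical ingestion endpoint
def ingest_event():
    event = request.get_json(force=True)
    # A real gateway would authenticate the caller and validate the payload here,
    # so clients never hold Kafka credentials or speak the Kafka protocol themselves.
    producer.send("frontend-events", event)   # made-up topic name
    return jsonify({"accepted": True}), 202

if __name__ == "__main__":
    app.run(port=8080)

Or do real deployments let front-end clients produce to Kafka directly somehow?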

Message Queue – Looking for a lightweight library that provides distributed, persistent logging, primarily in a publisher/subscriber style

I'm working on an IoT solution where we have an MQTT broker. This MQTT broker sits in one of our data centers. Reducing operating costs is our main goal. We do a lot of processing related to alerts and alarms on this data, and we are currently looking for a solution that can persist logs and alerts in a distributed way, mainly on remote disk.

Our main need is a lightweight solution that keeps operational complexity and maintenance costs significantly low. We want to run it on-premises, so we are not considering cloud solutions.

We have examined the following alternatives:

  • Apache Kafka – a great option, but operation and maintenance are very complex.
  • RabbitMQ – high availability is a problem.
  • Apache Pulsar – operational complexity.
  • NATS – lacks persistence.
  • Akka Streams – steep learning curve and complex operations.

So we are looking for a lightweight library that can do distributed persistence, preferably with a publisher/subscriber model, and preferably on the JVM stack. Any thoughts or help would be greatly appreciated.

accessibility – Which animation or visual cue indicates that there is more content below on mobile

I am designing a form in a mobile application. It does not fit on one screen, but it is not long enough to be split across a second screen.

My question is: what type of animation or visual cue can I use to hint that there is more content below? I have seen similar things on desktop, like the example below, but not on mobile.

(screenshot of a desktop example)
Any suggestions and real examples would be appreciated.

java – Codility Fish (queue) solution

Today I tried the Codility "Fish" exercise (https://app.codility.com/programmers/lessons/7-stacks_and_queues/fish/), and I have to say that I found it:
1) not necessarily "easy", as the page suggests, and
2) satisfying enough to understand in the end.

My solution is a bit playful, and I was wondering if anyone could take a look at it and let me know whether it was a good way to solve it. Any comments would be appreciated.

import java.util.ArrayDeque;

// you can write to stdout for debugging purposes, e.g.
// System.out.println("this is a debug message");

class Solution {
    public int solution(int[] A, int[] B) {

        // Fish that have survived every encounter so far, in river order.
        ArrayDeque<Fish> safetyPool = new ArrayDeque<>();
        // Fish that still have to be processed.
        ArrayDeque<Fish> activePool = new ArrayDeque<>();

        for (int i = 0; i < A.length; i++) {
            activePool.offer(Fish.cFish(A[i], B[i]));
        }

        while (!activePool.isEmpty()) {
            // System.out.println(safetyPool + " " + activePool);
            Fish safeFish = safetyPool.peekLast();
            Fish swimmingFish = activePool.peekFirst();

            if (safeFish != null) {
                if (safeFish.dir == 1) {
                    if (safeFish.dir != swimmingFish.dir) {
                        // A downstream survivor meets the next upstream fish: the bigger one eats the other.
                        if (safeFish.att > swimmingFish.att) {
                            activePool.removeFirst();
                        } else {
                            safetyPool.removeLast();
                        }
                    } else {
                        safetyPool.offer(activePool.pollFirst());
                    }
                } else {
                    // The last survivor swims upstream, so it can never meet the next fish.
                    safetyPool.offer(activePool.pollFirst());
                }
            } else {
                safetyPool.offer(activePool.pollFirst());
            }
        }
        return safetyPool.size();
    }
}

class Fish {
    public int att;
    public int dir;

    public Fish(int att, int dir) {
        this.att = att;
        this.dir = dir;
    }

    public static Fish cFish(int att, int dir) {
        return new Fish(att, dir);
    }

    @Override
    public String toString() {
        return String.format("Fish(att: %d, dir: %d)", att, dir);
    }
}

Implement an N-Worker Queue Using Promises in JavaScript

Write code that processes a list of strings. The processing of each string is a promise that resolves in, say, 1 to 10 seconds. No more than a fixed limit of workers should process strings at the same time, and as soon as one worker is free it should take another string until the list is exhausted. The code should also detect the "all done" condition.

const limit = 2 // limit of promises started at the same time
let inProgress = 0 // number of in progress workers
let queue = [] // list of strings that should be processed

Fill the queue with random strings (the randomString helper is not shown here):

for (let i = 0; i < 4; i++) {
    queue.push(randomString(10))
}

console.log('queue', queue)

A function which returns a promise that resolves after a random time:

const func = text => {
    return new Promise((res) => {
        var randTime = Math.floor(Math.random() * 9000) + 1000
        console.log('working on', text)
        setTimeout(res, randTime, text)
    })
}

And the processing function.

function process() {
  if(inProgress == limit) {
    return false
  }

  if (queue.length === 0) {
    if (inProgress === 0) {
        console.log('all done')
    }
    return false  
  }

  let item = queue.pop()
  inProgress += 1

  func(item).then(result => {
    console.log('result', result);
    inProgress -= 1
    process()
  });

  process()
}

process()