front end – How to detect when the evaluation queue is empty?

I have encountered this problem in my LaTeX editor project (which is not free software): Can a Mathematica notebook be programmed to work as a LaTeX editor?

The notebook of a LaTeX document typically includes hundreds of cells that can generate LaTeX source code. The evaluation of one of these cells is supposed to trigger a compilation by pdflatex to generate a PDF file. On the other hand, we would also like to be able to evaluate a selection of many cells of this type (or the entire notebook) while triggering only one pdflatex compilation (from LaTeX source concatenated from all cells of the evaluated selection).

In short, we want to trigger a pdflatex compilation just after evaluating the selection, that is, when the evaluation queue is empty.

To determine when a selection has finished evaluating, I used $Pre to stop a timer at the beginning of any cell evaluation and to reset and restart the timer at the end of the cell evaluation. A scheduled task reads the timer, for example every 0.1 s. If the timer holds a value significantly greater than the expected delay between the end of one evaluation in the queue and the beginning of the next, we know that the selection must have finished evaluating and that the front end is now waiting for the user's input.
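The same idle-detection logic, sketched here in C++ only to make the timing concrete (the hook names on_evaluation_end and run_pdflatex are hypothetical, not Mathematica API):

#include <atomic>
#include <chrono>
#include <thread>

// Each finished evaluation refreshes a timestamp; a periodic watchdog decides
// that the evaluation queue is empty once the timestamp is "old enough".
std::atomic<std::chrono::steady_clock::time_point> last_eval_end{
    std::chrono::steady_clock::now()};

void on_evaluation_end() {                       // called at the end of each cell evaluation
    last_eval_end = std::chrono::steady_clock::now();
}

void watchdog() {                                // analogue of the 0.1 s scheduled task
    using namespace std::chrono_literals;
    bool fired = false;                          // trigger the compilation only once per idle period
    for (;;) {
        std::this_thread::sleep_for(100ms);
        auto idle = std::chrono::steady_clock::now() - last_eval_end.load();
        if (idle > 500ms && !fired) {
            fired = true;
            // run_pdflatex();                   // hypothetical hook: queue appears drained
        } else if (idle <= 500ms) {
            fired = false;                       // a new evaluation has run since the last check
        }
    }
}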

I suppose that there is a much simpler solution to this problem, because Mathematica displays "Running…" in the title of the notebook window during the evaluation of a selection.

Sorting an array with a priority queue using a heap

I would appreciate it if anyone could point out any errors.

My process is to view the array as a complete binary tree, then insert the elements one by one (sifting each up) so that the largest element ends up at the top, and then show the resulting state of the memory segment (the array).
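A minimal sketch of that process in C++, in case it helps make the steps concrete (the input values are just a made-up example; std::push_heap performs the sift-up insertion described above):

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a{4, 1, 7, 3, 9, 2};

    // Batch insertion: grow the heap one element at a time, sifting each new
    // element up so that the largest value is always at the root (index 0).
    for (std::size_t i = 1; i <= a.size(); ++i)
        std::push_heap(a.begin(), a.begin() + i);

    // a is now the array layout of the complete binary tree (a max-heap).
    for (int x : a) std::cout << x << ' ';
    std::cout << '\n';                   // 9 7 4 1 3 2 for this input

    // To sort: repeatedly swap the root with the last element and shrink the heap.
    std::sort_heap(a.begin(), a.end());
    for (int x : a) std::cout << x << ' ';
    std::cout << '\n';                   // 1 2 3 4 7 9
}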


Best design for an AMQP work queue

I have a system in which a user uploads a file to import users, but validations need to be performed, which takes time. I would like to create a work queue with RabbitMQ: the user uploads the file, it is sent to the queue, and then it is processed.

What is the best design to use in this case for my producer and consumer? The service where the user uploads the file is the producer, but should I keep the consumer that processes the file in the same service, or should I create another service to handle these queues?

If in the future I have to create new queues and keep the consumers in the same application, is it advisable for the same application to produce, consume, and process?

This is a microservice application.

apache kafka – Is there an effective framework for a listener that listens to different types of message queues?

Part of the project involves designing a listener or consumer that listens to arbitrary types and numbers of message queues, such as RabbitMQ, Kafka, ActiveMQ, and so on. In addition, the listener would be deployed as a micro-service, and the same copy would be deployed on multiple machines. Is there an effective framework to accomplish this task? Does anyone have good advice or points to note for the design of this distributed listener?

Thank you

c++ – Concurrent Queue Adapter

There is a lot of code out there for basic adapters of std::deque that provide a thread-safe queue. I've adapted that, but I wanted to provide a relatively complete analog of std::queue, so I added all the constructors and operator=. The queue is working well. My questions are mainly about the constructors and operator=:

  1. Have I applied the correct type traits to determine whether each constructor should be noexcept?
  2. I use notify_one for push and emplace, and notify_all for the operator= overloads. Is that correct?
  3. Since I need to acquire a lock before changing the queue, the constructors have to be written a little differently from the std::queue adapter. For example, the member initializer list cannot copy/move the data. Does this code look correct?
  4. The conditional noexcept syntax that I added for size and empty looks weird. Am I doing this correctly?

Other comments not related to these questions are welcome. A note about the class: I added some combined methods (e.g. clear_count_push) because they cover a common combination of calls that I use, such as shutting down by pushing a thread-close sentinel onto the queue so that the thread taking work from the queue knows it's time to stop. (A short usage sketch follows the class below.)

#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>
#include <type_traits>
#include <utility>

template <typename T, class Container = std::deque<T>>
class BlockingQueue {
public:
    using container_type = Container;
    using value_type = typename Container::value_type;
    using size_type = typename Container::size_type;
    using reference = typename Container::reference;
    using const_reference = typename Container::const_reference;
    static_assert(std::is_same_v<T, value_type>, "container adapters require consistent types");

    // Constructors: see https://en.cppreference.com/w/cpp/container/queue/queue
    // These are in the same order and number as on cppreference
    /* 1 */ BlockingQueue() noexcept(std::is_nothrow_default_constructible_v<Container>) {}
    /* 2 */ explicit BlockingQueue(const Container& cont) noexcept(
        std::is_nothrow_copy_constructible_v<Container>)
        : queue_{cont}
    {
    }
    /* 3 */ explicit BlockingQueue(Container&& cont) noexcept(
        std::is_nothrow_move_constructible_v<Container>)
        : queue_{std::move(cont)}
    {
    }
    /* 4 */ BlockingQueue(const BlockingQueue& other)
    {
        auto lock{std::scoped_lock(other.mutex_)};
        queue_ = other.queue_;
    }
    /* 5 */ BlockingQueue(BlockingQueue&& other) noexcept(
        std::is_nothrow_move_constructible_v<Container>)
    {
        auto lock{std::scoped_lock(other.mutex_)};
        queue_ = std::move(other.queue_);
    }
    /* 6 */ template <class Alloc, class = std::enable_if_t<std::uses_allocator_v<Container, Alloc>>>
    explicit BlockingQueue(const Alloc& alloc) noexcept(
        std::is_nothrow_constructible_v<Container, const Alloc&>)
        : queue_{alloc}
    {
    }
    /* 7 */ template <class Alloc, class = std::enable_if_t<std::uses_allocator_v<Container, Alloc>>>
    BlockingQueue(const Container& cont, const Alloc& alloc) : queue_{cont, alloc}
    {
    }
    /* 8 */ template <class Alloc, class = std::enable_if_t<std::uses_allocator_v<Container, Alloc>>>
    BlockingQueue(Container&& cont, const Alloc& alloc) noexcept(
        std::is_nothrow_constructible_v<Container, Container&&, const Alloc&>)
        : queue_(std::move(cont), alloc)
    {
    }
    /* 9 */ template <class Alloc, class = std::enable_if_t<std::uses_allocator_v<Container, Alloc>>>
    BlockingQueue(const BlockingQueue& other, const Alloc& alloc) : queue_(alloc)
    {
        auto lock{std::scoped_lock(other.mutex_)};
        queue_ = other.queue_;
    }
    /* 10 */ template <class Alloc,
                       class = std::enable_if_t<std::uses_allocator_v<Container, Alloc>>>
    BlockingQueue(BlockingQueue&& other, const Alloc& alloc) noexcept(
        std::is_nothrow_constructible_v<Container, const Alloc&>)
        : queue_(alloc)
    {
        auto lock{std::scoped_lock(other.mutex_)};
        queue_ = std::move(other.queue_);
    }

    // operator=
    BlockingQueue& operator=(const BlockingQueue& other)
    {
        {
            auto lock{std::scoped_lock(mutex_, other.mutex_)};
            queue_ = other.queue_;
        }
        condition_.notify_all();
        return *this;
    }
    BlockingQueue& operator=(BlockingQueue&& other) noexcept(
        std::is_nothrow_move_assignable_v<Container>)
    {
        {
            auto lock{std::scoped_lock(mutex_, other.mutex_)};
            queue_ = std::move(other.queue_);
        }
        condition_.notify_all();
        return *this;
    }

    // destructor
    ~BlockingQueue() = default;

    // methods
    void push(const T& value)
    {
        {
            auto lock{std::scoped_lock(mutex_)};
            queue_.push_back(value);
        }
        condition_.notify_one();
    }
    void push(T&& value)
    {
        {
            auto lock{std::scoped_lock(mutex_)};
            queue_.push_back(std::move(value));
        }
        condition_.notify_one();
    }
    template <class... Args>
    void emplace(Args&&... args)
    {
        {
            auto lock{std::scoped_lock(mutex_)};
            queue_.emplace_back(std::forward<Args>(args)...);
        }
        condition_.notify_one();
    }
    T pop()
    {
        auto lock{std::unique_lock(mutex_)};
        condition_.wait(lock, [this]() noexcept(noexcept(std::declval<Container>().empty())) {
            return !queue_.empty();
        });
        T rc{std::move(queue_.front())};
        queue_.pop_front();
        return rc;
    }
    [[nodiscard]] std::optional<T> try_pop()
    {
        auto lock{std::scoped_lock(mutex_)};
        if (queue_.empty())
            return std::nullopt;
        T rc{std::move(queue_.front())};
        queue_.pop_front();
        return rc;
    }
    void clear()
    {
        auto lock{std::scoped_lock(mutex_)};
        queue_.clear();
    }
    [[nodiscard]] auto size() const noexcept(noexcept(std::declval<Container>().size()))
    {
        auto lock{std::scoped_lock(mutex_)};
        return queue_.size();
    }
    [[nodiscard]] auto clear_count()
    {
        auto lock{std::scoped_lock(mutex_)};
        auto ret = queue_.size();
        queue_.clear();
        return ret;
    }
    auto clear_count_push(const T& value)
    {
        size_type ret;
        {
            auto lock{std::scoped_lock(mutex_)};
            ret = queue_.size();
            queue_.clear();
            queue_.push_back(value);
        }
        condition_.notify_one();
        return ret;
    }
    auto clear_count_push(T&& value)
    {
        size_type ret;
        {
            auto lock{std::scoped_lock(mutex_)};
            ret = queue_.size();
            queue_.clear();
            queue_.push_back(std::move(value));
        }
        condition_.notify_one();
        return ret;
    }
    template <class... Args>
    auto clear_count_emplace(Args&&... args)
    {
        size_type ret;
        {
            auto lock{std::scoped_lock(mutex_)};
            ret = queue_.size();
            queue_.clear();
            queue_.emplace_back(std::forward<Args>(args)...);
        }
        condition_.notify_one();
        return ret;
    }
    [[nodiscard]] bool empty() const noexcept(noexcept(std::declval<Container>().empty()))
    {
        auto lock{std::scoped_lock(mutex_)};
        return queue_.empty();
    }

private:
    Container queue_{};
    mutable std::condition_variable condition_{};
    mutable std::mutex mutex_{};
};
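A minimal usage sketch, assuming the class compiles as reconstructed above: one worker thread pops jobs until it sees a shutdown sentinel, which the main thread pushes with clear_count_push (the sentinel value is just a made-up convention for this example).

#include <iostream>
#include <thread>

int main() {
    BlockingQueue<int> jobs;
    constexpr int kStop = -1;                     // hypothetical shutdown sentinel

    std::thread worker([&jobs] {
        for (;;) {
            int job = jobs.pop();                 // blocks until something is queued
            if (job == kStop) break;              // sentinel: time to stop
            std::cout << "processed " << job << '\n';
        }
    });

    for (int i = 0; i < 5; ++i) jobs.push(i);
    auto dropped = jobs.clear_count_push(kStop);  // discard pending work, enqueue sentinel
    worker.join();
    std::cout << "dropped " << dropped << " unprocessed jobs\n";
}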

wp enqueue script – wp_localize_script does not work in the ajax response

When a file is uploaded on my page, I make an ajax call to add an entry to a database. I then want to return a string to the client with the help of wp_localize_script, but when I try to access it, I get the error "Uncaught ReferenceError: pw_script_vars is not defined".

I know I can send something back with wp_die, but that feels like a workaround and I do not think it is the right way. The question is: why does wp_localize_script not work here? Any help appreciated.

script.js:

var ajaxurl = "";
jQuery(function ($) {
    var i;
    for (i = 1; i < 21; i++) {
        $('body').on("change", '#file' + i, function () {
            var file_data = $(this).prop('files')[0];
            var id = $(this).attr('id');

            var form_data = new FormData();

            form_data.append('file', file_data);
            form_data.append('action', 'file_upload');

            jQuery.ajax({
                url: ajaxurl,
                type: "POST",
                contentType: false,
                processData: false,
                data: form_data,
                success: function (response) {
                    alert(pw_script_vars.alert);
                }

            });

        });
    }
});

functions.php:

add_action('wp_enqueue_scripts', 'owr_scripts_function');
add_action('wp_ajax_file_upload', 'file_upload_callback');
add_action('wp_ajax_nopriv_file_upload', 'file_upload_callback');

function owr_scripts_function() {
    wp_enqueue_script('script', get_stylesheet_directory_uri() . '/script.js', array('jquery'), time(), false);
}
function file_upload_callback() {
    wp_localize_script('script', 'pw_script_vars', array(
        'alert' => __('Hey! You clicked the button!', 'pippin'),
        'messages' => __('You clicked the other button. Good job!', 'pippin')
    ));
    wp_die("upload success");
}


When you insert an item into a priority queue and the heap size is already at its maximum, should you throw an error OR increase the size of the array?

I am currently learning how to implement priority queues using a heap, but I ran into a wall trying to implement the insert operation. Suppose the size of the array storing the heap elements is n, and the heap size is also n. Suppose further that we are trying to insert an item x into the heap. We would need to increment the heap size by 1, but then the heap size would be larger than the array size, which is not allowed.

My question is, how do we deal with this problem? Should we A: just throw an error (since heap size == array size), or B: increase the size of the array by 1? The vast majority of the programs I've seen online implement A, but I think B would be better. Are there advantages/disadvantages to using A over B (or vice versa)? Are there other strategies for solving this problem?
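For reference, option B is usually implemented not by growing the array by exactly 1 but by geometric growth (e.g. doubling), which keeps the amortized cost of an insert at O(log n). A minimal sketch of option B, assuming a max-heap stored in a std::vector so that the vector handles the resizing:

#include <cstddef>
#include <utility>
#include <vector>

// Max-heap insertion with automatic growth: push_back reallocates the backing
// array when it is full (typically doubling its capacity), then the new
// element sifts up to its place.
void heap_insert(std::vector<int>& heap, int x) {
    heap.push_back(x);                        // grows the array if needed
    std::size_t i = heap.size() - 1;
    while (i > 0) {
        std::size_t parent = (i - 1) / 2;
        if (heap[parent] >= heap[i]) break;   // heap property restored
        std::swap(heap[parent], heap[i]);
        i = parent;
    }
}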

Any help is greatly appreciated.

object-oriented – Singly-linked queue in PHP 7 – Follow-up

I have improved this post. Now I have this:

<?php

class QueueNode {

    function __construct($item) {
        $this->item = $item;
        $this->next = null;
    }

    function __destruct() {
        $this->item = null;
    }

    function getItem() {
        return $this->item;
    }
}

class QueueIterator implements Iterator {

    function __construct($queue) {
        $this->queue = $queue;
        $this->current_node = $queue->head;
    }

    public function current() {
        return $this->current_node->item;
    }

    // The Iterator interface expects a scalar key; this queue has no meaningful keys.
    public function key() {
        return null;
    }

    public function next(): void {
        $this->current_node = $this->current_node->next;
    }

    public function rewind(): void {
        $this->current_node = $this->queue->head;
    }

    public function valid(): bool {
        return isset($this->current_node);
    }
}

class Queue implements IteratorAggregate {

    function __construct() {
        $this->head = null;
        $this->tail = null;
        $this->size = 0;
    }

    function __destruct() {
        $this->head = null;
        $this->tail = null;
    }

    function push($item) {
        if ($this->size === 0) {
            $this->head = $this->tail = new QueueNode($item);
        } else {
            $new_node = new QueueNode($item);
            $this->tail->next = $new_node;
            $this->tail = $new_node;
        }

        $this->size++;
    }

    function pop() {
        if ($this->size === 0) {
            throw new Exception("Popping an empty queue.");
        }

        $ret = $this->head->getItem();
        $this->head = $this->head->next;

        if (--$this->size == 0) {
            $this->head = $this->tail = null;
        }

        return $ret;
    }

    function getFirst() {
        if ($this->size() === 0) {
            throw new Exception("getFirst() on an empty queue.");
        }

        return $this->head->getItem();
    }

    function getLast() {
        if ($this->size() === 0) {
            throw new Exception("getLast() on an empty queue.");
        }

        return $this->tail->getItem();
    }

    function size() {
        return $this->size;
    }

    function isEmpty() {
        return $this->size() === 0;
    }

    public function getIterator(): Traversable {
        return new QueueIterator($this);
    }
}

$queue = new Queue();

for ($i = 1; $i <= 10; $i++) {
    $queue->push($i);
}

echo "Iteration: ";

foreach ($queue as $item) {
    echo $item . " ";
}

echo "
Popping: ";

for ($i = 1; $i <= 10; $i++) {
    echo $queue->getFirst() . ", " . $queue->pop() . " ";
}

echo "
Goodbye!";
?>

Output

Iteration: 1 2 3 4 5 6 7 8 9 10
Popping: 1, 1 2, 2 3, 3 4, 4 5, 5 6, 6 7, 7 8, 8 9, 9 10, 10
Goodbye!

As always, any criticism is greatly appreciated!

How are background workers generally implemented to poll a message queue?

Suppose you have a message queue that must be polled every x seconds. What are the usual ways to poll it and run HTTP/REST jobs? Do you just create a cron service that calls the worker script every x seconds?
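Beyond a cron entry that runs the worker script every x seconds, another common shape is a long-running worker process with its own polling loop. A minimal sketch, where fetch_messages and handle are hypothetical stand-ins for the queue client and for the HTTP/REST job:

#include <chrono>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stubs: a real worker would call the queue's client library here
// (RabbitMQ, SQS, ...) and an HTTP client for the job itself.
std::vector<std::string> fetch_messages() { return {}; }
void handle(const std::string& msg) { (void)msg; }

int main() {
    using namespace std::chrono_literals;
    for (;;) {                                    // long-running daemon, not a per-run cron job
        for (const auto& msg : fetch_messages())  // drain whatever is queued right now
            handle(msg);
        std::this_thread::sleep_for(5s);          // poll interval ("every x seconds")
    }
}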

slug – How to enqueue a script on a specific URL containing multiple parts

I've tried using is_page() and passing the path, but the page on which I'm trying to enqueue my script is "breeds/community/add", which is not a single slug like "about-us". Is it possible to do this with is_page(), or should I approach it another way?

add_action('wp_enqueue_scripts', 'enqueue_date_picker_styles_and_scripts', 101);
function enqueue_date_picker_styles_and_scripts() {
if (is_page('breeds/community/add')) {
wp_enqueue_style('bootstrap-datepicker-css', '//cdnjs.cloudflare.com/ajax/libs/bootstrap-datepicker/1.8.0/css/bootstrap-datepicker.standalone.css');
wp_enqueue_script('bootstrap-datepicker-js', '//cdnjs.cloudflare.com/ajax/libs/bootstrap-datepicker/1.8.0/js/bootstrap-datepicker.min.js', array('jquery'), '1.8.0', true);
}
}