## algorithms – Check which interval a number lies in

I have an array of time values `t(1), t(2), ..., t(n)` and corresponding values `v(1), v(2), ..., v(n)`. Given a time `tx`, I would like to find efficiently in which interval `(t(i), t(i+1))` it lies. The corresponding value `vx` must then be interpolated, which is easy to do afterwards.

P.S.: The sequence starts at `t(1) = t1` and is strictly increasing and greater than zero.
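
Since the times are sorted, a binary search answers each query in O(log n); a minimal sketch in Python (0-indexed lists; the `interpolate` helper is a hypothetical illustration of the follow-up step):

```python
import bisect

def find_interval(t, tx):
    """Return i such that t[i] <= tx < t[i+1] (0-indexed),
    assuming t is strictly increasing. O(log n) per query."""
    i = bisect.bisect_right(t, tx) - 1
    if i < 0 or i >= len(t) - 1:
        raise ValueError("tx lies outside [t[0], t[-1])")
    return i

def interpolate(t, v, tx):
    """Linear interpolation of v at time tx."""
    i = find_interval(t, tx)
    frac = (tx - t[i]) / (t[i + 1] - t[i])
    return v[i] + frac * (v[i + 1] - v[i])
```

For repeated queries with monotonically increasing `tx`, remembering the last returned index and scanning forward from it can be even cheaper in practice.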

## Backpack Problems – Placing N Weighted Balls into M Uniform Bins While Aiming for Balanced Weight

Suppose there are $$N$$ weighted balls and $$M$$ identical bins; it is guaranteed that at least one placement exists in which all the balls fit into the bins.

What is the right algorithm for finding a well-balanced placement, where every bin ends up with roughly equal total ball weight?

I know that if the bins have different weights the problem is NP-hard; I am not sure whether this simplified version can be solved in linear time.

I would appreciate any references where I could find papers dealing with complexity and related solutions. Thank you!
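
For context while waiting for references: exact balancing is already equivalent to the NP-complete PARTITION problem for $$M = 2$$, so a linear-time exact algorithm is unlikely. A common practical answer is the longest-processing-time (LPT) greedy heuristic from multiprocessor scheduling; a sketch, assuming ball weights given as a Python list:

```python
import heapq

def balanced_placement(weights, m):
    """LPT heuristic: sort balls by descending weight, always put the
    next ball into the currently lightest bin (min-heap of bin loads).
    Runs in O(N log N + N log M); the heaviest bin load is within a
    factor 4/3 - 1/(3M) of optimal (Graham's bound)."""
    bins = [(0, i, []) for i in range(m)]   # (load, bin id, contents)
    heapq.heapify(bins)
    for w in sorted(weights, reverse=True):
        load, i, contents = heapq.heappop(bins)
        contents.append(w)
        heapq.heappush(bins, (load + w, i, contents))
    return [contents for _, _, contents in sorted(bins, key=lambda b: b[1])]
```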

## algorithms – Weird implementation of quicksort

I've encountered a weird question on my school's algorithms test. At first I thought it was a standard quicksort problem and I was confident I could solve it, but reading the algorithm carefully, it differs slightly from the usual quicksort...

The algorithm and the original question are as follows:

``````
QUICKSORT(A, p, r)
1 if p < r
2   then q = PARTITION(A, p, r)
3 QUICKSORT(A, p, q)
4 QUICKSORT(A, q + 1, r)

PARTITION(A, p, r)
1 x = A(p)
2 i = p − 1
3 j = r + 1
4 while TRUE
5   do repeat j = j − 1
6     until A(j) ≤ x
7   do repeat i = i + 1
8     until A(i) ≥ x
9   if i < j
10    then exchange values A(i) and A(j)
11    else return j
``````
• (Q1) Suppose PARTITION(A, 1, 6) is applied to the array A = (4, 3, 7, 8, 6, 2). Note that we assume the first element of the array is A(1), i.e. A(1) = 4 in this case. Describe the return value of PARTITION and the resulting state of A.
• (Q2) Find the smallest total number of PARTITION calls needed for QUICKSORT to finish on any array of size 6.

I think this algorithm has several "problems". First, on line 3 of QUICKSORT, shouldn't it be QUICKSORT(A, p, q - 1)? Second, in PARTITION, since i starts at p - 1 (line 2) and the loop on lines 7-8 stops as soon as A(i) ≥ x, on the first pass it stops immediately at A(p), so the pivot itself can get swapped away. And on line 11 it returns without ever swapping A(p) into its final position...

However, when I traced this weird algorithm on Q1, the array was 237864 after the first swap, 234687 after the second, and 234678 after the third. It worked!

So, if you've seen this version of quicksort before, or if you understand its mechanism, could you give me some comments?
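
For what it's worth, this looks like Hoare's original partition scheme (it appears as a problem in CLRS): the pivot is not placed in its final position, which is exactly why line 3 recurses on (p, q) rather than (p, q - 1). A quick Python transcription (0-indexed) to check Q1:

```python
def partition(a, p, r):
    """Hoare-style PARTITION from the question, 0-based indexing.
    Returns a split point j such that recursing on (p, j) and
    (j + 1, r) sorts the array; the pivot may end up on either side."""
    x = a[p]                      # pivot = first element
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while a[j] > x:           # repeat j -= 1 until a[j] <= x
            j -= 1
        i += 1
        while a[i] < x:           # repeat i += 1 until a[i] >= x
            i += 1
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            return j

def quicksort(a, p, r):
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q)        # note: (p, q), not (p, q - 1)
        quicksort(a, q + 1, r)
```

Running `partition` on (4, 3, 7, 8, 6, 2) returns index 2 in the question's 1-based numbering (1 in 0-based) with the array in state (2, 3, 7, 8, 6, 4), matching the trace above.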

## algorithms – Find the last substring in lexicographical order in O(n) time

Given a string s, return the last substring of s in lexicographical
order.

Example 1

Input: "abab"
Output: "bab"
Explanation: The substrings are ["a", "ab", "aba", "abab", "b", "ba", "bab"]. The lexicographically maximum substring is "bab".

More specifically, I am looking for a proof / intuition to go with it. I can find plenty of solutions, but none provides a proof of its technique, and I cannot convince myself.

source: https://leetcode.com/problems/last-substring-in-lexicographical-order/
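
One common approach (a sketch of the two-pointer technique; I can't vouch that it is the intended editorial solution): appending characters to a string only makes it lexicographically larger, so the answer must be a suffix, and it suffices to find the maximum suffix. Two pointers track the best candidate start `i` and a challenger `j`; when they mismatch after `k` equal characters, every start position inside the matched run can be skipped, which bounds the total work by O(n):

```python
def last_substring(s):
    """Maximum-suffix scan: i = best candidate start, j = challenger
    start, k = length of the current matched prefix. O(n) time,
    O(1) extra space."""
    n = len(s)
    i, j, k = 0, 1, 0
    while j + k < n:
        if s[i + k] == s[j + k]:
            k += 1                 # tie so far, extend the match
        elif s[i + k] > s[j + k]:
            j = j + k + 1          # challenger loses; skip past the mismatch
            k = 0
        else:
            i = max(i + k + 1, j)  # candidate loses; jump to a better start
            j = i + 1
            k = 0
    return s[i:]
```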

## algorithms – Function replication – Software Engineering Stack Exchange

You are given an arbitrary function `f`. You can invoke it with arbitrary parameters and read the return values, as well as observe the exceptions it throws, but you cannot read its implementation.

How would you write a function `g` that reproduces `f`'s implementation?

Note that by "function" I do not mean to restrict `f` to mathematical functions; it can operate on any type of data, not just numbers. "Program" may be a better word for it. However, if it makes the problem easier to think about, it may help to start with numbers only.

Also note that I do not think certainty / a proof of correctness is possible here; only refutation is.
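
Since only observational access is allowed, the simplest `g` is a record-and-replay table. A minimal sketch (the names `replicate` and `probe_inputs` are mine; it reproduces `f` only on the probed inputs, which illustrates why correctness can be refuted but never proven):

```python
def replicate(f, probe_inputs):
    """Build g from observed behavior of f: record each return value
    or raised exception type, then replay it on demand."""
    table = {}
    for x in probe_inputs:
        try:
            table[x] = ("ok", f(x))
        except Exception as e:
            table[x] = ("err", type(e))

    def g(x):
        if x not in table:
            raise KeyError(f"behavior of f at {x!r} was never observed")
        kind, val = table[x]
        if kind == "ok":
            return val
        raise val()                # re-raise the same exception type

    return g
```

Generalizing beyond the probed inputs (fitting a model to the observations) is where the problem becomes program synthesis / machine learning, with no correctness guarantee.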

## Number of swaps and comparisons in sorting algorithms

I was reading about comparison-based sorting algorithms (bubble, insertion and selection sort). I've followed this link link1, but if anyone asks what the number of comparisons and swaps in these algorithms is, what should I say?

For an array of size n (please refer to the attached photo), help me understand the number of comparisons and swaps.
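
One concrete way to answer is to instrument the sorts and count. A sketch for bubble and selection sort: on a reversed array of size n, both make n(n-1)/2 comparisons, but bubble sort also makes n(n-1)/2 swaps while selection sort makes at most n-1:

```python
def bubble_counts(a):
    """Bubble sort, returning (comparisons, swaps)."""
    a = list(a)
    comp = swaps = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comp += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return comp, swaps

def selection_counts(a):
    """Selection sort, returning (comparisons, swaps)."""
    a = list(a)
    comp = swaps = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comp += 1
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return comp, swaps
```

Note that selection sort performs its n(n-1)/2 comparisons regardless of the input order, whereas bubble and insertion sort are input-sensitive (O(n) comparisons on already-sorted input with the usual early-exit / shifting variants).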

## algorithms – Binary search symbols table

Hi, I am teaching myself algorithms from Sedgewick's book and I have encountered the following problem:

``````
3.1.15: Assume that searches are 1,000 times more frequent
than insertions for a BinarySearchST client. Estimate the
percentage of the total time that is devoted to insertions,
when the number of searches is 10^3, 10^6, and 10^9.
``````

As stated in the problem, Searches (S) = 1000 × Inserts (I):

• $$S = 10^3 \Rightarrow I = 1$$
• $$S = 10^6 \Rightarrow I = 10^3$$
• $$S = 10^9 \Rightarrow I = 10^6$$

At this stage of the book we use plain arrays and linked lists to back the symbol table (no efficient hash maps, trees, etc. yet). This means that searches take ~log2(N) time and insertions take ~N/2 time (assuming a uniform distribution of the positions where inserts land).

Am I right that the ratio of insertion time to search time would be approximately:

$$\frac{\text{Inserts} \times N/2}{\text{Searches} \times \log_2(N)}$$

Using $$\text{Searches} = 10^3 \times \text{Inserts}$$, this reduces to

$$\frac{N/2}{10^3 \times \log_2(N)}$$

This would mean that the percentage depends strongly on the size of the symbol table, and that there is no constant percentage we can use to answer the question.

Any suggestions on where my reasoning goes wrong? Should I assume an initial size for the table?
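
To see that dependence numerically, here is a small sketch under the stated cost model (searches ≈ log2 N compares, insertions ≈ N/2 array moves; the fraction reported is insertion cost over total cost):

```python
import math

def insert_fraction(searches, n):
    """Estimated fraction of total time spent on insertions for
    BinarySearchST, assuming cost(search) ~ log2(n) compares,
    cost(insert) ~ n/2 moves, and searches = 1000 * inserts."""
    inserts = searches / 1000
    insert_cost = inserts * n / 2
    search_cost = searches * math.log2(n)
    return insert_cost / (insert_cost + search_cost)
```

With a table of size N = 10^3 the insertions account for roughly 5% of the time, but at N = 10^6 they dominate at over 90%, which supports the point that the answer depends on the assumed table size.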

## Algorithms – Can every problem in EXP be Karp-reduced to an EXP-complete problem?

According to Wikipedia and other references, there is a complete language $$L \in EXP$$ such that for every language $$L'$$ in $$EXP$$ there is a reduction $$f$$ that converts an instance of $$L'$$ into an instance of $$L$$ in polynomial time. I think this definition is fine, but an observation baffles me. I construct a language $$L'$$ in $$EXP$$ such that $$L'$$ cannot be converted to $$L$$ by any polynomial-time reduction, yet $$L'$$ is in $$EXP$$, via a machine M as follows:

1) $$\text{input}(i, x)$$

2) run $$f_i(i, x)$$ for $$2^n$$ steps; if its running time exceeds $$n^{\log n}$$, accept.

3) otherwise, accept $$(i, x)$$ iff $$f_i(i, x) \notin L$$.

Obviously $$L(M) \in EXP$$, so there should be a polynomial-time reduction from $$L(M)$$ to $$L$$; but that yields a contradiction, because M diagonalizes against every reduction to $$L$$ running in less than $$n^{\log n}$$ time.

My question is: where is my mistake?

## Algorithms – Solve / mitigate the free-rider problem without requiring identification?

Is there an algorithm / protocol to solve or mitigate the free-rider problem without requiring identification?

In a web-based system, I want to distribute exactly one virtual dollar a day to each person who creates an account, without requiring identification.

A mitigating solution would mean, for example, that a given person can get up to \$2 a day (in case they created more than one account), but not much more.

The algorithm should distribute roughly the same amount of money per day to each person. That is, the ratio of the amounts distributed to two different people per day should be about 1.

## algorithms – Time complexity of predecessor search for a dictionary implemented as a sorted array

I'm reading "The Algorithm Design Manual" by Steven Skiena. On page 73 he discusses the time complexity of implementing $$Predecessor(D, k)$$ and $$Successor(D, k)$$ and states that each takes $$O(1)$$ time.

If the data structure looks something like

``````
((k0, x), (k1, x), ...)
``````

where the keys `k0` to `kn` are sorted, then given `k`, I thought `successor(D, k)` should first binary-search the sorted array for `k` ( $$O(\log n)$$ ) and then retrieve the successor ( $$O(1)$$ ). Since the array is sorted, the overall time complexity should be $$O(\log n)$$ instead of the $$O(1)$$ stated in the book. The same applies to `predecessor(D, k)`.

For an unsorted array, the time complexity of predecessor and successor stays $$O(n)$$, since searching the unsorted array also takes $$O(n)$$.

Did I misunderstand something?
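
For reference, here is how the two-step argument looks in code (a sketch with Python's `bisect`; the book's O(1) claim presumably assumes `k`'s position in the array is already known, in which case only the ±1 index step remains):

```python
import bisect

def successor(keys, k):
    """Successor of key k in a sorted array: binary search to locate
    k (O(log n)), then the next slot (O(1))."""
    i = bisect.bisect_right(keys, k)
    return keys[i] if i < len(keys) else None

def predecessor(keys, k):
    """Predecessor of key k: mirror of successor."""
    i = bisect.bisect_left(keys, k)
    return keys[i - 1] if i > 0 else None
```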