## algorithms – solve time complexity of recurrence relation

We assume n is a power of 2, and that multiplication takes $$\Theta(n^2)$$ execution time. Please show me the process of deriving the time complexity of the following routine.

```
exp_better(a, n) {
    if (n == 1)
        return a
    m = n / 2
    power = exp_better(a, m)
    power = power * power
    if (n is even)
        return power
    else
        return power * a
}
```
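For reference, here is a runnable Python transcription of the pseudocode (a sketch of my own; each call halves n and performs at most two multiplications, so the recursion is Θ(log n) levels deep):

```python
# Direct Python transcription of the exp_better pseudocode above.
# Each call halves n and does at most two multiplications, so the
# recursion depth is Theta(log n).

def exp_better(a, n):
    if n == 1:
        return a
    m = n // 2
    power = exp_better(a, m)
    power = power * power
    if n % 2 == 0:
        return power
    else:
        return power * a
```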

## algorithms – Is there a way to uniquely map every natural number x

Imagine you have a number x, where x ∈ (0, N). Is there an algorithm that maps x to y such that y ∈ (0, N) as well, the mapping is unique, and the y values are distributed pseudorandomly across the whole range? I know it’s possible by generating the list from 0 to N, shuffling it, and using x as an index. I want to know whether there is a smarter way that doesn’t involve a memory footprint linear in N.
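For concreteness, the shuffle-and-index baseline described above might look like the following sketch (Python rather than C#; `make_permutation` and the fixed seed are my own choices):

```python
import random

def make_permutation(N, seed=0):
    # Theta(N) memory: materialize the whole range and shuffle it once.
    # A fixed seed makes the permutation reproducible across calls.
    table = list(range(N))
    random.Random(seed).shuffle(table)
    return table

perm = make_permutation(256)
y = perm[3]  # maps x = 3 to a fixed pseudorandom y in [0, N)
```

The question is whether this Θ(N) table can be avoided.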

The pigeonhole principle shows that this is impossible when the range of x is larger than the range of y, and it is trivially possible when the ranges are equal (well… y := x), but is it possible in a manner where y is pseudorandomly distributed?

My first (bad) attempt in C# was to use the golden ratio and travel around a circle, since mathematically this is guaranteed to give a unique angle every time. In theory, (phi * n) mod 360 looks sort of random and is unique. Sadly this only works with infinite precision and not at all with discrete buckets for the output, so the idea didn’t really work out, even for N = 255.

So out of pure curiosity I’m wondering: is there some beautiful algorithm that achieves this mapping without a predefined list of candidate numbers, a list of already used numbers, and so on?

## trees – Is the heap in “Data Structures and Algorithms in Java” by Goodrich, Tamassia, and Goldwasser missing sentinel leaves?

In the book Data Structures and Algorithms in Java, Sixth Edition, by Goodrich, Tamassia, and Goldwasser, on page 339, there is an

“Example of a heap storing 13 entries with integer keys. The last position is the one storing entry (13,W).”

I’ve recreated a similar image using qtree-tikz, with my own depth and height calculations in the box on the left.

According to the book,

A heap T storing n entries has height $$h = \lfloor \log n \rfloor$$.

Justification: From the fact that T is complete, we know that the number of nodes in levels $$0$$ through $$h-1$$ of T is precisely $$1+2+4+\cdots+2^{h-1} = 2^{h}-1$$, and that the number of nodes in level $$h$$ is at least 1 and at most $$2^h$$. Therefore

$$n \geq 2^h - 1 + 1 = 2^h$$ and $$n \leq 2^h - 1 + 2^h = 2^{h+1} - 1$$.
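The two inequalities pin down the height numerically; here is a quick check of my own (log base 2, as the book uses) for the 13-entry heap of Figure 9.1:

```python
# For a complete binary tree with n >= 1 nodes, the justification gives
# 2^h <= n <= 2^(h+1) - 1, which is equivalent to h = floor(log2 n).

def heap_height(n):
    return n.bit_length() - 1  # floor(log2 n) for positive integers

assert heap_height(13) == 3           # Figure 9.1: 13 entries, height 3
assert 2 ** 3 <= 13 <= 2 ** 4 - 1     # the two inequalities above
```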

Here is where I see a problem. I added the depths and heights for nodes at each level, according to my understanding of trees, in the box to the left of the tree. The example tree in Figure 9.1 shown above clearly has 7 inner nodes, and although depth level 3 could hold 8 nodes, it is not full, holding only 6.

If I follow the “Justification”, I end up with the following

I am being forced to conclude that there must be a missing level of sentinels: $$2^3 - 1 = 7$$, and the only way to get the exponent 3 in those formulas is to have a height of 4. But why on earth would the authors, who otherwise explain everything in detail, not make such an important piece of information explicit? Or did I miss something? I would appreciate a thorough explanation to help me understand the proof. It also seems that the authors use the term “level” loosely, sometimes meaning depth level and sometimes height level.

I should also mention that earlier in the book, on page 286, they provide a definition for the height of a tree without any examples.

We next define the height of a tree to be equal to the maximum of the depths of its positions (or zero, if the tree is empty).

• If $$p$$ is a leaf, then the height of $$p$$ is $$0$$.
• Otherwise, the height of $$p$$ is one more than the maximum of the heights of $$p$$’s children.

## algorithms – Range sum query – tree representation efficiency

I was reading about possible solutions to the well known problem:

Given an array `A` of length `N`, create a structure that enables

• Answering the range-sum query $$\sum_{k=i}^{j} A(k)$$
• Updating $$A(k)$$

I’ve seen that most solutions use a binary indexed tree (Fenwick tree), but I was curious whether it’s possible to just use a regular tree built with similar qualities.

So given $$A = (5, 4, 7, 9, 1)$$

I try to construct a tree by creating a tree node for each value, where each node has a start and an end index (both of which are just the element’s index at the beginning).

To construct the tree, I push all the starting nodes into a queue $$Q$$:

```
while not Q.empty():
    next <- ()
    while not Q.empty():
        f <- Q.front()
        Q.pop()
        if Q.empty():
            if marker:
                parent <- make_parent(f, marker)
                marker <- null
                next.push(parent)
            else:
                marker <- f
        else:
            f2 <- Q.front()
            Q.pop()
            parent <- make_parent(f, f2)
            next.push(parent)

    for n in next:
        Q.push(n)
```

After this process ends, marker will hold the root.

(I have working C++ code, but I tried to provide something more abstract and simple.)

To get the sum of a range, I perform the following (assuming an array Nodes that holds all the leaves, and that the query starts at the root of the tree created above):

```
sumRangeNode(int i, int j, Node* n)
    if n == null
        return 0
    if i == j
        return Nodes(i)->val
    if j < n->start || i > n->end
        return 0
    if i <= n->start && j >= n->end
        return n->val
    return sumRangeNode(i, j, n->left) + sumRangeNode(i, j, n->right)
```

The question is: does this still have $$O(\log N)$$ query complexity? I’ve tried to reason about it but struggled with:

• The fact that I might be building a tree with “stragglers”, like the $$1$$ in the example
• The fact that I recursively explore both right and left. Intuition tells me that because there are “enough” cases where the descent stops early it’s OK, but I couldn’t find a way to formalize/prove it.
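Here is a self-contained Python sketch of both pieces described above (class and function names are my own; `build` mirrors the queue-pairing loop, with `marker` carrying the straggler upward until it can be paired):

```python
class Node:
    def __init__(self, start, end, val, left=None, right=None):
        self.start, self.end, self.val = start, end, val
        self.left, self.right = left, right

def build(A):
    # Pair up nodes level by level; an unpaired node (the "marker"
    # straggler) is carried upward until it can be paired.
    level = [Node(i, i, v) for i, v in enumerate(A)]
    marker = None
    while len(level) > 1 or marker is not None:
        nxt = []
        i = 0
        while i + 1 < len(level):
            l, r = level[i], level[i + 1]
            nxt.append(Node(l.start, r.end, l.val + r.val, l, r))
            i += 2
        if i < len(level):              # one node left over on this level
            if marker is not None:      # pair it with the straggler
                l = level[i]
                nxt.append(Node(min(l.start, marker.start),
                                max(l.end, marker.end),
                                l.val + marker.val, l, marker))
                marker = None
            else:
                marker = level[i]       # carry it to the next level
        level = nxt
    return level[0]

def sum_range(n, i, j):
    if n is None or j < n.start or i > n.end:
        return 0                        # node disjoint from the query range
    if i <= n.start and j >= n.end:
        return n.val                    # node fully contained in the range
    return sum_range(n.left, i, j) + sum_range(n.right, i, j)
```

On A = (5, 4, 7, 9, 1), `sum_range(root, 1, 3)` returns 4 + 7 + 9 = 20.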

## algorithms – Leader Election: Every bit-reversal ring is $\frac{1}{2}$-symmetric

I have a proof that I need help with. As the title says, the theorem is that every bit-reversal ring is $$\frac{1}{2}$$-symmetric. The theorem concerns the leader election algorithm in a synchronous ring. Here is what I know:

A bit-reversal ring is defined as follows: we assign to each process $$i$$ the integer from $$\{0, \ldots, n-1\}$$ whose $$k$$-bit representation is the reverse of the $$k$$-bit representation of $$i$$. Here $$n$$ is a power of two, $$n = 2^{k}$$.

Two segments $$U$$ and $$V$$ are order equivalent if they have the same length $$k$$ and, for all $$i$$ and $$j$$ such that $$1 \leq i, j \leq k$$, we have $$u_{i} \leq u_{j}$$ if and only if $$v_{i} \leq v_{j}$$.

A ring $$R$$ is $$c$$-symmetric if, for every segment $$S$$ of $$R$$ of length $$l$$ with $$\sqrt{n} \leq l \leq n$$, there are at least $$\lfloor \frac{cn}{l} \rfloor$$ segments of $$R$$ that are order equivalent to $$S$$, including $$S$$ itself.

So after plugging all I know into the formulas, I get that $$\lfloor \frac{2^{k-1}}{l} \rfloor$$ is the required number of segments, where $$l$$ satisfies $$2^{\frac{k}{2}} \leq l \leq 2^{k}$$.
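The definitions can be sanity-checked on small rings; the following Python sketch (function names are mine) builds a bit-reversal ring and tests the c-symmetry condition by brute force over all cyclic segments:

```python
import math

def bit_reverse_ring(k):
    # Process i is assigned the integer whose k-bit representation is
    # the reverse of the k-bit representation of i.
    n = 1 << k
    return [int(format(i, '0{}b'.format(k))[::-1], 2) for i in range(n)]

def order_signature(seg):
    # Segments with distinct values are order equivalent iff they have
    # the same rank pattern.
    return tuple(sorted(seg).index(v) for v in seg)

def is_c_symmetric(ring, c=0.5):
    n = len(ring)
    for l in range(math.ceil(math.sqrt(n)), n + 1):
        sigs = [order_signature(tuple(ring[(s + t) % n] for t in range(l)))
                for s in range(n)]
        for sig in sigs:
            if sigs.count(sig) < (c * n) // l:  # floor(cn / l), S included
                return False
    return True
```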

Any hint or piece of information would be much appreciated! Thank you.

## algorithms – Logarithmic Time — O(log n) in Python

Hi, I am new to data structures and am trying to get some clarification.

### The following shows an example of logarithmic time — O(log n) — in Python.

```python
def binary_search(data, value):
    n = len(data)
    left = 0
    right = n - 1
    while left <= right:
        middle = (left + right) // 2  # How can I divide by 3 here to make the search faster?
        if value < data[middle]:
            right = middle - 1
        elif value > data[middle]:
            left = middle + 1
        else:
            return middle
    raise ValueError('Value is not in the list')

if __name__ == '__main__':
    data = (1, 2, 3, 4, 5, 6, 7, 8, 9)
    print(binary_search(data, 8))
```

Can you please explain, and write code showing, how to make this faster? Maybe by dividing into thirds (n/3)?

Thanks
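A hedged sketch of the three-way split being asked about: two midpoints per iteration instead of one. Note that log₃ n = log₂ n / log₂ 3, so this is still O(log n) — fewer iterations, but more comparisons per iteration, so it is not asymptotically faster than binary search.

```python
def ternary_search(data, value):
    left, right = 0, len(data) - 1
    while left <= right:
        third = (right - left) // 3
        m1 = left + third            # first cut point
        m2 = right - third           # second cut point
        if value == data[m1]:
            return m1
        if value == data[m2]:
            return m2
        if value < data[m1]:
            right = m1 - 1           # value lies in the left third
        elif value > data[m2]:
            left = m2 + 1            # value lies in the right third
        else:
            left, right = m1 + 1, m2 - 1  # value lies in the middle third
    raise ValueError('Value is not in the list')
```

For the tuple in the question, `ternary_search((1, 2, 3, 4, 5, 6, 7, 8, 9), 8)` returns 7, the same index as binary search.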

## algorithms – Potential function for a dynamic stack

Consider a dynamic stack stored in an array of size m with n elements (initially n = 0), supporting only a push operation. If upon a push n = m, then expand the array so that m = 3n (that is, triple the size of the array).

Define a potential function based on the number n of elements in the array and the total number m of slots in the array, and show that the push operation has a constant amortized time.
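As a sanity check before the analysis, here is a small simulation of my own (counting element writes as the cost, and assuming an initial capacity of 1, which the problem does not specify). The average cost per push stays below a constant:

```python
def simulate(pushes):
    # Push with triple-on-full, counting element writes as the cost:
    # 1 write per push, plus n copies whenever the full array is tripled.
    m, n, cost = 1, 0, 0   # initial capacity of 1 is an assumption
    for _ in range(pushes):
        if n == m:
            cost += n      # copy all n existing elements
            m = 3 * n      # triple the capacity
        cost += 1          # write the new element
        n += 1
    return cost / pushes
```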

My attempt:
Let the potential function be ϕ(n)= 3n – m.

Consider T*(push) = T(push) + ϕ(n) − ϕ(n-1)

If n < m then T(push) = 1 (since there is space in the array, you only need to add the new element), and so T*(push) = 1 + 3n − m − (3(n-1) − m) = 4

If n=m then T(push) = n+1 (since there is not enough space so you need to copy all the elements plus the new one to the new array).

ϕ(n) = 3n-m, but m = 3n since the array is full it must be expanded for the push so ϕ(n) = 3n – 3n = 0

ϕ(n-1) = 3(n-1)-m, but prior to the push the array is full so m=n and thus ϕ(n-1) = 3(n-1)-n = 2n-3

Thus, T*(push) = n+1 + 0 – (2n-3) = -n + 4, which is not constant.

If anyone could help show me where I went wrong that would be great!

## algorithms – When is the expected time of a move to front list equal to that of an optimal list

When is the expected time of a move-to-front list equal to that of an optimal list?

My thinking is that it is when the move-to-front list happens to be arranged as an optimal list, but since the expected time averages over many accesses, this doesn’t really make sense. My only other thought was the trivial case where the list has only one element.
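Assuming independent accesses with fixed probabilities, the two costs can be compared numerically using the classic stationary-distribution fact for move-to-front that element j sits in front of element i with probability p_j / (p_i + p_j) (the function names below are my own):

```python
def mtf_expected_cost(p):
    # Stationary expected search cost of move-to-front: the expected
    # position of item i is 1 plus the probability that each other item
    # j sits in front of it, which is p_j / (p_i + p_j).
    return sum(pi * (1 + sum(pj / (pi + pj) for j, pj in enumerate(p) if j != i))
               for i, pi in enumerate(p))

def optimal_static_cost(p):
    # Best fixed list: items sorted by decreasing access probability.
    return sum((k + 1) * q for k, q in enumerate(sorted(p, reverse=True)))
```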

## algorithms – Time efficient way to find pairs in an array whose sum equals a specific number?

Given an array of integers, we want to find how many pairs can be made such that their sum is divisible by 60. Pairs are counted by position, so repeated values form distinct pairs.

For example, (29, 20, 30, 100, 30, 121, 160, 31, 20) has 6 pairs: (29, 31), (20, 100), (20, 160), (30, 30), (100, 20), (160, 20).

The obvious and immediate approach is a nested for loop:

```csharp
for (int i = 0; i < pairs.Count - 1; i++)
{
    for (int n = i + 1; n < pairs.Count; n++)
    {
        if ((pairs[i] + pairs[n]) % 60 == 0)
        {
            count++;
        }
    }
}
```

but that’s too slow. It also doesn’t obviously account for duplicates: if the array had the number 30 as four elements, those alone contribute n(n−1)/2 = (4·3)/2 = 6 pairs.

How can we improve the performance?

My initial thought is to use a dictionary/hashmap, but I am unsure how to iterate through it in a performant way.
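The hashmap idea can be sketched as follows (Python rather than C#; names are mine): bucket the numbers by remainder mod 60, then combine complementary buckets. This makes the whole count O(n), with no iteration over pairs at all:

```python
from collections import Counter

def count_pairs_div60(nums):
    c = Counter(x % 60 for x in nums)
    # Remainders 0 and 30 pair with themselves: choose 2 out of the count.
    total = c[0] * (c[0] - 1) // 2 + c[30] * (c[30] - 1) // 2
    # Every other remainder r pairs with remainder 60 - r.
    for r in range(1, 30):
        total += c[r] * c[60 - r]
    return total
```

On the example array this returns 6, matching the hand count, and four 30s alone give the 4·3/2 = 6 pairs mentioned above.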

## algorithms – DAG decomposition similar to series-parallel graphs

Let’s consider directed acyclic graphs (DAGs) with a single source and a single sink. Such graphs can be combined using the two operations used in series-parallel graphs: parallel composition $$P$$ and series composition $$S$$.

I’m interested in the reverse operation (i.e. decomposition): given a DAG $$G_1$$, I want to get a DAG $$G_2$$ in which every subgraph that can’t be decomposed further using the operations inverse to $$P$$ and $$S$$ is replaced by a node. For example, the graph $$G_1$$ below can be decomposed in this way: the three subgraphs that can’t be decomposed further are replaced by nodes (in red). Note that the non-decomposable subgraphs with two vertices (simply arcs, in green) aren’t replaced by nodes.

I know that the series decomposition can be done in linear time (it’s equivalent to finding articulation points). The parallel decomposition looks harder, and it’s also unclear which operation should be tried first at each decomposition step.
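For the series step, here is a brute-force Python sketch (names are mine) of the equivalence mentioned above: in a single-source, single-sink DAG, a vertex is a series cut point exactly when removing it disconnects the sink from the source. A linear-time version would use articulation points or dominators instead.

```python
def reachable(adj, s, t, skip=None):
    # DFS from s to t over adjacency dict adj, ignoring vertex `skip`.
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj.get(u, []):
            if w != skip and w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def series_cut_points(adj, source, sink):
    # A vertex v (other than source/sink) is a series cut point iff
    # every source-to-sink path passes through it, i.e. removing it
    # makes the sink unreachable.
    verts = set(adj) | {w for vs in adj.values() for w in vs}
    return [v for v in verts
            if v not in (source, sink)
            and not reachable(adj, source, sink, skip=v)]
```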

Are there any existing algorithms that can find such a decomposition?