asymptotics – Solving or approximating recurrence relations for number sequences

It may happen that you encounter a strange recurrence like this:
$$ T(n) = \begin{cases}
c & n < 7 \\
2T\left(\frac{n}{5}\right) + 4T\left(\frac{n}{7}\right) + cn & n \geq 7
\end{cases} $$

If you are like me, you will realize that you cannot use the master theorem, and then you might think: "hmmm … maybe a recursion-tree analysis could work." You would then realize that the tree starts getting very messy. After some searching on the Internet, you will see that the Akra-Bazzi method will work! Then you take a proper look at it and realize you do not really want to do all of those calculations. If you have been like me up to this point, you will be delighted to know that there is an easier way.


Let $c$ and $k$ be positive constants.

Then let $\{a_1, a_2, \ldots, a_k\}$ be positive constants such that $\sum_1^k a_i < 1$.

We must also have a recurrence of the form (like our example above):

$$ \begin{align}
T(n) &\leq c & 0 < n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \\
T(n) &\leq cn + T(a_1 n) + T(a_2 n) + \dots + T(a_k n) & n \geq \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\}
\end{align} $$

Claim

I claim that $T(n) \leq bn$, where $b$ is a constant (i.e., $T$ is asymptotically linear) and:

$$ b = \frac{c}{1 - \left(\sum_1^k a_i\right)} $$

Proof by induction

Base case: $n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \implies T(n) \leq c < b \leq bn$

Inductive step: Assume the claim holds for all values smaller than $n$; we then have

$$ \begin{align}
T(n) &\leq cn + T(\lfloor a_1 n \rfloor) + T(\lfloor a_2 n \rfloor) + \dots + T(\lfloor a_k n \rfloor) \\
&\leq cn + b\lfloor a_1 n \rfloor + b\lfloor a_2 n \rfloor + \dots + b\lfloor a_k n \rfloor \\
&\leq cn + b a_1 n + b a_2 n + \dots + b a_k n \\
&= cn + bn \sum_1^k a_i \\[0.5em]
&= \frac{cn - cn\sum_1^k a_i}{1 - \left(\sum_1^k a_i\right)} + \frac{cn \sum_1^k a_i}{1 - \left(\sum_1^k a_i\right)} \\[0.5em]
&= \frac{cn}{1 - \left(\sum_1^k a_i\right)} \\
&= bn & \square
\end{align} $$

Then we have $T(n) \leq bn \implies T(n) = O(n)$.

Example

$$ T(n) = \begin{cases}
c & n < 7 \\
2T\left(\frac{n}{5}\right) + 4T\left(\frac{n}{7}\right) + cn & n \geq 7
\end{cases} $$

We first check that the coefficients inside the recursive calls sum to less than one:
$$ \begin{align}
1 &> \sum_1^k a_i \\
&= \frac{1}{5} + \frac{1}{5} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} \\[0.5em]
&= \frac{2}{5} + \frac{4}{7} \\[0.5em]
&= \frac{34}{35}
\end{align} $$

We then verify that the base case covers everything below the max of the inverse coefficients:
$$ \begin{align}
n &< \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \\
&= \max\{5, 5, 7, 7, 7, 7\} \\
&= 7
\end{align} $$

With these conditions fulfilled, we know $T(n) \leq bn$, where $b$ is a constant equal to:
$$ \begin{align}
b &= \frac{c}{1 - \left(\sum_1^k a_i\right)} \\[0.5em]
&= \frac{c}{1 - \frac{34}{35}} \\[0.5em]
&= 35c
\end{align} $$

So we have:
$$ \begin{align}
T(n) &\leq 35cn \\
\land\; T(n) &\geq cn \\
\therefore\; T(n) &= \Theta(n)
\end{align} $$
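
As a sanity check (my addition, not part of the original derivation), here is a minimal Python sketch that evaluates the recurrence directly, with floors and the arbitrary choice $c = 1$, and confirms that $cn \leq T(n) \leq 35cn$ on a few inputs:

from functools import lru_cache

C = 1  # the constant c from the recurrence; any positive value works

@lru_cache(maxsize=None)
def T(n):
    # T(n) = c for n < 7, otherwise 2T(n/5) + 4T(n/7) + cn (with floors)
    if n < 7:
        return C
    return 2 * T(n // 5) + 4 * T(n // 7) + C * n

for n in [10, 100, 1_000, 10_000, 100_000]:
    assert C * n <= T(n) <= 35 * C * n
    print(n, T(n), T(n) / n)  # the ratio T(n)/n stays below 35, consistent with Theta(n)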


Similarly, we can prove a bound for the case $\sum_1^k a_i = 1$. The proof follows much the same format:

Let $c$ and $k$ be positive constants such that $k > 1$.

Then let $\{a_1, a_2, \ldots, a_k\}$ be positive constants such that $\sum_1^k a_i = 1$.

We must also have a recurrence of the form (like our example above):

$$ \begin{align}
T(n) &\leq c & 0 < n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \\
T(n) &\leq cn + T(a_1 n) + T(a_2 n) + \dots + T(a_k n) & n \geq \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\}
\end{align} $$

Claim

I claim that $T(n) \leq \alpha n \log_k n + \beta n$ (we choose $\log$ base $k$ because $k$ will be the branching factor of the recursion tree), where $\alpha$ and $\beta$ are constants (i.e., $T$ is asymptotically $n \log n$) such that:

$$ \beta = c $$
and
$$ \alpha = \frac{c}{\sum_1^k a_i \log_k a_i^{-1}} $$

Proof by induction

Base case: $n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \implies T(n) \leq c = \beta \leq \alpha n \log_k n + \beta n$

Inductive step: Assume the claim holds for all values smaller than $n$; we then have

$$ \begin{align}
T(n) &\leq cn + T(\lfloor a_1 n \rfloor) + T(\lfloor a_2 n \rfloor) + \dots + T(\lfloor a_k n \rfloor) \\
&\leq cn + \sum_1^k \left(\alpha a_i n \log_k a_i n + \beta a_i n\right) \\
&= cn + \alpha n \sum_1^k \left(a_i \log_k a_i n\right) + \beta n \sum_1^k a_i \\
&= cn + \alpha n \sum_1^k \left(a_i \log_k \frac{n}{a_i^{-1}}\right) + \beta n \\
&= cn + \alpha n \sum_1^k \left(a_i \left(\log_k n - \log_k a_i^{-1}\right)\right) + \beta n \\
&= cn + \alpha n \sum_1^k a_i \log_k n - \alpha n \sum_1^k a_i \log_k a_i^{-1} + \beta n \\
&= \alpha n \sum_1^k a_i \log_k n + \beta n \\
&= \alpha n \log_k n + \beta n & \square
\end{align} $$

Then we have $T(n) \leq \alpha n \log_k n + \beta n \implies T(n) = O(n \log n)$.

Example

Let's change the previous example just a little bit:
$$ T(n) = \begin{cases}
c & n < 35 \\
2T\left(\frac{n}{5}\right) + 4T\left(\frac{n}{7}\right) + T\left(\frac{n}{35}\right) + cn & n \geq 35
\end{cases} $$

We first check that the coefficients inside the recursive calls sum to exactly one:
$$ \begin{align}
1 &= \sum_1^k a_i \\
&= \frac{1}{5} + \frac{1}{5} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{35} \\[0.5em]
&= \frac{2}{5} + \frac{4}{7} + \frac{1}{35} \\[0.5em]
&= \frac{35}{35}
\end{align} $$

We then verify that the base case covers everything below the max of the inverse coefficients:
$$ \begin{align}
n &< \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \\
&= \max\{5, 5, 7, 7, 7, 7, 35\} \\
&= 35
\end{align} $$

With these conditions fulfilled, we know $T(n) \leq \alpha n \log_k n + \beta n$, where $\beta = c$ and $\alpha$ is a constant equal to:
$$ \begin{align}
\alpha &= \frac{c}{\sum_1^k a_i \log_k a_i^{-1}} \\[0.5em]
&= \frac{c}{\frac{2 \log_7 5}{5} + \frac{4 \log_7 7}{7} + \frac{\log_7 35}{35}} \\[0.5em]
&\approx 1.048c
\end{align} $$

So we have:
$$ \begin{align}
T(n) &\leq 1.048\,cn \log_7 n + cn \\
\therefore\; T(n) &= O(n \log n)
\end{align} $$
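
Again as a check (my addition), the same kind of Python sketch for the modified recurrence, comparing $T(n)$ against the bound $1.048\,cn\log_7 n + cn$ with the arbitrary choice $c = 1$:

from functools import lru_cache
from math import log

C = 1  # arbitrary choice for the constant c

@lru_cache(maxsize=None)
def T(n):
    # T(n) = c for n < 35, otherwise 2T(n/5) + 4T(n/7) + T(n/35) + cn (with floors)
    if n < 35:
        return C
    return 2 * T(n // 5) + 4 * T(n // 7) + T(n // 35) + C * n

for n in [100, 1_000, 10_000, 100_000, 1_000_000]:
    bound = 1.048 * C * n * log(n, 7) + C * n
    print(n, T(n), round(bound), T(n) <= bound)  # the bound should hold at every n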

asymptotics – Homework question about exponential complexity

I am currently learning algorithm analysis in my class and I came across this homework question. I really do not know where to start or what the question is asking. Can someone guide me in the right direction? Thank you.

Suppose that the number of operations required by a particular algorithm is exactly $T(n) = 2^n$, and that our 1.6 GHz computer performs exactly 1.6 billion operations per second. What is the largest problem size, in terms of $n$, that can be solved in less than a second? In less than a day?

What I know so far:

The computer can perform 1,600,000,000 operations per second, or 138,240,000,000,000 operations in a day.
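
Not part of the original question, but the computation it asks for boils down to finding the largest $n$ with $2^n$ at most the operation budget; a small Python sketch of that search:

import math

ops_per_second = 1_600_000_000           # 1.6 GHz, one operation per cycle
ops_per_day = ops_per_second * 86_400    # 86,400 seconds in a day

for label, budget in [("second", ops_per_second), ("day", ops_per_day)]:
    n = int(math.log2(budget))           # initial guess for the largest n with 2**n <= budget
    while 2 ** (n + 1) <= budget:        # nudge up/down to guard against float rounding
        n += 1
    while 2 ** n > budget:
        n -= 1
    print(f"largest n solvable in under a {label}: {n}")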


asymptotics – Fine-tuning the growth rate of polynomial degrees

Let $r$ be an integer with $r > 1$. Assume that $p_k(x)$ is a polynomial with positive integer coefficients such that $p_k(0) = 1$ but $p_k \neq 1$, for every $k \geq 0$.

Assume that
$$ \prod_{k=0}^{\infty} p_k(x) = \frac{1}{1-rx} $$

For each $n > 0$, let $t_n$ be the number of indices $k$ where $\deg(p_k(x)) = n$. Is it then possible to select the polynomials $(p_k)_{k \geq 0}$ so that
$$ \left| t_n - \frac{r^n}{n} \right| = O(\alpha^n) $$
for each $\alpha > 1$?

How slowly can the function $n \mapsto \left| t_n - \frac{r^n}{n} \right|$ grow? How slowly can the function $n \mapsto \max\left(0, \frac{r^n}{n} - t_n\right)$ grow? For example, can we have $\max\left(0, \frac{r^n}{n} - t_n\right) = O(\alpha^n)$ for every $\alpha > 1$?

This question is motivated by large cardinals.
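
As a point of reference (my addition, not part of the question), one concrete factorization of this shape comes from binary representations: every non-negative integer is a unique sum of distinct powers of two, so
$$ \frac{1}{1-rx} = \prod_{k=0}^{\infty}\left(1 + r^{2^k} x^{2^k}\right), $$
and taking $p_k(x) = 1 + r^{2^k}x^{2^k}$ satisfies the hypotheses. Here $t_n = 1$ when $n$ is a power of two and $t_n = 0$ otherwise, which is very far from $r^n/n$; the question is how much closer a different choice of factorization can get.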

The asymptotics of a vector sequence defined by a recurrence relation

The sequence of vectors $(\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2, \dots)$ obeys the recurrence relation

$$ A\mathbf{v}_j - \mathbf{v}_{j-1} = \sum_{k=0}^{j} \operatorname{diag}(\mathbf{v}_k)\, B\, \mathbf{v}_{j-k}, $$

where $A$ and $B$ are given matrices. The first term $\mathbf{v}_0$ is also given.

How can I compute the asymptotics of the elements of the vectors $\mathbf{v}_j$, of the vector norm $\|\mathbf{v}_j\|$, or of the absolute mean $|u^T \mathbf{v}_j|$? (From numerical results, I found that these three have similar asymptotics.)

For sequences of numbers, I know that the method of generating functions can solve similar problems, such as the Motzkin numbers. Are there methods for the vector sequence or for the vector norm?
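
Not an answer, but a minimal numerical sketch (all data below are arbitrary placeholders) of how one might probe the growth: since $\operatorname{diag}(x)y = \operatorname{diag}(y)x$, the only terms of the convolution containing $\mathbf{v}_j$ are $\operatorname{diag}(\mathbf{v}_0)B\mathbf{v}_j$ and $\operatorname{diag}(B\mathbf{v}_0)\mathbf{v}_j$, so each step reduces to a linear solve (assuming the resulting matrix is invertible):

import numpy as np

# Arbitrary placeholder data; the real A, B, v0 come from the problem at hand.
rng = np.random.default_rng(0)
d = 4
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
B = 0.1 * rng.standard_normal((d, d))
v = [rng.standard_normal(d)]              # v[0] is the given first term

# Move the two v_j-terms of the convolution to the left-hand side.
M = A - np.diag(v[0]) @ B - np.diag(B @ v[0])

for j in range(1, 30):
    rhs = v[j - 1].copy()
    for k in range(1, j):                 # remaining convolution terms, all already known
        rhs += np.diag(v[k]) @ B @ v[j - k]
    v.append(np.linalg.solve(M, rhs))     # assumes M is invertible

norms = [np.linalg.norm(x) for x in v]
print([round(norms[j + 1] / norms[j], 4) for j in range(len(norms) - 1)])  # successive ratios hint at the growth rate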

asymptotics – d-ary heap implementation vs. Fibonacci heap implementation: Dijkstra performance comparison

Suppose the priority queue in Dijkstra's algorithm uses a d-ary heap. If we tune d, we can try to get the best runtime for the algorithm, with d being ~ |E| / |V|.

So, for a fixed |V|, what is the highest possible ratio between this runtime and Dijkstra's runtime using a Fibonacci heap? For the Fibonacci heap: delete_min = O(log |V|), insert / decrease_key = O(1) (amortized), and |V| × delete_min + (|V| + |E|) × insert = O(|V| log |V| + |E|).

For the d-ary heap implementation, on the other hand: delete_min = O($\dfrac{d \log |V|}{\log d}$), insert / decrease_key = O($\dfrac{\log |V|}{\log d}$), and |V| × delete_min + (|V| + |E|) × insert = O((|V| · d + |E|) $\dfrac{\log |V|}{\log d}$).

I am trying to follow a provided solution, but I'm not sure why the ratio reduces to O($\dfrac{\log |V|}{\log (|E| / |V|)}$). In case 1, where |E| dominates, Dijkstra with a Fibonacci heap is O(|E|); how do we get the ratio O($\dfrac{\log |V|}{\log (|E| / |V|)}$) when Dijkstra with a d-ary heap is O((|V| · d + |E|) $\dfrac{\log |V|}{\log d}$)?
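
For what it's worth, here is the step I believe the solution is taking (my reading, not a quote from it): with $d = \Theta(|E|/|V|)$, the term $|V|\cdot d$ has the same order as $|E|$, so, assuming $|E|/|V|$ is bounded away from $1$ so that $\log d = \Theta(\log(|E|/|V|))$,
$$ (|V|\cdot d + |E|)\frac{\log|V|}{\log d} = \Theta\!\left(|E|\,\frac{\log|V|}{\log(|E|/|V|)}\right). $$
Dividing by the Fibonacci-heap bound $O(|E|)$ (the dominant term when $|E| \geq |V|\log|V|$) leaves exactly the ratio $O\!\left(\dfrac{\log|V|}{\log(|E|/|V|)}\right)$.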


asymptotics – how is it possible that (even for dense graphs) $|E| = \Theta(|V|^2)$

You are absolutely right that $\Theta$ is the tightest asymptotic bound. But it is still asymptotic, and that means we do not care about constant factors or lower-order terms: when $n$ (or $v$, or whatever) becomes big enough, the smaller terms become negligible.

In this case, $\frac{v(v-1)}{2} = \frac{1}{2}v^2 - \frac{1}{2}v$. So removing the constant factor and the lower-order term leaves us with $v^2$.
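
To make the scale concrete (numbers of my own choosing): at $v = 1000$,
$$ \frac{v(v-1)}{2} = 499{,}500 \qquad \text{versus} \qquad \frac{1}{2}v^2 = 500{,}000, $$
so the $-\frac{1}{2}v$ term shifts the count by only about $0.1\%$, and the remaining factor of $\frac{1}{2}$ is exactly the kind of constant that $\Theta$ ignores.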

As a side note, I would not use $\Theta$ to bound the number of edges of a graph in general, because there is no lower bound: you can have a graph with an arbitrary number of vertices and zero edges if you wish.

sequences and series – Asymptotics of the partial sum of binomial coefficients

For some fixed $0 < p < 1$ with $q = 1-p$, let $np \leq c \leq np + \sqrt{2npq - 2n\log\log n}$ and $2np \leq x \leq 2np + 2\sqrt{2npq - 2n\log\log 2n}$. I'm trying to get some asymptotics of the partial sum
$$
\sum_{k=x-c}^{c} \binom{n}{k}\binom{n}{x-k}
$$

or, equivalently, if $c = n\lambda_1$ and $x = 2n\lambda_2$ for constants $p \leq \lambda_2 \leq \lambda_1 < 1$,
$$
\sum_{k=2n\lambda_2 - n\lambda_1}^{n\lambda_1} \binom{n}{k}\binom{n}{2n\lambda_2-k}
$$

My initial attempt was to adapt @robjohn's solution from this post.

First we focus on
$$
a_k = \binom{n}{k}\binom{n}{2n\lambda_2-k}
$$

Then, letting $k = n\lambda_2 + j$,
$$
\log\left(\frac{a_{k+1}}{a_k}\right) = -\frac{2j}{n\lambda_2(1-\lambda_2)} + O(n^{-1})
$$

So,
$$
a_k = a_{n\lambda_2}\exp\left(-\frac{2j^2}{n\lambda_2(1-\lambda_2)} + O(j/n)\right)
$$

By Stirling's approximation, we have
$$
a_{n\lambda_2} \sim \frac{1}{2\pi n\lambda_2(1-\lambda_2)}(1-\lambda_2)^{-2n}\left(\frac{1-\lambda_2}{\lambda_2}\right)^{2n\lambda_2} = C(\lambda_2)
$$

Then, approximating the sum of exponentials by a Riemann integral,
$$
\sum_{j=-n(\lambda_1-\lambda_2)}^{n(\lambda_1-\lambda_2)} \exp\left(-\frac{2j^2}{n\lambda_2(1-\lambda_2)} + O(j/n)\right) = \sqrt{n\lambda_2(1-\lambda_2)}\int_{-\infty}^{\infty}\exp\left(-2t^2\right)\,dt\,\bigl(1 + O(1/n)\bigr)
$$

we have
\begin{eqnarray}
\sum_{k=2n\lambda_2-n\lambda_1}^{n\lambda_1} \binom{n}{k}\binom{n}{2n\lambda_2-k} &\sim& C(\lambda_2)\sqrt{n\lambda_2(1-\lambda_2)}\sqrt{\pi/2} \\
&=& \frac{1}{2\sqrt{2\pi n\lambda_2(1-\lambda_2)}}(1-\lambda_2)^{-2n}\left(\frac{1-\lambda_2}{\lambda_2}\right)^{2n\lambda_2}
\end{eqnarray}

Substituting back $c = n\lambda_1$ and $x = 2n\lambda_2$, and recognizing Stirling's formula for $\binom{2n}{x}$, we have
$$
\sum_{k=x-c}^{c}\binom{n}{k}\binom{n}{x-k} \sim \frac{1}{\sqrt{2}}\sqrt{\frac{2n}{2\pi x(2n-x)}}\left(\frac{2n}{2n-x}\right)^{2n}\left(\frac{2n-x}{x}\right)^{x} \sim \frac{1}{\sqrt{2}}\binom{2n}{x}
$$

However, I am not entirely convinced that this asymptotic is valid for the ranges given for $c$ and $x$. Is there a refinement of this result?
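
One way to probe this (my suggestion, not part of the question) is to compare the exact partial sum against $\frac{1}{\sqrt{2}}\binom{2n}{x}$ for moderate $n$ and a few arbitrary choices of $\lambda_1, \lambda_2$; Python's exact integer arithmetic keeps the huge binomials safe from overflow:

from math import comb, sqrt

def ratio(n, lam1, lam2):
    # Exact partial sum divided by binom(2n, x) / sqrt(2).
    c, x = round(n * lam1), round(2 * n * lam2)
    partial = sum(comb(n, k) * comb(n, x - k) for k in range(x - c, c + 1))
    # Integer / integer true division gives a correctly rounded float even for huge values.
    return partial / comb(2 * n, x) * sqrt(2)

for n in (200, 500, 1000):
    for lam1, lam2 in [(0.6, 0.55), (0.7, 0.6), (0.9, 0.6)]:
        print(n, lam1, lam2, round(ratio(n, lam1, lam2), 4))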

asymptotics – Explaining the running time of this function

I'm trying to understand the running time of the code below in terms of $n$.

I know that it is $\Theta(n^{4/3})$, but I do not understand why.

I thought the outer loop runs $\log(n)$ times, the second one runs $n^{1/3}$ times, and the innermost one runs $O(\log(n))$ times, since it runs $i$ times and $i$ is at most $\log(n)$. That would add up to $\log^2(n) \cdot n^{1/3}$, right?

for (int i = 1; i <= n; i = 2 * i) {
    for (int j = 1; j * j * j <= n; j = j + 1) {
        for (int k = 1; k <= i * i; k = k + i) {
            F();
        }
    }
}

($F()$ runs in constant time.)

Thank you for any help!
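
For what it's worth, here is how I would convince myself empirically (my addition; count_ops is a made-up helper that mirrors the loops). The innermost loop steps $k$ by $i$ up to $i^2$, so it does exactly $i$ iterations, and $i$ ranges over powers of two up to $n$, not up to $\log n$:

def count_ops(n):
    # Mirrors the three nested loops above and counts the calls to F().
    count = 0
    i = 1
    while i <= n:                 # outer loop: i = 1, 2, 4, ..., about log n iterations
        j = 1
        while j * j * j <= n:     # middle loop: about n^(1/3) iterations
            k = 1
            while k <= i * i:     # inner loop: k = 1, 1+i, 1+2i, ..., exactly i iterations
                count += 1
                k += i
            j += 1
        i *= 2
    return count

for n in [10**3, 10**4, 10**5]:
    ops = count_ops(n)
    print(n, ops, ops / n ** (4 / 3))   # this ratio stays bounded, consistent with Theta(n^(4/3))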

asymptotics – Does $O(T + \log T) = O(T \log T)$?

Let $b$ be the base of the logarithm. If $T > \max(b^2, 2)$, then $\log T = \log_b T > 2$. So
$$ T\log T - (T + \log T) = (T-1)(\log T - 1) - 1 > 1 \times 1 - 1 = 0, $$
i.e., $T + \log T < T\log T$.

So any function that grows asymptotically no faster than $T + \log T$, up to a constant factor, also grows asymptotically no faster than $T\log T$, up to the same constant factor. By the commonly used definitions of big-$O$ notation,
$$ O(T + \log T) = O(T\log T) $$

Yes, it is also true that $O((T + \log T)^{1/n}) = O((T\log T)^{1/n})$, where we treat $n$ as a constant.
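
A trivial spot check of the underlying inequality (my addition; base $b = 2$, sample values chosen arbitrarily):

from math import log2

for T in [5, 10, 100, 10**6]:      # all satisfy T > max(b^2, 2) for b = 2
    assert T + log2(T) < T * log2(T)
print("T + log T < T log T holds for the sampled values")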