asymptotics – fine-tuning the growth rate of polynomial degrees

Let $r$ be an integer with $r > 1$. Assume that each $p_{k}(x)$ is a polynomial with positive integer coefficients such that $p_{k}(0) = 1$ but $p_{k} \neq 1$ for every $k \geq 0$.

Assume that
$$ \prod_{k=0}^{\infty} p_{k}(x) = \frac{1}{1-rx}. $$

For each $n > 0$, let $t_{n}$ be the number of indices $k$ with $\deg(p_{k}(x)) = n$. Is it then possible to choose the polynomials $(p_{k})_{k \geq 0}$ so that
$$ \left| t_{n} - \frac{r^{n}}{n} \right| = O(\alpha^{n}) $$
for every $\alpha > 1$?

How slowly can the function $n \mapsto |t_{n} - \frac{r^{n}}{n}|$ grow? How slowly can the function $n \mapsto \max(0, \frac{r^{n}}{n} - t_{n})$ grow? For example, can we have $\max(0, \frac{r^{n}}{n} - t_{n}) = O(\alpha^{n})$ for every $\alpha > 1$?

This question is motivated by very large cardinals.
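A small numerical sanity check can at least rule out bad candidate families: multiply the chosen factors, compare the coefficients with those of $\frac{1}{1-rx} = \sum_m r^m x^m$ up to a fixed degree, and tabulate $t_n$. A minimal Python sketch (the single factor passed at the end is only a placeholder showing how the helpers are called, not a proposed family):

import numpy as np
from collections import Counter

def check_family(polys, r, N):
    # polys: list of coefficient lists [1, c1, c2, ...] (constant term first).
    # Returns True if prod p_k(x) matches 1/(1 - r x) = sum_m r^m x^m up to degree N.
    prod = np.zeros(N + 1)
    prod[0] = 1.0
    for p in polys:
        prod = np.polynomial.polynomial.polymul(prod, np.asarray(p, float))[:N + 1]
        prod = np.pad(prod, (0, N + 1 - len(prod)))
    return np.allclose(prod, [float(r) ** m for m in range(N + 1)])

def degree_counts(polys):
    # t_n = number of factors of degree n, to compare against r^n / n.
    return Counter(len(p) - 1 for p in polys)

# Toy call: the single factor 1 + 2x matches 1/(1 - 2x) through degree 1 only.
print(check_family([[1, 2]], r=2, N=1), degree_counts([[1, 2]]))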

The asymptotics of a vector sequence defined by a recurrence relation

The sequence of vectors $(\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2, \dots)$ obeys the recurrence relation

$ A \mathbf{v}_j - \mathbf{v}_{j-1} = \sum_{k=0}^{j} \operatorname{diag}(\mathbf{v}_k) B \mathbf{v}_{j-k} $,

where $A$ and $B$ are given matrices. The first term $\mathbf{v}_0$ is also given.

How can one compute the asymptotics of the entries of the vectors $\mathbf{v}_j$, of the vector norm $\|\mathbf{v}_j\|$, or of the absolute mean $|u^T \mathbf{v}_j|$? (From numerical results, I found that these three have similar asymptotics.)

For scalar sequences, I know that the method of generating functions can handle similar problems, such as the Motzkin numbers. Are there methods for vector sequences or for the vector norm?
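Since the $k=0$ and $k=j$ terms of the convolution are linear in $\mathbf{v}_j$, each step reduces to solving a linear system, so the sequence (and hence $\|\mathbf{v}_j\|$ or $|u^T \mathbf{v}_j|$) can at least be generated numerically to estimate the growth rate. A minimal Python/NumPy sketch, assuming the per-step matrix is invertible; $A$, $B$, $\mathbf{v}_0$, and $u$ below are placeholder values:

import numpy as np

def iterate_recurrence(A, B, v0, num_terms):
    # Generate v_0, ..., v_{num_terms-1} from
    #   A v_j - v_{j-1} = sum_{k=0}^{j} diag(v_k) B v_{j-k}.
    # The k = 0 and k = j terms are linear in v_j, so each step solves a
    # linear system (assumed nonsingular here).
    vs = [np.asarray(v0, dtype=float)]
    for j in range(1, num_terms):
        # Right-hand side: v_{j-1} plus the terms not involving v_j.
        rhs = vs[j - 1].copy()
        for k in range(1, j):
            rhs += np.diag(vs[k]) @ B @ vs[j - k]
        # Move the two v_j-terms to the left:
        #   diag(v_j) B v_0 = diag(B v_0) v_j   and   diag(v_0) B v_j.
        M = A - np.diag(B @ vs[0]) - np.diag(vs[0]) @ B
        vs.append(np.linalg.solve(M, rhs))
    return vs

# Track ||v_j|| and |u^T v_j| to read off the growth rate empirically.
rng = np.random.default_rng(0)
A = 2.0 * np.eye(3)
B = 0.1 * rng.standard_normal((3, 3))
v0 = np.array([0.1, 0.2, 0.3])
u = np.ones(3)
for j, v in enumerate(iterate_recurrence(A, B, v0, 10)):
    print(j, np.linalg.norm(v), abs(u @ v))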

asymptotics – d-ary heap implementation vs. Fibonacci heap implementation: comparing Dijkstra running times

Suppose that Dijkstra's algorithm uses a d-ary heap as its priority queue. By tuning d, we can try to get the best running time for the algorithm, with d being roughly |E|/|V|.

So, for fixed |V|, what is the largest possible ratio between this running time and Dijkstra's running time using a Fibonacci heap? For the Fibonacci heap we know: delete_min = O(log |V|), insert/decrease_key = O(1) (amortized), and |V| × delete_min + (|V| + |E|) × insert = O(|V| log |V| + |E|).

For the d-ary heap implementation, on the other hand: delete_min = O($\dfrac{d \log |V|}{\log d}$), insert/decrease_key = O($\dfrac{\log |V|}{\log d}$), and |V| × delete_min + (|V| + |E|) × insert = O($(|V| \cdot d + |E|) \dfrac{\log |V|}{\log d}$).

I am trying to follow a provided solution, but I'm not sure why it reduces to O($\dfrac{\log |V|}{\log(|E|/|V|)}$). In case 1, where |E| dominates, Dijkstra with a Fibonacci heap is O(|E|); so how do we get the ratio O($\dfrac{\log |V|}{\log(|E|/|V|)}$) when Dijkstra with a d-ary heap is O($(|V| \cdot d + |E|) \dfrac{\log |V|}{\log d}$)?
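Ignoring constant factors, one way to see where that expression comes from is to plug both bounds into a small script and compare them directly. A rough Python sketch (|V| and |E| below are arbitrary placeholder values); with d ≈ |E|/|V| the d-ary bound becomes O(|E| log|V| / log(|E|/|V|)), and when |E| dominates the Fibonacci bound is O(|E|), so the ratio is within a constant factor of log|V|/log(|E|/|V|):

import math

def dary_bound(V, E, d):
    # (|V| * d + |E|) * log|V| / log d  -- the d-ary heap bound
    return (V * d + E) * math.log(V) / math.log(d)

def fib_bound(V, E):
    # |V| log|V| + |E|  -- the Fibonacci heap bound
    return V * math.log(V) + E

V = 10**4
for E in (2 * V, 100 * V, 2500 * V):
    d = max(2, E // V)                      # the usual choice d ~ |E|/|V|
    ratio = dary_bound(V, E, d) / fib_bound(V, E)
    print(E // V, round(ratio, 2), round(math.log(V) / math.log(d), 2))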


asymptotics – when is it even possible (even for dense graphs) that $|E| = \Theta(|V|^2)$?

You are absolutely right that $\Theta$ is the tightest asymptotic bound. But it is still asymptotic, and that means we do not care about constant factors or lower-order terms: once $n$ (or $v$, or whatever) becomes large enough, the smaller terms become negligible.

In that case, $\frac{v(v-1)}{2} = \frac{1}{2} v^2 - \frac{1}{2} v$. So removing the constant factors and the lower-order terms leaves us with $v^2$.
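As a quick numerical illustration (a throwaway Python snippet), the ratio between $\frac{v(v-1)}{2}$ and $v^2$ settles at the constant $\frac{1}{2}$, which is exactly the kind of factor the asymptotic notation ignores:

for v in (10, 100, 1000, 10000):
    edges = v * (v - 1) // 2        # number of edges in a complete graph on v vertices
    print(v, edges, edges / v**2)   # the ratio tends to 1/2 as v grows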

As an aside, I would not use $\Theta$ to bound the number of edges in a graph, because there is no lower bound: you can have a graph with an arbitrary number of vertices and zero edges if you wish.

sequences and series – Asymptotics of the partial sum of binomial coefficients

For some fixed $0 < p < 1$ with $q = 1-p$, let $np \leq c \leq np + \sqrt{2npq - 2n \log\log n}$ and $2np \leq x \leq 2np + 2\sqrt{2npq - 2n \log\log 2n}$. I'm trying to get asymptotics for the partial sum
$$
\sum_{k=x-c}^{c} \binom{n}{k} \binom{n}{x-k}
$$

or equivalently, if $c = n\lambda_1$ and $x = 2n\lambda_2$ for constants $p \leq \lambda_2 \leq \lambda_1 < 1$,
$$
\sum_{k=2n\lambda_2 - n\lambda_1}^{n\lambda_1} \binom{n}{k} \binom{n}{2n\lambda_2 - k}
$$

My initial attempt was to adapt @robjohn's solution from this post.

First we focus on
$$
a_k = \binom{n}{k} \binom{n}{2n\lambda_2 - k}
$$

Then, letting $k = n\lambda_2 + j$,
$$
\log\left(\frac{a_{k+1}}{a_k}\right) = -\frac{2j}{n\lambda_2(1-\lambda_2)} + O(n^{-1})
$$

So,
$$
a_k = a_{n\lambda_2} \exp\left(-\frac{2j^2}{n\lambda_2(1-\lambda_2)} + O(j/n)\right)
$$

By Stirling's approximation, we have
$$
a_{n\lambda_2} \sim \frac{1}{2\pi n\lambda_2(1-\lambda_2)} (1-\lambda_2)^{-2n} \left(\frac{1-\lambda_2}{\lambda_2}\right)^{2n\lambda_2} = C(\lambda_2)
$$

Then, approximating the sum of the exponentials by a Riemann integral,
$$
\sum_{j=-n(\lambda_1-\lambda_2)}^{n(\lambda_1-\lambda_2)} \exp\left(-\frac{2j^2}{n\lambda_2(1-\lambda_2)} + O(j/n)\right) = \sqrt{n\lambda_2(1-\lambda_2)} \int_{-\infty}^{\infty} \exp\left(-2t^2\right) dt \, (1 + O(1/n))
$$

we have
\begin{eqnarray}
\sum_{k=2n\lambda_2 - n\lambda_1}^{n\lambda_1} \binom{n}{k} \binom{n}{2n\lambda_2 - k} & \sim & C(\lambda_2) \sqrt{n\lambda_2(1-\lambda_2)} \sqrt{\pi/2} \\
& = & \frac{1}{2\sqrt{2\pi n\lambda_2(1-\lambda_2)}} (1-\lambda_2)^{-2n} \left(\frac{1-\lambda_2}{\lambda_2}\right)^{2n\lambda_2}
\end{eqnarray}

Substituting back $c = n\lambda_1$ and $x = 2n\lambda_2$, and recognizing Stirling's formula for $\binom{2n}{x}$, we have
$$
\sum_{k=x-c}^{c} \binom{n}{k} \binom{n}{x-k} \sim \frac{1}{\sqrt{2}} \sqrt{\frac{2n}{2\pi x(2n-x)}} \left(\frac{2n}{2n-x}\right)^{2n} \left(\frac{2n-x}{x}\right)^{x} \sim \frac{1}{\sqrt{2}} \binom{2n}{x}
$$

However, I am not entirely convinced that this asymptotic is valid for the ranges given for $c$ and $x$. Is there a refinement of this result?
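For what it's worth, the final formula is easy to sanity-check numerically for fixed $\lambda_1$ and $\lambda_2$. A small Python sketch (the values $n = 500$, $\lambda_1 = 0.45$, $\lambda_2 = 0.35$ are arbitrary test values, not tied to the ranges above):

from math import comb, sqrt

def partial_sum(n, c, x):
    # sum_{k = x-c}^{c} C(n, k) * C(n, x-k)
    return sum(comb(n, k) * comb(n, x - k) for k in range(x - c, c + 1))

n = 500
c = int(0.45 * n)                   # c = n * lambda_1
x = int(2 * 0.35 * n)               # x = 2n * lambda_2
exact = partial_sum(n, c, x)
approx = comb(2 * n, x) / sqrt(2)   # the claimed asymptotic (1/sqrt 2) * C(2n, x)
print(exact / approx)               # how far the ratio is from 1 for this n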

asymptotics – Explanation of the running time of this function

I'm trying to understand the running-time complexity of the code below in terms of $n$.

I know that it is $\Theta(n^{4/3})$, but I do not understand why.

I thought the outer loop runs $\log(n)$ times, the second runs $n^{1/3}$ times, and the innermost runs $O(\log(n))$ times, since it runs $i$ times and $i$ is at most $\log(n)$. That would add up to $\log^2(n) \cdot n^{1/3}$, right?

for (int i = 1; i <= n; i = 2 * i) {
    for (int j = 1; j * j * j <= n; j = j + 1) {
        for (int k = 1; k <= i * i; k = k + i) {
            F();
        }
    }
}

(F() runs in constant time.)
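One way to check the bound empirically is to count the calls to F() directly; the innermost loop ($k = 1, 1+i, \dots \leq i \cdot i$ in steps of $i$) contributes exactly $i$ calls, so the count is cheap to compute. A small Python sketch along those lines:

def count_calls(n):
    calls = 0
    i = 1
    while i <= n:                 # outer loop: i = 1, 2, 4, ..., about log2(n) rounds
        j = 1
        while j * j * j <= n:     # middle loop: about n^(1/3) rounds
            calls += i            # innermost loop runs exactly i times
            j += 1
        i *= 2
    return calls

for n in (10**3, 10**5, 10**7, 10**9):
    print(n, count_calls(n) / n ** (4 / 3))   # ratio stays bounded, consistent with Theta(n^(4/3))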

Thank you for any help!

asymptotics – Does $O(T + \log T) = O(T \log T)$?

Let $b$ be the base of the logarithm. If $T > \max(b^2, 2)$, then $\log T = \log_b T > 2$. So
$$ T \log T - (T + \log T) = (T-1)(\log T - 1) - 1 > 1 \times 1 - 1 = 0, $$
that is, $T + \log T < T \log T$.

So any function that asymptotically grows no faster than $T + \log T$ up to a constant factor also asymptotically grows no faster than $T \log T$ up to the same constant factor. Under the usual conventions for big-O notation,
$$ O(T + \log T) = O(T \log T). $$

Yes, it is also true that $O((T + \log T)^{1/n}) = O((T \log T)^{1/n})$, where we regard $n$ as a constant.
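A quick numeric illustration of the inequality above, taking $b = 2$ (so the threshold is $T > \max(b^2, 2) = 4$):

import math

for T in (5, 10, 100, 10**4, 10**6):
    lhs, rhs = T + math.log2(T), T * math.log2(T)
    print(T, lhs < rhs)   # True for every T above the threshold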

asymptotics – Missing part of the case 2 proof of the master theorem (with floors and ceilings) in CLRS?

I'm trying to work through the proof of the master theorem in Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (CLRS). The theorem gives an asymptotic analysis of recurrences $T(n) = aT(n/b) + f(n)$, where $a \geq 1$, $b > 1$, and $f(n)$ is an asymptotically positive function.

The authors state the following in Lemma 4.3:

  1. If $f(n) = \Theta(n^{\log_b a})$, then $g(n) = \Theta(n^{\log_b a} \log_b n)$,

where $g(n) = \sum_{j=0}^{\lfloor \log_b n \rfloor - 1} a^j f(n_j)$ and $n_j = \begin{cases}
n, & \text{if } j = 0 \\
\lceil n_{j-1}/b \rceil, & \text{if } j > 0
\end{cases}$

When they prove the case in which floors and ceilings appear, the authors make the following claim:

For case 2, we have $f(n) = \Theta(n^{\log_b a})$. If we can show that
$f(n_j) = O(n^{\log_b a}/a^j) = O((n/b^j)^{\log_b a})$, then the proof for
case 2 of Lemma 4.3 will go through.

OK, I get that part. When this condition holds, we can find a constant $c$ such that

$g(n) \le c \sum_{j=0}^{\lfloor \log_b n \rfloor - 1} n^{\log_b a}$

and conclude that $g(n) = O(n^{\log_b a} \log_b n)$.

But that does not prove that $g(n) = \Omega(n^{\log_b a} \log_b n)$. Where is that part? Should it be obvious? How can I prove this statement?
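One possible way to fill the gap (just a sketch of the idea, not CLRS's own wording): since $f(n) = \Theta(n^{\log_b a})$, there is also a constant $c' > 0$ with $f(m) \geq c' m^{\log_b a}$ for all sufficiently large $m$, and the ceilings only help here because $n_j \geq n/b^j$. Then, for each $j$ with $n_j$ above the threshold,
$$ a^j f(n_j) \geq c' a^j \left( \frac{n}{b^j} \right)^{\log_b a} = c' n^{\log_b a}, $$
and dropping the $O(1)$ indices where $n_j$ falls below the threshold still leaves $\lfloor \log_b n \rfloor - O(1)$ terms, which gives $g(n) = \Omega(n^{\log_b a} \log_b n)$.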

asymptotics – Let $f(n) = \Omega(n)$, $g(n) = O(n)$, and $h(n) = \Theta(n)$; then $[f(n) \cdot g(n)] + h(n)$ is?

Let $f(n) = \Omega(n)$, $g(n) = O(n)$, and $h(n) = \Theta(n)$. Then what is $[f(n) \cdot g(n)] + h(n)$?

My attempt:

Let $f(n) = g(n) = n$; then $[f(n) \cdot g(n)] + h(n) = \Omega(n^2) + \Theta(n) = \Omega(n^2)$.

But the given answer is $O(n)$. I am not sure where I made a mistake or whether I am missing something. How can it be $O(n)$?

asymptotics – Lower bound of an iterative algorithm involving a while loop

I have an algorithm that contains a while loop (plus some preceding and following steps that run only once).

The minimum number of iterations needed for my algorithm to converge is 1.

So is it reasonable to infer that a lower bound for my algorithm is $O(n)$, where $n$ is the number of iterations needed to reach convergence?

In general, to solve these types of problems, just count the number of times each step of the algorithm is executed.

This is trivial in the case of a for loop because there is a strict limit to the number of iterations. However, the case of a while loop is different because we do not know the number of iterations needed.
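If it helps, one practical way to handle the while-loop case is simply to instrument it with an iteration counter and compare the empirical count against the conjectured bound. A minimal Python sketch (the step function, tolerance, and iteration cap are placeholders):

import math

def run_until_converged(step, x0, tol=1e-9, max_iter=10**6):
    # Generic while-loop skeleton with an iteration counter, so the
    # empirical cost can be compared against a conjectured bound.
    x, iterations = x0, 0
    while iterations < max_iter:
        new_x = step(x)
        iterations += 1
        if abs(new_x - x) < tol:      # convergence test
            return new_x, iterations
        x = new_x
    return x, iterations

# Example: x -> cos(x) reaches its fixed point in a few dozen iterations.
print(run_until_converged(math.cos, 1.0))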