algorithm analysis – Big-O notation for lower bound instead of Big-Omega

In Wikipedia's article on the binary search tree, one can read:

Traversal requires $O(n)$ time, since it must visit every node.

Since this is a statement about a lower bound, shouldn't we write:

Traversal requires $\Omega(n)$ time, since it must visit every node.

Is the $O(n)$ statement here even correct?
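To see why the bound is tight in both directions, here is a minimal sketch (the `Node` class and the traversal are my own illustration, not from the Wikipedia article): any full traversal touches each of the $n$ nodes exactly once, so the running time is $\Theta(n)$, which is both $O(n)$ and $\Omega(n)$.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(node, out):
    # Visits every node exactly once -> Theta(n) total work.
    if node is None:
        return
    inorder(node.left, out)
    out.append(node.key)
    inorder(node.right, out)

root = Node(2, Node(1), Node(3))
result = []
inorder(root, result)
print(result)  # [1, 2, 3] -- all 3 nodes visited exactly once
```

So the $O(n)$ statement is correct, just not the strongest statement one could make; $\Theta(n)$ says both things at once.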

Problem in understanding big-O notation and similar

I am trying to understand the concept of big-O notation and came across this problem. Can you tell me how it could be that

$O(n) = 1 + \Theta(1/n)$

complexity theory – Finding the Big-O and Big-Omega bounds of a program

I am asked to select the bounding Big-O and Big-Omega functions of the following program:

void recursiveFunction(int n) {
    if (n < 2) {
        return;
    }
    recursiveFunction(n - 1);
    recursiveFunction(n - 2);
}

From my understanding, this computes the Fibonacci sequence, and according to this article, the tight upper bound is $O(1.6180^n)$. Thus, I chose all the Big-O bounds that are at least exponential in $n$ and all the Big-Omega bounds that are at most exponential in $n$. Below are the choices:

$O(\log n)$

$\Omega(\log n)$

The answer choices I selected:


$\Omega(\log n)$

However, I was alerted that a few of my answers were incorrect (I am not sure which). This seems strange, considering that this recursive function mimics the call pattern of the Fibonacci sequence, which has exponential Big-Theta time complexity.
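The exponential claim can be checked numerically with a sketch (the helper `calls` is my own, assuming the base case is `if (n < 2) return;`): the number of invocations satisfies the same recurrence as Fibonacci, so it grows as $\Theta(\varphi^n)$ with $\varphi \approx 1.618$, which rules out $O(\log n)$ and makes $\Omega(\log n)$ true but extremely loose.

```python
def calls(n):
    # Number of invocations made by recursiveFunction(n),
    # assuming the base case "if (n < 2) return;".
    if n < 2:
        return 1
    return 1 + calls(n - 1) + calls(n - 2)

print(calls(10))              # 177 calls for n = 10
print(calls(25) / calls(24))  # ratio of successive counts approaches ~1.618
```

Since $\log n = o(\varphi^n)$, $O(\log n)$ is false for this function, while $\Omega(\log n)$ holds (any exponential is eventually larger than $\log n$).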

algorithms – Are logarithmic Big-O complexities defined with constant base equal to those defined with variable base?

Example: Deleting from a B-tree (not to be confused with a binary tree) has Big-O complexity $O(\log_t n)$ (where $t$ is the order of the tree).

There was a true/false question on the exam asking whether the Big-O complexity of the operation mentioned in the example above is $O(\log_2 n)$.

I am a beginner in this topic, but I understand that both $O(\log_2 n)$ and $O(\log_t n)$ belong to the same Big-O complexity class, $O(\log n)$. The only thing that confuses me is whether it matters if the base of the logarithm is given as a constant or as a variable.

Additionally: Would the answer change if we swapped the bases between the example and the question?
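What is at stake is the change-of-base identity, $\log_t n = \log_2 n / \log_2 t$: the two expressions differ by the factor $\log_2 t$, which is a constant only if $t$ itself is a constant. A small numerical sketch (the helper `log_base` is my own):

```python
import math

def log_base(n, t):
    # Change of base: log_t(n) = ln(n) / ln(t)
    return math.log(n) / math.log(t)

n = 10**6
for t in (2, 16, 256):
    # The ratio log_2(n) / log_t(n) equals log_2(t):
    # a constant for any FIXED t, so O(log_t n) = O(log_2 n) for fixed t.
    print(t, math.log2(n) / log_base(n, t))
```

If $t$ is a fixed constant, the factor $\log_2 t$ is absorbed into the big-O constant and the classes coincide. If $t$ is a parameter allowed to grow with the input, the factor is no longer constant and the two expressions need not describe the same class.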

algorithms – Proving a tighter upper bound (big-O) for this problem


So the other day I had fun providing a new solution to this famous question. In the analysis I showed that my little algorithm has space complexity O(k) and Ω(log(k)). However, my rough reasoning says that we should be able to prove a tighter bound of O(min(k, log(n))), but I was unable to prove it. Moreover, it seems like an interesting, general problem.

Finding the space complexity is equivalent to answering this question:


Array of size k, where the elements e are unique integers with 0 <= e < n, for some n.


1. Find the average (rounded down) of the elements.
2. Randomly discard either all the elements greater than the average, or all the elements less than or equal to the average.
3. Repeat the same steps on the reduced array.
4. Stop when the array has size 1.
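The process above can be simulated with a sketch (the helper `rounds` and the coin flip via `random` are my own illustration; note that both sides of the split are always non-empty, because the floored average of distinct integers lies between their minimum and maximum):

```python
import random

def rounds(arr):
    # Count how many times the average is computed before one element remains.
    count = 0
    while len(arr) > 1:
        avg = sum(arr) // len(arr)               # average, rounded down
        count += 1
        if random.random() < 0.5:
            arr = [e for e in arr if e <= avg]   # discard elements > average
        else:
            arr = [e for e in arr if e > avg]    # discard elements <= average
    return count

random.seed(0)
# For a dense range (k = n) the average splits the array exactly in half,
# so the count is log2(n) regardless of the coin flips:
print(rounds(list(range(1024))))  # 10
```

This matches the intuition in the question for the k = n case; the open part is bounding the count when k < n.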

My Question:

Is the number of times the average is computed O(min(k, log(n)))? And if so, how do we prove it? (Note that we don't care about the time it takes to calculate the average or remove the elements; this question is deliberately designed to be equivalent to my space-complexity problem.)

My thoughts:

It seems really intuitive that it's O(min(k, log(n))), because if k equals n, then taking the average always divides the elements in half. However, I can't seem to prove that it doesn't sometimes perform worse when k < n.

Thinking about this a bit, we recognise that outliers are really what make for bad performance, so I tried to imagine a worst-case scenario:

array = (1, 2, ..., (x-2), (x-1), (n-1)), where (x-1) < average < (n-1)

In this scenario, the worst case reduces the array size to x-1; moreover, increasing k by adding any number of elements e with (x-1) < e < (n-1) leaves the worst-case array size (for the next iteration) at x-1.


average = (x(x-1)/2 + n-1)/x

Thinking about the complexity, for a given n, the worst-case x satisfies:

x = average

=> x^2 - x^2/2 + x/2 = n-1
=> O(x) = O(sqrt(n))

where x represents the largest sub-array size.

So in this artificial "worst case" scenario, we get O(sub-array size) = O(min(k, sqrt(n))), which implies (since log(sqrt(n)) = log(n)/2) that the overall complexity = O(min(k, log(sqrt(n)))) = O(min(k, log(n))).

However, this proof is quite informal and I’m not sure how to show that this genuinely represents the worst-case scenario.

real analysis – Something wrong with my proof on little-o / Big-O arithmetic

I ran into this problem on a website, which states that the equality below is false. When I try to prove that it is false, I end up finding that it is true. What I wanted to ask is where I went wrong in the proof below. I don't give up quickly, but it's been three days, and it has really started to put me behind schedule.
Thanks in advance!

$o(x^n) + o(x^m) = O(x^n), \quad x \to \infty, \quad n > m$

$f(x) = o(x^n) \iff |f(x)| < \epsilon \cdot |x^n|, \ \forall x \geq N$

$g(x) = o(x^m) \iff |g(x)| < \epsilon \cdot |x^m|, \ \forall x \geq M$

$|f(x)| + |g(x)| \leq \epsilon \cdot (|x^n| + |x^m|), \ \forall x \geq \max(M,N)$

$|f(x) + g(x)| \leq |f(x)| + |g(x)| \leq \epsilon \cdot (|x^n| + |x^m|), \ \forall x \geq \max(M,N)$ (triangle inequality)

$|f(x) + g(x)| \leq \epsilon \cdot (|x^n| + |x^m|), \ \forall x \geq \max(M,N)$ (logical result of the statement above)

$|f(x) + g(x)| \leq \epsilon \cdot (|x^n| + |x^m|) \leq \epsilon \cdot (|x^n| + |x^n|), \ \forall x \geq \max(M,N)$ (if $n > m$, then $x^n > x^m$ as $x$ approaches infinity)

$|f(x) + g(x)| \leq \epsilon \cdot 2 \cdot |x^n|, \ \forall x \geq \max(M,N)$ (logical result of the statement above)

$|f(x) + g(x)| \leq \epsilon' \cdot |x^n|, \ \forall x \geq \max(M,N)$ (where $\epsilon' = 2\epsilon$)

$f(x) + g(x) = o(x^n), \ \forall x \geq \max(M,N)$ (formal definition of little-o)

$o(x^n) + o(x^m) = o(x^n), \ \forall x \geq \max(M,N)$ (substituting $f(x)$ and $g(x)$ back)

$o(x^n) + o(x^m) = O(x^n), \ \forall x \geq \max(M,N)$ (little-o implies big-O)
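For comparison (not part of the original proof), the standard little-o definition quantifies over every $\epsilon$, with the threshold allowed to depend on it:

```latex
% Standard definition, as x -> infinity:
% for EVERY epsilon > 0 there must exist a threshold N(epsilon).
f(x) = o(x^n) \iff
  \forall \epsilon > 0 \;\; \exists N(\epsilon) :\;
  |f(x)| \leq \epsilon \, |x^n| \quad \forall x \geq N(\epsilon)
```

With this quantifier order, the displayed chain of inequalities can be run for every $\epsilon$ (with $N$ and $M$ depending on $\epsilon$), so the conclusion $o(x^n) + o(x^m) = o(x^n) \subseteq O(x^n)$ does follow; a single fixed $\epsilon$, as the proof's second and third lines might suggest, would only justify a big-O conclusion.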

time complexity – Comparing the big-$O$ of these four functions

Sometimes you can substitute values for $n_0$ and $c$ in the big-$O$ definition and compare two functions, or take limits and compare them.

But for the following functions, for example, taking the limit at infinity of
$f_3$ over $f_2$ requires l'Hôpital's rule, which doesn't simplify anything. $f_3$ is the product of a polynomial and an exponential function, and I don't know how to compare functions like that with the others.

Firstly, I know that $f_4$ is the most efficient because it is $O(n^2)$ ($f_4(n) = n + \frac{n(n + 1)}{2}$) and the rest grow faster than any polynomial.

But for the rest, I really don't know what to do besides using my intuition, which could be far from the correct answer anyway. Please help me compare these rigorously.

$f_1(n) = n^{\sqrt{n}}$

$f_2(n) = 2^n$

$f_3(n) = n^{100} \cdot 2^{\frac{n}{2}}$

$f_4(n) = \sum_{i=1}^{n}(i + 1)$
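One way to make the comparison concrete is to compare logarithms, since $\log$ is monotone: $\log f_1 = \sqrt{n}\log n$, $\log f_2 = n\log 2$, and $\log f_3 = 100\log n + \frac{n}{2}\log 2$. Because $\sqrt{n}\log n = o(n)$ and $\frac{n}{2}\log 2 < n\log 2$, for large $n$ we get $f_4 \ll f_1 \ll f_3 \ll f_2$. A numerical sketch (the helper names are my own; this illustrates the asymptotic argument rather than proving it):

```python
import math

def log_f1(n): return math.sqrt(n) * math.log(n)                  # log(n^sqrt(n))
def log_f2(n): return n * math.log(2)                             # log(2^n)
def log_f3(n): return 100 * math.log(n) + (n / 2) * math.log(2)   # log(n^100 * 2^(n/2))
def log_f4(n): return math.log(n + n * (n + 1) / 2)               # log(n + n(n+1)/2)

n = 10**6
print(log_f4(n) < log_f1(n) < log_f3(n) < log_f2(n))  # True: f4 << f1 << f3 << f2
```

The rigorous versions of these comparisons reduce to two standard limits: $\frac{\sqrt{n}\log n}{n} \to 0$ and $\frac{100\log n + \frac{n}{2}\log 2}{n \log 2} \to \frac{1}{2}$.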

time complexity – Big-O of iterating through nested structure

While trying to understand complexity, I ran into an example of going through records organized in the following way:

data = (
  {"name": "category1", "files": ({"name": "file1"}, {"name": "file2"})},
  {"name": "category2", "files": ({"name": "file3"},)},
)

The task requires going through all file records, which is straightforward:

for category in data:
  for file in category["files"]:
    ...

It seems like the complexity of this algorithm is O(n * m), where n is the length of data and m is the maximum length of the files array in any of the data records. But is O(n * m) the only correct answer?

Because even though there are two for-loops, it still looks like iterating over a global array of file records organized in a nested way. Is it legitimate to compare it with iteration over a different structure like this:

data = (
  ('category1', 'file1'),
  ('category1', 'file2'),
  ('category2', 'file3'),
)

for category, file in data:
  ...

…where the complexity is obviously O(n), and n is the total number of records?
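A quick sanity check with the sample data (a sketch; the counter names are my own): the inner loop body executes exactly once per file record, so the total work is Θ(total number of files) — the same quantity whether the records are nested or flattened. O(n * m) is also correct, but it is a looser bound when the categories have unequal sizes.

```python
data = (
    {"name": "category1", "files": ({"name": "file1"}, {"name": "file2"})},
    {"name": "category2", "files": ({"name": "file3"},)},
)

nested_count = 0
for category in data:
    for file in category["files"]:   # body runs once per file record
        nested_count += 1

# Flattening the same records gives the same total count of iterations:
flat = tuple((c["name"], f["name"]) for c in data for f in c["files"])
flat_count = len(flat)

print(nested_count, flat_count)  # 3 3 -- both equal the total number of files
```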

Sum rule for Big-O with equal complexity-functions?

One property of the Big-O notation is the sum rule, which states that when I have two functions $f_1$ and $f_2$ and their corresponding complexity functions are $g_1$ and $g_2$, then the combined complexity is $f_1 + f_2 = O(\max(g_1, g_2))$.

But what do we pick if both complexity functions are equal? E.g., if $f_1(n)$ is the sorting of an array and $f_2(m)$ is as well, then the complexities are $O(n\log n)$ and $O(m\log m)$. Applying the rule would give $O(\max(n\log n, m\log m))$. I think that picking either of those would yield a valid but very unintuitive estimate, as you would drop one variable. Besides that, it's not clear which one to pick.

Is it the case that you are not supposed to use the rule when there are multiple variables involved?
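A common workaround with independent variables (a sketch of the standard approach, not something stated in the question) is to keep both terms instead of taking a max, optionally bounding the sum by a symmetric expression:

```latex
f_1 + f_2 = O(n \log n + m \log m)
% each term is bounded by the symmetric expression, since n \le n+m and m \le n+m:
n \log n + m \log m \;\le\; (n + m)\log(n + m) \qquad (n, m \geq 1)
```

The single-variable max rule relies on the two arguments being comparable; with unrelated variables $n$ and $m$, neither $n\log n$ nor $m\log m$ dominates the other for all inputs, so the sum (or a symmetric bound over $n+m$) is the usual way to state the combined complexity.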

What is the Big-O of this algorithm?

What is the Big-O of the following algorithm? Is it O(N) or O(N lg N)?

        count = 0
        for i = N : 1 {
            for j = 0 : i {
                count = count + 1
                j = j + 1
            }
            i = i/2
        }
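The loop can be simulated with a sketch (assumptions: `i = i/2` runs after the inner loop, the outer loop continues while `i >= 1`, and the extra `j = j + 1` is ignored since it only changes the constant factor): the inner loop does about `i` steps and `i` halves each time, so the total is roughly N + N/2 + N/4 + … < 2N, i.e. O(N), not O(N lg N).

```python
def total_steps(N):
    # Simulate the nested loop: inner loop runs ~i times, then i halves.
    count = 0
    i = N
    while i >= 1:
        for j in range(i):
            count += 1
        i //= 2
    return count

print(total_steps(16))  # 16 + 8 + 4 + 2 + 1 = 31, which is < 2 * 16
```

The geometric series is what distinguishes this from the O(N lg N) pattern, where the inner loop would run ~N times for each of the ~lg N outer iterations.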