complexity theory – Consequence of being NP-complete and DP-complete w.r.t. randomized reductions

If a problem is NP-complete with respect to randomized (polynomial time) reductions, but not with respect to deterministic reductions, then we have P $\neq$ BPP (see Question 2 here and its answer).

Suppose a decision problem is proved to be NP-complete, and it is also proved to be DP-complete with respect to randomized reductions. Does this have any real consequence?

Context
Given a graph $G$, it is coNP-complete to test whether $G$ has exactly one 3-colouring up to swapping of colours (because the “another solution” problem associated with 3-colouring is NP-complete (1)). The same problem is DP-complete with respect to randomized reductions (2).

(1) Dailey, David P., Uniqueness of colorability and colorability of planar 4-regular graphs are NP-complete, Discrete Math. 30, 289-293 (1980). ZBL0448.05030.

(2) Barbanchon, Régis, On unique graph 3-colorability and parsimonious reductions in the plane, Theor. Comput. Sci. 319, No. 1-3, 455-482 (2004). ZBL1043.05043.

complexity theory – Classes of Functions Closed Under Polynomial Composition – Papadimitriou Exercise 7.4.4

I am studying computational complexity using Papadimitriou’s book “Computational Complexity”.

I am trying to do Problem 7.4.4:

“Let $C$ be a class of functions from nonnegative integers to nonnegative integers. We say that $C$ is closed under left polynomial composition if $f(n) \in C$ implies $p(f(n)) = O(g(n))$ for some $g(n) \in C$, for all polynomials $p(n)$. We say that $C$ is closed under right polynomial composition if $f(n) \in C$ implies $f(p(n)) = O(g(n))$ for some $g(n) \in C$, for all polynomials $p(n)$.

Intuitively, the first closure property implies that the corresponding complexity class is “computational model-independent”, that is, it is robust under reasonable changes in the underlying model of computation (from RAM’s to Turing machines, to multistring Turing machines, etc.) while closure under right polynomial composition suggests closure under reductions (see the next chapter).”

Which of the following classes of functions are closed under left polynomial composition, and which under right polynomial composition?

(a) – $\{n^k : k > 0\}$

(b) – $\{k \cdot n : k > 0\}$

(c) – $\{k^n : k > 0\}$

(d) – $\{2^{n^k} : k > 0\}$

(e) – $\{\log^k n : k > 0\}$

(f) – $\{\log n\}$

After understanding the definition of closed under left/right polynomial composition, I think I was able to solve items (a), (b), (c) and (f). However, I was not able to solve items (d) and (e).

My solution for item (a):

Closed Under Left Polynomial Composition: consider an arbitrary $f(n) \in C$ and an arbitrary polynomial $p(n)$. Then, $f(n)$ is of the form $n^{k'}$, for some $k' > 0$, and therefore $p(f(n))$ is a polynomial. Let $k''$ be the degree of the polynomial $p(f(n))$. Take $g(n) = n^{k''} \in C$ and we have $p(f(n)) = O(g(n))$.

Closed Under Right Polynomial Composition: same reasoning.
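
For concreteness, a worked instance of the argument for (a) (an illustration only, not part of the exercise): take $f(n) = n^2 \in C$ and $p(n) = n^3 + n$. Then
$$p(f(n)) = n^6 + n^2 = O(n^6) \qquad \text{and} \qquad f(p(n)) = (n^3 + n)^2 = O(n^6),$$
so in both cases $g(n) = n^6 \in C$ witnesses the closure.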

My solution for item (b):

Not Closed Under Left Polynomial Composition: consider as a counterexample $f(n) = n \in C$ and $p(n) = n^2$. Then, $p(f(n)) = n^2$. For every $g(n) = k n \in C$ we have $O(g(n)) = O(n)$. Since $n^2 \neq O(n)$ we conclude.

Not Closed Under Right Polynomial Composition: the previous counterexample applies.

My solution for item (c):

Closed Under Left Polynomial Composition: Consider an arbitrary $f(n) = k_1^n$ and a polynomial $p(n)$. Notice that $p(f(n))$ is a polynomial in $k_1^n$. For sufficiently large $n$, there exists some $k_2$ such that $p(n) \leq n^{k_2}$ and therefore $p(f(n)) \leq (f(n))^{k_2} = (k_1^{n})^{k_2} = (k_1^{k_2})^n$. Therefore, taking $g(n) = (k_1^{k_2})^n \in C$ we obtain that $p(f(n)) = O(g(n))$.

Not Closed Under Right Polynomial Composition: Consider as a counterexample $f(n) = 2^n$ and $p(n) = n^2$. Then, $f(p(n)) = 2^{n^2}$, which is greater than $g(n) = k^n$, for every fixed value of $k$, if $n$ is sufficiently large. Therefore, $f(p(n)) \notin O(g(n))$.

My solution for (f):

Not Closed Under Left Polynomial Composition: Consider as a counterexample $f(n) = \log n$ and $p(n) = n^2$. Then, $p(f(n)) = \log^2 n$. Also, $g(n) \in C$ implies that $g(n) = O(\log n)$. We have $\log^2 n \notin O(\log n)$.

Closed Under Right Polynomial Composition: If $f(n) \in C$ then $f(n) = \log n$. Given an arbitrary polynomial $p(n)$, we have that there exists some $k'$ such that, for sufficiently large $n$, $p(n) < n^{k'}$. Then, for sufficiently large $n$:
$$f(p(n)) \leq f(n^{k'}) = \log n^{k'} = k' \log n = O(\log n) = O(g(n)).$$

Can anyone help me with items (d) and (e)?

Thanks in advance. Of course, corrections/comments on the other items are also welcome.

complexity theory – If P = NP, would dynamic programming be obsolete?

I know that dynamic programming is used to solve some NP-complete problems, like knapsack, in “pseudo-polynomial time”. If P = NP, would it mean that every problem we solve with dynamic programming would have a more efficient (polynomial) way to be solved? Or are there problems for which, even if P = NP, we would still use dynamic programming?

algorithms – Time complexity of finding median in data stream

I was reading a solution to the problem in the title on LeetCode, and the article says that the time complexity of the following solution is O(n):

  1. Set up a data structure to hold the stream values and insert each new element in the right place using linear search or binary search.
  2. Return the median.

My solution is as follows:

import bisect

class MedianFinder:

    def __init__(self):
        """Initialize the data structure: a list kept in sorted order."""
        self.arr = []  # must be a list; a tuple () would not support insert()

    def addNum(self, num: int) -> None:
        # O(log n) binary search for the insertion point, O(n) to shift elements.
        idx = bisect.bisect_left(self.arr, num)
        self.arr.insert(idx, num)

    def findMedian(self) -> float:
        # The list is kept sorted by addNum, so no sorting is needed here.
        n = len(self.arr)
        if n % 2 != 0:
            return self.arr[n // 2]
        else:
            return (self.arr[n // 2 - 1] + self.arr[n // 2]) / 2
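
As a quick sanity check, a usage sketch of the class above (illustrative only; it relies on the import of bisect shown with the class):

mf = MedianFinder()
for x in [5, 2, 8, 1]:
    mf.addNum(x)          # each call: O(log n) search plus O(n) list insertion
print(mf.findMedian())    # the sorted stream is [1, 2, 5, 8], so this prints 3.5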

My question is about the time complexity of the addNum method. The binary search will take O(log n) to find the index, and the insert will take O(n). But since the method will be called for every element of the stream, will the complexity be O(n^2) or O(n)?


complexity theory – Why can’t MIP be simulated in PSPACE?

The proof that $IP \subseteq PSPACE$ is done by considering the game tree of the interactions between the prover and the verifier; this tree has polynomial depth and each message is polynomial in length, so the tree can be explored in a DFS-like fashion to simulate an optimal prover in PSPACE.
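
To make the DFS idea concrete, here is a rough Python-style sketch of that game-tree evaluation (the helper functions accepts, is_verifier_turn, verifier_moves and prover_moves are hypothetical stand-ins for the actual protocol; the recursion depth and the message lengths are polynomial):

def optimal_acceptance_probability(transcript, rounds_left):
    # Leaf: the verifier's final decision on the complete transcript.
    if rounds_left == 0:
        return 1.0 if accepts(transcript) else 0.0
    if is_verifier_turn(transcript):
        # The verifier's next message is random: average over all possibilities.
        msgs = verifier_moves(transcript)
        return sum(optimal_acceptance_probability(transcript + [m], rounds_left - 1)
                   for m in msgs) / len(msgs)
    # The optimal prover picks whichever message maximizes acceptance.
    return max(optimal_acceptance_probability(transcript + [m], rounds_left - 1)
               for m in prover_moves(transcript))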

Why can’t the same be done for MIP? Proofs that $MIP \subseteq NEXP$ usually show that the multiple provers can be simulated by non-deterministically guessing a strategy for each, but why can’t you just simulate the tree of interactions as in the single-prover case?

logic – Complexity of pattern matching for modus ponens logical conclusions

Is a Turing machine with the following constant-time operation added equivalent (in the sense that polynomial time remains polynomial time and exponential time remains exponential time) to a (usual) Turing machine:

By predicates I will mean predicates in first-order predicate calculus. (Note that predicates may have free variables.)

  • constant-time modus-ponens resolution (yes or no), and then adding $y$ to the end of this array if yes, for given predicates $x$ and $y$ and an array (or a linked list) of predicates. By definition of modus ponens, the answer is yes if and only if some element of the array is $X \Rightarrow y$, where $X$ is a pattern matching $x$.

Remark: The above operation is part of the standard proof-checking procedure in first-order predicate logic.
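
For concreteness, an illustrative Python sketch of the operation described above (the predicate representation and the helpers is_implication, antecedent, consequent and matches are hypothetical; on a real machine the scan is of course not constant-time, which is exactly what the question is about):

def modus_ponens_step(predicates, x, y):
    # Scan the array for an implication X => y whose antecedent X matches x.
    for p in predicates:
        if is_implication(p) and consequent(p) == y and matches(antecedent(p), x):
            predicates.append(y)  # add y to the end of the array
            return True           # "yes"
    return False                  # "no"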

If the above hypothesis is false, then what are the upper bounds on the running time of the above operation in different kinds of Turing machine equivalents (such as a Turing machine, Markov algorithms, a von Neumann architecture with infinitely many infinitely big words of memory, etc.)?

BTW, is a von Neumann architecture with infinitely many infinitely big words of memory a Turing machine equivalent? (I think yes, but I am not 100% sure.)

complexity theory – Constant-time adding an element?

Is a computer with infinite memory and infinite word size a Turing machine equivalent (in the sense that polynomial time remains polynomial time and exponential time remains exponential time) if we allow constant-time linked-list element insertion (at the beginning of the list)?

I doubt this because element insertion requires memory allocation and allocation is usually not a constant-time operation.
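
For illustration, the operation in question is just the following (a minimal Python sketch; the doubt above is whether the allocation of the new node can really be counted as constant time):

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def prepend(head, value):
    # One allocation plus one pointer write: constant time only if
    # the allocation itself is assumed to take constant time.
    return Node(value, head)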

complexity theory – Show that NL ⊆ P

$\textsf{PATH}$ is in $\textsf{NL}$, because to solve it, you just need to keep in memory the current vertex you are in, and guess (non-deterministically) the next one on the path until you reach your destination. Since you keep the current vertex $v$, numbered from $0$ to $|V| - 1$, you need memory space corresponding to the binary encoding of $v$, which is at most $1 + \log_2(|V| - 1)$ bits. You also need to keep the potential adjacent vertex of $v$, next in the path.

All in all, a Turing machine solving this problem would only need $O(\log |V|)$ additional space (the graph, the starting vertex, and the destination vertex of the path you are guessing are not counted in the memory used, because they are part of the input).

$\textsf{PATH}$ is $\textsf{NL}$-hard, because to solve any $\textsf{NL}$ problem, you have to determine whether there exists a sequence of possible transitions from the initial configuration to an accepting configuration of the Turing machine for the problem. If you consider the graph of possible configurations (where there is an edge from a configuration $\alpha$ to a configuration $\beta$ if and only if one can go from $\alpha$ to $\beta$ in one transition of the Turing machine), then solving the $\textsf{NL}$ problem is the same as solving $\textsf{PATH}$ in the graph of possible configurations.

You then need to prove that the graph of configurations can be constructed in logarithmic additional space. This can be done because, if a non-deterministic Turing machine works in space $s(n)$, then the number of possible configurations is $2^{O(s(n))}$. Considering the binary encoding of those configurations, one can determine whether there exists an edge between two configurations in deterministic space $O(s(n))$.

Now, since $\textsf{PATH}$ is solvable in polynomial time (with a graph traversal algorithm, for example), any $\textsf{NL}$ problem is solvable in polynomial time (via the $\textsf{NL}$-completeness of $\textsf{PATH}$), so $\textsf{NL} \subseteq \textsf{P}$. This holds because if a Turing machine uses $s(n)$ space, then it has at most $2^{O(s(n))}$ configurations, and exploring all of them takes time $2^{O(s(n))}$. Since $s(n) = O(\log n)$ for problems in $\textsf{NL}$, the total time is indeed $n^{O(1)}$.
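
As an illustration of the last step, here is a short Python sketch of the polynomial-time graph traversal for $\textsf{PATH}$ (a plain BFS; applied to the $2^{O(s(n))}$-vertex configuration graph it is exactly the deterministic simulation described above):

from collections import deque

def reachable(adj, source, target):
    # adj maps each vertex to the list of its out-neighbours.
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False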

Kolmogorov complexity proof

Prove that there is a constant $c \in \mathbb{N}$ such that, for all $n \in \mathbb{N}$,
$|C(s_n) - C(s_{n+1})| \leq c$.

So what I know so far is the following:
We can define two functions $f$ and $g$ such that,
for $f$:
$C(s_n) - C(s_{n+1}) \leq c$,

and for $g$:
$C(s_{n+1}) - C(s_n) \leq c$.

We also know that $C(f(x)) \leq C(x) + c$.
So, cross-comparing, we can use $s_n = f(s_{n+1})$ for the first part and $s_{n+1} = g(s_n)$ for the second, but I’m having a hard time defining the two functions. Any guidance or tips?
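
One possible direction (a sketch, under the usual assumption that $s_n$ denotes the $n$-th string in the canonical enumeration): both the successor map $s_n \mapsto s_{n+1}$ and the predecessor map $s_{n+1} \mapsto s_n$ are computable, so applying the inequality $C(f(x)) \leq C(x) + c_f$ to each of them gives
$$C(s_{n+1}) \leq C(s_n) + c_1 \qquad \text{and} \qquad C(s_n) \leq C(s_{n+1}) + c_2,$$
and $c = \max(c_1, c_2)$ then bounds $|C(s_n) - C(s_{n+1})|$.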

complexity theory – Why does a polytime hitting set generator derandomize RP?

I am reading Goldreich, Vadhan, Wigderson: Simplified Derandomization of BPP Using a Hitting Set Generator and trying to understand the result that polytime hitting set generators (HSGs) would not only imply $\mathsf{RP} = \mathsf{P}$ but $\mathsf{BPP} = \mathsf{P}$ as well, but I can’t even conceptualize why the former holds.

The paper states that

Having such a generator that runs in polynomial time enables a trivial deterministic simulation of an RP algorithm by using each of the generator’s outputs as the random pad of the given algorithm.
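
As an illustration of that quoted simulation, a rough Python-style sketch (the names are hypothetical stand-ins: A(x, r) is the RP algorithm run with random pad r, and G(n) enumerates the generator’s outputs for input length n):

def derandomized_A(x):
    for r in G(len(x)):       # try every string the generator outputs
        if A(x, r):           # RP has one-sided error, so an accepting run is conclusive
            return True
    return False              # the paper's claim: if x is in L, some r above makes A accept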

Every paper or lecture notes I’ve found on the topic of HSGs and $\mathsf{BPP}$ derandomization also mentions that $\mathsf{RP}$ derandomization is trivial using an HSG, simply by trying all elements of the hitting set as random pads.

I am confused as to why this is the case. At least one element of a hitting set must be accepted by every circuit that accepts over half of its inputs. However, RP circuits don’t necessarily accept over half their inputs; rather, only if $x \in L$ will the circuit accept over half the time. I assume that fact is related to why a hitting set must contain a valid $\mathsf{RP}$ random pad, but I do not understand exactly how.

So, my question is: how exactly do the circuits in the definition of an HSG relate to $\mathsf{RP}$ circuits? That is, how do the elements of the hitting set – which are to be used as the random pads across all inputs – relate to the randomness of $\mathsf{RP}$ algorithms, when such algorithms accept over half the time only when $x \in L$?