turing machines – Complexity of the language that enters an infinite loop

A few days ago I took a test, which I failed, and it had a question that I could not solve.

This is the question

Let’s look at the language $L_{\mathrm{loop}} = \{ \langle M, w \rangle \mid \text{when } M \text{ is run on the input } w\text{, the machine } M \text{ enters an infinite loop} \}$.

We denote the input length by $|\langle M, w \rangle| = n$. Determine which of the claims is correct:

  1. The language is in $P$
  2. The language is not in $NP$ or $CoNP$.
  3. The language is in $NP$ but not in $NPC$.
  4. The language belongs to $CoNP$.
  5. None of the above claims are true.

From what I understand, the language consists of pairs of a Turing machine and an input on which that machine runs forever.

  1. It cannot be in $P$: if the machine runs forever, there is no way to decide in polynomial time whether the answer is yes or no; in fact there is no algorithm for it at all.
  2. Probably not either. A problem is in $NP$ if it has a non-deterministic guess-and-verify algorithm, but even given a witness, there is no way to verify a yes answer here, because the computation is infinite.
  3. I do not know about this one; maybe it is correct.
  4. A language belongs to $CoNP$ when the no answers are the ones a guessing algorithm can verify. I do not know whether here the verifiable answer is no (which would make it $CoNP$) or yes (which would make it $NP$).

I tried to figure out what the answer might be, but I could not settle on one.

Maybe 5 is right, because I could not convince myself that any of the other claims is correct.

turing machines – A syntactic property of the complexity class P

In these lecture notes the author mentions that P is a syntactic complexity class, as we can find a decidable set of encodings for all polynomial-time Turing machines. Of course, given a deterministic Turing machine $M$ and a number $k$, we can construct a Turing machine that simulates $M$ for $|x|^k$ steps on input $x$ and, if $M$ halts within this number of steps, returns the same answer as $M$.
Now, every language in P is represented by such a Turing machine, and so, by encoding these Turing machines together with $k$ in a reasonable way, we have a decidable language representing P.
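To make the clocked simulation concrete, here is a minimal sketch (my own illustration, not the notes’ construction), assuming a hypothetical single-step simulator interface on an encoded machine M:

# Illustrative sketch of a clocked simulation: run M on input x for at most
# |x|^k steps and copy its answer if it halts in time, otherwise reject.
# The TM interface (start_config, halted, accepting, step) is a hypothetical
# stand-in for whatever concrete encoding of Turing machines is used.

def clocked_accepts(M, x, k):
    budget = max(1, len(x)) ** k        # the step bound |x|^k
    config = M.start_config(x)          # initial configuration on input x
    for _ in range(budget):
        if M.halted(config):
            return M.accepting(config)  # M halted within the budget: same answer
        config = M.step(config)         # simulate one more step of M
    return False                        # budget exhausted: reject

The pair consisting of an encoding of $M$ and the exponent $k$ is exactly the kind of object the decidable representation of P is made of.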

Then, the author defines
$$
L = \{ \langle M, x \rangle \mid \text{$M$ is a polynomial-time Turing machine which accepts $x \in \{0,1\}^*$} \},
$$

and then claims that $L$ is in P. His argument is that we can first check that we have a valid encoding of $M$ and then run $M$ on input $x$. However, if I assume I have a Turing machine deciding $L$ in time $O(n^k)$, and $M$ is a Turing machine accepting its language in time $n^{k+1}$, then how can the deciding machine afford to run $M$ on inputs whose length forces $M$ to take strictly more time than the deciding machine itself is allowed? I somehow think this “diagonalization” argument shows that $L$ could not be in P?

So, what am I missing? I tried to describe as precisely as possible how I understood the claims made, maybe I have understood something wrong…

how can turing machines be universal models of computation if they can’t perform binary search?

Turing Machines can simulate binary search, in the sense that they can compute whatever you can compute using binary search. You seem to be confusing computability and complexity, which are two different things.

Roughly speaking, computability is about what we are able to compute in a given model of computation. We believe Turing machines to be a universal model of computation. All powerful-enough (i.e., Turing-complete) models are equivalent to a Turing Machine.

Complexity is about how long it takes to compute something (actually, this is not restricted to time). Different models of computation are not necessarily equivalent complexity-wise.
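As a small illustration of this distinction (my own sketch, not part of the original answer): the two functions below decide exactly the same membership question on a sorted list, so they are equivalent computability-wise, yet one is the sequential scan a single-tape machine performs naturally and the other is logarithmic-time binary search.

# Same function computed two ways: a linear scan versus binary search.
# Computability-wise they are identical; complexity-wise they differ (O(n) vs O(log n)).
from bisect import bisect_left

def member_linear(xs, target):
    # sequential scan over the sorted list
    for x in xs:
        if x == target:
            return True
    return False

def member_binary(xs, target):
    # binary search via the standard library
    i = bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

xs = list(range(0, 1000, 3))
assert member_linear(xs, 333) == member_binary(xs, 333) == True
assert member_linear(xs, 334) == member_binary(xs, 334) == False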

turing machines – Prove/disprove that if $L_1, L_2 \in \mathrm{RE}$ then $L_1 - L_2,\ L_1 - \bar{L_2} \in \mathrm{RE}$

For the statement which is false, you take your easiest example of a non-r.e. language (i.e., the complement of the halting problem), and then show how it is expressible in the respective form.

For the statement which is true, you take semidecision procedures for $L_1$, $L_2$ and build one for the relevant language. If you do this properly, then the forever-running is not an issue. Or, if available to you, you invoke the closure of $\mathrm{RE}$ under intersection.
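As a rough sketch of that idea (my own illustration, under the assumption that each semidecider is given as a step-bounded acceptance check): since $L_1 - \bar{L_2} = L_1 \cap L_2$, we can interleave the two procedures so that neither one’s possible non-termination blocks the other.

# Sketch of a semidecider for L1 ∩ L2 (= L1 minus the complement of L2), built
# from two given semideciders. Each is assumed available as a function
# accepts_within(x, steps) that returns True iff it accepts x within `steps`
# steps of its own computation; this interface is hypothetical, for illustration.

def intersection_semidecider(accepts1_within, accepts2_within):
    def accepts(x):
        steps = 1
        while True:                    # runs forever exactly when x is not in L1 ∩ L2
            if accepts1_within(x, steps) and accepts2_within(x, steps):
                return True            # both procedures have accepted: accept
            steps *= 2                 # otherwise give both of them more time
    return accepts

Each call only runs a bounded number of steps, so the overall procedure halts and accepts whenever both underlying procedures eventually accept.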

turing machines – Can there exist mathematical theories not defined via axioms and logic?

Disclaimer

This is a “fuzzy” question, so I will try to clarify what I mean. I am not a computer scientist or a mathematician, so the notation is probably unorthodox and there might be some mistakes in the statements below.

Notations

Let $A$ and $N$ be two semi-decidable sets and $n : A to N$ a computable bijection.

The disjoint union $A sqcup N$ can be recursively enumerated, implicitly defining a computable order $<$.

Let $T$ be a semi-decidable $<$-ordered subset of $A sqcup N$.

We can write the following program SearchContradiction:

for t in T:
    let u = n(t) if t in A else n*(t)  // n* denotes the inverse of n

    for v in T:
        if u < v:
            break // exit the current loop
        if u = v:
            halt

Note that the expression t in A is computable; we can write it as the following halting subroutine:

for a in A:
    if t = a:
        return true
    if t < a:
        return false

Definition

We call $(A, N, T, n)$ a theory iff “SearchContradiction does not halt” is a theorem of $ZFC + Con(ZFC)$.

Question

With this definition, $ZFC$, Peano arithmetic, Euclidean geometry or constructive analysis are theories. For all these, the program recursively enumerating $T$ is usually defined using a “meta-machine” taking as input a set of axioms and a logic (classical for the first three, intuitionistic for the last one).

But is this “meta-machine” Turing-complete or can there exist a theory such that any $T$-enumerator cannot be run on it?

Turing machine for language $\{a^p \mid p~\text{is a prime number}\}$

I am trying to find a Turing Machine for the above language. I am trying to do it with one tape, to start with. However, I am not quite sure about how to keep track of the number of a’s in the string. Any help is appreciated.

complexity theory – What was the original paper that showed a simulation of turing machines via circuits?

It is a very standard construction in most complexity theory courses to turn a turing machine into a circuit. I thought this was due to Cook, but it looks like he did the reduction to SAT not through circuits and so now I am unsure. Is there a paper that originally established the well-known circuit simulation that has remained state-of-the-art the last 50 or so years, or has it just been folklore?

computability – Issues in the proof that $E_{TM}$ is Turing reducible to $A_{TM}$

First definition:

$A_{TM} = \{ \langle M, w \rangle \mid M \text{ is a TM and } M \text{ accepts } w \}$

Second definition:

$E_{TM} = \{ \langle M \rangle \mid M \text{ is a TM and } L(M) = \emptyset \}$

Let $T^{A_{TM}}$ be an oracle Turing machine with an oracle $A_{TM}$. We want to show that $E_{TM}$ is Turing reducible to $A_{TM}$.

$T^{A_{TM}}$ = “On input $\langle M \rangle$, where M is a TM:

  1. Construct a TM N:
     N = “On any input:
       1. For $i = 0, 1, 2, 3, \ldots$, run M on $s_i$ for $i$ steps, where $s_i \in \Sigma^*$ and $\Sigma^* = \{s_0, s_1, s_2, \ldots\}$.
       2. If M accepts any of these strings, then accept.”
  2. Ask the oracle: Is $\langle N, 0 \rangle \in A_{TM}$?
  3. If the oracle answers NO, accept. If YES, reject.”

Now, Sipser said in pp. 236-237, “If M’s language isn’t empty, N will accept every input and in particular, input 0. Hence the oracle will answer YES, and $T^{A_{TM}}$ will reject. Conversely, if M’s language is empty, $T^{A_{TM}}$ will accept.”

My question: Why will N accept every input, and in particular the input 0? It is not clear to me how N accepts every string. For example, M will run on all inputs from $\Sigma^*$, but only those strings that are in L(M) will be accepted, and I don’t understand how N will accept every input. Moreover, it is not clear why he chose the string “0”.

Note that I changed step 2, where Sipser wrote: “Run M in parallel on all strings in $\Sigma^*$”, since I believe the two formulations mean the same thing, unless you can say otherwise.
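To make the dovetailing in Sipser’s version of step 2 concrete, here is a rough sketch of N in ordinary code (my own illustration, assuming a hypothetical step-bounded simulator accepts_within(M, s, steps) and an enumeration strings(i) of the first i strings of $\Sigma^*$):

# Sketch of N following Sipser's "run M in parallel on all strings": for
# i = 1, 2, 3, ..., simulate M on each of the first i strings for i steps.
# N ignores its own input, which is why the oracle is asked about <N, 0>
# (the input 0 is an arbitrary choice).

def N_accepts(M, accepts_within, strings):
    i = 0
    while True:                        # loops forever exactly when L(M) is empty
        i += 1
        for s in strings(i):           # the first i strings of Sigma*
            if accepts_within(M, s, i):
                return True            # M accepts some string, so N accepts

If $L(M) \neq \emptyset$, some string is accepted by M within some finite number of steps, so this loop eventually returns True regardless of N’s own input.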

turing machines – Confused about definition of a non-deterministic decider

You basically have the right intuition. Let’s expand the definitions one more step, to see it more clearly.

deterministic decider: if $x \in L$, the decider will stop on YES (there is only a single branch for $x$).

non-deterministic decider: if $x \in L$, there exists a branch on which the decider will stop on YES (there are multiple possible branches for $x$, chosen non-deterministically during the computation, some of which might be NO or non-halting).

You can state equivalent statements for any $x notin L$.

As you can see, there is a big difference between the two. It should also be apparent why, if the first statement holds, the second holds as well, but not the other way around.
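As a rough illustration of the quantifier difference (a sketch I am adding, not part of the original answer), one can model a decider’s finite computation tree as the list of outcomes of its branches; acceptance is then “some branch says YES”, with the deterministic case being the special case of a single branch.

# Toy model of acceptance: branches(x) is a hypothetical function returning
# the outcomes ("YES" or "NO") of all computation branches on input x;
# for a decider, every branch halts, so this list is finite.

def nondeterministic_accepts(branches, x):
    return any(outcome == "YES" for outcome in branches(x))   # some branch accepts

def deterministic_accepts(branch, x):
    return branch(x) == "YES"                                 # the single branch's answer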

complexity theory – Does MIP* = RE’s major breakthrough mean quantum computers are more powerful than turing machines?

To summarize: No, this paper does not provide an example of something a quantum computer can compute that a Turing computer can’t.

I believe you have misunderstood Henry’s comment. The problem in question isn’t the halting problem (given any Turing machine and input, will it halt?). Rather, the problem is to prove that a program halts, given that it halts. As Henry describes, there is a simple means of solving this problem that Turing computers are capable of: they can list out the steps leading to the program’s halting.

What the quantum computer can do that the Turing computer can’t is the following. Given a Turing computer and input that halts, once the quantum computer knows that the process halts (which can take arbitrarily long to determine by itself), it can provide proof that the process halts in constant time.

Key points: This problem has no relevance to solving the halting problem, because the quantum computer needs to know that the process halts to proceed. Turing computers can also solve this problem, but the time to provide an answer can be arbitrarily long for given inputs.

To your bonus question about humans surpassing Turing computers if we solve a problem they can’t: in theory, the answer is yes, but it is quite hard to do so. To prove that a computer (Turing or human) can solve a problem, you must prove that it can solve it for all inputs. Our current rules of mathematics are describable by a Turing computer, so we would need to provide this proof with something that transcends current mathematics. Additionally, solving the problem for some inputs is not proof that we can solve it for all inputs.

Secondly, a very substantial implication of the answer being yes would be that either Turing computers cannot simulate the laws of physics, or the laws of physics do not fully describe the function of the human brain. As it currently stands, it seems that neither of those is true – physics fully explains the human brain, and a quantum computer can simulate the laws of physics. (It is 100% true that a quantum computer can simulate our current laws of physics, while it is likely that it can simulate whatever the true laws of physics are.)