complexity theory – Padding in proof of space hierarchy theorems

Suppose that we consider instead the language
$$ L = \{ \langle M \rangle : \text{$M$ does not accept $\langle M \rangle$ in space $f(|\langle M \rangle|)$} \}. $$
We want to show that $L \notin \mathsf{SPACE}(o(f(n)))$, that is, that if $M$ uses space $o(f(n))$ then $L(M) \neq L$. This should be the case since $$\langle M \rangle \in L \Leftrightarrow \langle M \rangle \notin L(M).$$
But is this really true? According to the definition of $L$, $\langle M \rangle \in L$ iff $M$ does not accept $\langle M \rangle$ in space $f(|\langle M \rangle|)$. It could be that $\langle M \rangle \in L$ and yet $M$ accepts $\langle M \rangle$ using more than $f(|\langle M \rangle|)$ space. The latter could actually happen, since we are only guaranteed that $M$ uses space $g(n)$ for some function $g(n) = o(f(n))$, which does not preclude $g(|\langle M \rangle|) > f(|\langle M \rangle|)$ at the particular value $n = |\langle M \rangle|$.

Adding the padding fixes this issue: it cannot be that $g(|(\langle M \rangle, 10^k)|) > f(|(\langle M \rangle, 10^k)|)$ for all $k$, since this would contradict $g(n) = o(f(n))$.
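
To spell out the contradiction (a short worked step, using only the definition of little-$o$): $g(n) = o(f(n))$ means that for every $c > 0$ there is an $n_0$ such that $g(n) < c\,f(n)$ for all $n \geq n_0$; taking $c = 1$,
$$ g(n) < f(n) \quad \text{for all } n \geq n_0. $$
Since the length $|(\langle M \rangle, 10^k)|$ grows without bound in $k$, some $k$ yields a padded input of length at least $n_0$, and for that $k$ we get $g(|(\langle M \rangle, 10^k)|) < f(|(\langle M \rangle, 10^k)|)$, as required.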

real analysis – Wrong inequality in simple proof Cauchy $\implies$ Boundedness.

These notes from Oxford University contain an apparently very simple proof that Cauchy sequences (real or complex) are bounded.

I understand the Cauchy condition $|a_m - a_n| < \epsilon$, and that the proof sets $\epsilon = 1$ as a convenient fixed value.

Question: I can’t understand how the following inequality is derived from the triangle inequality:

$$|a_m| \leq 1 + |a_N|$$

My Attempt: I have tried using the reverse triangle inequality with no success:

$$|a_m| - |a_N| \leq |a_m - a_N| < 1$$

And so,

$$|a_m| < 1 + |a_N|$$

Here the inequality is $<$ and not $\leq$ as per the reproduced notes.


For convenience, the proof is reproduced below.

[Image: the proof as it appears in the notes]
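
In case the image does not render, my reading of how the proof then concludes boundedness (a sketch in my own words, not a verbatim quote from the notes): with $\epsilon = 1$ there is an $N$ such that $|a_m| \leq 1 + |a_N|$ for all $m \geq N$, and hence for every $m$,
$$ |a_m| \leq \max\{|a_1|, \ldots, |a_{N-1}|, 1 + |a_N|\}. $$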

probability theory – Donsker and Varadhan inequality proof without absolute continuity assumption

I’ve been attempting to understand the proof of the Donsker-Varadhan dual form of the Kullback-Leibler divergence, as defined by
$$
\operatorname{KL}(\mu \,\|\, \lambda)
= \begin{cases}
\int_X \log\left(\frac{d\mu}{d\lambda}\right) \, d\mu, & \text{if $\mu \ll \lambda$ and $\log\left(\frac{d\mu}{d\lambda}\right) \in L^1(\mu)$,} \\
\infty, & \text{otherwise,}
\end{cases}
$$

with Donsker-Varadhan dual form
$$
\operatorname{KL}(\mu \,\|\, \lambda)
= \sup_{\Phi \in \mathcal{C}} \left( \int_X \Phi \, d\mu - \log \int_X \exp(\Phi) \, d\lambda \right).
$$

Many of the steps in the proof are helpfully outlined here: Reconciling Donsker-Varadhan definition of KL divergence with the “usual” definition, and I can follow along readily.


However, a crucial first step is establishing that (for any function $\Phi$)
$$\tag{1}\label{ineq}
\operatorname{KL}(\mu \,\|\, \lambda) \ge \left\{ \int \Phi \, d\mu - \log \int e^{\Phi} \, d\lambda \right\},$$

said to be an immediate consequence of Jensen’s inequality. I can prove this easily in the case when $\mu \ll \lambda$ and $\lambda \ll \mu$:

$$ \operatorname{KL}(\mu \,\|\, \lambda) - \int \Phi \, d\mu = \int \left( -\log\left(\frac{e^{\Phi}}{d\mu/d\lambda}\right) \right) d\mu \ge -\log \int \frac{e^{\Phi}}{d\mu/d\lambda} \, d\mu = -\log \int \exp(\Phi) \, d\lambda.$$
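
(To make the Jensen step explicit: $-\log$ is convex and $\mu$ is a probability measure, so $\int -\log(g) \, d\mu \ge -\log \int g \, d\mu$ with $g = e^{\Phi}/(d\mu/d\lambda)$; the final equality then rewrites $\int g \, d\mu = \int e^{\Phi} \, \frac{d\lambda}{d\mu} \, d\mu = \int e^{\Phi} \, d\lambda$, which is exactly where $d\lambda/d\mu$ enters.)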
However, this last step appears to crucially rely on the existence of $d\lambda/d\mu$ and thus that $\lambda \ll \mu$, which isn’t assumed by the overall theorem. Where I have been able to find proofs of the above in the machine learning literature, this assumption seems to be implicitly made, but I don’t believe it is necessary, and it is very restrictive.


My question is: how can we prove \eqref{ineq} without assuming $\lambda \ll \mu$?

proof writing – If $X$ is compact then prove that $X$ is complete and totally bounded.

I tried to do it in a way different from my textbook:

Let $(X,d)$ be a compact metric space; then it is totally bounded. (I have been able to prove this.)

My doubt lies in the following part that I have tried to prove:

Let $X$ be a compact metric space; then $X$ is complete. We choose a Cauchy sequence $\{x_n\}$ in $(X,d)$.

Let $A$ be a subset of $(X,d)$ such that $A = \{x_n\}$. Then $A$ will be totally bounded too, as $A \subset X$.

Let, for an $\epsilon > 0$, there exist points $\{x_1', \cdots, x_n'\}$ such that $B_d(x_k', \epsilon)$ contains infinitely many points of $A$. Then, as $\{x_n\}$ is a Cauchy sequence, we can conclude that $\{x_n\}$ converges to $x_k'$. Also, as $(X,d)$ is a metric space, $\{x_n\}$ can converge to only one point, and we have shown that $x_k'$ is a limit point of $\{x_n\}$.
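
For comparison, the bridging fact my argument seems to need is the standard one (a sketch, not taken from my textbook): if a Cauchy sequence has a subsequence $\{x_{n_j}\}$ converging to a point $x$, then the whole sequence converges to $x$. Given $\epsilon > 0$, choose $N$ so that $d(x_m, x_n) < \epsilon/2$ for all $m, n \geq N$, and choose $j$ with $n_j \geq N$ and $d(x_{n_j}, x) < \epsilon/2$; then for all $n \geq N$,
$$ d(x_n, x) \leq d(x_n, x_{n_j}) + d(x_{n_j}, x) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. $$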

Another proof for interior of $\{(x,y) \mid y=0\} \cup \{(x,y) \mid x>0 \text{ and } y \neq 0\}$

Consider the set $$C = A \cup B = \{(x,y) \mid y=0\} \cup \{(x,y) \mid x>0 \text{ and } y \neq 0\}.$$ I want to prove $\text{Int } B = \text{Int } C$.
My attempt:

Clearly $\text{Int } B \subset \text{Int } C$, as $B \subset C$. To prove the converse inclusion, let $(x,y) \in \text{Int } C$; then there must exist an $r > 0$ such that $B_d((x,y), r) \subset C = A \cup B$.

Actually I want to show that $B_d((x,y), r) \cap A = \emptyset$, which implies that $B_d((x,y), r) \subset B$.
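
(Though, writing this up, I notice a possible problem with the claim itself: for a point such as $(1,0)$, any ball of radius less than $1$ lies inside $C$, so $(1,0) \in \text{Int } C$, yet $(1,0) \notin B$. So perhaps the statement to prove is really $\text{Int } C = \{(x,y) \mid x > 0\}$ rather than $\text{Int } B = \text{Int } C$.)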

But I cannot proceed further. Please help me.

functions – Help with Strong Induction proof



real analysis – Help understanding the proof of Cauchy’s criterion for uniform convergence of a sequence of functions

Before asking the question I would like to say that I searched Math Stack Exchange first, and the answers to similar questions didn’t really convince me, so please don’t just write “go see this answer” and leave.

That said, my question is: I’ve seen two types of proofs for this theorem. The first uses a limit argument at a certain point, and I don’t understand why we are allowed to do that: the limit in question is a pointwise limit, so one needs to fix $x$ in order to make it work, and yet the proof then claims that the obtained inequality holds for every $x$ in the set on which the sequence $\{f_n(x)\}$ is uniformly Cauchy. Why can this be done? To me it feels like it’s not rigorous.

Is there another epsilon-$N$ proof that I could understand? I saw a couple of them on this site, but I couldn’t figure out why they worked, since they share a similar flaw with the proof above: they use an argument that to me is justified only when $x$ is fixed, while $x$ is clearly not fixed (or, if they do fix it, they then claim that the result holds for every $x$ in the set on which $\{f_n\}$ is uniformly Cauchy).
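
For concreteness, the step that bothers me looks like this (my paraphrase of the proofs I have seen, with $E$ the set in question): by the uniform Cauchy condition, for every $\epsilon > 0$ there is an $N$ such that
$$ |f_n(x) - f_m(x)| \leq \epsilon \quad \text{for all } m, n \geq N \text{ and all } x \in E. $$
Fixing $n \geq N$ and $x$, and letting $m \to \infty$ (the pointwise limit $f(x) = \lim_{m} f_m(x)$ exists because $\{f_m(x)\}$ is a Cauchy sequence of numbers), the proof concludes
$$ |f_n(x) - f(x)| \leq \epsilon \quad \text{for all } n \geq N \text{ and all } x \in E, $$
with the same $N$ as before.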

proof techniques – Mutual Friends in a Network?

I always seem to have trouble finding a formal way to analyze this (be it through proofs or otherwise).

The problem statement is as such:

If A and B are friends, and B and C are friends, then A and C are friends too.

In a simple network like the following, this makes complete sense:

1 — 2 — 3

Analysis:
We can see that 1 and 2 are friends, and 2 and 3 are friends. It follows from the problem statement that 1 and 3 must be friends too. This is the most generic case for a problem like this.

Where I get confused is in the following network:

1 — 2 — 3 — 4

Analysis:
We can see that 1 and 2 are friends, and 2 and 3 are friends; therefore, 1 and 3 must be friends. Also, since 2 and 3 are friends and 3 and 4 are friends, 2 and 4 must also be friends.

Since 1 and 2 are already friends, would it follow from our conclusion of the last sentence (that 2 and 4 are friends) that 1 and 4 must also be friends?

Moving forward into a bigger picture, any group of connected nodes would also all be friends?
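
My guess at making this precise (a sketch, in case it helps frame the question): induction on the length of the connecting path. If $v_0, v_1, \ldots, v_k$ is a chain with each consecutive pair friends, then assuming inductively that $v_0$ and $v_{k-1}$ are friends, the rule applied to $v_0$, $v_{k-1}$, $v_k$ gives that $v_0$ and $v_k$ are friends. In other words, the rule says friendship is transitive, so any two nodes in the same connected group would be friends.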

What’s the best way to analyze this?

computer science – Pumping lemma proof for this language: $L = \{a^i b^j c^k \mid k = i \cdot j\}$

I am having trouble showing that this language is not context-free with the pumping lemma.

As a word I chose $a^p b^p c^{p^2}$, where $p$ is the pumping length.

I solved all the cases but one, which is:
“$vxy$ contains $b$’s and $c$’s.”

I came up with the following approach to show that the resulting word cannot be in the language:

$$ p \cdot (p + |v|) \neq p \cdot p + |y|, $$

which is equivalent to:

$$ p \cdot |v| \neq |y|. $$

Because $p \geq 1$, $|vy| \geq 1$, and $|vxy| \leq p$, the resulting word is not in the language.
Is this reasoning correct?
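
To spell out why $p \cdot |v| \neq |y|$ always holds here (my attempt, assuming $v$ consists of $b$’s and $y$ of $c$’s, as in the count above, and using the standard conditions $|vy| \geq 1$ and $|vxy| \leq p$): if $|v| = 0$, then $|y| \geq 1 > 0 = p \cdot |v|$; and if $|v| \geq 1$, then
$$ p \cdot |v| \geq p \geq |vxy| \geq |v| + |y| > |y|. $$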

Proof that $\lim_{n \to \infty} (A(P_n) - A(p_n)) = 0$ for the exhaustion method

This is the last question on an exhaustion method problem set.

I need to show
$$ \lim_{n \rightarrow \infty} n\left(\sin\left(\frac{2\pi}{n}\right) - \tan\left(\frac{2\pi}{n}\right)\right) = 0 $$

without using L’Hospital’s rule.
Any pointers on where to start?
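
One possible starting point (my suggestion only, not part of the problem set): the identity $\tan\theta - \sin\theta = \tan\theta \, (1 - \cos\theta) = 2 \tan\theta \, \sin^2(\theta/2)$ with $\theta = 2\pi/n$ gives
$$ n\left(\sin\frac{2\pi}{n} - \tan\frac{2\pi}{n}\right) = -2n \tan\frac{2\pi}{n} \, \sin^2\frac{\pi}{n}, $$
and the elementary bound $|\sin t| \leq |t|$ then bounds the absolute value by $\frac{2\pi^2}{n} \tan\frac{2\pi}{n}$, which tends to $0$.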