## ct.category theory – Lemma 5.4.5.11 of HTT

In Lemma 5.4.5.11 of HTT, the proof given relies on Lemma 5.4.5.10. However, Lurie seems to apply Lemma 5.4.5.10, which requires the given simplicial set to be contractible, to an arbitrary $$\kappa$$-small simplicial set.

This seeming incongruity was pointed out in a question on MathOverflow two years ago. However, no answer was ever given, so I thought I would re-ask it in a new question (let me know if there is a better way to re-ask an unanswered question).

Is there either
a) a way to salvage the proof given, or
b) a new proof which avoids the issue, or
c) is the proof actually correct (and we’re all being daft)?

## Use pumping lemma (non-regular language) to solve

Use the pumping lemma to show that the following languages are not regular:

$$\{0^m 1^n 0^m \mid m, n \ge 0\}$$

$$\{vv^R \mid v \in \{a,b\}^*\}$$
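For the first language, the standard argument picks $$s = 0^p 1 0^p$$ and shows every legal decomposition pumps out of the language. Here is a small Python sketch of that check (my own illustration, not part of the original exercise; it is empirical evidence for one string, not a proof):

```python
# Illustration only: check that s = 0^p 1 0^p cannot be pumped within
# L = {0^m 1^n 0^m | m, n >= 0}.

def in_L(w: str) -> bool:
    """Membership test for {0^m 1^n 0^m | m, n >= 0}."""
    if any(ch not in "01" for ch in w):
        return False
    if "1" not in w:
        return len(w) % 2 == 0          # w = 0^m 0^m (n = 0)
    a = len(w) - len(w.lstrip("0"))     # leading zeros
    c = len(w) - len(w.rstrip("0"))     # trailing zeros
    mid = w[a:len(w) - c]               # must be a single block of 1s
    return set(mid) == {"1"} and a == c

p = 7
s = "0" * p + "1" + "0" * p
assert in_L(s)

# Every split s = xyz with |xy| <= p and |y| >= 1 puts y inside the first
# 0-block, so pumping up (i = 2) unbalances the two 0-blocks.
for xy_len in range(1, p + 1):
    for x_len in range(xy_len):
        x, y, z = s[:x_len], s[x_len:xy_len], s[xy_len:]
        assert not in_L(x + 2 * y + z)

print("no legal decomposition of s survives pumping")
```

The same enumeration idea works for $$\{vv^R \mid v \in \{a,b\}^*\}$$ with a witness such as $$a^p b b a^p$$.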

## complex analysis – Generalization of Schwarz’s Lemma

I am reading Lectures on Riemann Surfaces by Otto Forster. He says: (p.110)

The following lemma may be viewed as a generalization of Schwarz’s lemma. Let $$D, D'$$ be a pair of open subsets of $$\mathbb{C}$$, where $$D'$$ is a relatively compact subset of $$D$$. For any $$\varepsilon > 0$$, there is a closed vector subspace $$A \subset L^2(D, \mathcal{O})$$ of finite codimension with
$$\lVert f \rVert_{L^2(D')} \leq \varepsilon \lVert f \rVert_{L^2(D)} \quad \text{for all } f \in A.$$

He has already shown that $$L^2(D, \mathcal{O})$$, the space of square-integrable holomorphic functions on $$D$$, forms a Hilbert space under the inner product $$\iint f \overline{g}\, dx\, dy$$; hence the “closed” comment.

What does he mean when he says this generalizes Schwarz’s lemma? How is this related to Schwarz’s lemma?

## Holomorphic Urysohn Lemma

Let $$M, N$$ be two disjoint closed holomorphic submanifolds of $$\mathbb{C}^n$$. Is there a holomorphic map $$f : \mathbb{C}^n \to \mathbb{C}$$ with $$f(M) = 0,\; f(N) = 1$$?

## regular languages – Why does the Pumping Lemma constraint |xy| ≤ p mean that y can’t be 1 in the string 0^p 1^p

I am trying to get my head around the Pumping Lemma to prove a language is non-regular.

I am reading the Sipser text book and he gives the following example.

Let B be the language $$\{0^n 1^n \mid n \ge 0\}$$.

Let $$s = 0^p 1^p$$, where $$p$$ is the pumping length.

I understand that the idea is you can split this string into $$xyz$$ such that $$y$$ can be pumped. It is the constraint $$|xy| \le p$$ that is confusing me.

Sipser notes that due to this constraint, $$y$$ could not equal 01, nor could it equal 1. Why would $$y$$ equaling either of those values violate the given constraint?
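The constraint can be checked mechanically. Here is a small Python sketch (my own illustration, with p = 5 chosen arbitrarily) that enumerates every $$y$$ allowed by the conditions $$|xy| \le p$$ and $$|y| \ge 1$$ for $$s = 0^p 1^p$$:

```python
# Enumerate all substrings y permitted by the pumping-lemma conditions
# |xy| <= p and |y| >= 1 for s = 0^p 1^p. Since x and y together fit in
# the first p symbols, and those are all 0s, y can never contain a 1.
p = 5
s = "0" * p + "1" * p

possible_ys = set()
for xy_len in range(1, p + 1):      # |xy| <= p
    for x_len in range(xy_len):     # |y| = xy_len - x_len >= 1
        possible_ys.add(s[x_len:xy_len])

assert all(set(y) == {"0"} for y in possible_ys)
assert "1" not in possible_ys and "01" not in possible_ys
print(sorted(possible_ys))   # ['0', '00', '000', '0000', '00000']
```

In other words, for $$y$$ to equal 1 or 01, the preceding $$x$$ would have to contain (almost) the whole 0-block, forcing $$|xy| \ge p + 1 > p$$.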

I am generally quite confused by the Pumping Lemma, so I would appreciate any general advice or good resources you can recommend.

Thanks!

## differential equations – Poincaré’s Lemma in the space of tempered distributions

It is well known that if $$f \in \mathcal{D}'(\mathbb{R}^3, \mathbb{R}^3)$$ and $$\textbf{curl}\, f = 0$$, then there exists a $$u \in \mathcal{D}'(\mathbb{R}^3)$$ such that $$\nabla u = f$$.

Question. Does the same result still hold with $$f \in \mathcal{S}'(\mathbb{R}^3, \mathbb{R}^3)$$ and $$u \in \mathcal{S}'(\mathbb{R}^3)$$?

This is a question that I asked some time ago on MSE without receiving a proper answer:
https://math.stackexchange.com/questions/2405993/poincar%c3%a9s-lemma-in-the-space-of-tempered-distributions

## regular languages – How to prove L regular with the Pumping lemma

Let $$\Sigma$$ be the Latin alphabet ($$\{a, b, c, \dots, x, y, z\}$$ – 26 letters).

Given the language
$$L = \{ \alpha \in \Sigma^{*} \mid$$

if $$\alpha$$ contains $$a$$ then $$N_{a}(\alpha) = 4$$,

if $$\alpha$$ contains $$b$$ then $$N_{b}(\alpha) = 8$$,

$$\quad \vdots$$

if $$\alpha$$ contains $$z$$ then $$N_{z}(\alpha) = 2^{27} \,\}$$

Prove that L is regular.

Note: I did some research and found that the pumping lemma might be used here.
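A sketch of the key observation (my own illustration, assuming the intended pattern is that the required count doubles with each letter: $$a \mapsto 2^2$$, $$b \mapsto 2^3$$, ..., $$z \mapsto 2^{27}$$): every word in L has bounded length, so L is finite, and every finite language is regular. Note that the pumping lemma can only show non-regularity, so it cannot establish regularity here.

```python
# Sketch; assumes counts a -> 2**2, b -> 2**3, ..., z -> 2**27 (doubling per letter).
from collections import Counter
import string

REQUIRED = {ch: 2 ** (i + 2) for i, ch in enumerate(string.ascii_lowercase)}

def in_L(w: str) -> bool:
    """A letter may be absent, but if it occurs it must hit its exact count."""
    counts = Counter(w)
    return all(ch in REQUIRED and n == REQUIRED[ch] for ch, n in counts.items())

assert in_L("")                     # no letter occurs at all
assert in_L("a" * 4 + "b" * 8)
assert not in_L("a" * 5)

# Any word in L has length at most the sum of all required counts,
# so L is finite (hence regular).
print(sum(REQUIRED.values()))       # 2**28 - 4 = 268435452
```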

## regular languages – Pumping lemma: why x in |xy| ≤ p?

Looking at the pumping lemma, I’ve noticed that in the decomposition $$s = xyz$$, there seems to be no rule explicitly stated for $$x$$ and $$z$$. If I understand correctly, $$x$$ and $$z$$ are basically whatever lies on the two sides of the substring $$y$$ that we’re pumping, and thus can be arbitrary.

Rule 2 & 3 of the pumping lemma are:

• $$|y| \geq 1$$
• $$|xy| \leq p$$

Since $$|x| = 0$$ and $$|z| = 0$$ seem to be allowed, as they only need to be of non-negative length, we shouldn’t need $$x$$ in rule 2, and it could be rewritten as $$1 \leq |y| \leq p$$.

Are $$x$$ and $$z$$ not just placeholders for whatever is on the two sides of the $$y$$ we’re pumping? Why is $$x$$ in rule 2 if it doesn’t seem to make a difference? If $$x$$ is necessary, why is there no $$|yz| \leq p$$?

## mg.metric geometry – An analogue of the Milnor-Švarc lemma for Busemann boundaries

The Milnor–Švarc lemma is, without doubt, regarded as one of the most important statements in geometric group theory. Roughly, it states that if a group $$G$$ acts geometrically on a geodesic space $$X$$, then $$G$$ (with any word metric) is quasi-isometric to $$X$$; in particular, if $$G$$ and $$X$$ are hyperbolic, the Gromov boundaries of $$G$$ and $$X$$ turn out to be homeomorphic.

Now, suppose that $$(X, d_X)$$ is a complete CAT(0) space, and $$G$$ is a finitely generated group equipped with a geometric action on $$X$$. Let us fix a point $$x_0 \in X$$ with trivial stabilizer, and consider the distance on $$G$$ defined by $$d(g_1, g_2) = d_X(g_1.x_0, g_2.x_0)$$. We need the stabilizer to be trivial so that $$d$$ is a well-defined distance.

Conjecture: Fix a point $$x_0 \in X$$. The restriction map $$r : \partial_B(X, d_X) \rightarrow \partial_B(G, d)$$, where
$$r(h)(g) = h(g.x_0) \text{ for any } h \in \partial_B(X, d_X),$$
is a homeomorphism of the Busemann boundaries $$\partial_B(X, d_X)$$ and $$\partial_B(G, d)$$. Moreover, if we define the Busemann function and the Busemann cocycle as follows:
$$b_X : X \times \partial X \rightarrow \mathbb{R}, \quad b_X(x, \xi) = \lim\limits_{t \rightarrow \infty}(d(x, \xi(t)) - t),$$
$$c_B : G \times \partial_B(G, d) \rightarrow \mathbb{R}, \quad c_B(g, \xi) := \xi(g^{-1}) = \lim_{x \rightarrow \xi} (d(x, g^{-1}) - d(e, x)),$$
then
$$c_B(g, r(\xi)) = b_X(g^{-1}.x_0, \xi).$$

Keep in mind that $$r$$ is well-defined because for complete CAT(0) spaces the Gromov and Busemann boundaries are homeomorphic.

This statement, if true, looks quite natural and should be well-known, but I failed to find this result in standard geometric group theory textbooks.

Since I have not found a proof of this fact, I will attempt to prove it myself.

For any $$y \in X$$ define a function $$h_y(x) = d_X(x, y) - d_X(y, x_0)$$.

Showing that $$r$$ is surjective is not that difficult, because $$\partial_B(X, d_X)$$ is sequentially compact. If a sequence $$(h_{g_i})_{i \in \mathbb{N}}$$, where $$h_{g_i}(x) := d(x, g_i.x_0) - d(g_i.x_0, x_0)$$, converges to $$h$$ in $$\partial_B(G, d)$$, then we can find a subsequence $$(h_{g_{i_k}.x_0})_{k \in \mathbb{N}}$$ which converges in $$\partial_B(X, d_X)$$, and this limit, restricted to the orbit $$Gx_0$$, has to be equal to $$h$$.

To show that $$r$$ is injective (this is the non-trivial part!), we need to prove the following statement: if $$\xi_1, \xi_2$$ are non-asymptotic geodesic rays in $$\partial X$$, then $$\lim_{s \rightarrow \infty}(b(\xi_2(s), \xi_1) + s) = \infty.$$ Because these rays are non-asymptotic, the CAT(0) angle between them is non-trivial, which allows us to use the CAT(0) law of cosines in a nice way, so that we get

$$\begin{gathered} \lim_{t \rightarrow \infty} \sqrt{t^2 + s^2 - 2st \cos( \angle_{x_0}(\xi_1, \xi_2) - \varepsilon)} - t \le \lim_{t \rightarrow \infty} d(\xi_1(t), \xi_2(s)) - t \le \\ \le \lim_{t \rightarrow \infty} \sqrt{t^2 + s^2 - 2st \cos( \angle_{x_0}(\xi_1, \xi_2) + \varepsilon)} - t, \end{gathered}$$

for some very small $$\varepsilon > 0$$ and a large enough $$s > 0$$. Here I refer to Proposition II.9.8(1) in Bridson–Haefliger. Keep in mind that these limits can be computed explicitly, and we finish the argument by taking $$\lim\limits_{s \rightarrow \infty}$$ and applying the squeeze theorem:
$$-s ( \cos( \angle(\xi_1, \xi_2) - \varepsilon) - 1) \le b(\xi_2(s), \xi_1) + s \le -s (\cos( \angle(\xi_1, \xi_2) + \varepsilon) - 1).$$
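For completeness, here is the explicit computation of those limits (my own filled-in step, writing $$\alpha$$ for $$\angle_{x_0}(\xi_1, \xi_2) \mp \varepsilon$$): rationalizing,
$$\sqrt{t^2 + s^2 - 2st\cos\alpha} - t = \frac{s^2 - 2st\cos\alpha}{\sqrt{t^2 + s^2 - 2st\cos\alpha} + t} \xrightarrow[t \rightarrow \infty]{} -s\cos\alpha,$$
so the two outer limits equal $$-s\cos(\angle_{x_0}(\xi_1, \xi_2) \mp \varepsilon)$$, and adding $$s$$ yields the displayed bounds on $$b(\xi_2(s), \xi_1) + s$$.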

Suppose that $$r$$ isn’t injective; then there are distinct horofunctions $$h_1, h_2$$ which coincide on $$Gx_0$$. However, because the fundamental domain is compact and horofunctions are 1-Lipschitz, we get that $$|h_1 - h_2|$$ is a uniformly bounded function on $$X$$. However, $$X$$ is CAT(0), so we can consider the corresponding rays $$\xi_1, \xi_2$$. Due to our assumptions, they are non-asymptotic, and we get
$$\sup_{t} |h_2(\xi_1(t)) - h_1(\xi_1(t))| = \sup_{t} |h_2(\xi_1(t)) + t| = \infty,$$
which contradicts the uniform boundedness of $$|h_1 - h_2|$$.

If this “theorem” is true, then we can use it to explicitly recover the Busemann cocycle for any hyperbolic group acting on a hyperbolic space for which we have a nice description of the Busemann function ($$\mathbb{H}^n$$, for example). Also, we could use this statement as a tremendously inefficient way to check whether a particular non-elementary hyperbolic group $$G$$ is not CAT(0): consider all left-invariant distances $$d$$ on $$G$$, and prove that for any such $$d$$ the Busemann boundary $$\partial_B(G, d)$$ is not homeomorphic to its Gromov boundary $$\partial G$$. Of course, such an example isn’t known…

So, here are my questions: is this a known generalization, and are there better applications? I do admit that due to the fact that the Busemann boundary is not a quasi-isometric invariant of a metric space, we don’t have as much freedom and flexibility as in the hyperbolic setting. However, maybe we can use a statement like this in a different way?

## linear algebra – How to prove strict complementary slackness by means of Farkas’ lemma

This question relates to pages 89 and 96 of the following text (pages 102 and 109 of the pdf):

https://promathmedia.files.wordpress.com/2013/10/alexander_schrijver_theory_of_linear_and_integerbookfi-org.pdf

The author gives the following variant of Farkas’ lemma:

Let $$A$$ be a matrix and $$b$$ a vector. Then the system of linear inequalities $$Ax \le b$$ has a solution $$x$$ if and only if $$yb \ge 0$$ for each row vector $$y \ge 0$$ with $$yA = 0$$.

He then proves that:

for a bounded and feasible linear program

$$\max \{ cx \mid Ax \le b \} = \min \{ yb \mid y \ge 0, yA = c \}$$

if the maximum problem has no optimum $$x_0$$ with $$a_i x_0 \lt b_i$$, then the minimum problem must have an optimum $$y_0$$ with a positive $$i$$th component.

The proof starts as follows:

It is assumed that there is no optimum solution $$x_0$$ for the maximum problem with $$a_i x_0 \lt b_i$$.
Then, if $$\delta$$ is the optimum value of the maximum and minimum problems, $$Ax \le b,\ cx \ge \delta$$ implies $$a_i x \ge \beta_i$$. So, by Corollary 7.1e (Farkas’ lemma as given above), $$yA - \lambda c = -a_i$$ and $$yb - \lambda \delta \le -\beta_i$$ for some $$y, \lambda \ge 0$$.

My question is:

How does the last sentence (“So”) follow from what precedes it?

Specifically, how can “$$Ax \le b,\ cx \ge \delta$$ implies $$a_i x \ge \beta_i$$” be cast in a form such that application of Farkas’ lemma yields $$yA - \lambda c = -a_i$$ and $$yb - \lambda \delta \le -\beta_i$$ for some $$y, \lambda \ge 0$$?

Neither the positive form (“$$Ax \le b,\ -cx \le -\delta,\ -a_i x \le -\beta_i$$ has a solution”) nor the negative form (“$$Ax \le b,\ -cx \le -\delta,\ a_i x \le \beta'_i$$, with $$\beta'_i \lt \beta_i$$, has no solution”) yields exactly the desired inequalities when Farkas’ lemma is applied. In each case either signs or inequality directions differ from those sought.
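For what it is worth, the shape of the conclusion suggests the implication (“affine”) form of Farkas’ lemma, which Schrijver also states in the same chapter (I may be misreading which variant is invoked):

If $$Ax \le b$$ is feasible, then $$Ax \le b$$ implies $$dx \le e$$ if and only if there exists $$y \ge 0$$ with $$yA = d$$ and $$yb \le e$$.

Applying this to the combined system $$Ax \le b,\ -cx \le -\delta$$ with the conclusion $$-a_i x \le -\beta_i$$ would produce multipliers $$(y, \lambda) \ge 0$$ with
$$yA - \lambda c = -a_i, \qquad yb - \lambda \delta \le -\beta_i,$$
which is exactly the sought shape; but I do not see how to obtain this from Corollary 7.1e as quoted.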

Any help would be greatly appreciated.