## On the Hahn-Banach Theorem

The Hahn–Banach Theorem on extensions of linear forms is well known for real or complex vector spaces. Are there examples of vector spaces $$E$$ over a field $$\mathbb F$$ such that some linear form defined on a subspace of $$E$$ cannot be extended to the whole of $$E$$?

## turing machines – Rice’s theorem extension

I’ve tried to solve this question below but I got stuck.

Let $$P$$ be a non-trivial property. Prove that the following extension of Rice's theorem holds:

If $$L_1 \in P$$ and $$L_2 \in \mathrm{RE} \setminus P$$ are such that $$L_1 \subset L_2$$, then $$L_P \notin \mathrm{RE}$$,

where $$L_P = \left\{ \langle M \rangle \mid L(M) \in P \right\}$$.

## sequences and series – Rudin Theorem 3.4(a)

I believe I understand the proof, but I just want to be sure I understand fully what Rudin is saying.

The theorem is:

Suppose $$x_n \in \mathbb{R}^k$$ and $$x_n = (a_{1,n}, \ldots, a_{k,n})$$. Then $$\{x_n\}$$ converges to $$x = (a_1, \ldots, a_k)$$ if and only if $$\lim\limits_{n \to \infty} a_{j,n} = a_j$$ for $$1 \leq j \leq k$$.

The forward direction is more or less clear, though I have one small question. Rudin writes that
$$|a_{j,n} - a_j| \leq |x_n - x|.$$
I assume the right-hand side is a vector norm while the left-hand side is the absolute value of a difference of scalars, because the inequality comes from taking the square root of a square. Is that right?

The backward direction is a bit more confusing. Rudin’s proof, replicated verbatim, is:

Conversely, if (2) holds, then to each $$\epsilon > 0$$ there corresponds an integer $$N$$ such that $$n \geq N$$ implies $$|a_{j,n} - a_j| < \frac{\epsilon}{\sqrt{k}}$$. Hence $$n \geq N$$ implies
$$|x_n - x| = \left(\sum\limits_{j=1}^k |a_{j,n} - a_j|^2 \right)^{1/2} < \epsilon,$$
so that $$x_n \to x$$.

Here is my confusion: I think Rudin has skipped a step. He picks an $$N$$ for only a single $$j$$, but not for each $$j$$. It seems to me that we should, for each $$j$$, pick $$N_j$$ so that $$n \geq N_j$$ implies $$|a_{j,n} - a_j| < \frac{\epsilon}{\sqrt{k}}$$, and then set $$N = \max(N_1, \ldots, N_k)$$. Otherwise, Rudin is in some way bounding the above sum using a single index, which seems peculiar.

Would I be correct that Rudin has actually done what I just suggested, but been silent about it in the write-up?
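For what it's worth, the $$N = \max(N_1, \ldots, N_k)$$ step can be checked numerically. Here is a minimal sketch with a made-up sequence $$x_n = (1/n,\, 1 + 2/n) \to (0,1)$$ in $$\mathbb{R}^2$$ (the sequence and tolerance are illustrative, not from the text):

```python
import math

# Componentwise convergence implies norm convergence via N = max(N_1, ..., N_k).
# The sequence x_n = (1/n, 1 + 2/n) -> x = (0, 1) is made up for illustration.

def x_n(n):
    return (1.0 / n, 1.0 + 2.0 / n)

x = (0.0, 1.0)
k = 2
eps = 1e-3

def find_Nj(j):
    """Smallest N_j with |a_{j,n} - a_j| < eps/sqrt(k) for all n >= N_j
    (valid here because the componentwise errors decrease monotonically)."""
    n = 1
    while abs(x_n(n)[j] - x[j]) >= eps / math.sqrt(k):
        n += 1
    return n

N = max(find_Nj(j) for j in range(k))

# For n >= N, the Euclidean norm bound from Rudin's proof holds.
for n in range(N, N + 1000, 97):
    norm = math.sqrt(sum((x_n(n)[j] - x[j]) ** 2 for j in range(k)))
    assert norm < eps
```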

## real analysis – $f(b) - f(a) = f'(c)(b-a)$ theorem name

## complex analysis – calculate $\int_{0}^{\infty} e^{-t^{2}} \sin\left(t^{2}\right) dt$ by residue theorem

I am trying to calculate $$\int_{0}^{\infty} e^{-t^{2}} \sin\left(t^{2}\right) dt$$ by the residue theorem. What I did, according to the hint I got, is take a path as follows

while it has angle $$\theta = \frac{\pi}{8}$$ and it goes from the origin to $$R$$, like $$(0,R)$$.
I showed that $$\int_{(\eta_R)} f(z) = -e^{-\pi i/8} \int_{(0,R)} f(z)$$,
and now I just need to take the limit $$R \rightarrow \infty$$ and calculate $$\int_{(\gamma_R)} f(z)$$, which is a bit of a problem because I don't see any poles of that function, but this integral is nonzero.
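Independently of the contour argument, the target value can be sanity-checked numerically. Writing the integrand as $$\operatorname{Im} e^{-(1-i)t^2}$$ and using $$\int_0^\infty e^{-at^2}\,dt = \frac{\sqrt{\pi}}{2\sqrt{a}}$$ gives the closed form that the contour computation should reproduce; a quick Simpson-rule sketch:

```python
import math, cmath

# Numerically estimate I = ∫_0^∞ e^{-t^2} sin(t^2) dt and compare with the
# closed form Im[ sqrt(pi) / (2 sqrt(1 - i)) ], obtained from the Gaussian
# integral ∫_0^∞ e^{-a t^2} dt = sqrt(pi)/(2 sqrt(a)) with a = 1 - i.

def integrand(t):
    return math.exp(-t * t) * math.sin(t * t)

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# The tail beyond t = 8 is below e^{-64}, so truncating there is harmless.
approx = simpson(integrand, 0.0, 8.0, 4000)
exact = (math.sqrt(math.pi) / (2 * cmath.sqrt(1 - 1j))).imag

print(approx, exact)  # both ≈ 0.28518
```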

## nt.number theory – Optimal exponent in Dirichlet’s theorem on diophantine approximation

Let $$\vec x = (x_1,x_2,\dots, x_k) \in \mathbb{R}^k$$.
Dirichlet's theorem guarantees that for each $$N$$, there exists $$(n_0,n_1,n_2,\dots,n_k) \in \mathbb{Z}^{k+1} \setminus \{\vec 0\}$$ with $$\max(|n_1|,|n_2|,\dots,|n_k|) \leq N$$ and
$$|n_0+ n_1x_1 + n_2 x_2 + \dots + n_k x_k| < N^{-k}.$$
It is known that for almost all $$\vec x$$ this cannot be improved, in the sense that it's not possible to replace the bounds $$N$$ and $$N^{-k}$$ with $$(1-\varepsilon)N$$ and $$(1-\varepsilon)N^{-k}$$ respectively, for a positive constant $$\varepsilon > 0$$. Conversely, the "trivial" example where $$\vec x$$ satisfies an affine relation over $$\mathbb{Q}$$ shows that there certainly are some $$\vec x$$ for which the above can be improved in a rather spectacular manner.

I’m curious if, except for the trivial case mentioned above, the exponent $$k$$ in Dirichlet’s theorem is optimal. More precisely, I wonder if the following is true:

Let $$\vec x = (x_1,x_2,\dots, x_k) \in \mathbb{R}^k$$ and let $$\varepsilon > 0$$. Suppose that for each sufficiently large $$N$$ there exists $$(n_0,n_1,n_2,\dots,n_k) \in \mathbb{Z}^{k+1} \setminus \{\vec 0\}$$ with $$\max(|n_1|,|n_2|,\dots,|n_k|) \leq N$$ and
$$|n_0+ n_1x_1 + n_2 x_2 + \dots + n_k x_k| < N^{-k-\varepsilon}.$$
Then $$1,x_1,x_2,\dots, x_k$$ are linearly dependent over $$\mathbb{Q}$$.

As a rationale, let's consider the case $$k = 1$$. In this case, we have a single real number $$x \in \mathbb{R}$$ with the property that for each $$N$$ there exists $$n \leq N$$ such that $$\Vert n x \Vert < N^{-1-\varepsilon}$$, and we claim that $$x$$ needs to be rational. (Here $$\Vert t \Vert$$ denotes the distance of $$t$$ from $$\mathbb{Z}$$.) Suppose on the contrary that $$x$$ is irrational, let $$x = (a_0;a_1,a_2,\dots)$$ be the continued fraction expansion of $$x$$, and let $$p_1/q_1, p_2/q_2, \dots$$ be the convergents. Then, setting $$N = q_{i+1} - 1$$ and recalling that the convergents are the best rational approximations of the second kind, we see that $$\Vert q_i x \Vert < q_{i+1}^{-1-\varepsilon}$$. On the other hand, we have the estimate $$(q_{i+1}+ q_i)^{-1} < \Vert q_i x \Vert < q_{i+1}^{-1}$$. This would imply that $$(q_{i+1}+ q_i)^{-1} < q_{i+1}^{-1-\varepsilon}$$, which is impossible for sufficiently large $$i$$.

The basic intuition, as far as I can tell, is that if for a specific threshold $$N_0$$ we have an approximation that is much better than expected, then we can use it to describe all reasonable approximations for a larger threshold $$N_1$$ and none of those approximations is particularly good (above, $$N_0 = q_i$$ and $$N_1 = q_{i+1}-1$$). I’m not sure how (and if) this idea works for $$k geq 2$$, and I suspect the above conjecture is either well-known or demonstrably false.
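For the $$k=1$$ argument above, the two-sided estimate $$(q_{i+1}+q_i)^{-1} < \Vert q_i x \Vert < q_{i+1}^{-1}$$ can be observed numerically. Here is a small sketch using the convergents of $$\sqrt 2$$ (chosen purely for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
x = Decimal(2).sqrt()  # sqrt(2) has continued fraction [1; 2, 2, 2, ...]

# Denominators q_i of the convergents p_i/q_i, via q_{i+1} = 2 q_i + q_{i-1}.
q = [1, 2]
while len(q) < 15:
    q.append(2 * q[-1] + q[-2])

def dist_to_Z(t):
    """||t||: distance from t > 0 to the nearest integer."""
    frac = t - int(t)
    return min(frac, 1 - frac)

# Two-sided estimate (q_{i+1} + q_i)^{-1} < ||q_i x|| < q_{i+1}^{-1}.
for i in range(len(q) - 1):
    d = dist_to_Z(q[i] * x)
    assert Decimal(1) / (q[i + 1] + q[i]) < d < Decimal(1) / q[i + 1]
```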

## algorithms – How to solve T(n)=4T(√n/3)+(log n)^2 with the master theorem?

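One standard route (a sketch, assuming the recurrence is $$T(n)=4T(\sqrt n/3)+(\log n)^2$$): substitute $$m = \log_2 n$$, so the recurrence becomes $$S(m) = 4S(m/2 - \log_2 3) + m^2$$. Ignoring the additive shift, $$S(m) = 4S(m/2) + m^2$$ falls under case 2 of the master theorem ($$m^{\log_2 4} = m^2 = \Theta(f(m))$$), giving $$S(m) = \Theta(m^2 \log m)$$, i.e. $$T(n) = \Theta((\log n)^2 \log\log n)$$. A rough numeric check of that guess:

```python
import math

# Evaluate the recurrence directly (the base case for small n is an arbitrary
# constant) and compare growth against the guess (log2 n)^2 * log2(log2 n).

def T(n):
    if n < 9:
        return 1.0
    return 4 * T(math.sqrt(n) / 3) + math.log2(n) ** 2

def guess(n):
    m = math.log2(n)
    return m * m * math.log2(m)

r1 = T(1e100) / guess(1e100)
r2 = T(1e300) / guess(1e300)
print(r1, r2)  # ratios of the same order of magnitude support the guess
```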

## algebraic topology – Hatcher Exercise 1.2.8 via the van Kampen theorem

Hatcher's Exercise 1.2.8 is the following: Compute the fundamental group of the space $$X$$ obtained from two tori $$S^1 \times S^1$$ by identifying a circle $$S^1 \times \{x_0\}$$ in one torus with the corresponding circle $$S^1 \times \{x_0\}$$ in the other torus.

There are solutions on this site already, as in the discussion here, where the original poster mentions that one needs open sets in order to apply the van Kampen theorem. I’d like to follow up on this thought with the solution I found here (and also shown below).

Here are my questions about the provided solution:

• What allows us to take an open neighborhood of $$S^1 \times \{x_0\}$$ that deformation retracts onto $$S^1 \times \{x_0\}$$? Does any space admit such an open neighborhood that deformation retracts onto it? Or is there something special about $$S^1 \times \{x_0\}$$ that admits this open neighborhood?
• Applying the van Kampen theorem here requires $$X$$ to be the union of two path-connected open sets with path-connected intersection. How is $$X$$ being written as a union of two path-connected open sets here? Is $$X = (T_1 \cup U_1) \cup (T_2 \cup U_2)$$? If so, how can we see that $$T_1 \cup U_1$$ and $$T_2 \cup U_2$$ are path-connected and open, and have path-connected intersection?
• Below is the statement of the van Kampen theorem in Hatcher. With this in mind, it looks like the normal subgroup $$N$$ in the provided solution is $$\pi_1(S^1)$$. How can one see that this is the case?

Thanks!

## mg.metric geometry – Banach fixed point theorem / convergence squeeze

I am currently investigating an iterative learning algorithm and its convergence time. If we let $$x_1 = g(x_0)$$ and let $$\epsilon := |x_t - x^*|$$ be our desired error bound from the fixed point, then we have
$$t \geq \ln\left( \frac{\epsilon(1-L)}{|g(x_0) - x_0|} \right) \Big/ \ln(L),$$
where $$L$$ is the Lipschitz constant. My question is this: our function is of the form $$g(x) = \frac{1 - s(x)}{C\cdot s(x)}$$, where $$C$$ is some constant and $$s(x)$$ is a function that is not always known. If I can verify that the unknown $$s(x)$$ is sandwiched between two polynomials, does this guarantee that the convergence time of $$g(x)$$ can be bounded as well? For example, if I prove
$$C_1x^{k_1} \leq s(x) \leq C_2x^{k_2},$$
then can I say
$$\text{Conv. time of } C_1x^{k_1} \leq \text{Conv. time of } s(x) \leq \text{Conv. time of } C_2x^{k_2}?$$
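As a sanity check on the time bound itself, here is a minimal sketch with a made-up contraction $$g(x) = \cos(x)/2$$, Lipschitz constant $$L = 1/2$$ (these choices are illustrative and not from the question):

```python
import math

# A priori bound for a contraction: |x_t - x*| <= L^t / (1 - L) * |x_1 - x_0|.
# Setting the right-hand side equal to eps and solving for t gives the
# t-bound quoted above.

g = lambda x: math.cos(x) / 2   # |g'(x)| = |sin(x)|/2 <= 1/2, so L = 1/2
L = 0.5
x0 = 0.0
eps = 1e-6

t_bound = math.ceil(math.log(eps * (1 - L) / abs(g(x0) - x0)) / math.log(L))

x = x0
for _ in range(t_bound):        # iterate exactly t_bound times
    x = g(x)

x_star = x0
for _ in range(200):            # high-accuracy reference fixed point
    x_star = g(x_star)

print(t_bound, abs(x - x_star) <= eps)  # prints: 20 True
```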

## algebra precalculus – Function composition under Binomial Theorem

We are asked whether the binomial theorem works for composition of functions.

I am given the answer, but I need to understand why.

The answer and a bit of given explanation:

- it does not hold
- using the distributive property, (a+b)^2 must be a^2+ab+ba+b^2
- the binomial theorem says it will be a^2+2ab+b^2
- therefore the binomial theorem holds if the operation is commutative under multiplication, i.e. ab = ba
- it is understood that (fog)^2 = fogofog, and the same for gof; therefore (fog)^2 is not equal to (gof)^2
- (fog)^2 may also be understood as a multiplication operation, namely (fog)(fog)
- we combine both definitions here

Now, my three confusions:

(1) How are we allowed to just mix the definitions f^2 = f(f(x)) and f^2 = f(x)·f(x) here? Is there a rule?

(2) How can fog and gof play the roles of ab and ba, if they are never multiplied with each other, only squared? It seems like aa and bb is all we get.

(3) How can the functions f(g(x)) = a and g(f(x)) = b be written in the form a+b?
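A concrete numeric check that the binomial-style expansion fails under composition, using the made-up pair f(x) = x^2 and g(x) = x:

```python
# If the binomial theorem held under composition, then with h = f + g
# (pointwise sum) we would have h∘h = f∘f + 2·(f∘g) + g∘g. Test one point.

f = lambda x: x * x
g = lambda x: x

h = lambda x: f(x) + g(x)                 # (f+g)(x) = x^2 + x
lhs = h(h(1))                             # ((f+g)∘(f+g))(1) = (1+1)^2 + (1+1)
rhs = f(f(1)) + 2 * f(g(1)) + g(g(1))     # "binomial expansion" at x = 1

print(lhs, rhs)  # prints: 6 4 — the expansion fails
```

Note that only right-distributivity holds for composition: (f+g)∘h = f∘h + g∘h pointwise, but h∘(f+g) ≠ h∘f + h∘g in general, which is exactly why the cross terms cannot be collected into 2ab.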