## let \$X,X_n\$ be random variables such that \$(X_n-X)^2 \underset{p}{\to} 0\$; prove \$X_n^2\$ converges to \$X^2\$

Let $$X, X_n$$ be random variables such that $$(X_n-X)^2 \underset{p}{\to} 0$$.
Prove that $$X_n^2$$ converges in probability to $$X^2$$.

I’ve tried using the definition of convergence in probability but didn’t get far.
Any hints?
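Not a full solution, but the standard algebraic identity that usually unlocks this kind of problem:

```latex
X_n^2 - X^2 = (X_n - X)^2 + 2X(X_n - X),
\qquad\text{so}\qquad
\lvert X_n^2 - X^2 \rvert \le (X_n - X)^2 + 2\lvert X\rvert\,\lvert X_n - X\rvert .
```

The first term tends to $$0$$ in probability by hypothesis; for the second, note that $$(X_n-X)^2 \underset{p}{\to} 0$$ implies $$X_n - X \underset{p}{\to} 0$$, and control the factor $$\lvert X\rvert$$ by working on the event $$\{\lvert X\rvert \le M\}$$, whose probability can be made at least $$1-\delta$$ by taking $$M$$ large.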

## measure theory – Prove that there exists \$g \in C_c(X)\$ such that \$\|g\|_{\infty} \leq \|f\|_{\infty}\$

Let $$(X,\mathcal B, \mu)$$ be a Borel measure space, where $$X$$ is a locally compact Hausdorff space and $$\mu$$ is a regular Borel measure. Let $$f \geq 0$$ be a non-negative simple function in $$L^p(\mu)$$, for $$1 \leq p \lt \infty$$. Show that for a given $$\varepsilon \gt 0$$ there exist a Borel measurable subset $$X_0$$ of $$X$$ and $$g \in C_c(X)$$ such that $$g = f$$ on $$X_0$$ and $$\mu (X \setminus X_0) \lt \varepsilon$$. Furthermore, $$\|g\|_{\infty} \leq \|f\|_{\infty}$$.

I used Lusin’s theorem to conclude the first part. But how do I prove that $$\|g\|_{\infty} \leq \|f\|_{\infty}$$? Any help in this regard will be highly appreciated.

Thanks in advance.

## norm – How to prove this fractional limit is \$0\$?

I guess that
$$\frac{\lambda^2-\|y\|_\infty^2}{\|y\|_\infty \|x\|_1-\langle y, x\rangle} \xrightarrow{\,x \to x^*\,} 0$$
under the assumptions:

• $$f: x\in \mathbb R^n \longmapsto y\in \mathbb R^n$$ is continuous,
• $$\|y\|_\infty \xrightarrow{\,x \to x^*\,} \lambda$$ and $$\|y\|_\infty \leq \lambda$$, i.e. the numerator is always non-negative and converges to zero,
• $$\|y\|_\infty \|x\|_1-\langle y, x\rangle \xrightarrow{\,x \to x^*\,} 0$$, i.e. the denominator is always non-negative and converges to zero.

I have the intuition that the limit holds since the numerator has order $$2$$. Could anyone give me some hints on how to prove or disprove it?

## logic – Axioms used to prove Gödel’s second incompleteness theorem

Gödel’s second incompleteness theorem concerns the question of proving consistency of certain axiomatic systems.

But what set of axioms is used to prove Gödel’s second incompleteness theorem itself? Is there a possibility that the set itself is proven inconsistent in the future?

## sequences and series – How to prove that these partial binomial sums are zero?

I am trying to prove that the following equation is equal to zero.
$$0= \sum_{j=J+1}^N \Big(j (1-q)+ (j-J) (q N-j) \Big) \cdot q^{j} (1-q)^{N -j} \binom{N}{j}$$

where
$$J,N \in \mathbb{Z}^+$$ with $$J \lt N$$, and
$$0 \lt q \lt 1$$ is a probability.

Numerical simulations (see link) suggest that this is true.
Tips and hints (or solutions) are very welcome!
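Not a proof, but the claim can be checked without floating-point noise by evaluating the sum in exact rational arithmetic. A small sketch using only the standard library (the function name `partial_sum` is my own, and the sample parameters are arbitrary):

```python
from fractions import Fraction
from math import comb

def partial_sum(N, J, q):
    """Exact evaluation of
    sum_{j=J+1}^{N} ( j(1-q) + (j-J)(qN - j) ) * q^j * (1-q)^(N-j) * C(N,j)."""
    q = Fraction(q)
    total = Fraction(0)
    for j in range(J + 1, N + 1):
        weight = q**j * (1 - q)**(N - j) * comb(N, j)
        total += (j * (1 - q) + (j - J) * (q * N - j)) * weight
    return total

# The result is exactly 0 (no tolerance needed) for every (N, J, q) tried:
print(partial_sum(10, 3, Fraction(1, 4)))   # -> 0
print(partial_sum(25, 7, Fraction(9, 10)))  # -> 0
```

That the sum vanishes identically is strong evidence for a telescoping argument: writing $$qN - j = q(N-j) - j(1-q)$$ and using $$(j+1)(1-q)\binom{N}{j+1}q^{j+1}(1-q)^{N-j-1} = q(N-j)\binom{N}{j}q^{j}(1-q)^{N-j}$$ makes consecutive terms cancel.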

## Prove the identity \$1-\tan^2 A=\cos 2A \sec^2 A\$

I have to prove this identity and I keep getting stuck because I don’t know which side to work with or how to continue. I changed $$\tan^2 A$$ to $$\sin^2 A/\cos^2 A$$, but from there I am stuck. If someone could show me the steps, that would be great, along with some tips on proving identities in general.
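One standard route is to start from the right-hand side and apply the double-angle formula $$\cos 2A = \cos^2 A - \sin^2 A$$:

```latex
\cos 2A \,\sec^2 A
= \frac{\cos^2 A - \sin^2 A}{\cos^2 A}
= 1 - \frac{\sin^2 A}{\cos^2 A}
= 1 - \tan^2 A .
```

A general tip: convert everything to sines and cosines and work from the side with more structure toward the simpler one.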

## hash – How can I prove or disprove that the following function is a bijection?

For a research project, I tried to prove or disprove that a function called xxhash128_low is a bijection from 64-bit unsigned integers to 64-bit unsigned integers. I have shown that it suffices to prove that the following critical code is a bijection:

``````Input: a 64-bit integer x
0. Let c = 0x9E3779B185EBCB87.
1. Let x_low  = the low 8 bytes of c*x.
2. Let x_high = the high 8 bytes of c*x.
3. x_high = x_high + 2*x_low.
4. y = x_high shifted right by 3 bits.
5. Return x_low XOR y.
``````
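For experimenting, the pseudocode can be modelled directly in Python, where unbounded integers make splitting the 128-bit product easy (the function name `mix` is my own label for this critical step, not an identifier from the xxHash sources):

```python
MASK64 = (1 << 64) - 1
C = 0x9E3779B185EBCB87  # the odd constant c from the pseudocode

def mix(x: int) -> int:
    """One application of the critical step to a 64-bit unsigned integer x."""
    prod = C * x                    # full 128-bit product of c and x
    x_low = prod & MASK64           # low 8 bytes of c*x
    x_high = (prod >> 64) & MASK64  # high 8 bytes of c*x
    x_high = (x_high + 2 * x_low) & MASK64
    y = x_high >> 3
    return x_low ^ y
```

A collision search over a range of inputs is only a necessary check, never a proof, but it is handy for falsification attempts. Since x ↦ x_low is already a bijection (c is odd), the whole map is a bijection iff x_low ↦ x_low XOR f(x_low) is.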

What I have proved so far: returning only the low 64 bits, x_low, can be shown to be a permutation (multiplication by the odd constant c is invertible mod $$2^{64}$$). Also, y can be expressed as a function of x_low (i.e., y = f(x_low)).

## discrete mathematics – Prove that a strictly decreasing function \$f:\Bbb R \to \Bbb R\$ is one-to-one

I would like to prove that a strictly decreasing function $$f:\Bbb R \to \Bbb R$$ is one-to-one.

We want to show that $$f(a) = f(b)$$ implies $$a = b$$ for all $$a, b \in \Bbb R$$.

One proof I saw online was as follows (I did essentially the same proof using the contrapositive), but I just want to get a better understanding of why the author did the proof this way:

Proof:

Since the function is strictly decreasing, $$x \lt y \implies f(x) \gt f(y)$$. To prove that it is a one-to-one function, we need to prove that $$f(a)=f(b) \implies a=b$$.

Let $$f(a) = f(b)$$.

Case 1: Consider when $$a \lt b$$; then this implies that $$f(a) \gt f(b)$$ since $$f(x)$$ is strictly decreasing. This implies that $$f(a) \ne f(b)$$, therefore $$a \ge b$$.

Case 2: Consider when $$a \gt b$$; then this implies that $$f(a) \lt f(b)$$ since $$f(x)$$ is strictly decreasing. This implies that $$f(a) \ne f(b)$$, therefore $$a = b$$.

Questions:

1. It seems the proof used here is proof by cases, is it not?
2. Why was it assumed, in Case 1, that $$f(a) = f(b)$$, although what is given in the question is that $$f(x)$$ is strictly decreasing?
3. Why was it concluded, in Case 1, that since $$f(a) \ne f(b)$$, therefore $$a \ge b$$?
4. I assume it was finally concluded that $$a = b$$ because no other scenario is left for $$f(a) = f(b)$$ except equality of $$a$$ and $$b$$.
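On questions 1 and 4: yes, it is a proof by cases, and the final conclusion rests on trichotomy of the real order. The skeleton of the argument is:

```latex
a < b \;\Rightarrow\; f(a) > f(b), \text{ contradicting } f(a)=f(b);\\
a > b \;\Rightarrow\; f(a) < f(b), \text{ contradicting } f(a)=f(b);\\
\text{exactly one of } a<b,\; a=b,\; a>b \text{ holds, hence } a=b.
```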

## How to prove Pasch’s Axiom?

I am trying to show that

If a line enters a triangle at a vertex, then the line intersects the opposite side.

is the same as

If a line enters a triangle at a side without intersecting the opposite vertex, then the line intersects one of the other two sides.

## logic – how to prove equivalence of two substitutions by induction

I’m trying to prove the following reduction
t{x:=u}{y:=v} = t{y:=v}{x:=u{y:=v}}
We have the following assumptions:
(1) $$x \neq y$$,

(2) $$x$$ is not a free variable of $$v$$ (i.e. $$x \notin \mathrm{fv}(v)$$).

My idea is to do it by induction, but I’m a bit stuck on the base case. I would appreciate it if someone could explain to me how to do such a proof. I’m including what I tried (but it’s wrong/incomplete):

``````*Base case*: Assuming t is just a variable `m` (t=m),
#on the left side we have
if m = x ==> u{y:=v}  (if u=y then `v`  else `u`)
else   ==> m{y:=v} (if m=y then `v`  else `m`)

#On the right side
if m=y ==> v{x:=u{y:=v}}  (if u=y then `v{x:=v}`  else `u{x:=u}` ( so we get either (v=x==> `v` else `v`) or else u=x==> `u` or else `u`) )
else   ==> m{x:=u{y:=v}} (if m=y then `m{x:=v}`  else `m{x:=u}`( so we get either v or m))
``````

I understand we end up getting the same four branches, but is that really considered a proof? Is this the proper way to write such a proof and conclude the base case? Also, the given assumptions didn’t help much here, so I think I’m missing the part where I need to use them…
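For comparison, here is how the variable case of the standard substitution lemma is usually written out, which is exactly where the two assumptions enter:

```latex
m = x:\quad x\{x{:=}u\}\{y{:=}v\} = u\{y{:=}v\}
         = x\{y{:=}v\}\{x{:=}u\{y{:=}v\}\} \quad (\text{uses } x \neq y);\\
m = y:\quad y\{x{:=}u\}\{y{:=}v\} = v
         = y\{y{:=}v\}\{x{:=}u\{y{:=}v\}\} \quad (\text{uses } x \notin \mathrm{fv}(v));\\
m \notin \{x,y\}:\quad \text{both sides reduce to } m.
```

Each chain is two substitution steps; in the $$m = x$$ case $$x\{y{:=}v\} = x$$ needs $$x \neq y$$, and in the $$m = y$$ case $$v\{x{:=}u\{y{:=}v\}\} = v$$ needs $$x \notin \mathrm{fv}(v)$$, which is why both assumptions are indispensable already in the base case.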

I think once the base case is proven,

we can do the following

case t = ($$t_{1}t_{2}$$)

``````t{x:=u}{y:=v} ==> t1t2{x:=u}{y:=v}
==> t1{x:=u}{y:=v} t2{x:=u}{y:=v} and we have just proven in the base case that:
t1{x:=u}{y:=v} = t1{y:=v}{x:=u{y:=v}}  and  t2{x:=u}{y:=v} = t2{y:=v}{x:=u{y:=v}}
so  t1{x:=u}{y:=v} t2{x:=u}{y:=v} = t1{y:=v}{x:=u{y:=v}} t2{y:=v}{x:=u{y:=v}}
= t1t2{x:=u}{y:=v} = t1t2{y:=v}{x:=u{y:=v}}

``````

Now the only part left is if $$t= \lambda m . t$$

so we have $$(\lambda m . t)$${x:=u}{y:=v}
this can be directly rewritten as $$\lambda m . t$${x:=u}{y:=v}, which is the base case again…

Could someone please help me finish this proof correctly and explain to me the right way to do it?

Thanks in advance