real analysis – How to get the reverse Hölder inequality for the weak solution of elliptic equations?

complex analysis – Find the smallest positive real number $k$ such that the following inequality holds.

Find the smallest positive real number $k$ such that, given any finite set $z_1,\cdots, z_n$ of complex numbers, all with strictly positive real and imaginary parts, the following inequality holds: $$|z_1+z_2+\cdots+z_n|\geq \frac{1}{k}(|z_1|+|z_2|+\cdots+|z_n|).$$

Answer: $\sqrt{2}$

My Attempt:

First, we take $n=2$. Let $z_i=r_ie^{i\theta_i}$ for $i=1, 2$. Then $$|z_1+z_2|^2=|r_1e^{i\theta_1}+r_2e^{i\theta_2}|^2=r_1^2+r_2^2+2r_1r_2\cos(\theta_1-\theta_2).$$
Also $|z_1|+|z_2|=r_1+r_2$. Therefore, the given inequality holds if
$$r_1^2+r_2^2+2r_1r_2\cos(\theta_1-\theta_2)\geq \frac{1}{k^2}(r_1+r_2)^2,$$ i.e., if $$(k^2-1)(r_1^2+r_2^2)+2r_1r_2\left(k^2\cos(\theta_1-\theta_2)-1\right)\geq 0,$$
which holds if $k^2\geq 1$ and $$k^2\cos(\theta_1-\theta_2)\geq 1.$$

I am stuck at this point. Please help.
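
A standard route for general $n$ (a sketch, not necessarily the intended one): for $z=a+bi$ with $a,b>0$, Cauchy–Schwarz gives $|z|\leq a+b\leq\sqrt{2}\,|z|$, hence
$$|z_1+\cdots+z_n|=\sqrt{\Big(\sum_i \operatorname{Re} z_i\Big)^2+\Big(\sum_i \operatorname{Im} z_i\Big)^2}\geq \frac{1}{\sqrt{2}}\sum_i\left(\operatorname{Re} z_i+\operatorname{Im} z_i\right)\geq \frac{1}{\sqrt{2}}\sum_i|z_i|,$$
so $k=\sqrt{2}$ suffices. It is also sharp: for $z_1=1+\varepsilon i$, $z_2=\varepsilon+i$, the ratio $|z_1+z_2|/(|z_1|+|z_2|)$ tends to $1/\sqrt{2}$ as $\varepsilon\to 0^+$.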

fa.functional analysis – How to prove the second Korn inequality?

$\textbf{Theorem 1}$ (The first Korn inequality). Suppose that $\Omega$ is a bounded domain in $\mathbb{R}^d$ with Lipschitz boundary. Then
$$\sqrt{2}\left\|\nabla u\right\|_{L^2(\Omega)}\leq \left\|\nabla u+(\nabla u)^T\right\|_{L^2(\Omega)}$$
for any $u\in H_{0}^{1}(\Omega;\mathbb{R}^d)$, where $(\nabla u)^T$ denotes the transpose of $\nabla u$.

$\textbf{Theorem 2}$ (The second Korn inequality). Suppose that $\Omega$ is a bounded domain in $\mathbb{R}^d$ with Lipschitz boundary. If $u\in H^{1}(\Omega;\mathbb{R}^d)$ is a function with the property that $u\perp R$ in $H^{1}(\Omega;\mathbb{R}^d)$, then
$$\int_{\Omega}|\nabla u|^2\,dx\leq C\int_{\Omega}|\nabla u+(\nabla u)^T|^2\,dx,$$
where $R=\left\{\phi=Bx+b : B\in\mathbb{R}^{d\times d}\text{ is skew-symmetric and }b\in\mathbb{R}^d\right\}$ and $C$ is a constant.

I recently came across these two theorems in a book on elliptic equations. I tried to prove the second inequality by the direct computation that works for the first Korn inequality, but I cannot see how to combine the condition $u\perp R$ with the final estimate. Can you give me some hints or references?
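
One standard route (a sketch, with the usual caveats): first establish the inequality with a lower-order term,
$$\int_{\Omega}|\nabla u|^2\,dx\leq C\left(\int_{\Omega}|\nabla u+(\nabla u)^T|^2\,dx+\int_{\Omega}|u|^2\,dx\right),$$
and then remove the $L^2$ term by contradiction: if the second Korn inequality failed, there would be $u_k\perp R$ with $\|\nabla u_k\|_{L^2}=1$ and $\|\nabla u_k+(\nabla u_k)^T\|_{L^2}\to 0$. Rellich compactness gives a subsequence converging in $L^2$ to some $u\in H^1$ whose symmetric gradient vanishes, so $u\in R$, while $u\perp R$ forces $u=0$; applying the displayed inequality to the differences $u_k-u_m$ then shows $\nabla u_k$ converges strongly in $L^2$, contradicting $\|\nabla u_k\|_{L^2}=1$. This argument and the lower-order inequality are treated, for instance, in Oleinik, Shamaev and Yosifian, "Mathematical Problems in Elasticity and Homogenization".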

matrices – inequality of integral on compact set

For any compact set $K\subset\mathbb{R}^{n}$, we associate the symmetric matrix
$$\rho_{ij}=\int_{K}x_ix_j\,dx_{1}\dots dx_{n}.$$
(a) Show that if $n=2$ then $\det(\rho_{ij})\neq0$ for any compact set $K\subset\mathbb{R}^{2}$.
(b) Assume $K=(0,1)^{n}$. Show that $\det(\rho_{ij})\neq0$ for any $n\geq1$.
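
A worked check for (b) (my own computation, not part of the original problem statement): on $K=(0,1)^n$ one has $\rho_{ii}=\int_0^1 x^2\,dx=\frac13$ and $\rho_{ij}=\left(\int_0^1 x\,dx\right)^2=\frac14$ for $i\neq j$, so
$$\rho=\frac{1}{12}I+\frac{1}{4}J,$$
where $J$ is the all-ones matrix. The eigenvalues are $\frac{1}{12}$ (with multiplicity $n-1$) and $\frac{1}{12}+\frac{n}{4}$, hence $\det\rho=\left(\frac{1}{12}\right)^{n-1}\left(\frac{1}{12}+\frac{n}{4}\right)\neq 0$.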

pr.probability – Trying to prove an inequality (looks similar to entropy)

I’m trying to prove the following inequality (or something similar, up to a constant factor on either side):
$$k\cdot\sum_{i=1}^{k}x_{i}\cdot\ln\left(x_{i}\right)\geq\sum_{i=1}^{k}x_{i}\cdot\left(x_{i}-1\right)$$
where $x_i\in\left(0,k\right)$ for all $i\in\{1,\ldots,k\}$ (the $x_i$s are not necessarily natural numbers, but we can assume they’re rational if it helps), and $\sum_{i=1}^k x_i=k$.

I’ve tried plotting it for $k=2,3$ and ran some numerical experiments for larger $k$, and I’m 99% sure this inequality is correct, but I’m still struggling with the proof.

Up to some normalizing, I find the left-hand side quite similar to the entropy of a probability distribution, but I didn’t manage to take advantage of this fact either. I also tried looking for inequalities that only hold on simplex-like hyperplanes, but couldn’t find anything useful.

Any ideas?
Thanks!
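
For what it’s worth, a quick numerical check in Mathematica (a sketch of my own; gap is a hypothetical name, the bounds are pushed slightly inside $(0,k)$ for the numerics, and a nonnegative minimum is evidence, not a proof):

gap[k_Integer] := NMinimize[
  {k*Sum[x[i]*Log[x[i]], {i, k}] - Sum[x[i]*(x[i] - 1), {i, k}],
   Sum[x[i], {i, k}] == k &&
    And @@ Table[10^-6 <= x[i] <= k - 10^-6, {i, k}]},
  Table[x[i], {i, k}]]

gap /@ {2, 3, 5}

A minimum of (numerically) zero would be consistent with the conjecture, since both sides vanish at $x_1=\cdots=x_k=1$.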

Inequality Tests With Resolve, ForAll

Suppose we are given a < -2 (strictly) and a <= x, where a and x are reals. I would like to run Mathematica tests that return True or False for the claim x < -2 (strictly). I find that Resolve and ForAll can be quirky … any advice is most appreciated!
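
A minimal sketch of one way to pose this test (note the claim itself is false, since a < -2 and a <= x only bound x from below):

Resolve[ForAll[{a, x}, a < -2 && a <= x, x < -2], Reals]
(* False *)

Resolve applied to a quantified statement over the reals returns True or False; here it is False because, e.g., a = -3 and x = 0 satisfy the hypotheses but not the conclusion.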

equation solving – How to determine values of parameters such that an inequality is satisfied?

It seems the problem is not precisely formulated. Nonetheless, we can reconstruct the reasoning behind the result obtained in the article mentioned.

Let’s define

f[x_, y_] := 1/16 (-1 + x (2 - x + x^3 (-1 + y)^2 y^2))

then it is straightforward to exploit Reduce this way:

Reduce[f[x, y] > 0, y]

nevertheless, the result is much more involved than formula (107). So we deduce that both 0 <= x <= 1 and 0 <= y <= 1 should be imposed as well, and so we get

Reduce[f[x, y] > 0 && 0 <= x <= 1 && 0 <= y <= 1, y]
  2 (-1 + Sqrt[2]) < x <= 1 &&
 1/2 - 1/2 Sqrt[(-4 + 4 x + x^2)/x^2] < y < 1/2 + 1/2 Sqrt[(-4 + 4 x + x^2)/x^2]

Analogously, treating y as fixed, we can obtain the result for x by

Reduce[f[x, y] > 0 && 0 <= x <= 1 && 0 <= y <= 1, x]
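
As a spot check (my own addition), the inner square root vanishes exactly at the left endpoint x = 2 (Sqrt[2] - 1), consistent with the interval for y collapsing to a point there:

Simplify[(-4 + 4 x + x^2)/x^2 /. x -> 2 (Sqrt[2] - 1)]
(* 0 *)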

probability – Maximal Inequality for a triangular array of negatively correlated (or even associated) random variables

I am interested in maximal inequalities in the following setup:
Let $\left((X_{n,i})_{i=1,\ldots,n}\right)_{n\in\mathbb{N}}$ be random variables such that $X_{n,i}$ takes values in $\{0,\ldots,n\}$ for $i=1,\ldots,n$ and such that for all $n$
$$\sum_{i=1}^n X_{n,i}=n.$$
In particular, assume that for all $n$ the family $(X_{n,i})_{i=1,\ldots,n}$ is negatively correlated (or even negatively associated) and that for all $a\in\{0,\ldots,n\}$ and $i=1,\ldots,n$
$$\mathbf{P}(X_{n,1}\geq a)\geq\mathbf{P}(X_{n,i}\geq a).$$

So it’s easy to show (a standard result, I would say) that for $s\geq 0$, with $M_n:=\max_{i=1,\ldots,n}X_{n,i}$,
$$e^{s\,\mathbf{E}(M_n)}\leq n\,\mathbb{E}\left(e^{s X_{n,1}}\right).$$
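(Presumably the chain behind this display is
$$e^{s\,\mathbf{E}(M_n)}\leq \mathbb{E}\,e^{s M_n}\leq \sum_{i=1}^n\mathbb{E}\,e^{s X_{n,i}}\leq n\,\mathbb{E}\,e^{s X_{n,1}},$$
by Jensen, then $e^{sM_n}\leq\sum_{i}e^{sX_{n,i}}$ for $s\geq 0$, then the stochastic-dominance assumption above, which gives $\mathbb{E}\,e^{sX_{n,i}}\leq\mathbb{E}\,e^{sX_{n,1}}$.)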

But is there anything we can do to get a bound with $\mathbb{E}\left(X_{n,1}\right)$ on the right-hand side?

I would be glad for any hint, advice or literature…

Wish you all a nice day!

equation solving – Can this inequality really not be solved with Reduce?

Taking $\alpha$ as given, I am trying to understand under which restrictions on $n$ the following holds:
$$0\leq n+\alpha W\left(-e^{-n/\alpha}\right)\leq 1,$$
i.e., I want to solve for $n$ the following two inequalities: a) $0\leq n+\alpha W\left(-e^{-n/\alpha}\right)$, and b) $n+\alpha W\left(-e^{-n/\alpha}\right)\leq 1$.
In addition, I know $\alpha>0$ and that both $n$ and $\alpha$ are real.

  • I manage to get that a) is satisfied for $n\geq\alpha$,
    using: Reduce[n + a*ProductLog[-E^(-n/a)] >= 0, n, Reals], which returns (a < 0 && n == a) || (a > 0 && n >= a).
  • However I struggle to find any solution for the inequality b). I have tried all of the following:
(* for convenience I first define the function *)
f[n_, a_] = n + a*ProductLog[-E^(-n/a)]

Reduce[f[n, a] <= 1, n, Reals]
Reduce[f[n, a] <= 1 && a > 0, n, Reals]
Solve[f[n, a] <= 1, n, Reals]
Solve[f[n, a] <= 1 && a > 0, n, Reals]

Which all return:

“This system cannot be solved with the methods available to Solve/Reduce.”

Even though I specified the domain as suggested here.

Is the inequality in b) really not solvable for $n$? Could anyone think of an alternative command or commands that would do the trick?

Thank you,

EDIT:

In my context, f[n, a] being strictly increasing in n, even the solution of the equality f[n, a] == 1 would suffice for me.

Manipulate[Plot[f[n, a], {n, -1, 10}], {a, 0, 5, 0.05, Appearance -> "Labeled"}]

[Plot: f[n, a] is increasing in n]
Also, because of the domain of the Lambert W function, we must have n ≥ a (on the graph of f[n, a] above, we see clearly that it is not defined otherwise).
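
Given that monotonicity, one workaround (a sketch of my own; nStar is a hypothetical name, and the bracket {n, a + 1, a, a + 10} assumes the root lies in [a, a + 10]) is to solve the equality numerically for each value of a:

f[n_, a_] = n + a*ProductLog[-E^(-n/a)];
nStar[a_?NumericQ] := n /. FindRoot[f[n, a] == 1, {n, a + 1, a, a + 10}]
nStar[2.]

Since f[a, a] == 0 (ProductLog[-E^(-1)] evaluates to -1) and f is increasing in n, the solution set of b) is then a <= n <= nStar[a].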

linear algebra – Why does this inequality involving norms hold?

We are given $n$ linearly independent vectors $\{\Phi_1,\dots,\Phi_n\}$ with $\|\Phi_i\|=1$. We consider an arbitrary linear combination $\sum_{i=1}^n \alpha_i \Phi_i$.

My linear algebra lecture notes state the following inequality: $\sum \alpha_i^2 - \sum_{i\neq j}|\alpha_i\alpha_j\langle\Phi_i,\Phi_j\rangle|\leq\left\|\sum\alpha_i\Phi_i\right\|^2$.

I am trying to understand where this inequality comes from, but I don’t see it. I think I have to use the reverse triangle inequality, but I always end up with too large a term.

\begin{align*}
&\sum \alpha_i^2 - \sum_{i \neq j} |\alpha_i \alpha_j \langle\Phi_i, \Phi_j\rangle| \\
\leq\ &\Big|\sum \alpha_i^2\Big| - \Big|\sum_{i \neq j} \alpha_i \alpha_j \langle\Phi_i, \Phi_j\rangle\Big| \\
\leq\ &\Big|\,\Big|\sum \alpha_i^2\Big| - \Big|\sum_{i \neq j} \alpha_i \alpha_j \langle\Phi_i, \Phi_j\rangle\Big|\,\Big| \\
\leq\ &\Big|\sum \alpha_i^2 \langle\Phi_i, \Phi_i\rangle - \sum_{i \neq j} \alpha_i \alpha_j \langle\Phi_i, \Phi_j\rangle\Big| \\
\leq\ &\Big|\sum_{i,j} \alpha_i \alpha_j \langle\Phi_i, \Phi_j\rangle\Big|
\end{align*}

But how do I continue without overshooting? The Cauchy–Schwarz inequality yields too large an estimate. Or maybe I am already estimating too high in an earlier step?
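
For what it’s worth, a direct route (my own sketch) avoids the absolute-value gymnastics entirely: expanding the norm and using $\langle\Phi_i,\Phi_i\rangle=1$,
$$\Big\|\sum_i\alpha_i\Phi_i\Big\|^2=\sum_{i,j}\alpha_i\alpha_j\langle\Phi_i,\Phi_j\rangle=\sum_i\alpha_i^2+\sum_{i\neq j}\alpha_i\alpha_j\langle\Phi_i,\Phi_j\rangle\geq\sum_i\alpha_i^2-\sum_{i\neq j}|\alpha_i\alpha_j\langle\Phi_i,\Phi_j\rangle|,$$
where the last step simply bounds each cross term from below by minus its absolute value.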
