Estimation of the integral of a non-negative integrable function via the measure of the integration domain

I was trying to solve a probability theory exercise and wondered if the following was true:

Let $(\Omega, \mathfrak{A}, \mu)$ be a measure space, $A \in \mathfrak{A}$ and $f \in L^1(\mu)$ non-negative. Then we have $\int_\Omega \chi_A f \,\text{d}\mu \leq \mu(A) \|f\|_{L^1(\mu)}$.

I tried to find a counterexample on the real line, but I couldn't find one. Maybe there is a really easy one that I overlooked. I tried to prove the inequality but I couldn't find the right approach. I tried to argue via simple functions, but it doesn't seem to work. Perhaps it is necessary to restrict to finite measures for the above to be true. Any help is appreciated.
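
For what it's worth, here is a quick numerical probe of the inequality (a minimal sketch, assuming Lebesgue measure on $(0,1)$ and a hypothetical choice of $f$ and $A = (0, a)$):

(* Compare both sides of the proposed inequality for a sample non-negative f
   and A = (0, a) under Lebesgue measure on (0, 1); f and a are placeholders. *)
f[x_] := 1/(2 Sqrt[x]);
a = 1/10;
lhs = Integrate[f[x], {x, 0, a}];      (* integral of chi_A f *)
rhs = a Integrate[f[x], {x, 0, 1}];    (* mu(A) times the L^1 norm of f *)
{lhs, rhs, lhs <= rhs}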

nt.number theory – Is there a non-negative sequence $a_p$ such that $\sum_p \frac{a_p}{p}$ converges but $\sum_p \frac{\sqrt{a_p}}{p}$ diverges?

Is there a non-negative real sequence $a_p$ indexed by the prime numbers such that $\sum_p \frac{a_p}{p}$ converges but $\sum_p \frac{\sqrt{a_p}}{p}$ diverges? If so, what is an example of such a sequence, and if not, how can it be proven?

(This came up when studying the pretentious distance on multiplicative functions in analytic number theory. A sequence satisfying these conditions is needed to find a multiplicative function $f$ such that $\sum_p \frac{1 - \Re(f(p))}{p}$ converges but $\sum_p \frac{|1 - f(p)|}{p}$ diverges.)
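
Not an answer, but a rough way to experiment numerically (a sketch assuming the hypothetical candidate $a_p = 1/\log^2 p$) is to compare partial sums over the first primes:

(* Partial sums of a_p/p and Sqrt[a_p]/p over the first 10^4 primes,
   for the hypothetical candidate a_p = 1/Log[p]^2. *)
primes = N[Prime[Range[10^4]]];
a[p_] := 1/Log[p]^2;
{Total[a[#]/# & /@ primes], Total[Sqrt[a[#]]/# & /@ primes]}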

real analysis – Lebesgue integral of a non-negative simple measurable function

I have a measure space $(X, \Sigma, \mu)$ and a simple measurable function $f : X \to [0, \infty)$, where $f$ can be represented as $\displaystyle{\sum_{i=1}^n a_i \chi_{A_i}}$ such that the $A_i$ are mutually disjoint and $\displaystyle{\bigcup_{i=1}^n A_i = X}$.

Then for every $B \in \Sigma$, $\displaystyle{\int_B f \, d\mu = \sum_{i=1}^n a_i \mu(A_i \cap B)}$.

Now I want to show that $\displaystyle{\int_B f \, d\mu = \int_X \chi_B f \, d\mu}$.

I can write $f \chi_B = \displaystyle{\sum_{i=1}^n a_i \chi_{A_i \cap B}}$,

but then $\displaystyle{\bigcup_{i=1}^n (A_i \cap B) = X \cap B = B \neq X}$, so the sets $A_i \cap B$ do not partition $X$. With that in mind, can I still treat $\chi_B f$ as a simple function and integrate as before?

My instinct would be to write $\displaystyle{\int_X \chi_B f \, d\mu = \sum_{i=1}^n a_i \mu(B \cap A_i \cap X) = \sum_{i=1}^n a_i \mu(B \cap A_i) = \int_B f \, d\mu}$,

but this seems too convenient, as if I were missing something. Any help appreciated.
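
One way to sidestep the partition issue (a small reasoning sketch, assuming zero coefficients are allowed in the representation) is to pad with the complement of $B$:
$$\chi_B f = \sum_{i=1}^{n} a_i \chi_{A_i \cap B} + 0 \cdot \chi_{X \setminus B},$$
so that the sets $A_1 \cap B, \dots, A_n \cap B, X \setminus B$ are mutually disjoint and their union is all of $X$.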

maximum – Why can't the `NMaximize` function use a non-negative integer domain as a solution condition?

Maximize[{91 x1 + 71 x2 + 105 x3 + 103 x4 + 96 x5,
  2.36 x1 + 2.12 x2 + 1.89 x3 + 3.77 x4 + 2.87 x5 <= 50,
  0 <= x1 <= 3, 1 <= x3 <= 5, 1 <= x4, 2 <= x3 + x4 <= 5},
 {x1, x2, x3, x4, x5}, NonNegativeIntegers]

Because the Maximize function sometimes only returns a local optimum, I want to use the NMaximize function to solve it. But the NMaximize function cannot use the non-negative integer domain as a solution condition:

NMaximize[-x^4 - 3 x^2 + x, x ∈ Reals]
NMaximize[-x^4 - 3 x^2 + x, x ∈ NonNegativeReals]
NMaximize[{91 x1 + 71 x2 + 105 x3 + 103 x4 + 96 x5,
  2.36 x1 + 2.12 x2 + 1.89 x3 + 3.77 x4 + 2.87 x5 <= 50,
  0 <= x1 <= 3, 0 <= x2, 1 <= x3 <= 5, 1 <= x4, 2 <= x3 + x4 <= 5},
 {x1, x2, x3, x4, x5} ∈ NonNegativeIntegers]
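
One workaround I have considered (a sketch only; it restates the domain as explicit constraints plus an Integers membership, which may or may not match the intended NonNegativeIntegers behavior) is:

(* Sketch: pass integrality and non-negativity as constraints instead of a
   variable domain; vars is just a convenience name introduced here. *)
vars = {x1, x2, x3, x4, x5};
NMaximize[{91 x1 + 71 x2 + 105 x3 + 103 x4 + 96 x5,
  2.36 x1 + 2.12 x2 + 1.89 x3 + 3.77 x4 + 2.87 x5 <= 50,
  0 <= x1 <= 3, 0 <= x2, 1 <= x3 <= 5, 1 <= x4, 0 <= x5,
  2 <= x3 + x4 <= 5, Element[vars, Integers]}, vars]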

Matrix iteration for non-negative matrices. Does this converge to an eigenvector?

Let $A$ be an entrywise non-negative matrix and $u$ a non-negative vector. Is it always true that there is a non-negative eigenvector $v$ of $A$ such that $\lim_{n \rightarrow \infty} \frac{A^n u}{\|A^n u\|_1} = \frac{v}{\|v\|_1}$?
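
A quick numerical sketch of the normalized iteration, with a hypothetical $3 \times 3$ non-negative matrix and starting vector (it only illustrates the iteration and settles nothing either way):

(* Iterate v -> A.v / ||A.v||_1 for sample non-negative A and u, and look at
   the last few iterates; A and u are arbitrary toy choices. *)
A = {{0., 1., 0.}, {0., 0., 1.}, {1., 1., 0.}};
u = {1., 0., 0.};
step[v_] := With[{w = A . v}, w/Total[Abs[w]]];
NestList[step, u, 30][[-3 ;;]]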

differential geometry – Non-negative homogeneous polynomial on the Stiefel manifold

Let $\mathcal{M} = \{X \in \mathbb{R}^{n \times r} : X^T X = I\}$, where $r < n$, be the Stiefel manifold.

Let $f : \mathcal{M} \to [0,1)$ be a homogeneous polynomial of even degree in the entries of $X = (X_{ij})_{1 \leq i \leq n, 1 \leq j \leq r} \in \mathcal{M}$.

It is also assumed that the polynomial takes values in $[0,1)$ on $\mathcal{M}$, that the value $0$ is attained, and that $f((X_{ij}))$ can be written as a sum of squares of polynomials in the entries.

Let $\mathrm{Vol}_{\mathcal{M}}$ be the volume measure on $\mathcal{M}$. Is the following conjecture true?

There is a constant $C > 1$, independent of $n, r$, such that
$$\sup_{0 < t \leq 1/2} \frac{\mathrm{Vol}_{\mathcal{M}}\left(f^{-1}((0, 2t))\right)}{\mathrm{Vol}_{\mathcal{M}}\left(f^{-1}((0, t))\right)} \leq C^{rn}.$$
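
Not a proof strategy, but a crude Monte Carlo sanity check could sample $\mathcal{M}$ by orthonormalizing Gaussian matrices and estimate the ratio for a toy stand-in for $f$ (here $f(X) = X_{11}^2$, a sum of squares of even degree, though not claimed to satisfy every hypothesis above); all names and parameters below are hypothetical:

(* Crude Monte Carlo estimate of the volume ratio, using approximately uniform
   samples obtained by orthonormalizing Gaussian rows; n, r, f are toy choices. *)
n = 6; r = 2; samples = 10^4;
randomStiefel[] := Transpose[Orthogonalize[RandomVariate[NormalDistribution[], {r, n}]]];
f[X_] := X[[1, 1]]^2;
vals = Table[f[randomStiefel[]], {samples}];
ratio[t_] := Count[vals, v_ /; 0 < v < 2 t]/Max[Count[vals, v_ /; 0 < v < t], 1];
Max[Table[ratio[t], {t, 0.05, 0.5, 0.05}]]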

How to do polynomial division with an arbitrary non-negative power in Mathematica?

Suppose I have a simple polynomial in $\{a, b\}$, defined as $a^k - b^k$ for all $k \in \mathbb{Z}^{\geq 0}$. If I know that one of the factors is $(a - b)$, is there a way to get a representation of its remaining factors in the Wolfram Language?

I know that one of its representations is $\sum_{j=0}^{k-1} a^{(k-1)-j} b^j$, and Mathematica recognizes that

Sum[a^((k-1)-j) b^j,{j,0,k-1}]

gives

(a^k - b^k)/(a - b)

But if I ask

FullSimplify[(a^k - b^k)/(a - b), Assumptions -> k \[Element] NonNegativeIntegers]

it is unable to do anything with it.

Also, what is the right way to formulate the assumptions?

Is the expression above interpreted differently if I instead give it as

Assuming[k \[Element] NonNegativeIntegers, FullSimplify[(a^k - b^k)/(a - b)]]

I also tried factoring directly, without supplying a known factor, but without success:

Assuming[k \[Element] NonNegativeIntegers, Factor[a^k - b^k]]
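
As a side check (just a sketch, assuming it is enough to verify that the closed form times $(a - b)$ reproduces $a^k - b^k$):

(* Verify symbolically that (a - b) times the geometric-sum representation
   gives back a^k - b^k under the integer-exponent assumption. *)
quotient = Sum[a^(k - 1 - j) b^j, {j, 0, k - 1}];
Simplify[(a - b) quotient - (a^k - b^k),
  Assumptions -> k \[Element] NonNegativeIntegers]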

$x$ is a non-negative integer and $\sqrt{x^2 + \sqrt{x+1}}$ is a positive integer.

Find a non-negative integer $x$ such that $\sqrt{x^2 + \sqrt{x+1}}$ is a positive integer.

Because $\sqrt{x^2 + \sqrt{x+1}} > x$, we let $x^2 + \sqrt{x+1} = (x+y)^2$ with $y > 0$.

That means
$$\begin{aligned}
& x^2 + \sqrt{x+1} = x^2 + y^2 + 2xy \\
& \implies \sqrt{x+1} = y^2 + 2xy \\
& \implies x + 1 = y^4 + 4x^2y^2 + 4xy^3
\end{aligned}$$

And that's where I got stuck.
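
To see which small values are even candidates, a quick brute-force scan in exact arithmetic (range chosen arbitrarily) can help:

(* Scan small non-negative x and keep those for which the expression is an integer. *)
Select[Range[0, 2000], IntegerQ[Sqrt[#^2 + Sqrt[# + 1]]] &]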

Max vs. min bound for a non-negative harmonic function

Problem: Let $\Omega$ be an open, bounded, simply connected subset of $\mathbb{C}$ and let $u \colon \Omega \to \mathbb{R}$ be a non-negative harmonic function. Show that for each compact subset $K \subseteq \Omega$ there is a constant $C_K > 0$ depending on $K$ such that
$$\sup_{x \in K} u(x) \leq C_K \inf_{x \in K} u(x).$$

It seems to me that if $u$ has no zeros in $\Omega$, then we can just take $C_K = \sup_{x \in K} u(x) / \inf_{x \in K} u(x)$. But if $u$ has a zero in $\Omega$, then $u$ attains its minimum in $\Omega$ since $u \geq 0$, so $u = 0$ by the minimum principle. So it seems we only need $\Omega$ to be open and connected.
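
For reference, the local estimate I believe is usually invoked here is Harnack's inequality for non-negative harmonic functions on balls: if $\overline{B(x_0, 2r)} \subset \Omega$, then
$$\sup_{B(x_0, r)} u \leq C \inf_{B(x_0, r)} u$$
for a constant $C$ depending only on the dimension (and the fixed ratio of radii), which one then chains along a finite cover of $K$ by such balls.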

linear algebra – non-negative irreducible matrices with random (correlated or independent) nonzero entries

Let $M$ be a non-negative irreducible matrix. According to the Perron–Frobenius theorem, the maximum eigenvalue of $M$, $\lambda$, is positive and equal to its spectral radius $\rho(M)$.

Suppose now that the matrix $M$ is not deterministic and that its non-zero elements are random variables $\tanh(x_i)$ with $x_i \sim N(m > 0, \sigma^2)$, while the zero elements remain the same deterministic zeros as before. My question is: what happens to the expected value of the maximum eigenvalue when the $x_i$'s are correlated, compared to the case where they are independent?

My observation is that positive correlation between the non-zero entries increases the expected maximum eigenvalue compared to the case where the entries are independent, but I cannot justify this experimental observation.
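
A small simulation sketch (with an arbitrary $3 \times 3$ irreducible zero pattern and hypothetical values of $m$ and $\sigma$; it only compares averages and proves nothing):

(* Compare the average Perron root when the nonzero entries tanh(x_i) share a
   single x (fully correlated) versus independent x_i; the pattern, m, sigma
   and trial count are arbitrary toy choices. *)
pattern = {{0, 1, 1}, {1, 0, 1}, {1, 1, 0}};
pos = Position[pattern, 1];
m = 1; sigma = 1/2; trials = 5000;
perron[xs_] := Max[Re[Eigenvalues[
    ReplacePart[ConstantArray[0., {3, 3}], Thread[pos -> Tanh[xs]]]]]];
correlated = Mean@Table[
    perron[ConstantArray[RandomVariate[NormalDistribution[m, sigma]], Length[pos]]],
    {trials}];
independent = Mean@Table[
    perron[RandomVariate[NormalDistribution[m, sigma], Length[pos]]], {trials}];
{correlated, independent}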