real analysis – Proof of convergence for the $0 < p < 1$ case

Assume $(a_n)_1^{\infty}$ is an increasing sequence of positive real numbers with limit $+\infty$. If $p > 0$, show that $$\sum_{n=1}^{+\infty} \frac{a_{n+1} - a_n}{a_{n+1} a_n^p}$$ converges.

The case $p \geqslant 1$ is easy:
$$
\sum_{n=1}^{+\infty} \frac{a_{n+1} - a_n}{a_{n+1} a_n^p} \leqslant \sum_{n=1}^{+\infty} \frac{a_{n+1} - a_n}{a_{n+1} a_n} = \frac{1}{a_1} < +\infty.
$$
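
For the record, the equality in the display above is just the telescoping identity (valid since every $a_n > 0$):
$$
\frac{a_{n+1} - a_n}{a_{n+1} a_n} = \frac{1}{a_n} - \frac{1}{a_{n+1}}, \qquad \sum_{n=1}^{N} \left( \frac{1}{a_n} - \frac{1}{a_{n+1}} \right) = \frac{1}{a_1} - \frac{1}{a_{N+1}} \leq \frac{1}{a_1}.
$$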

What about the case $0 < p < 1$? I'm pretty sure it shouldn't cause a lot of problems, but I just cannot see it at the moment. All hints are welcome. Thank you in advance.

proof explanation – Statement about sequence convergence

I'm trying to prove the following theorem about sequences.

Define the monotone sequence $s_n = t_{n+1} - t_n$, where $(t_n)$ is a bounded sequence of real numbers. Prove that $(t_n)$ is convergent.

I am pretty sure that this fact cannot be proved straight from the definition of convergence. We must therefore apply the fact that any bounded monotone sequence converges (an increasing sequence to its supremum, a decreasing sequence to its infimum). Since $(t_n)$ is bounded, it suffices to show that it is monotone. That's where I'm struggling.

Say, without loss of generality for the moment (since we can reverse it otherwise), that $(s_n)$ is increasing. Then
$$
s_n \leq s_{n+1}
$$

and so
$$
t_{n+1} - t_n \leq t_{n+2} - t_{n+1}.
$$

It must be the case that we use somewhere that $(t_n)$ is bounded. So let's call the bound $M$, which has the property that $\forall n \in \mathbb{N},\; |t_n| \leq M$, which means that $-M \leq t_n \leq M$. This implies that $M \geq -t_n \geq -M$, that is, $-M \leq -t_n \leq M$. Therefore,
$$
t_{n+2} - t_{n+1} \leq M - t_{n+1} \leq M - M = 0.
$$

So we have
$$
t_{n+2} - t_{n+1} \leq 0,
$$

which implies
\begin{align*}
t_{n+2} \leq t_{n+1}.
\end{align*}

Since this holds for all natural numbers, it implies that $(t_n)$ is monotonically decreasing.

How does this sound? The other direction seems similar to this one, so I'm mainly interested in getting one side of the argument right.
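
A quick numerical sanity check of the statement itself on a concrete example (the choice $t_n = 1/n$ is arbitrary): it is bounded, its differences $s_n = t_{n+1} - t_n = -\tfrac{1}{n(n+1)}$ form an increasing sequence, and $t_n$ indeed converges (to $0$).

```python
# Sanity check on the example t_n = 1/n (bounded, with monotone differences).
t = [1.0 / n for n in range(1, 10001)]
s = [t[i + 1] - t[i] for i in range(len(t) - 1)]          # s_n = t_{n+1} - t_n

assert all(s[i] <= s[i + 1] for i in range(len(s) - 1))   # (s_n) is increasing
print("t_n approaches:", t[-1])                           # close to the limit 0
```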

Double integral convergence

Suppose that $q > 1$. Let $U$ be the unit disc. For which $a \in (2,4)$ does the integral $$I = \int_U \left( \int_U \frac{1}{|1-z\bar w|^a} \, dA(z) \right)^q dA(w)$$ converge? Here $dA(z)$ is the area measure on the unit disk.
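
A rough numerical sketch (the exponent $a = 3$, the radii and the sample size are arbitrary choices) estimating the inner integral $F(w) = \int_U |1 - z\bar w|^{-a}\, dA(z)$ for a few real $w$, just to get a feel for its growth as $|w| \to 1$:

```python
import math
import random

# Rough Monte Carlo estimate of the inner integral
#     F(w) = integral over U of |1 - z * conj(w)|^(-a) dA(z),
# sampling z uniformly in the unit disc U (area pi).
def inner_integral(w, a, samples=200_000):
    acc = 0.0
    for _ in range(samples):
        r = math.sqrt(random.random())              # uniform point in the unit disc
        theta = random.uniform(0.0, 2.0 * math.pi)
        z = complex(r * math.cos(theta), r * math.sin(theta))
        acc += abs(1.0 - z * w.conjugate()) ** (-a)
    return math.pi * acc / samples                  # mean value times area(U)

a = 3.0                                             # arbitrary exponent in (2, 4)
for rho in (0.5, 0.9, 0.99):
    print(rho, inner_integral(complex(rho, 0.0), a))
```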

fa.functional-analysis – Pointwise convergence of the eigenfunction expansion of $f(x) = \frac{1}{|x|}$

Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with smooth boundary, let $0 < \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_k \leq \dots$ be the Dirichlet eigenvalues and $\{w_k\}_{k=1}^{+\infty}$ an $L^2(\Omega)$-orthonormal basis consisting of eigenfunctions, ordered according to the corresponding eigenvalues. In fact, using rigged Hilbert spaces, $\{w_k\}_{k=1}^{+\infty}$ (up to renormalization) is also an orthonormal basis of $H^1(\Omega)$. So for every $f \in H^1(\Omega)$ we have $$\sum_{k=1}^{N} a_k w_k(x) \to f(x) \quad \text{in} \quad H^1(\Omega), \quad \text{where} \quad a_k := \int_{\Omega} w_k(x) f(x) \, dx.$$

Moreover, it is well known that $w_k(x) \in C^\infty(\bar{\Omega})$ and that $w_k$ is real analytic in $\Omega$.

Assume that $\Omega \subset \mathbb{R}^3$ and that $\Omega$ contains the origin.

Main questions.

  1. Can we hope that the convergence of $\sum_{k=1}^N a_k w_k(x)$ to $f(x) = \frac{1}{|x|}$ is pointwise on any compact set $K \subset \Omega \setminus \{0\}$? In that case, what can we say about the asymptotic behavior of the series $S(x) = \sum_{k=1}^{+\infty} a_k w_k(x)$ near $x = 0$ (I suspect that $S(x) \sim \frac{1}{|x|}$)?
  2. If $\Omega = B_1(0)$, using the explicit eigenfunctions of the Laplacian, namely Bessel functions multiplied by spherical harmonics (recalled below), can we say something more in this particular case?
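
For reference on point 2, the explicit form alluded to there (a standard fact; normalization constants omitted) is, in spherical coordinates on $B_1(0) \subset \mathbb{R}^3$,
$$
w_{\ell,m,k}(r,\theta,\varphi) = c_{\ell,k}\, j_\ell(\alpha_{\ell,k}\, r)\, Y_\ell^m(\theta,\varphi), \qquad \lambda_{\ell,k} = \alpha_{\ell,k}^2,
$$
where $j_\ell$ is the spherical Bessel function of the first kind, $\alpha_{\ell,k}$ is its $k$-th positive zero and $Y_\ell^m$ are the spherical harmonics.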

A general problem, for $f(x)$ in $\Omega \subset \mathbb{R}^n$: can we hope that the convergence is pointwise inside $\Omega$ wherever $f(x)$ is continuous? This resembles a Carleson theorem for a general (not just rectangular) non-trigonometric domain. I guess it is an open problem. It is strictly related to this question.

Any idea, counterexample or related reference is welcome!

real analysis – Uniform convergence preserves continuity – limit interchange argument

Let $D \subset \mathbb{R}$ and, for every $n \in \mathbb{N}$, let $f_n : D \to \mathbb{R}$ be continuous on
$D$. If the sequence $(f_n)$ converges uniformly on $D$ to a function
$f$, then $f$ is continuous on $D$.

From my post here, I am allowed to interchange the limits.

If $c \in D$,
then $$\lim_{x \to c} f(x) = \lim_{x \to c} \lim_{n \to \infty} f_n(x) = \lim_{n \to \infty} \lim_{x \to c} f_n(x) = \lim_{n \to \infty} f_n(c) = f(c),$$

so that proves $f$ is continuous on $D$?

Is it correct?
I've posted this method because I found it relatively simpler.
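
For comparison, the standard direct route is the $\varepsilon/3$ estimate: given $\varepsilon > 0$, pick $n$ with $\sup_{x \in D} |f_n(x) - f(x)| < \varepsilon/3$ (uniform convergence); then, for all $x \in D$ sufficiently close to $c$,
$$
|f(x) - f(c)| \leq |f(x) - f_n(x)| + |f_n(x) - f_n(c)| + |f_n(c) - f(c)| < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon,
$$
where the middle term is small by continuity of $f_n$ at $c$.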

real analysis – Convergence of a sequence $ y_k $ defined by a recurrence relation

Let $|a| < 1$ and let $\left(x_k\right)_{k \ge 0}$ be a sequence that converges to zero. Define a sequence $(y_k)_{k \ge 1}$ by the following relation:

$y_k = x_k + a y_{k-1}$.

Determine if $y_k \to 0$.

My attempt: We can show that $y_k = x_k + a x_{k-1} + a^2 x_{k-2} + \dotsb + a^{k-1} x_1 + a^k x_0$ for each $k \geq 1$. The first and last terms on the RHS go to zero. But so far I am unable to estimate whether the terms
$a x_{k-1} + a^2 x_{k-2} + \dotsb + a^{k-1} x_1$ converge to zero. Help is appreciated.
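
A quick numerical experiment (the choices $a = 0.9$ and $x_k = \tfrac{1}{k+1}$ are arbitrary, and $y_0 = x_0$ as in the closed form above) showing what to expect:

```python
# Numerical experiment (illustration only; a = 0.9 and x_k = 1/(k+1) are
# arbitrary choices): iterate y_k = x_k + a*y_{k-1} and watch y_k.
a = 0.9
x = lambda k: 1.0 / (k + 1)   # a sample sequence with x_k -> 0

y = x(0)                      # y_0 = x_0, matching the closed form above
for k in range(1, 2001):
    y = x(k) + a * y
    if k % 500 == 0:
        print(k, y)           # the printed values decay towards 0
```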

Functional Analysis – Does $L^1$ convergence preserve the regularity of this sequence of functions?

Let $f_n$ be a sequence of $L^1(]0,1[)$ functions such that $f_n$ is non-decreasing and at least left-continuous, with $f_n(0^+) < 0$ and $f_n(1^-) > 0$ (so $\exists!\, c_n \in \, ]0,1[$ such that $f_n(c_n) = 0$), for every $n \in \mathbb N$. This sequence converges

$$f_n \rightarrow f$$

in $L^1$. Is it true that $f$ is non-decreasing and at least left-continuous, with $f(0^+) < 0$, $f(1^-) > 0$?

measure theory – Using the dominated convergence theorem when the bound is only on the limit

I have a sequence of functions $(f_n)$ on a measure space $(X, \Sigma, \mu)$ which converges pointwise to an integrable function $f$, and I'm interested in showing
$$
\lim_{n \to \infty} \int f_n \, \text{d}\mu = \int f \, \text{d}\mu
$$

via the Dominated Convergence Theorem (DCT), but my problem is that I only have a bound on the limit, i.e. a function $g$ with $g(x) \geq |f(x)|$ for each $x \in X$. I do not have a bound on the individual $f_n$. Can I still use the DCT?

My intuition is that, because $f_n(x) \to f(x)$, eventually the $f_n$ are really close to being bounded by $g$, but I'm not sure how to get a function $\tilde g$ which is actually an upper bound for all the $f_n$, or how to make this intuition rigorous.

I thought of something along the lines of defining the set $A(n, \varepsilon) = \{x \in X : |f_k(x)| \leq g(x) + \varepsilon \text{ for all } k \geq n\}$ and looking at $\mu(A(n, \varepsilon))$, but I still do not know how to turn that into a proof.

convergence – How to describe the process of adding more random intervals without overlaps in a given interval?

Suppose I divide the interval $(0, 1)$ on the real axis into $N$ smaller intervals, randomly placed but not overlapping, $(X_{i-1,N}, X_{i-1,N} + \Delta_N),\ i = 1, \cdots, N$, such that
$$
N \Delta_N + \sum_{i=1}^N y_{i,N} = 1,
$$

where $y_{i,N}$ denotes the length of the gap between the intervals $(X_{i-1,N}, X_{i-1,N} + \Delta_N)$ and $(X_{i,N}, X_{i,N} + \Delta_N)$. (Set $X_{0,N} = 0$.)

Now, I want to study the convergence of some functions on the domain $(0, 1)$ as $N \to \infty$. My question is: how can I describe the process $N \to \infty$? In other words, are there any known mathematical models that can be used to describe this process as $N \to \infty$? (I think there may be models, such as random processes, related to this process as $N \to \infty$, but I cannot search for them since I do not know them well yet.)
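
A minimal simulation sketch of the construction described above. The specific choices below are assumptions, not part of the question: $\Delta_N = 1/(2N)$, and the leftover length $1 - N\Delta_N$ is split into the $N$ gaps $y_{i,N}$ by cutting it at $N-1$ uniform random points.

```python
import random

# Minimal simulation sketch of the construction above. Assumed (not stated in
# the question): Delta_N = 1/(2N); the total gap length 1 - N*Delta_N is split
# into the N gaps y_{i,N} by cutting it at N-1 uniform random points.
def place_intervals(N):
    delta = 1.0 / (2 * N)                       # assumed common length Delta_N
    slack = 1.0 - N * delta                     # total gap length = sum of y_{i,N}
    cuts = sorted(random.random() for _ in range(N - 1))
    bounds = [0.0] + cuts + [1.0]
    gaps = [slack * (b - a) for a, b in zip(bounds, bounds[1:])]  # y_{1,N},...,y_{N,N}

    starts = [0.0]                              # X_{0,N} = 0: first interval starts at 0
    for i in range(1, N):
        # the i-th gap y_{i,N} sits between interval i and interval i+1
        starts.append(starts[-1] + delta + gaps[i - 1])
    return delta, starts, gaps

delta, starts, gaps = place_intervals(10)
print(delta, starts)
print(10 * delta + sum(gaps))                   # sanity check: N*Delta_N + sum y_{i,N} = 1
```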

linear algebra – Convergence to the $2$nd largest eigenvalue of a column stochastic matrix.

If $M$ is column stochastic, then $\dfrac{\lVert M^{k} z - q \rVert_1}{\lVert M^{k-1} z - q \rVert_1}$ converges to the absolute value of the second largest eigenvalue of $M$, where $\lVert \cdot \rVert_1$ is the sum of the moduli of the entries of a column vector, $z$ is a column vector such that $\lVert z \rVert_1 = 1$ and $q$ is an eigenvector of $M$ for the eigenvalue $1$ such that $\lVert q \rVert_1 = 1$.

I have tried $$A = \begin{pmatrix} 0 & 0 & 0.5 & 0.5 & 0 \\ 1/3 & 0 & 0 & 0 & \cdots \\ 1/3 & 1/2 & 0 & \cdots \\ \vdots \end{pmatrix}$$
and $$S = \dfrac{1}{5} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix}.$$

Then $M = 0.85A + 0.15S$ is column stochastic, and the absolute value of the second largest eigenvalue of $M$ is approximately $0.6113$.

Using Matlab, I computed $q = (0.2371\ldots, 0.0972\ldots, 0.3489\ldots, 0.13385\ldots, 0.1783\ldots)^T$ up to $100$ decimal places. But it seems that $\dfrac{\lVert M^{k} z - q \rVert_1}{\lVert M^{k-1} z - q \rVert_1}$ converges to $1$ instead of $0.6113$.

Can someone please advise? Thank you.
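
For what it's worth, a minimal sketch of the ratio computation in Python. The matrix `A` below is only an illustrative column-stochastic placeholder, not the $A$ from the question; `S`, the $0.85/0.15$ mixture and the normalizations follow the text above.

```python
import numpy as np

# Sketch of the ratio test described above. NOTE: A is only an illustrative
# column-stochastic placeholder, NOT the A from the question; S and the
# 0.85/0.15 mixture follow the text.
A = np.array([
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [1/3, 0.0, 0.0, 0.0, 0.5],
    [1/3, 0.5, 0.0, 0.0, 0.0],
    [1/3, 0.5, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5, 0.0],
])                                     # every column sums to 1
S = np.full((5, 5), 1 / 5)
M = 0.85 * A + 0.15 * S

# q: eigenvector of M for eigenvalue 1, normalized so that ||q||_1 = 1
vals, vecs = np.linalg.eig(M)
q = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
q = q / np.sum(q)                      # entries are nonnegative, so ||q||_1 = 1

z = np.full(5, 1 / 5)                  # any z with ||z||_1 = 1
prev = np.linalg.norm(z - q, 1)
for k in range(1, 30):
    z = M @ z
    cur = np.linalg.norm(z - q, 1)
    print(k, cur / prev)               # should approach |2nd largest eigenvalue|
    prev = cur                         # (for generic z, if that eigenvalue is real)

print("second largest |eigenvalue|:", sorted(np.abs(vals))[-2])
```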