na.numerical analysis – Floating point representation and rounding error

Can anyone explain to me how this works? I know how to convert the number; I just don't know how to calculate the error. I know you should take the next larger representable number, but everything that follows is lost on me.

The video below explains some things pretty well, although I seem to be missing information on where it gets 0.4 and 0.8 and how to round up to 1.0 (time: 12:20).
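I can't see the timestamp, but 0.4 and 0.8 typically appear in the standard repeated-doubling conversion of a decimal fraction to binary. Here is a minimal sketch, under the assumption that the number being converted in the video is 0.1 (exact rationals are used so float noise doesn't obscure the pattern):

```python
# Sketch: converting a decimal fraction to binary by repeated doubling,
# assuming the worked example is 0.1. Exact rational arithmetic (Fraction)
# is used so floating-point noise doesn't hide the pattern.
from fractions import Fraction

def binary_fraction_bits(x, n_bits):
    """Return the first n_bits binary digits of a fraction 0 <= x < 1."""
    bits = []
    for _ in range(n_bits):
        x *= 2            # shift one binary place left; for 0.1 the values pass through 0.2, 0.4, 0.8, 1.6, ...
        bit = int(x)      # the integer part is the next binary digit
        bits.append(bit)
        x -= bit          # keep only the fractional part
    return bits

print(binary_fraction_bits(Fraction(1, 10), 8))  # [0, 0, 0, 1, 1, 0, 0, 1]
```

The doubling sequence for 0.1 is 0.2, 0.4, 0.8, 1.6, 1.2, 0.4, 0.8, 1.6, ..., which is presumably where the 0.4 and 0.8 come from. Rounding up to 1.0 can happen when the first discarded bit is 1 and all the kept bits are 1, so the carry propagates all the way to the top.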

algorithms – Complexity analysis of a doubly nested for loop

Here is the algorithm:

int sum = 0;
for (int i = 2 * N; i > 0; i = i / 4) {
  for (int j = 0; j < i; j += 2) {
    sum++;
  }
}

I thought it would be more than linear, but apparently it's just linear. I would appreciate seeing a formal summation rather than a qualitative explanation. I tried to work it out myself, but I can't seem to arrive at a linear bound.
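For what it's worth, a quick empirical check (my sketch, not from the question) agrees with the linear bound: the inner loop runs about i/2 times, and i takes the values 2N, N/2, N/8, ..., so the total is roughly N(1 + 1/4 + 1/16 + ...) = 4N/3.

```python
def count_increments(N):
    """Mirror the nested loops from the question and count how many times sum++ runs."""
    total = 0
    i = 2 * N
    while i > 0:
        # inner loop: j = 0, 2, 4, ... while j < i  ->  ceil(i/2) iterations
        total += (i + 1) // 2
        i //= 4
    return total

for N in (1024, 2048, 4096):
    print(N, count_increments(N), count_increments(N) / N)  # the ratio settles near 4/3
```

The geometric series 1 + 1/4 + 1/16 + ... = 4/3 is exactly what the printed ratios approach, which is the formal reason the algorithm is O(N).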

fa.functional analysis – Sine-Gordon transformation and functional integrals

Over the past few months, I have been trying to understand the so-called Sine-Gordon transformation, so I have posted a few questions on this topic here. I also did extensive research on this subject, so I came to some conclusions. Still, I have a few questions that I would like to share with you. I will first draw the general picture and some of my conclusions.

We consider a continuously differentiable function $ V: \mathbb{R}^{n} \times \mathbb{R}^{n} \to \mathbb{R} $ which satisfies $ \sup_{x, y \in \mathbb{R}^{n}} |V(x, y)| \le K $ and
$$ \langle f, Vg \rangle := \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} f(x) V(x, y) g(y) \, dx \, dy \ge 0 \tag{1} $$
for every $ f, g \in L^{2}(\mathbb{R}^{n}) $. If we define $ B: \mathcal{S}(\mathbb{R}^{n}) \times \mathcal{S}(\mathbb{R}^{n}) \to \mathbb{R} $ by $ B(f, g) \equiv \langle f, Vg \rangle $, the associated quadratic form $ f \mapsto B(f, f) $ is non-negative, so that, by Minlos' theorem, there is a (Gaussian) measure $ \mu_{V} $ on $ \mathcal{S}'(\mathbb{R}^{n}) $ such that
$$ W(f) := e^{-\frac{1}{2} B(f, f)} = \int_{\mathcal{S}'(\mathbb{R}^{n})} d\mu_{V}(T) \, e^{iT(f)} $$
Because $ \mathcal{S}(\mathbb{R}^{n}) \subset \mathcal{S}'(\mathbb{R}^{n}) $, each $ f \in \mathcal{S}(\mathbb{R}^{n}) $ induces a distribution in $ \mathcal{S}'(\mathbb{R}^{n}) $. Thus, if we fix $ \epsilon_{1}, \dots, \epsilon_{N} \in \mathbb{R} $ and $ x_{1}, \dots, x_{N} \in \mathbb{R}^{n} $, we can choose sequences $ \{f_{l}^{(j)}\}_{l \in \mathbb{N}} $ such that $ f_{l}^{(j)} \to \epsilon_{j} \delta_{x_{j}} $ for each $ j = 1, \dots, N $. One can prove that:
$$ \lim_{l \to \infty} \int_{\mathcal{S}'(\mathbb{R}^{n})} d\mu_{V}(T) \prod_{j=1}^{N} :e^{iT(f_{l}^{(j)})}:_{V} = e^{-\sum_{1 \le i < j \le N} \epsilon_{i} \epsilon_{j} V(x_{i}, x_{j})} $$
where $ :e^{iT(f)}:_{V} := e^{iT(f)} e^{\frac{1}{2} B(f, f)} $. Let us introduce the notation:
\begin{eqnarray}
\lim_{l \to \infty} \int_{\mathcal{S}'(\mathbb{R}^{n})} d\mu_{V}(T) \prod_{j=1}^{N} :e^{iT(f_{l}^{(j)})}:_{V} \equiv \bigg\langle \prod_{j=1}^{N} :e^{i\epsilon_{j} T(x_{j})}:_{V} \bigg\rangle_{V} \tag{2} \label{2}
\end{eqnarray}

The right-hand side of (\ref{2}) makes no sense as it stands, since $ T $ cannot be evaluated pointwise; it is just a notation. Now, the partition function of a system in the grand canonical ensemble is given by:
\begin{eqnarray}
\Xi_{\Lambda}(\beta, z) = 1 + \sum_{N=1}^{\infty} \frac{z^{N}}{N! \, 2^{N}} \sum_{\substack{\epsilon_{j} = \pm 1 \\ j = 1, \dots, N}} \int_{\Lambda^{N}} dx_{1} \cdots dx_{N} \, e^{-\beta \sum_{1 \le i < j \le N} \epsilon_{i} \epsilon_{j} V(x_{i}, x_{j})} \tag{3} \label{3}
\end{eqnarray}

So, we can rewrite (\ref{3}) using the notation in (\ref{2}):
\begin{eqnarray}
\Xi_{\Lambda}(\beta, z) = 1 + \sum_{N=1}^{\infty} \frac{z^{N}}{N! \, 2^{N}} \sum_{\substack{\epsilon_{j} = \pm 1 \\ j = 1, \dots, N}} \int_{\Lambda^{N}} dx_{1} \cdots dx_{N} \bigg\langle \prod_{j=1}^{N} :e^{i\sqrt{\beta} \epsilon_{j} T(x_{j})}:_{V} \bigg\rangle_{V} \tag{4} \label{4}
\end{eqnarray}

This motivates a further simplification of the notation. We interpret the integral in (\ref{4}) as $ N $ iterated integrals, in order to write:
\begin{eqnarray}
\frac{1}{2^{N}} \sum_{\substack{\epsilon_{j} = \pm 1 \\ j = 1, \dots, N}} \int_{\Lambda^{N}} dx_{1} \cdots dx_{N} \bigg\langle \prod_{j=1}^{N} :e^{i\sqrt{\beta} \epsilon_{j} T(x_{j})}:_{V} \bigg\rangle_{V} \equiv \bigg(\frac{1}{2} \bigg\langle \sum_{\epsilon = \pm 1} \int_{\Lambda} dx \, :e^{i\sqrt{\beta} \epsilon T(x)}:_{V} \bigg\rangle_{V}\bigg)^{N} \equiv \bigg(\bigg\langle \int_{\Lambda} :\cos \sqrt{\beta}\, T(x):_{V} \, dx \bigg\rangle_{V}\bigg)^{N} := \langle C_{\Lambda, \beta} \rangle_{V}^{N} \tag{5} \label{5}
\end{eqnarray}

Finally, we have:
\begin{eqnarray}
\Xi_{\Lambda}(\beta, z) = \sum_{N=0}^{\infty} \frac{z^{N}}{N!} \langle C_{\Lambda, \beta} \rangle_{V}^{N} \equiv \langle \exp(z C_{\Lambda, \beta}) \rangle_{V} \tag{6} \label{6}
\end{eqnarray}

The relation (\ref{6}) is called the Sine-Gordon transformation. Having said that, I would like to raise a few questions.

(1) If my reasoning is correct, the Sine-Gordon representation is formal, in the sense that it does not represent a genuine Gaussian integral; it is merely a matter of notation. If that is the case, fine, but I don't see the point. If (\ref{6}) is just a matter of notation, why is it useful? If I draw a conclusion taking (\ref{6}) as a starting point, why should it actually hold if all of this is only formal? I know Gaussian integrals are useful tools, but this is not a genuine Gaussian integral, right?

(2) Is it possible to give (\ref{6}) a precise meaning? Is there a construction in which $ \Xi_{\Lambda}(\beta, z) $ is expressed in terms of a genuine Gaussian measure and, if so, how is it done? (It is not unusual to come across an article where the Sine-Gordon transformation is treated as a true, mathematically meaningful representation, so I wonder whether I am misreading things or whether there is in fact a rigorous version.)

(3) In practice, the notation (\ref{2}) is useful because it allows us to perform certain formal operations, such as the interchange of integrals and products in (\ref{5}) and (\ref{6}). Can any of these operations be properly justified? In other words, to what extent does (\ref{6}) hold only as a formal series?

Note: For completeness, I based this post mainly on the work of Fröhlich and of Fröhlich and Park. Other good references are the works of Brydges and Federbush and of Dimock.

real analysis – Given the sequence, how can I show that this series diverges?

I currently have some problems with this math exercise. I have to show that the series $ \sum_{n=1}^\infty b_n $ with $ b_n = \sum_{k=n+1}^{2n} \frac{1}{k^2} $ diverges.

I think I'm struggling with this problem because I don't really know how to apply the ratio test or the comparison test to the given sequence.

Could someone help me a little?

Thank you!
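Not a full answer, but one standard comparison (my sketch): each of the $n$ terms in $b_n$ is at least the smallest one, $\frac{1}{(2n)^2}$, which gives a harmonic-type lower bound:

```latex
b_n = \sum_{k=n+1}^{2n} \frac{1}{k^2}
    \;\ge\; \sum_{k=n+1}^{2n} \frac{1}{(2n)^2}
    \;=\; \frac{n}{4n^2}
    \;=\; \frac{1}{4n},
```

so $\sum_n b_n \ge \frac{1}{4} \sum_n \frac{1}{n}$ diverges by comparison with the harmonic series.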

real analysis – Proof of the product rule for derivatives

The usual proof of the product rule is:

$ \begin{align*} (fg)'(x) &= \lim_{h \rightarrow 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h} \\ &= \lim_{h \rightarrow 0} \frac{f(x+h) g(x+h) - f(x+h) g(x) + f(x+h) g(x) - f(x) g(x)}{h} \\ &= \lim_{h \rightarrow 0} \left( \frac{f(x+h)(g(x+h) - g(x))}{h} + \frac{g(x)(f(x+h) - f(x))}{h} \right) \tag 1 \\ &= \lim_{h \rightarrow 0} f(x+h) \frac{g(x+h) - g(x)}{h} + \lim_{h \rightarrow 0} g(x) \frac{f(x+h) - f(x)}{h} \tag 2 \\ &= f(x) g'(x) + g(x) f'(x). \tag 3 \end{align*} $

Why is this proof not usually written in reverse?
The transition from $(1)$ to $(2)$ can only be made knowing that the limits of the corresponding parts of $(1)$ exist (and similarly for $(2)$ to $(3)$).
Moreover, we are supposed to show that $(fg)'$ exists and is equal to $(3)$, given that $f'$ and $g'$ exist. Wouldn't it be clearer to start from $(3)$?
To me, the proof above reads more like a sketch of "working backwards".
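For concreteness, the reversed direction suggested here might be sketched as follows (my paraphrase, not the textbook's): since $f'(x)$ and $g'(x)$ exist and $f$ is continuous at $x$, both limits in $(2)$ exist, so by the algebra of limits

```latex
\lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h}
= \lim_{h \to 0} f(x+h) \frac{g(x+h) - g(x)}{h}
+ \lim_{h \to 0} g(x) \frac{f(x+h) - f(x)}{h}
= f(x) g'(x) + g(x) f'(x),
```

where the first equality uses the same add-and-subtract identity as the forward proof; in particular, the limit defining $(fg)'(x)$ exists.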

real analysis – Show that a subsequence is a subset of the original sequence

My main question is the 4th point, but I hope you can clarify some things for me along the way.

The definition of a sequence says that a function $ a: \mathbb{N} \to S $ is a sequence on a set $ S $, denoted $ (a_n) $.

  1. Can I freely restrict the domain of the function $ a $ and still call it a sequence? In particular, a) is it valid to define $ a_n $ on a finite subset of $ \mathbb{N} $, b) on an infinite subset of $ \mathbb{N} $?

Suppose that at each term of the sequence $ (a_n)_{n \in \mathbb{N}} $ I have to define a new sequence starting from there. I first define a sequence $ (m_k) $ which maps $ \{k \in \mathbb{N}: k \geq n\} \to \mathbb{N}, \forall n \in \mathbb{N} $, with $ m_k < m_{k+1} $ (assuming an affirmative answer to question 1b, since the domain is an infinite subset of $ \mathbb{N} $). Then, since $ (a_{m_k}) $ is the composition of the sequence $ (a_n) $ with the increasing sequence $ (m_k) $, by definition $ (a_{m_k}) $ is a subsequence of $ (a_n) $.

  2. Is this a valid way to show that these new sequences $ (a_{m_k}) $ are subsequences?

The point set of the sequence $ (a_n) $ is $ \left\{a_n: n \in \mathbb{N}\right\} $.

  3. How does one define the point set of a subsequence $ (a_{m_k}) $? Is $ \forall n \in \mathbb{N}, \left\{a_{m_k}: k \geq n\right\} $ correct?

Assuming yes on question 3, the main question is:

  4. How does one prove that $ \forall n \in \mathbb{N}, \left\{a_{m_k}: k \geq n\right\} \subseteq \left\{a_n: n \in \mathbb{N}\right\} $? In other words, that the point set of a subsequence is a subset of the point set of the original sequence. I think I should take an element of $ \left\{a_{m_k}: k \geq n\right\} $ and deduce that it is also in the set $ \left\{a_n: n \in \mathbb{N}\right\} $, but I don't know how to do it rigorously.

To give you context for my questions: I want to show that $ \sup \left\{a_n: n \in \mathbb{N}\right\} \geq \sup \left\{a_{k}: k \geq n\right\} $. Given that $ (a_n) $ is bounded, the sets $ \left\{a_n: n \in \mathbb{N}\right\} $ and $ \left\{a_{k}: k \geq n\right\} $ are bounded. Having $ \left\{a_{k}: k \geq n\right\} \subseteq \left\{a_n: n \in \mathbb{N}\right\} $, I would compare the suprema as in the question Prove the supremum of a subset is smaller than the supremum of the whole.
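For point 4, here is a sketch of the element-chasing argument (my own, under the definitions above):

```latex
\text{Fix } n \in \mathbb{N} \text{ and let } y \in \{a_{m_k} : k \ge n\}.
\text{ Then } y = a_{m_k} \text{ for some } k \ge n.
\text{ Since } m_k \in \mathbb{N}, \text{ taking } n' = m_k \text{ gives }
y = a_{n'} \in \{a_n : n \in \mathbb{N}\}.
```

Since $y$ was an arbitrary element, the inclusion $\{a_{m_k} : k \ge n\} \subseteq \{a_n : n \in \mathbb{N}\}$ follows.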

real analysis – Dirichlet problem for a subharmonic function

Suppose $ K $ is a compact subset of $ \mathbb{R}^n $, and let $ V_0 $ and $ V_1 $ be the complements of $ K $ in $ \mathbb{R}^n $ and in $ \mathbb{R}^n_\infty $ (the one-point compactification), respectively. Let $ u $ be subharmonic on $ V_0 $ and let $ H $ be the generalized Dirichlet solution for $ u $ on $ V_1 $. In particular, $ H $ is harmonic on $ V_1 $; this means that it is harmonic in the usual sense on any open subset of $ V_1 $ that does not contain infinity, and if $ W $ is an open subset of $ V_1 $ containing infinity, then $ H $ is continuous at infinity and $ H(\infty) $ equals the mean value of $ H $ on any ball $ B $ whose closure is contained in $ W $ (see Helms, "Introduction to Potential Theory", chapter on the Dirichlet problem for unbounded regions). My question is: can we say that $ u \leq H $ on $ V_0 $?

ca.classical analysis and odes – Is it possible to express the functional square root of the sine as an infinite product?

MSE cross-post.

It is known that the sine function can be expressed as an infinite product: $$ \sin(x) = x \prod_{n=1}^{\infty} \Big(1 - \frac{x^{2}}{n^{2} \pi^{2}}\Big). $$ We define the functional square root of a function $ g(\cdot) $ to be a function $ f(\cdot) $ that satisfies $ f(f(x)) = g(x) $. The square root of the sine function with respect to function composition has been discussed on MO several times before. For example, here it is treated as a formal power series.
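As a numerical sanity check on the product formula (my sketch, not part of the question), the partial products do converge to $\sin(x)$, albeit slowly:

```python
import math

def sin_partial_product(x, n_terms):
    """Partial Weierstrass product: x * prod_{n=1}^{n_terms} (1 - x^2 / (n^2 pi^2))."""
    p = x
    for n in range(1, n_terms + 1):
        p *= 1 - x**2 / (n**2 * math.pi**2)
    return p

x = 1.3
print(sin_partial_product(x, 100_000), math.sin(x))  # agree to roughly 5 decimal places
```

The tail of the product contributes a factor of about $1 - x^2/(\pi^2 N)$ after $N$ terms, which is why the convergence is only O(1/N).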

I wonder whether the functional square root of the sine also has an infinite product representation. If not, has any research been done on this question?

Fourier analysis – Reference request: explicit construction of Kakeya sets using Perron's tree

I have found many excellent notes online that illustrate how to build a Kakeya needle set (with measure $ < \varepsilon $). Yet none of them gives a complete argument for the construction of a Kakeya set (with measure zero). The closest is given on page 6 of

https://web.stanford.edu/~yuvalwig/math/teaching/KakeyaNotes.pdf,

which unfortunately leaves out the details of arguing for the existence of a unit line segment in $ \bigcap_{n=1}^\infty U_n $. It says this can be proven using a subtle compactness argument.

What is this argument?

ca.classical analysis and odes – Second-order differentiability of convex functions

Let $ f: \mathbb{R}^n \to \mathbb{R} $ be a convex function. Then $ f $ is locally Lipschitz and hence differentiable a.e. (Rademacher). Let $ E \subset \mathbb{R}^n $ be the set of points where $ f $ is differentiable.

In fact, the second-order distributional derivatives of $ f $ are Radon measures (a simple consequence of the Riesz representation theorem). Let $ D^2 f $ be the absolutely continuous part of the second-order distributional derivative.

Aleksandrov's classical theorem states that for almost all $ x \in \mathbb{R}^n $,
$$
\lim_{y \to x}
\frac{|f(y) - f(x) - Df(x)(y-x) - \frac{1}{2}(y-x)^T D^2 f(x)(y-x)|}{|y-x|^2} = 0.
$$

This is Theorem 6.9 in (EG). The argument used there is purely analytic and is based on a careful analysis of weak derivatives.

In fact, using a very different and more geometric argument (see (AA), (7.3) and (7.4)), one can prove that, in addition to the second-order differentiability above:

For almost every $ x \in E $ we have
$$
\lim_{E \ni y \to x}
\frac{|Df(y) - Df(x) - D^2 f(x)(y-x)|}{|y-x|} = 0. \tag{$*$}
$$

The proof given in (AA) is limited, due to its geometric nature, to the case of monotone operators (the derivative of a convex function is an example of a monotone operator), while the proof given in (EG) seems to be more flexible.

Question.
Is it possible to modify the proof given in (EG) so that it also yields the result stated in $(*)$?

(AA) G. Alberti, L. Ambrosio, A geometric approach to monotone functions in $ \mathbb{R}^n $. Math. Z. 230 (1999), 259-316.

(EG) L. C. Evans, R. F. Gariepy, Measure Theory and Fine Properties of Functions. CRC Press.