## fourier analysis – How is the Cauchy-Schwarz inequality used to bound this derivative?

In “Hardy’s Uncertainty Principle, Convexity and Schrödinger Evolutions” (link) on page 5, the authors state that they use the Cauchy–Schwarz inequality to bound the derivative of the $$L^2(\mathbb{R}^n)$$ norm of a solution to a certain differential equation, but I am not sure how exactly they applied it.

Some context: Let $$v$$, $$\phi$$, $$V$$, $$F$$ be nice enough functions of $$x$$ and $$t$$ so that the following integrals are well-defined, let $$A>0$$, $$B\in\mathbb{R}$$ be constants, and let $$u=e^{-\phi} v$$ solve
$$\partial_t u = (A+iB)\left( \Delta u + Vu+F\right).$$
Denote the $$L^2$$ inner product on $$\mathbb{R}^n$$ between some $$f$$ and $$g$$ as $$(f, g) = \int f g^{\dagger}\, dx$$, where $$g^{\dagger}$$ is the complex conjugate of $$g$$, and define $$f^+=\max\{f, 0\}$$.

We know the equality
$$\partial_t \vert\!\vert v \vert\!\vert^2_{L^2} = 2\,\mathrm{Re}\left(Sv,v\right) + 2\,\mathrm{Re}\left((A+iB)e^{\phi} F, v\right),$$
where
$$\mathrm{Re}\left(Sv,v\right) = -A\int |\nabla v|^2 + \left(A|\nabla \phi|^2+\partial_t \phi \right) |v|^2 + 2B\,\mathrm{Im}\, v^{\dagger}\, \nabla\phi\cdot\nabla v + \left( A\,\mathrm{Re}\,V - B\,\mathrm{Im}\, V\right)|v|^2\,dx,$$
holds true. The authors go on to conclude that the Cauchy-Schwarz inequality implies that
$$\partial_t \vert\!\vert v(t) \vert\!\vert^2_{L^2} \le 2\,\vert\!\vert A \left(\mathrm{Re}\,V(t)\right)^+ - B\,\mathrm{Im}\, V(t) \vert\!\vert_{\infty}\,\vert\!\vert v(t) \vert\!\vert^2_{L^2} + 2 \sqrt{A^2+B^2}\, \vert\!\vert F e^{\phi} \vert\!\vert_{L^2}\, \vert\!\vert v(t) \vert\!\vert_{L^2}$$
when
$$\left(A+\frac{B^2}{A}\right)|\nabla\phi|^2 + \partial_t \phi\le 0 \quad \mathrm{in}\ \mathbb{R}_+^{n+1}.$$
However, I am not sure how the authors used the Cauchy–Schwarz inequality to arrive at this conclusion. I am especially confused about where the factor of $$B^2/A$$ came from, and about why the constraint only needs to hold over $$\mathbb{R}_+^{n+1}$$ when we are integrating over all of $$\mathbb{R}^{n}$$ (though I understand why we only care about positive time).

Does anyone have any insight here?
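Partial thought, in case it helps frame answers (this is my own guess at the step, not taken from the paper): the only term in $$\mathrm{Re}\left(Sv,v\right)$$ that seems to need an inequality is the cross term, and absorbing it into the gradient term via Cauchy–Schwarz plus Young's inequality is exactly what would produce the $$B^2/A$$:

```latex
% Pointwise: Cauchy-Schwarz on the vectors, then Young's inequality
% 2|B| ab <= A a^2 + (B^2/A) b^2  with  a = |\nabla v|,  b = |\nabla\phi||v|.
2B\,\mathrm{Im}\, v^{\dagger}\,\nabla\phi\cdot\nabla v
  \;\le\; 2|B|\,|v|\,|\nabla\phi|\,|\nabla v|
  \;\le\; A\,|\nabla v|^2 \;+\; \frac{B^2}{A}\,|\nabla\phi|^2\,|v|^2 .
```

If this is the intended step, the $$A|\nabla v|^2$$ piece cancels the $$-A\int|\nabla v|^2$$ term, leaving $$\left(A+\frac{B^2}{A}\right)|\nabla\phi|^2+\partial_t\phi$$ multiplying $$|v|^2$$, which the displayed condition makes nonpositive, while Cauchy–Schwarz on the $$F$$ pairing gives the $$\sqrt{A^2+B^2}$$ factor.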

## calculus – factoring out total derivative from integral

What is the easiest way to factor out the total derivative from the following integral?
I show my attempt and method below.

$$I=\int_a^b dt \int_a^b dx \, \frac{df(t)}{dt} \left(g(x,t)h(x,t)+g^2(x,t)h^2(x,t)\right)=0.$$
Is it correct if I write,
$$I= \int_a^b dt\, \frac{df(t)}{dt}\int_a^b dx \left(g(x,t)h(x,t)+g^2(x,t)h^2(x,t)\right)=0.$$
Moving the derivative outside the integral, I get
$$I = \frac{d}{dt}\int_a^b f(t) \int_a^b dx \left(g(x,t)h(x,t)+g^2(x,t)h^2(x,t)\right)=0$$
Since $$f(t)\neq 0$$, this is just
$$I = \frac{d}{dt}\int_a^b dx \, \left(g(x,t)h(x,t)+g^2(x,t)h^2(x,t)\right) = 0.$$

Is this correct? If not, where is the mistake, and how can I do this correctly?
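As a sanity check on the first step, here is a quick numerical experiment with arbitrary, hypothetical choices ($$f(t)=\sin t$$, $$g=x+t$$, $$h=xt$$): pulling $$df/dt$$ outside the inner $$x$$-integral leaves the value unchanged, since that factor does not depend on $$x$$. The later steps have no analogous justification, because after integrating over $$t$$ the result is a number, not a function of $$t$$.

```python
import numpy as np

# Midpoint-rule check that df/dt can be factored out of the x-integral.
# Hypothetical choices: f(t) = sin t, g(x,t) = x + t, h(x,t) = x t.
a, b, N = 0.0, 1.0, 400
dt = dx = (b - a) / N
t = a + (np.arange(N) + 0.5) * dt
x = a + (np.arange(N) + 0.5) * dx
T, X = np.meshgrid(t, x, indexing="ij")

df = np.cos(T)                       # df/dt for f(t) = sin t
G = (X + T) * (X * T)                # g*h
integrand = df * (G + G**2)

I1 = integrand.sum() * dt * dx       # full double integral

inner = (G + G**2).sum(axis=1) * dx  # x-integral without the df/dt factor
I2 = (np.cos(t) * inner).sum() * dt  # multiply by df/dt afterwards

print(np.isclose(I1, I2))  # True: the first rewrite is valid
```

The two sums agree exactly because the factor $$\cos t$$ is constant along each row of the $$x$$-sum, which is the discrete analogue of the first rewrite.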

## matrices – Derivative of a Matrix w.r.t. its Matrix Square, $$\frac{\partial\, \text{vec}\,X}{\partial\,\text{vec}(XX')}$$

Let $$X$$ be a nonsingular square matrix.

What is
$$\frac{\partial\, \text{vec}\,X}{\partial\,\text{vec}(XX')},$$
where the vec operator stacks all columns of a matrix in a single column vector?

It is easy to derive that
$$frac{partialtext{vec}(XX’)}{partial text{vec}X} = (I + K)(X otimes I),$$
where $$K$$ is the commutation matrix that is defined by
$$\text{vec}(X) = K\,\text{vec}(X').$$

Now $$(I + K)(X \otimes I)$$ is a singular matrix, so that the intuitive solution
$$\frac{\partial\, \text{vec}\,X}{\partial\,\text{vec}(XX')} = \left( \frac{\partial\,\text{vec}(XX')}{\partial\, \text{vec}\,X} \right)^{-1}$$
does not work.

Is the solution simply the Moore-Penrose inverse of $$(I + K)(X \otimes I)$$, or is it more complicated?
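Not a full answer, but here is a numerical sanity check of the stated Jacobian and its singularity, assuming the column-stacking vec and the convention $$\partial\,\text{vec}(XX')/\partial\,\text{vec}\,X = (I+K)(X\otimes I)$$ from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((n, n))

vec = lambda A: A.flatten(order="F")   # stack columns

# Commutation matrix K: vec(A) = K vec(A')
K = np.zeros((n*n, n*n))
for i in range(n):
    for j in range(n):
        K[i + j*n, j + i*n] = 1.0

J_analytic = (np.eye(n*n) + K) @ np.kron(X, np.eye(n))

# Central-difference Jacobian of vec(X X'); exact here since the map is quadratic
eps = 1e-6
J_fd = np.zeros((n*n, n*n))
for k in range(n*n):
    E = np.zeros(n*n); E[k] = eps
    Xp = X + E.reshape((n, n), order="F")
    Xm = X - E.reshape((n, n), order="F")
    J_fd[:, k] = (vec(Xp @ Xp.T) - vec(Xm @ Xm.T)) / (2*eps)

print(np.allclose(J_analytic, J_fd, atol=1e-5))  # True: the formula checks out
print(np.linalg.matrix_rank(J_analytic) < n*n)   # True: singular, as noted
```

The rank deficit is structural: $$I+K$$ has rank $$n(n+1)/2$$, matching the fact that $$XX'$$ is symmetric and so has only $$n(n+1)/2$$ independent entries.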

## calculus – In Taylor’s formula, there is a remainder term that includes an unknown value z at which the (n+1)st derivative is taken; what is the range for it?

In the final exam of MIT OCW 18.01 there is the following problem:

a) Find the Taylor Series of $$\ln(1+x)$$ centered at $$a=0$$.

c) Use the first two non-zero terms of the power series found in a) to
approximate $$\ln(3/2)$$.

d) Give an upper bound on the error in your approximation in (c) using Taylor’s inequality.

My question is about d), finding the upper bound on the error. I will show my reasoning first, and then the solution I saw, which I can’t understand.

The Taylor Series isn’t difficult to find, it is $$x - \frac{x^2}{2}+ \frac{x^3}{3} - \frac{x^4}{4} + \ldots = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}x^n}{n}$$

The approximation for $$\ln(3/2)$$ is $$\ln(3/2) = \ln(1+0.5) \approx 0.5 - \frac{0.5^2}{2} = \frac{3}{8}$$

Regarding the error for this approximation, here is my line of reasoning:

We can use Taylor’s formula expressing our function as a sum of a polynomial approximation term and a remainder $$f(x) = P_2(x) + R_2(x)$$

$$P_2(x) = x - \frac{x^2}{2}$$

$$R_2(x) = \frac{f^{(3)}(z)}{3!}x^3$$

with $$z$$ a number between $$0$$ and $$x$$.

$$f^{(3)}(z)=\frac{(-1)^2\, 2!}{(1+z)^3} = \frac{2!}{(1+z)^3}$$

So

$$R_2(x) = \frac{x^3}{3(1+z)^3}$$

In the case of $$x=0.5$$

$$R_2(0.5) = \frac{1}{24(1+z)^3}$$

We don’t know exactly what $$z$$ is, but since it lies between $$0$$ and $$1/2$$, and $$R_2(0.5)$$ is decreasing in $$z$$, its maximum over this range occurs at $$z=0$$, which gives an upper bound.

For $$z=0$$ we get $$R_2(0.5) < \frac{1}{24}$$

The official solution for finding the upper bound of the error is as follows:

$$|R_n(x)| \leq M_n \frac{|x^{n+1}|}{(n+1)!}$$

where $$x=0.5$$ and $$n=2$$

In addition, $$M_n \geq |f^{(n+1)}(x)| \implies M_2 \geq \frac{2}{(1+x)^3}$$

for all $$|x| \leq 0.5$$. The maximum of $$\frac{2}{(1+x)^3}$$ in this range is at $$x=-0.5$$, which gives $$M_2 = 16$$, so $$|R_2(0.5)| \leq 16\,\frac{(0.5)^3}{3!}=\frac{1}{3}$$

Okay, so the length of the above was just to have all the steps in case someone needs to see them. Thing is, the calculations are consistent, but instead of restricting the $$z$$ value in the remainder term to lie between $$0$$ and $$1/2$$, they allow $$|z| \leq 0.5$$, so negative values are allowed. What am I missing?
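For what it’s worth, a quick numerical check (my own, not part of the official solution) shows the actual error is about $$0.0305$$: it sits below both the $$z\in(0,1/2)$$ bound $$1/24$$ and the official bound $$1/3$$, whereas the value $$1/81$$ obtained by plugging $$z=1/2$$ into the remainder is smaller than the actual error, so it cannot serve as an upper bound.

```python
import math

x = 0.5
approx = x - x**2 / 2                 # first two nonzero Taylor terms: 3/8
err = abs(math.log(1 + x) - approx)   # true error in approximating ln(3/2)

print(round(err, 6))                  # 0.030465
print(err < 1/24)   # True: Lagrange bound with z in (0, 1/2), max at z = 0
print(err < 1/3)    # True: official bound allowing z in [-1/2, 1/2]
print(err > 1/81)   # True: the value at z = 1/2 underestimates the error
```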

## differential equations – NDSolve not recognizing symbolic derivative as a function

I’m trying to solve a nonlinear first order initial value problem $$\{x'(t)= f(x,t),\ x(t_0)=x_0 \}$$ using NDSolve. The function $$f(x,t)$$ is complicated, as is the computation of the initial $$x$$-value. The code computes these values accurately, but when I go to solve the IVP I get the error message:

``````NDSolve::ndode: The equations {True,0==SolSpeed[xsol[tsol]/tsol,0.25,1,2]} are not differential equations or initial conditions in the dependent variables {xsol}.
``````

I’ve read lots of help files and tried restarting the kernel, but nothing seems to help. I’m sure my code isn’t optimal, but I can’t figure out why it won’t run at all.

Any ideas?

Here is the full code, the `NDSolve` call is at the bottom inside a function called `SolPeakLeading`.

``````XiCrit[eta1_, eta2_] :=
 Module[{Whitham, m},
  Whitham[m_?NumberQ] =
   4 (1 - m) EllipticK[m]/EllipticE[m] + 2 (1 + m);
  eta2^2*Whitham[eta1^2/eta2^2]
  ]

ComputeAlpha[xi_?NumericQ, eta1_?NumericQ, eta2_?NumericQ] :=
 Module[{xi$critical},
  xi$critical = XiCrit[eta1, eta2];
  If[xi >= xi$critical, eta2,
   Module[{Whitham, asq, ans, m},
    Whitham[m_?NumberQ] =
     4 (1 - m) EllipticK[m]/EllipticE[m] + 2 (1 + m);
    asq = asq /. FindRoot[
       xi == asq*Whitham[eta1^2/asq],
       {asq, (eta1 + (eta2 - eta1)*(xi - 4 eta1^2)/(xi$critical - 4 eta1^2))^2},
       MaxIterations -> 150
       ];
    ans = Sqrt[asq];
    (* Now test if FindRoot gives a spurious complex answer *)
    If[Im[ans] != 0,
     (* If a nonzero imaginary part, switch to a bracketing method to
     compute alpha *)
     Clear[asq];
     Sqrt[
      asq /. FindRoot[
        xi == asq*Whitham[eta1^2/asq], {asq, eta1^2, eta2^2},
        Method -> "Secant"]],
     (* If false, return the original answer *)
     ans
     ]
    ]
   ]
  ]

sol[xval_, tval_, eta1_, eta2_, kappa_, x0_, \[Sigma]_] :=
 If[xval < 4 eta1^2*tval,
  2 \[Sigma] kappa Sech[2 kappa (xval - x0 - 4 kappa^2 tval)], (*
  If true, evaluate the zero background soliton *)
  (* If false,
  the module computes the background and soliton solution in the
  elliptic regions *)
  Module[
   {
    xi, alpha, R, m1, eK, eK1, nome, \[Gamma]sq, Abel, \[CurlyPhi],
    snSlow, dnSlow, cnSlow,
    Xi, TT, Omega, qBackground, Qminus, QminusPrime, w22sq, Y, X,
    qsol
    },
   xi = xval/tval;
   alpha = ComputeAlpha[xi, eta1, eta2];
   R = -Sqrt[(kappa^2 - alpha^2) (kappa^2 - eta1^2)]; (*
   R[k] evaluated at k = I \[Kappa] *)
   m1 = 4*alpha*eta1/(alpha + eta1)^2;
   eK = EllipticK[eta1^2/alpha^2];
   eK1 = EllipticK[m1];
   nome = Exp[-Pi EllipticK[1 - eta1^2/alpha^2]/(2 eK)];
   \[Gamma]sq =
    Sqrt[(kappa - alpha)/(kappa - eta1)*(kappa + eta1)/(kappa +
        alpha)]; (* \[Gamma]^2 evaluated at k = I \[Kappa] *)
   Abel = -InverseJacobiDC[kappa/alpha, eta1^2/alpha^2]/(4 eK); (*
   Abel evaluated at k = I \[Kappa] > I \[Alpha] *)
   snSlow = JacobiSN[2 eK1 (Abel + 1/4), m1];
   cnSlow = JacobiCN[2 eK1 (Abel + 1/4), m1];
   dnSlow = JacobiDN[2 eK1 (Abel + 1/4), m1];
   (* Everything before this is a function of the slow evolution
   variable \[Xi].
   Everything after depends on the fast (order one) evolution in xval
   and tval *)
   Omega[Xi_, TT_] = -Pi*alpha/eK*TT*(Xi - 2 (eta1^2 + alpha^2));
   (* Compared with notes, \[CurlyPhi] (here) is really -
   I*\[CurlyPhi] + Log[\[Chi]]/2 (from notes) *)
   \[CurlyPhi][Xi_, TT_] =
    R*TT*(4 kappa - (1/
          kappa)*(EllipticPi[eta1^2/kappa^2, eta1^2/alpha^2]/
          eK)*(Xi - 2 (eta1^2 + alpha^2))) - kappa x0 +
     Log[2 kappa]/2;
   qBackground[Xi_, TT_] =
    (alpha + eta1) JacobiDN[(alpha + eta1) TT (Xi - 2*(eta1^2 + alpha^2)) +
       eK1, m1];
   Qminus[Xi_, TT_] =
    (\[Gamma]sq - 1)/(\[Gamma]sq + 1)*(alpha + eta1)/(alpha - eta1)*
     dnSlow*JacobiDN[2 eK1 (Abel - 1/4 + Omega[xi, TT]/(2 Pi)), m1];
   QminusPrime[Xi_, TT_] =
    (2 \[Gamma]sq)/(\[Gamma]sq + 1)^2*(alpha + eta1)*(kappa^2 +
        alpha*eta1)/R^2*dnSlow*
      JacobiDN[2 eK1 (Abel - 1/4 + Omega[Xi, TT]/(2 Pi)), m1]
     - 2 alpha*eta1/((alpha - eta1) R)*(\[Gamma]sq - 1)/(\[Gamma]sq + 1)*(
      dnSlow*JacobiSN[2 eK1 (Abel - 1/4 + Omega[Xi, TT]/(2 Pi)), m1]*
        JacobiCN[2 eK1 (Abel - 1/4 + Omega[Xi, TT]/(2 Pi)), m1] +
       snSlow*cnSlow*
        JacobiDN[2 eK1 (Abel - 1/4 + Omega[Xi, TT]/(2 Pi)), m1]
      );
   w22sq[Xi_, TT_] = (\[Gamma]sq + 1)^2/(4 \[Gamma]sq)*
     EllipticTheta[3, 0, nome]^2*
     EllipticTheta[3, Pi (Abel + 1/4) + Omega[Xi, TT]/2,
       nome]^2/(EllipticTheta[3, Omega[Xi, TT]/2, nome]^2*
       EllipticTheta[3, Pi (Abel + 1/4), nome]^2);
   Y[Xi_, TT_] = (1 + Qminus[Xi, TT]^2)/(2 kappa);
   X[Xi_, TT_] =
    1/(w22sq[Xi, TT]*\[Sigma]*Exp[2*\[CurlyPhi][Xi, TT]]) +
     QminusPrime[Xi, TT];
   qsol[Xi_, TT_] =
    (2 (1 - Qminus[Xi, TT]^2) X[Xi, TT] +
      4 Qminus[Xi, TT]*Y[Xi, TT])/(X[Xi, TT]^2 + Y[Xi, TT]^2);

   qsol[xi, tval] + qBackground[xi, tval]
   ]
  ]

SolSpeed[xi_?NumericQ, eta1_, eta2_, kappa_] :=
 Module[{alpha},
  alpha = ComputeAlpha[x$val/t$val, eta1, eta2];
  2 (eta1^2 + alpha^2) +
   4 kappa^2 EllipticK[eta1^2/alpha^2]/
    EllipticPi[eta1^2/kappa^2, eta1^2/alpha^2]
  ];

SolPeakLeading[eta1_, eta2_, kappa_, x0_, sigma_, t$right_] :=
 Module[{t$initial, x$initial, t$left, ODEeqns, xmax},
  t$initial = (-x0 + 3)/(4 kappa^2 - 4 eta1^2);
  x$initial =
   NArgMax[sol[xmax, t$initial, eta1, eta2, kappa, x0, sigma], xmax];
  t$left = -x0/(4 kappa^2 - 4 eta1^2) + .1;
  ODEeqns = {
    xsol'[tsol] == SolSpeed[xsol[tsol]/tsol, eta1, eta2, kappa],
    x[tinitial] == xinitial
    };
  NDSolve[ODEeqns, xsol, {tsol, tleft, t$right}]
  ]

SolPeakLeading[.25, 1, 2, -64, 1, 30]

NDSolve::ndode: The equations {True,0==SolSpeed[xsol[tsol]/tsol,0.25,1,2]} are not differential equations or initial conditions in the dependent variables {xsol}.
``````

## calculus and analysis – How to do the derivative for the following function?

I have an energy function of the angles of the electrons’ spins. `th1` is a vector with $$2\ell_0+2$$ elements, and each element represents the angle of an individual electron spin. I eventually need to find the angles for which my energy is minimal. (I can use NMinimize, but I want to make sure that my answer is the global minimum, so I want to work out the derivative first and see how many minima my function has.)

So here is the simplified version of my function:

``````\[ScriptL]0 = 5
\[Gamma] =
  Table[{Riffle[Range[0, -\[ScriptL]0, -1], Range[\[ScriptL]0]][[i]],
    1}, {i, 1, 2 \[ScriptL]0 + 1}];
th1 = Table[Subscript[t, n], {n, 1, 2 \[ScriptL]0 + 2}]
deriv = Table[1, {n, 1, 2 \[ScriptL]0 + 2}]

factorFxn[\[ScriptL]_, m1_, m2_, p1_, p2_] :=
 If[\[Gamma][[p1, 1]] - \[Gamma][[m1, 1]] ==
   \[Gamma][[m2, 1]] - \[Gamma][[p2, 1]],
  Sum[(2 \[ScriptL] + 1)^2 Sum[
     If[\[Gamma][[p1, 1]] - \[Gamma][[m1, 1]] == mval &&
       \[Gamma][[m2, 1]] - \[Gamma][[p2, 1]] == mval,
      (-1)^(\[Gamma][[m1, 1]] + \[Gamma][[m2, 1]] + mval)
       ThreeJSymbol[{\[ScriptL], -\[Gamma][[m1, 1]]},
         {\[ScriptL], \[Gamma][[p1, 1]]}, {\[ScriptL]temp, -mval}]
       ThreeJSymbol[{\[ScriptL]temp, mval}, {\[ScriptL], -\[Gamma][[m2, 1]]},
         {\[ScriptL], \[Gamma][[p2, 1]]}]
       ThreeJSymbol[{\[ScriptL], 0}, {\[ScriptL], 0},
         {\[ScriptL]temp, 0}]^2,
      0], {mval, -\[ScriptL]temp, \[ScriptL]temp}],
    {\[ScriptL]temp, 0, 2 \[ScriptL]}],
  0]

energy[th1_] := (*(2 \[ScriptL]0 +1)^2*) Sum[
   (* Find out which states we're calculating the matrix element of *)
   (Cos[th1[[p2]]] Cos[th1[[p1]]] +
      Cos[th1[[p2]]] Sin[th1[[p1 + 1]]] +
      Cos[th1[[p1]]] Sin[th1[[p2 + 1]]] +
      Sin[th1[[p2 + 1]]] Sin[th1[[p1 + 1]]] +
      If[p1 == p2, Cos[th1[[p1]]] Sin[th1[[p1 + 1]]], 0])
    factorFxn[\[ScriptL]0, m1, m2, p1, p2]
   , {p1, 1, 2 \[ScriptL]0 + 1}, {m1, 1, 2 \[ScriptL]0 + 1},
   {p2, 1, p1}, {m2, 1, m1}];
``````

I was trying to use the Derivative function, but I guess I was doing something wrong, as it gives me either zero or nothing. Can somebody help me calculate the first derivative of energy with respect to t1, t2, t3, t4, t5 in this case?

Thanks a lot.

## real analysis – Why doesn’t differentiability at a point imply the existence of the limit of the derivative approaching that point?

What is wrong with the following argument?

Let $$f$$ be a real continuous function differentiable at every point $$x\neq a$$. By the mean value theorem,
$$\frac{f(a+h)-f(a)}{h} = f'(\xi)$$
where $$\xi$$ is between $$a$$ and $$a+h$$. Letting $$h\rightarrow 0$$, we obtain
$$f'(a) = \lim\limits_{\xi\rightarrow a}f'(\xi)\mbox{.}$$
Therefore, $$f'(a)$$ exists if and only if $$\lim\limits_{\xi\rightarrow a}f'(\xi)$$ exists.

That the “only if” statement is false can be seen from $$f(x)=x^2\sin\frac{1}{x}$$ for $$x\neq 0$$, $$f(0)=0$$. But why?
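A numerical look at this counterexample makes the failure concrete: the difference quotient at $$0$$ converges, while $$f'(x) = 2x\sin\frac{1}{x} - \cos\frac{1}{x}$$ keeps taking values near $$\pm 1$$ however close $$x$$ gets to $$0$$, so the limit appealed to in the argument above simply does not exist.

```python
import math

f = lambda x: x**2 * math.sin(1/x) if x != 0 else 0.0
fprime = lambda x: 2*x*math.sin(1/x) - math.cos(1/x)   # for x != 0

# The difference quotient at 0 tends to f'(0) = 0 ...
h = 1e-6
print(abs((f(h) - f(0)) / h) < 1e-5)          # True

# ... yet f' has no limit at 0: cos(1/x) oscillates between +1 and -1
x1 = 1/(2*math.pi*1000)        # 1/x1 = 2000*pi, so f'(x1) is near -1
x2 = 1/(math.pi*(2*1000 + 1))  # 1/x2 is an odd multiple of pi: f'(x2) near +1
print(round(fprime(x1)), round(fprime(x2)))   # -1 1
```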

## fa.functional analysis – Derivative of trace

Consider two positive semi-definite trace-class operators $$T_1, T_2$$ of unit trace.

Let $$T(\lambda):=T_1 + \lambda(T_2-T_1)$$ be the convex combination of the two.

We then study $$f(\lambda) := \operatorname{tr}(T(\lambda)\log(T(\lambda))).$$

I conjecture that $$f'(\lambda) = \operatorname{tr}(P_{\operatorname{ker}(T_1)} T_2)\log(\lambda)+\mathcal{O}(1),$$ where $$P_V$$ is the projection onto the space $$V$$.

I intentionally did not want to use the spectral theorem to show this, but rather to go via Jacobi’s formula
$$e^{\operatorname{tr}(T(\lambda)\log(T(\lambda)))} = \operatorname{det}\left(e^{T(\lambda)\log(T(\lambda))}\right),$$
since this way everything looks much smoother.

Are there any elementary proofs of this? Or is my conjecture even wrong?
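I can at least report a numerical check of the conjecture in a toy finite-dimensional case of my own choosing ($$T_1 = \operatorname{diag}(1,0)$$, $$T_2 = I/2$$, so $$\operatorname{tr}(P_{\operatorname{ker}(T_1)} T_2) = 1/2$$): the numerical $$f'(\lambda)$$ divided by $$\log\lambda$$ does approach $$1/2$$ as $$\lambda \to 0$$.

```python
import numpy as np

# Toy 2x2 example: T1 has a one-dimensional kernel, both have unit trace
T1 = np.diag([1.0, 0.0])
T2 = np.eye(2) / 2
c = 0.5           # tr(P_ker(T1) T2) = tr(diag(0,1) T2)

def f(lam):
    w = np.linalg.eigvalsh(T1 + lam*(T2 - T1))
    w = w[w > 0]                       # 0 log 0 = 0 convention
    return np.sum(w * np.log(w))

lam = 1e-5
h = 1e-9
fp = (f(lam + h) - f(lam - h)) / (2*h)   # numerical f'(lam)
print(abs(fp / np.log(lam) - c) < 0.05)  # True: leading term is c*log(lam)
```

Of course this only probes the conjectured leading coefficient in one example; it says nothing about the $$\mathcal{O}(1)$$ remainder in general.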

## DSolve::derlen: The length of the derivative operator Derivative[1] in (w^\[Prime])[x,t] is not the same as the number of arguments. PDE

I am getting this error when I try to solve this system of PDEs. I have shown my work below as well as a picture (could not figure out how to post it directly).

## differential equations – NDSolve problem: WhenEvents detects the event, but the derivative of the function does not change

I have struggled a lot lately with the combination of NDSolve and WhenEvent. At this point I think it must be a bug of some sort.

I solve a differential equation with NDSolve, using WhenEvent to detect “impacts”, i.e., time instants when θ(t)==0. At each such instant, I want to reduce θ'(t) by multiplying it by, say, 0.5.

Here’s an example:

``````α = 10 π/180; p = 1.4; αp = 5 g Tan[α]; g = 9.81; tmax = 1;

Eq = θ''[t] +
    p^2 (Sin[α Sign[θ[t]] - θ[t]] +
       αp (Sin[1 t] - Cos[8 t])/g Cos[α Sign[θ[t]] - θ[t]]) == 0;

s = NDSolve[{Eq, θ[0] == 0, θ'[0] == 0,
    WhenEvent[θ[t] == 0, {Print[t], θ'[t] -> 0.5 θ'[t]},
     "DetectionMethod" -> "Interpolation"]}, θ, {t, 0, 1}];
``````

When I run the code, it detects an impact successfully (as t is printed), around 0.43 seconds:

Yet the derivative is not changed. If I plot θ'(t), it shows that “something” happens to θ'(t), like a change in its slope, but there is no step change as I would expect.

If I change the code into:

``````s = NDSolve[{Eq, θ[0] == 0, θ'[0] == 0,
    WhenEvent[θ[t] == 0.000001, {Print[t], θ'[t] -> 0.5 θ'[t]},
     "DetectionMethod" -> "Interpolation"]}, θ, {t, 0, 1}];
``````

i.e., when I change θ(t) == 0 into θ(t) == 0.000001, within the WhenEvent, then the change in derivative works:

I have experienced similar problems with other excitation functions instead of αp (Sin[1 t] - Cos[8 t])/g, which is just an example. Sometimes, even with the 0.000001 offset in the event, the derivative changes and sometimes it doesn’t. I have tried all the "DetectionMethod" options too.

Any ideas?

TIA
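Not an answer to the possible WhenEvent issue, but a cross-check of the intended behavior may help diagnosis: the same impact logic (halve the velocity at each zero crossing of θ) can be sketched with SciPy's event handling, restarting the integration at every impact. A toy oscillator θ'' = -θ stands in for the full equation here, and the state is nudged slightly off zero after each impact so the terminal event does not retrigger immediately.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for the original equation: theta'' = -theta,
# with theta' halved at every impact (theta == 0)
rhs = lambda t, y: [y[1], -y[0]]

hit = lambda t, y: y[0]     # event function: theta(t)
hit.terminal = True         # stop at each impact so we can modify theta'

t, y = 0.0, [1.0, 0.0]
speeds = []
for _ in range(3):          # follow the first three impacts
    s = solve_ivp(rhs, (t, 100.0), y, events=hit, rtol=1e-10, atol=1e-12)
    t = s.t_events[0][0]
    v = 0.5 * s.y_events[0][0][1]      # theta' -> 0.5 theta'
    speeds.append(abs(v))
    y = [1e-9 * np.sign(v), v]         # nudge off zero before restarting

print(round(t / np.pi, 3))                 # 2.5 (third impact near 5 pi/2)
print([round(v, 3) for v in speeds])       # [0.5, 0.25, 0.125]
```

The clean halving at each crossing is the step change one would expect WhenEvent's `θ'[t] -> 0.5 θ'[t]` action to produce.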