calculus – Solve this differential equation when $y(0)=1$

Let $y''-xy=0$, $y(0)=\alpha$, $y'(0)=\beta$. Also, $y=\sum_{n=0}^{\infty}a_n x^n$ is a solution.

Is this differential equation first order linear? And if $\alpha=1$, $\beta=0$, does it follow that $a_0=1, a_1=0$? (Recall that the $a_n$ are the series coefficients.)

For the first question, my answer is yes. Also, if $\alpha=1, \beta=0$, then $y(0)=1, y'(0)=0$. I'm not really sure how $\alpha, \beta$ relate to the equation, but my intuition tells me this is true. Since intuition is not all that reliable, I wanted to verify whether it holds.
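One concrete way to check the coefficient claim: since $y(0)=a_0$ and $y'(0)=a_1$, substituting the series into $y''-xy=0$ and matching powers of $x$ gives $2a_2=0$ and $(n+2)(n+1)a_{n+2}=a_{n-1}$ for $n\ge 1$. A small Python sketch of this recurrence (the helper name is mine, not from the question):

```python
from fractions import Fraction

def series_coeffs(a0, a1, nmax):
    """Coefficients of y = sum a_n x^n solving y'' - x y = 0.

    Matching powers of x gives 2 a_2 = 0 and (n+2)(n+1) a_{n+2} = a_{n-1}.
    """
    a = [Fraction(0)] * (nmax + 1)
    a[0], a[1] = Fraction(a0), Fraction(a1)   # a_0 = y(0), a_1 = y'(0)
    # a_2 stays 0; higher coefficients come from the recurrence.
    for n in range(1, nmax - 1):
        a[n + 2] = a[n - 1] / ((n + 2) * (n + 1))
    return a

a = series_coeffs(1, 0, 9)   # alpha = 1, beta = 0
```

With $\alpha=1,\beta=0$ this yields $a_0=1$, $a_1=0$, and then $a_3=\tfrac{1}{6}$, $a_6=\tfrac{1}{180}$, and so on.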

differential geometry – A polygon with constant angular momentum bounds a circle

Let $\alpha:(0,L) \to \mathbb{R}^2$ be a piecewise affine map satisfying $\alpha(0)=\alpha(L)$ and $|\dot\alpha|=1$. Suppose that $\alpha(t) \times \dot\alpha(t)$ is constant.

How can one prove that $\operatorname{Image}(\alpha)$ is a tangential polygon, i.e. a polygon whose edges are all tangent to a fixed circle centered at the origin?

**It suffices to prove that for each subinterval $(a,b) \subseteq (0,L)$ on which $\alpha|_{(a,b)}$ is affine, there exists a $t_0 \in (a,b)$ such that $\dot\alpha(t_0) \perp \alpha(t_0)$.**

Indeed, if this is the case, then $|\alpha(t_0)|=|\alpha(t) \times \dot\alpha(t)|=C$ is independent of the segment $(a,b)$ chosen. Thus every "edge" $\alpha((a,b))$ contains a point $P_{a,b}=\alpha(t_0)$ on the circle of radius $C$, and the edge is perpendicular to the radius at $P_{a,b}$, i.e. it is tangent to the circle at $P_{a,b}$.

I am not sure how to prove the bold statement. I think we somehow need to use the fact that the polygon "closes up".


The converse implication is easy:

If there exists such a circle, of radius $R$, then $|\alpha(t) \times \dot\alpha(t)|=R$ is constant. Indeed, suppose that $\alpha(t_0)$ lies on the circle, so it is a tangency point.

Then $\dot\alpha(t_0) \perp \alpha(t_0)$ and $|\alpha(t_0)|=R$.

Let $t$ satisfy $\dot\alpha(t)=\dot\alpha(t_0)$, i.e. $\alpha(t)$ belongs to the same edge as $\alpha(t_0)$. Then

$\alpha(t)=\alpha(t_0)+\beta(t)$, where $\beta(t) \parallel \dot\alpha(t_0)$, so
$$
\alpha(t) \times \dot\alpha(t)=\big( \alpha(t_0)+\beta(t) \big) \times \dot\alpha(t_0)=\alpha(t_0) \times \dot\alpha(t_0),
$$

which implies $|\alpha(t) \times \dot\alpha(t)|=R$.
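The converse computation can also be sanity-checked numerically. A small sketch (the regular hexagon with apothem 1 is a hypothetical test case of mine) verifying that $\alpha(t)\times\dot\alpha(t)$ takes the same constant value at sample points on every edge:

```python
import numpy as np

# Regular hexagon whose edges are tangent to the unit circle about the origin
# (apothem 1), traversed counterclockwise.
n, r = 6, 1.0
R = r / np.cos(np.pi / n)                 # circumradius
verts = np.array([[R * np.cos(2 * np.pi * (k + 0.5) / n),
                   R * np.sin(2 * np.pi * (k + 0.5) / n)] for k in range(n + 1)])

cross_vals = []
for p, q in zip(verts[:-1], verts[1:]):
    tang = (q - p) / np.linalg.norm(q - p)     # unit-speed velocity on this edge
    for s in np.linspace(0.05, 0.95, 7):
        a = (1 - s) * p + s * q                # a point alpha(t) on the edge
        cross_vals.append(a[0] * tang[1] - a[1] * tang[0])

cross_vals = np.array(cross_vals)              # alpha x alpha' on every edge
```

The cross product equals the (signed) distance from the origin to the line carrying the edge, which is exactly the apothem here.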

differential equations – optimization of a second order ODE with more than one parameter

I want to optimize a second order ODE with more than one parameter:

Exp[2*alpha*x]*D[theta[x], {x, 2}] + (2*alpha + A)*Exp[2*alpha*x]*
  D[theta[x], x] - B^2*(theta[x] - thetaa) - C*(theta[x] - thetaa)^2 + D*Exp[2*alpha*x] == 0;

With boundary conditions

theta[1] == 1, theta'[0] == 0.20;

Here A, B, C, D are parameters. I want to maximize theta[x] over different parameter values. I have no idea how to do it; please help me.
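One workable pattern is: solve the boundary value problem for a given parameter set, record the maximum of theta, and scan the parameter grid. A Python/SciPy sketch of a single solve (the parameter values are only illustrative, since A, B, C, D, alpha, thetaa are all free in the question):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameter values (assumptions, not from the question).
# Note: in Mathematica itself, C and D are protected symbols and should be renamed.
alpha, A, B, Cc, Dd, thetaa = 0.1, 1.0, 1.0, 0.5, 1.0, 0.0

def rhs(x, y):
    # y[0] = theta, y[1] = theta'; the ODE solved for theta''.
    theta, dtheta = y
    d2 = (B**2 * (theta - thetaa) + Cc * (theta - thetaa)**2) * np.exp(-2 * alpha * x) \
        - (2 * alpha + A) * dtheta - Dd
    return np.vstack([dtheta, d2])

def bc(ya, yb):
    # theta'(0) == 0.20 and theta(1) == 1
    return np.array([ya[1] - 0.20, yb[0] - 1.0])

xmesh = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, xmesh, np.ones((2, xmesh.size)))
theta_max = sol.sol(np.linspace(0.0, 1.0, 500))[0].max()
```

Looping this over a grid of (A, B, C, D) values and comparing theta_max then gives the parameter study; in Mathematica, ParametricNDSolve plays the analogous role.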

differential equations – Asymptotic Output Tracking – Where to Place the Input Control Signal?

Asymptotic Output Tracking: Code Issues

I ask for help from specialists in differential equations, dynamical systems, optimal control, and general control theory.

I have the following system of differential equations:

$$\begin{cases} \frac{dx(t)}{dt}=G(t) \\ \frac{dz(t)}{dt}+z(t)=\frac{df}{dt} \\ \frac{dG(t)}{dt}+G(t)=z(t)\cdot\alpha\sin(\omega t) \\ \frac{dH(t)}{dt}+H(t)=z(t)\cdot\frac{16}{\alpha^2}\left(\sin(\omega t)-\frac{1}{2}\right) \\ \frac{dX(t)}{dt}+X(t)=\frac{dx(t)}{dt} \end{cases}$$

where $x,z,G,H,X$ are the variables, $f=-(x(t)+\alpha \sin(\omega t)-x_e)^2$, and $\alpha, \omega$ are parameters.

As the output $y$, I assign:

$y=\tanh(k \cdot H(t))$

As the reference signal $r_1$, I assign:

$r_1=-1$

As the time constant $p_1$, I assign:

$p_1=-1$

Well, I tried to program this in Mathematica and ran into a difficulty that I can't get past yet. Question: in which of the equations should the control signal $u(t)$ be placed?

I chose the first equation; the system then looks like this:

$$\begin{cases} \frac{dx(t)}{dt}=G(t)+u(t) \\ \frac{dz(t)}{dt}+z(t)=\frac{df}{dt} \\ \frac{dG(t)}{dt}+G(t)=z(t)\cdot\alpha\sin(\omega t) \\ \frac{dH(t)}{dt}+H(t)=z(t)\cdot\frac{16}{\alpha^2}\left(\sin(\omega t)-\frac{1}{2}\right) \\ \frac{dX(t)}{dt}+X(t)=\frac{dx(t)}{dt} \end{cases}$$


Clear[Derivative]
ClearAll["Global`*"]
Needs["Parallel`Developer`"]

S[t_] = \[Alpha] Sin[\[Omega] t]
M[t_] = 16/\[Alpha]^2 (Sin[\[Omega] t] - 1/2)
f = -(x[t] + S[t] - xe)^2

Parallelize[
 asys = AffineStateSpaceModel[{x'[t] == G[t] + u[t],
     z'[t] + z[t] == D[f, t], G'[t] + G[t] == z[t] S[t],
     H'[t] + H[t] == z[t] M[t],
     1/k X'[t] + X[t] == D[x[t], t]}, {{x[t], xs}, {z[t], 0.1}, {G[t],
       0}, {H[t], 0}, {X[t], 0}}, {u[t]}, {Tanh[k H[t]]}, t] //
   Simplify]

pars1 = {Subscript[r, 1] -> -1, Subscript[p, 1] -> -1}

Parallelize[
 fb = AsymptoticOutputTracker[asys, {-1}, {-1, -1}] // Simplify]

pars = {xs = -1, xe = 1, \[Alpha] = 0.3, \[Omega] = 2 Pi*1/2/Pi,
  k = 100, \[Mu] = 1}

Parallelize[
 csys = SystemsModelStateFeedbackConnect[asys, fb] /. pars1 //
    Simplify // Chop]

plots = {OutputResponse[{csys}, {0, 0}, {t, 0, 1}]}

At the end, I get an error:

At t == 0.005418556209176463`, step size is effectively zero; 
singularity or stiff system suspected

It seems to me that this is because either there is a singularity somewhere in the system, or I have put the control input signal into the wrong equation. I need the support of a theorist who can help me choose the right sequence of steps to solve the problem.

I would be glad of any advice and help.

differential equations – time-dependent Hamiltonian with random numbers

I have a Hamiltonian Z in matrix form. I solved it for time-independent random real numbers; now I want to introduce time dependence such that at any time the random real numbers change within the range {-Sqrt[3 \[Sigma]2], Sqrt[3 \[Sigma]2]}. Here is my code:

Nmax = 100; (*Number of sites*)

tini = 0; (*initial time*)

tmax = 200; (*maximal time*)

\[Sigma]2 = 0.1; (*Variance*)

n0 = 50; (*initial condition*)

ra = 1; (*coupling range*)

\[Psi]ini = Table[KroneckerDelta[n0 - i], {i, 1, Nmax}];

RR = RandomReal[{-Sqrt[3*\[Sigma]2], Sqrt[3*\[Sigma]2]}, Nmax];

Z = Table[
    Sum[KroneckerDelta[i - j + k], {k, 1, ra}] +
     Sum[KroneckerDelta[i - j - k], {k, 1, ra}], {i, 1, Nmax}, {j, 1,
     Nmax}] + DiagonalMatrix[RR];

usol = NDSolveValue[{I D[\[Psi][t], t] ==
     Z.\[Psi][t], \[Psi][0] == \[Psi]ini}, \[Psi], {t, tini, tmax}];

How can I introduce this time dependence and solve the differential equation for usol? I hope my question is clear.
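One standard approach is to make the disorder piecewise constant in time: resample the diagonal every interval Δt and integrate segment by segment, restarting the solver from the previous final state. A Python/NumPy sketch of that idea (the resampling interval and the smaller lattice size are my choices, not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
Nmax, ra, sigma2 = 20, 1, 0.1          # smaller lattice than in the question
n0, dt, nseg = 10, 0.5, 10             # resample the disorder every dt

# Hopping part with coupling range ra (fixed in time).
hop = sum(np.eye(Nmax, k=k) + np.eye(Nmax, k=-k) for k in range(1, ra + 1))

psi = np.zeros(Nmax, dtype=complex)
psi[n0 - 1] = 1.0                      # KroneckerDelta initial state at site n0

t = 0.0
for _ in range(nseg):
    # Fresh diagonal disorder, uniform on (-Sqrt[3 sigma2], Sqrt[3 sigma2]).
    diag = rng.uniform(-np.sqrt(3 * sigma2), np.sqrt(3 * sigma2), Nmax)
    Z = hop + np.diag(diag)
    # Integrate i psi' = Z psi over this segment, starting from the last state.
    sol = solve_ivp(lambda s, y: -1j * (Z @ y), (t, t + dt), psi,
                    rtol=1e-8, atol=1e-10)
    psi, t = sol.y[:, -1], t + dt

norm = np.linalg.norm(psi)             # should stay 1 under unitary evolution
```

In Mathematica the analogue is to loop NDSolveValue over the segments (regenerating RR each time), or to precompute a noise table and feed an interpolated, time-dependent diagonal into a single NDSolve call.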

differential equations – How to pose Dirichlet and Neumann BCs on same boundary?

Let's look at the Laplace equation on a rectangular domain:

Eq0 = Inactive[Laplacian][u[x, y], {x, y}]
\[CapitalOmega] = Rectangle[{0, 0}, {2, 1}]

and try to solve it with various pairs of Dirichlet and Neumann BCs on the horizontal boundaries:

BCD0 = DirichletCondition[u[x, y] == 0, y == 0]
BCD1 = DirichletCondition[u[x, y] == 1, y == 1]
BCN0 = NeumannValue[1, y == 0]
BCN1 = NeumannValue[1, y == 1]

NDSolve yields a reasonable solution when the Dirichlet and Neumann BCs are posed on different edges of the rectangle. For example:

u1 = NDSolveValue[{Eq0 == BCN1, BCD0},
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u1[x, y], {x, y} \[Element] \[CapitalOmega],
 AspectRatio -> Automatic, PlotLegends -> Automatic]


However, it fails if both BCs are set on the same edge:

u2 = NDSolveValue[{Eq0 == BCN0, BCD0},
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u2[x, y], {x, y} \[Element] \[CapitalOmega],
 AspectRatio -> Automatic, PlotLegends -> Automatic]


Nevertheless, it is obvious that a solution exists and is equal to u[x_, y_] = y.

My question is: is it possible to set two BCs on the same edge of the rectangle?

partial differential equations – Expectation of share of Geometric Brownian motions

Fix $N\in\mathbb{N}$. For any $i\in\{1,\dots,N\}$,
\begin{equation}
X^i_t=x_i\exp\left(\left(\mu-\frac{\sigma^2}{2}\right)t+\sigma B_t^i\right)
\end{equation}

where the $B^i$ are independent standard Brownian motions and $x_i>0$.

Question: Let $r_i(x)=\frac{x_i}{\sum_{k=1}^N x_k}$ for any $x\in\mathbb{R}_+^N$; what is $\mathbb{E}(r_i(X_T))$?

My attempt is the martingale approach as follows:

Let $f(t,x)=\mathbb{E}(r_i(X_T)\mid X_t=x)$. Applying Itô's lemma and setting the drift to zero (since $f(t,X_t)$ is a martingale), we obtain the following PDE:

\begin{equation}
f_t+\mu\sum_{k=1}^N x_k f_{x_k}+\frac{\sigma^2}{2}\sum_{k=1}^N x_k^2 f_{x_k x_k}=0
\end{equation}

with the terminal condition $f(T,x)=r_i(x)$ for any $x\in\mathbb{R}_+^N$. But solving this PDE is troublesome for me.
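While working on the PDE, a Monte Carlo estimate is a useful companion check. A sketch (the helper name and parameter values are mine); note that with equal initial values the shares are exchangeable, so $\mathbb{E}(r_i(X_T))=1/N$ exactly, which the estimate should reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

def share_expectation(x, mu, sigma, T, i, nsamples=200_000):
    """Monte Carlo estimate of E[r_i(X_T)] for the GBMs above."""
    x = np.asarray(x, dtype=float)
    B = rng.normal(0.0, np.sqrt(T), size=(nsamples, len(x)))  # independent B_T^k
    X = x * np.exp((mu - sigma**2 / 2) * T + sigma * B)
    return float((X[:, i] / X.sum(axis=1)).mean())

# Equal initial values: by symmetry E[r_0(X_T)] = 1/3.
est = share_expectation([1.0, 1.0, 1.0], mu=0.05, sigma=0.2, T=1.0, i=0)
```

For unequal $x_i$ no closed form drops out of symmetry, but the same estimator gives a benchmark for any candidate PDE solution.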

ordinary differential equations – A non-linear coupled second order ODE for a geodesic

Let $g = (dx)^2 + e^x (dy)^2$ be a metric on $\mathbb{R}^2$. I want to find the equations of the geodesics. To do this, I know that I can solve a system of coupled non-linear second order ODEs. After calculating the Christoffel symbols, I have arrived at the following system for $\gamma(t)=(\gamma^1(t),\gamma^2(t)) =(u(t),v(t))$:
$$ u''-\frac{1}{2}e^u(v')^2 =0 $$
$$ v''+u'v'=0 $$
I'm not very familiar with ODEs beyond the basic techniques one might learn in a first course. I started by guessing $u=-k\log t$ for some $k>0$, but this turned out not to work. Can someone help me solve this system? 🥺 This is not homework. I wanted to work out an example to help me better understand parallel transport, but it ended up being too complicated. Thanks.
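One observation that helps: the second equation says $(e^u v')' = e^u(v''+u'v')=0$, so $v'=c\,e^{-u}$ for a constant $c$, which reduces the system to the single equation $u''=\tfrac{1}{2}c^2 e^{-u}$. Even without a closed form, the system integrates numerically, and conservation of the speed $g(\gamma',\gamma')=(u')^2+e^u(v')^2$ is a good correctness check. A sketch with arbitrarily chosen initial data:

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(t, y):
    # y = (u, v, u', v') for the metric g = du^2 + e^u dv^2
    u, v, du, dv = y
    return [du, dv, 0.5 * np.exp(u) * dv**2, -du * dv]

y0 = [0.0, 0.0, 0.3, 1.0]                    # arbitrary initial data
sol = solve_ivp(geodesic_rhs, (0.0, 5.0), y0, rtol=1e-10, atol=1e-12)

u, v, du, dv = sol.y
speed2 = du**2 + np.exp(u) * dv**2           # g(gamma', gamma') along the curve
```

Along a true geodesic, speed2 should stay at its initial value (here $0.3^2 + 1 = 1.09$) up to integration error.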

differential equations – Series solution of an ODE with nonpolynomial coefficients

Basically, I have a second-order differential equation for g[y] (given below as odey), and I want to obtain a series solution at $y=\infty$, where g[y] should vanish. That would be easy if the ODE had polynomial coefficients, since then the Frobenius method could be used. But in my case the coefficients are not polynomial, because of the presence of powers proportional to p (which can take positive non-integer values). I have also expanded ir at infinity and kept terms up to first order (giving irInf), since using ir directly would make a mess of the ODE later.

ir[y_] := Sqrt[-5 + y^2 + (3 2^(1/3))/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) - (6 2^(1/3) y^2)/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) + (3 (2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3))/2^(1/3)]
dir[y_] := D[ir[x], x] /. x -> y
irInf[y_] = Series[ir[y], {y, Infinity, 1}] // Normal

p = 1/10; (*p >= 0*)
odey = (2 irInf[y] - p irInf[y]^(1 - p)) D[irInf[y], y] g'[y] + irInf[y]^2 g''[y] - l (l + 1) g[y] // Simplify

What steps can I take to solve this? Thanks

differential geometry – Concrete example of a tangent (space) vector velocity on a sphere (S^2)

The text I am using states without proof that the tangent vector representing the velocity field due to a rigid rotation about the x-axis is
$$
V= -\sin{\phi}\, \partial_\theta -\cot{\theta} \cos{\phi} \, \partial_\phi
$$

where $\theta$ is the standard polar angle (measured from the z-axis) and $\phi$ is the standard azimuthal angle.

I tried to obtain this from what I think is the representation of $V$ in Cartesian coordinates,
$$
V= V^y \partial_y+ V^z\partial_z = -z \, \partial_y + y \,\partial_z
$$

based on the rotation matrix for infinitesimal rotations about the x-axis.
I thought I would then calculate the components of $V$ in spherical polar coordinates via
$$
V^\theta = \left(\frac{\partial\theta}{\partial y} \right)V^y +\left(\frac{\partial \theta}{\partial z} \right)V^z
$$

and
$$
V^\phi = \left(\frac{\partial\phi}{\partial y} \right)V^y +\left(\frac{\partial \phi}{\partial z} \right)V^z
$$

since that is how coordinates are supposed to transform. However, I don't get the above result.

I think I have a fundamental misunderstanding here and need a clue as to where to start.
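For what it's worth, the transformation law above does reproduce the stated formula, provided $\theta$ and $\phi$ are treated as functions of all three Cartesian coordinates (e.g. $\theta=\arccos(z/r)$ with $r=\sqrt{x^2+y^2+z^2}$) and the partial derivatives are taken before restricting to the unit sphere. A SymPy sketch of this check (variable names are mine):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)

# A point of the unit sphere in the standard angles.
x = sp.sin(theta) * sp.cos(phi)
y = sp.sin(theta) * sp.sin(phi)
z = sp.cos(theta)

# Velocity field of a rigid rotation about the x-axis: (0, -z, y).
Vy, Vz = -z, y

# theta and phi as functions of ALL THREE Cartesian coordinates; the partials
# must be taken before setting r = 1.
X, Y, Z = sp.symbols('X Y Z')
r = sp.sqrt(X**2 + Y**2 + Z**2)
Theta = sp.acos(Z / r)
Phi = sp.atan2(Y, X)

on_sphere = {X: x, Y: y, Z: z}
Vtheta = sp.simplify((sp.diff(Theta, Y) * Vy + sp.diff(Theta, Z) * Vz).subs(on_sphere))
Vphi = sp.simplify((sp.diff(Phi, Y) * Vy + sp.diff(Phi, Z) * Vz).subs(on_sphere))
# Numerically, Vtheta agrees with -sin(phi) and Vphi with -cot(theta)*cos(phi).
```

The likely pitfall is using $\theta=\arccos z$ (valid only with $r$ frozen at 1): differentiating that shortcut with respect to $y$ and $z$ drops the terms coming from the variation of $r$ and gives the wrong components.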