Differential Equations – Integration with ParametricNDSolveValue

I managed to integrate the function given in this post with ParametricNDSolveValue as follows:

f[r_?NumericQ, t_?NumericQ] := h[r] r^3 Cos[2 \[CapitalOmega]k[r] t];
g = ParametricNDSolveValue[{F'[r] == f[r, t], 
   F[10^6] == h[10^6]*10^18*Cos[\[CapitalOmega]k[10^6] t]}, F, {r, 10^6, 10^8}, {t}]

Here h[r] is the memoized function computed in the cited post. The problem is that if I evaluate g[0] I get an InterpolatingFunction instead of a number. What am I doing wrong?

Differential Geometry – How Do Pullback and Rescaling Change Curvature?

Let $X$ be a closed surface (or, more generally, a manifold), and let $g$ be a Riemannian metric on it. I am thinking about how the following operations modify the curvature of $g$:

(1) pullback by some diffeomorphism of $X$;

(2) multiplication by a scalar, that is to say $\lambda g$ for some $\lambda \in \mathbb{R}$.

More specifically, can we use these two operations to make the curvature constant?
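For reference, on a surface both effects are classical (stated here as a reminder, assuming $\lambda > 0$): pullback by a diffeomorphism $\varphi$ merely relabels points, while rescaling divides the Gaussian curvature by the scale factor:
$$ K_{\varphi^* g}(p) = K_g(\varphi(p)), \qquad K_{\lambda g} = \frac{1}{\lambda} K_g. $$
So neither operation alone can turn a non-constant curvature into a constant one; making curvature constant generally requires a non-constant conformal factor, which is the subject of the uniformization theorem.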

How to schedule full, differential, and transaction log backups in SQL Server 2016

I have been studying backup scheduling with SQL Server Agent and maintenance plans (which themselves rely on SQL Server Agent).

My first concern: what is the best way to schedule my backups?

I want a full backup daily, a differential backup every hour, and a transaction log backup every 15 minutes. My second concern: is this good practice?

I noticed a problem when using SQL Server Agent: my full and differential backups get overwritten. Overwriting the full backup is fine, but not the differentials, because at recovery time I am left with only a single differential backup, which defeats the purpose of differential backups. How can I prevent previous backups from being overwritten?

The last concern is how to implement the schedule. I will use maintenance plans, and I can schedule three different backups: one for the full, one for the differential, and one for the transaction log. Is this best practice for a database used daily (at least 2,000 transactions per day)?
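One common fix for the overwrite problem (a sketch, not the only approach): give every backup file a unique, timestamped name, so that WITH INIT only ever affects that new file. The database name MyDb and the path C:\Backups\ below are placeholders:

```sql
-- Hypothetical database name and path; adjust for your environment.
DECLARE @file nvarchar(260) =
    N'C:\Backups\MyDb_DIFF_' + FORMAT(GETDATE(), 'yyyyMMdd_HHmmss') + N'.bak';

BACKUP DATABASE MyDb
    TO DISK = @file
    WITH DIFFERENTIAL, INIT, CHECKSUM;
```

In a maintenance plan, the equivalent is to let the Back Up Database task create a new file per run (rather than appending to or initializing a single fixed file), paired with a cleanup task that deletes files older than your retention window.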

Differential Equations – Dynamic PDE Tutorials


Differential Identity Involving a Logarithm – Mathematics Stack Exchange

By studying an effective string theory, I found the following identity:
$$ \ln x = \lim_{s \to 0} \frac{d x^{s}}{ds}. $$
I am, however, puzzled as to its derivation. Naively I would say that
$$ \lim_{s \to 0} \frac{d x^{s}}{ds} = \lim_{s \to 0} s\, x^{s-1} = 0, $$
which is obviously not the right approach. So my question is, how can I prove the first identity?
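For reference, the identity follows by differentiating with respect to $s$ rather than $x$, writing $x^s = e^{s \ln x}$ (for $x > 0$):
$$ \frac{d}{ds} x^{s} = \frac{d}{ds} e^{s \ln x} = \ln x \, e^{s \ln x} = x^{s} \ln x \;\longrightarrow\; \ln x \quad \text{as } s \to 0. $$
The naive computation instead applies the power rule in $x$, which is why it produces $s x^{s-1}$.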

Also, I would appreciate it a lot if you could help me give this question the right tags.

How to solve this partial differential equation for $ f $?

I want Mathematica to solve for the function $ f $.

$ f $ satisfies the following constraints.

$\frac{\partial}{\partial x} f(x, y) = y$

$\frac{\partial}{\partial y} f(x, y) = x$

$f(0, 0) = 0$

The obvious answer would seem to be $f(x, y) = x y$.

However, I cannot convince Mathematica to return this result.

Here is my attempt.
Mathematica returns the input unevaluated. What am I missing?

DSolve[
    {
       x == D[f[x, y], y]
     , y == D[f[x, y], x]
     , f[0, 0] == 0
    }
  , f[x, y]
  , {x, y}
]
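As a quick cross-check outside Mathematica (an aside in SymPy, not part of the original attempt), one can verify symbolically that $f(x, y) = x y$ satisfies all three constraints:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y  # candidate solution

# df/dx should equal y, df/dy should equal x, and f(0, 0) should be 0
assert sp.diff(f, x) == y
assert sp.diff(f, y) == x
assert f.subs({x: 0, y: 0}) == 0
print("all three constraints hold")
```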

Differential geometry – Definition of left invariant vector fields

The definition (after Marsden and Ratiu, $\textit{Introduction to Mechanics and Symmetry}$) of a left invariant vector field $X$ on a Lie group $G$ states that for each $g \in G$, $L_g^* X = X$. That is, $$(T_h L_g) X(h) = X(gh)$$ for each $h \in G$. Other sources write $(dL_g)_h$ instead of $T_h L_g$ (Jack Lee's $\textit{Introduction to Smooth Manifolds}$ follows the latter convention). My question is about the notation $T_h$ and the result $X(gh)$. Why do we write $T_h$, as if taking a partial derivative with respect to $h$, and what does it really mean to take the derivative of the left action as we do in this definition? It seems that we start with a vector at $h$ and map it to a vector at $gh$, but these books are content to state this and move on; some clarification would be helpful.
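For reference (standard facts, not quoted from either text): $T_h L_g$, also written $(dL_g)_h$ or $(L_g)_{*,h}$, is the tangent map (differential) of the smooth map $L_g : G \to G$, $L_g(h) = gh$, taken at the point $h$. It is the linear map
$$ T_h L_g : T_h G \to T_{gh} G, \qquad (T_h L_g)\, v = \left.\frac{d}{dt}\right|_{t=0} g\,\gamma(t), $$
where $\gamma$ is any curve with $\gamma(0) = h$ and $\gamma'(0) = v$. So the subscript $h$ records the base point at which the derivative is taken, not a partial derivative with respect to $h$. Left invariance, $(T_h L_g) X(h) = X(gh)$, then says that pushing the field's vector at $h$ forward along $L_g$ reproduces the vector the field already assigns at $gh$, i.e. $X$ is $L_g$-related to itself.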

Differential Equations – Problem When Combining ParametricNDSolve with NonlinearModelFit

I am noticing some inconsistencies in how NonlinearModelFit works with the output of ParametricNDSolve. Here is an example that works (starting from a fresh kernel):

eqs = {a'[t] == -k1 a[t] - k2 a[t]^2, 
       b'[t] == k1 a[t] + k2 a[t]^2, 
       a[0] == a0, b[0] == b0};
fixedparams = {k1 -> 1.2, b0 -> 0};
fns = {a, b};
params = {k2, a0};
solution = ParametricNDSolve[eqs /. fixedparams, fns, {t, 0, 5}, params]
fitfn = a /. solution;
paramsForDataSet = {k2 -> 1.263, a0 -> 0.0321};
dataset = {#, ((fitfn @@ params) /. paramsForDataSet)[#] + 
  RandomVariate[NormalDistribution[0, 0.0002]]} & /@ Range[0, 5, 0.01];
ListPlot[dataset, PlotRange -> Full]


initialGuess = {k2 -> 2.0, a0 -> 0.3};
tmp = Values@initialGuess;
Dynamic@Column[{Show[ListPlot[dataset, PlotRange -> Full], 
                     Plot[(fitfn @@ tmp)[t], {t, 0, 5}, 
                          PlotRange -> Full, PlotStyle -> Red], 
                  PlotRange -> Full, ImageSize -> Large], 
                ListPlot[{#1, #2 - (fitfn @@ tmp)[#1]} & @@@ dataset, 
                         PlotRange -> Full, AspectRatio -> 0.2, 
                         ImageSize -> Large]}]

This last bit gives me a dynamically updating plot of my fit and residuals as it converges. Here is the fitting step:

result = NonlinearModelFit[dataset, (fitfn @@ params)[t], 
                       Evaluate[List @@@ initialGuess], t, 
                       StepMonitor :> (tmp = params)]
tmp = Values@result["BestFitParameters"]


It looks great! But when I slightly complicate the model, the kernel crashes. Again from a new kernel:

eqs = {a'[t] == -k1 a[t] - k2 a[t]^2, b'[t] == k1 a[t] + k2 a[t]^2, 
       c[t] == q a[t] + r b[t], c[0] == q a0 + r b0, a[0] == a0, 
       b[0] == b0};
fixedparams = {k1 -> 1.2, b0 -> 0};
fns = {a, b, c};
params = {k2, a0, q, r};
solution = ParametricNDSolve[eqs /. fixedparams, fns, {t, 0, 5}, params]
fitfn = c /. solution;
paramsForDataSet = {k2 -> 1.263, a0 -> 0.0321, q -> 0.341, 
                    r -> 0.8431};
dataset = {#, ((fitfn @@ params) /. paramsForDataSet)[#] + 
       RandomVariate[NormalDistribution[0, 0.0002]]} & /@ Range[0, 5, 0.01];
ListPlot[dataset, PlotRange -> Full]


initialGuess = {k2 -> 2.0, a0 -> 0.3, q -> 0.32, r -> 0.88};
tmp = Values@initialGuess;
Dynamic@Column[{Show[ListPlot[dataset, PlotRange -> Full], 
                     Plot[(fitfn @@ tmp)[t], {t, 0, 5}, PlotRange -> Full, 
                     PlotStyle -> Red], 
                  PlotRange -> Full, ImageSize -> Large], 
                ListPlot[{#1, #2 - (fitfn @@ tmp)[#1]} & @@@ dataset, 
                  PlotRange -> Full, AspectRatio -> 0.2, 
                  ImageSize -> Large]}]
result = NonlinearModelFit[dataset, (fitfn @@ params)[t], 
           Evaluate[List @@@ initialGuess], t, 
           StepMonitor :> (tmp = params)]
tmp = Values@result["BestFitParameters"]

The only differences are:

  • adding c[t] and c[0] to the equations
  • adding c to fns
  • adding q and r to params
  • adding values for q and r to paramsForDataSet and initialGuess
  • changing fitfn to use c instead of a

Everything else is the same, but this time the kernel hangs. Any suggestions are welcome.

(In case this is a bug in Mathematica, I have submitted a bug report to Wolfram, but I do not want to rule out that I am doing something wrong, which is why I am asking here as well.)

Solve an algebraic differential equation – Mathematica Stack Exchange

I cannot find the relevant part of the documentation. Maybe you have a quick fix for my problem.

Context

Sometimes you want to propagate an ODE system in time, but not all variables are needed afterwards. Think of a matrix Schrödinger equation where, say, a sum of squares of the unknown functions is the quantity of interest. One can always solve the ODEs of interest first and then build the desired observable, but this is also an ideal case for a differential-algebraic (DAE) solver: propagate everything, but keep only the observable.

My problem is the following messages:

NDSolve::mconly: For the method IDA, only machine real code is available. Unable to continue with complex values or beyond floating-point exceptions.
NDSolve::icfail: Unable to find initial conditions that satisfy the residual function within specified tolerances. Try giving initial conditions for both values and derivatives of the functions.

Minimal working example

h[t_] := {{1, 1, 5}, {1, 2, I}, {5, -I, 3}}
o = {{0, 0, 5}, {0, 0, 0}, {5, 0, 0}}
dim = 3;
vars = y[#] & /@ Range[dim];
varsT = y[#][t] & /@ Range[dim];
eqs = MapThread[Equal, {-I D[varsT, t], h[t].varsT}];
y0 = {0, 1, 0};
ics = MapThread[Equal, {y[#][0] & /@ Range[dim], y0}];
r = NDSolve[Join[eqs, ics], vars, {t, 0, 10}] // First;

As a result, we get oscillating functions; the observable can then easily be computed and plotted:

ψ = vars /. r;
oT = Sum[Conjugate[ψ[[i]][t]] o[[i, j]] ψ[[j]][t], {i, dim}, {j, dim}];
Plot[oT, {t, 0, 10}]

I want to avoid post-processing to evaluate the observable oT.

Minimal failing example

Instead of evaluating the observable after the ODE solution, let us propagate it together with the unknowns. We add an additional algebraic equation and the corresponding initial condition:

eqA = {τ[t] == Sum[Conjugate[y[i][t]] o[[i, j]] y[j][t], {i, dim}, {j, dim}]};
icA = {τ[0] == 0};

However, this fails with the error messages above

NDSolve[Join[eqs,eqA,ics,icA],τ,{t,0,10}]

What can be done here? I am not even sure whether the initial condition icA for the auxiliary variable τ is necessary.

Ordinary differential equations – Question on the existence of Carathéodory solutions to a discontinuous first-order (scalar) ODE

Consider the scalar i.v.p. in $\mathbb{R}$
$$
x' = f(t, x), \; t \in (0, T), \; x(0) = x_0
$$

where $T \in \mathbb{R}$, $T > 0$, $x_0 \in \mathbb{R}$, and $f : (0, T) \times \mathbb{R} \to \mathbb{R}$ has the properties:

(i) for each Lebesgue measurable $y : (0, T) \to \mathbb{R}$, the map $(0, T) \ni t \mapsto f(t, y(t)) \in \mathbb{R}$ is measurable;

(ii) for almost all $t \in (0, T)$, $\sup_{x \in \mathbb{R}} |f(t, x)| \leq l(t)$, where $l : (0, T) \to \mathbb{R}$ is Lebesgue integrable.

I am aware of results in the literature showing the existence of Carathéodory solutions to the problem above in the case where $f$ is not continuous. These results were obtained assuming (versions of) (i) and (ii) and that $f$ is nondecreasing (in some sense), as in, for example, https://projecteuclid.org/euclid.die/1368638179,
https://doi.org/10.1090/S0002-9939-97-03942-7, etc.

QUESTION: Are there results proving the existence (and uniqueness) of Carathéodory solutions to the above problem with discontinuous $f$, in the case where $f$ is nonincreasing?
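For intuition on why the direction of monotonicity matters (a standard textbook example, not taken from the post): even a continuous $f$ that is nondecreasing in $x$ can fail uniqueness, e.g.
$$ x' = 2\sqrt{\max(x, 0)}, \qquad x(0) = 0, $$
which admits both $x \equiv 0$ and $x(t) = t^2$. By contrast, if $f$ is nonincreasing in $x$, any two solutions $x_1, x_2$ satisfy
$$ \frac{d}{dt}(x_1 - x_2)^2 = 2 (x_1 - x_2)\big(f(t, x_1) - f(t, x_2)\big) \le 0, $$
so uniqueness holds whenever a solution exists.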