ordinary differential equations – Existence and uniqueness of a solution of an implicit, linear, inhomogeneous ODE

Problem: Given the differential equation $$ x^2 y''(x) + 4x \, y'(x) + 2 \, y(x) = r(x) $$ for a function $ r \in C^2(\mathbb R) $ and the initial values $$ y(0) = y_0, \qquad y'(0) = y_1, $$ how does one analyze whether a solution $ y \in C^2(\mathbb R) $ exists locally around $ x_0 = 0 $ and/or is locally unique, depending on the choice of $ y_0 $ and $ y_1 $?

Some ideas

  • Obviously, the Picard–Lindelöf theorem is not applicable around $ x_0 = 0 $ because the ODE is implicit there. For $ x_0 \neq 0 $, however, we have an interval that does not contain $ 0 $, which gives an explicit ODE of the form $ y'' + \frac{4}{x} y' + \frac{2}{x^2} y = \frac{r}{x^2} $, whose associated homogeneous equation has the general solution $ y(x) = \frac{c_1}{x} + \frac{c_2}{x^2} $ for $ c_1, c_2 \in \mathbb R $ on either $ (0, +\infty) $ or $ (-\infty, 0) $.
  • If $ c_1 \neq 0 $ or $ c_2 \neq 0 $, then the corresponding homogeneous solution is unbounded near $ 0 $. However, this does not mean that every solution of the inhomogeneous problem is unbounded near $ x_0 = 0 $ (which would have contradicted the existence of a solution).
  • Evaluating the ODE at $ x = 0 $ gives $ 2 \, y(0) = r(0) $, so $ y(0) = y_0 $ implies $ y_0 = \frac{r(0)}{2} $.
  • The first term of the ODE, $ x^2 \, y'' $, is differentiable at $ 0 $ with $ \frac{\mathrm d}{\mathrm dx}\big|_{x=0} \, x^2 y''(x) = \lim_{h \to 0} \frac{h^2 y''(h) - 0}{h} = \lim_{h \to 0} h \, y''(h) = 0 $ (using the continuity of $ y'' $ and hence its boundedness on some interval around $ 0 $). Although we do not know whether $ y''' $ exists, we still have an expression for the derivative of $ x^2 y'' $ at $ x_0 = 0 $. This means that after differentiating the ODE at $ x_0 = 0 $ we get $ 6 \, y'(0) = r'(0) $, i.e. $ y_1 = \frac{r'(0)}{6} $.
  • The last two points imply that the ODE has no solution when $ y_0 \neq r(0)/2 $ or $ y_1 \neq r'(0)/6 $. This reduces the problem to the case analysis for $ y_0 = r(0)/2 $, $ y_1 = r'(0)/6 $.
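The claim about the homogeneous Euler equation in the first bullet is easy to sanity-check numerically. Here is a minimal sketch (in Python rather than Mathematica, purely as an independent check):

```python
# Check that y(x) = c1/x + c2/x^2 solves the homogeneous Euler
# equation x^2 y'' + 4x y' + 2y = 0 away from x = 0.

def euler_residual(x, c1, c2):
    y = c1 / x + c2 / x**2
    yp = -c1 / x**2 - 2 * c2 / x**3      # y'(x)
    ypp = 2 * c1 / x**3 + 6 * c2 / x**4  # y''(x)
    return x**2 * ypp + 4 * x * yp + 2 * y

# The residual vanishes (up to rounding) for any c1, c2 and x != 0:
for x in (0.5, 1.0, -2.0, 7.0):
    assert abs(euler_residual(x, c1=3.0, c2=-5.0)) < 1e-9
```

This matches the indicial computation $ m(m-1) + 4m + 2 = (m+1)(m+2) = 0 $, which gives the exponents $ m = -1, -2 $.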

Differential Equations – Error when using NDSolve: cannot find initial conditions – bending beam

I have the following problem: I have a set of equations for the elasto-plastic response of a beam. I have broken the beam into two segments: the first segment is defined by the coordinates (x1, y1) and (x2, y2), and the second segment by (x2, y2) and (x3, y3).

Basically, all of the equations derived from kinematics and force equilibrium that I want to solve (numerically) are (I am only interested in the cases i = 1, 2, 3):

  1. $ \dfrac{x_{i+1} - x_{i}}{l} - \cos\theta_i = 0 $ (two equations in the system); $ \theta_i $ is the angle at the $i$-th beam joint.

  2. $ \dfrac{y_{i+1} - y_{i}}{l} - \sin\theta_i = 0 $ (two equations in the system)

  3. $ V_{Xi} - V_{Xi+1} - F_{Xi} = 0 $ (force balance; $ F_{Xi} $ denotes the external force and $ V_{Xi} $ the internal force vector. One assumption is that there are no forces $ F $ or $ V $ in the direction of the x-axis.)

  4. $ V_{Yi} - V_{Yi+1} - F_{Yi} = 0 $ (one assumption is that there is no external force $ F $ in the direction of the y-axis, so this simplifies to the relation $ V_{Yi} = V_{Yi+1} $)

  5. $ M_{i} - M_{i+1} + (V_{Xi} - \tfrac{1}{2} F_{Xi}) \, l \sin\theta_{i} - (V_{Yi} - \tfrac{1}{2} F_{Yi}) \, l \cos\theta_{i} = 0 $ (moment balance; using the assumptions above it simplifies to the form 5a) $ M_{i} - M_{i+1} - V_{Yi} \, l \cos\theta_{i} = 0 $ (two equations in the system). $ M_i $ is the bending moment at the beam joint.

  6. Constitutive (response) relation:
    $ \dot{M_{i}} = \left\lbrace 1 - H\left(M_i \dot{\kappa}_i\right)\left(1 + \tanh\left(\alpha H\left(1 - \left|\dfrac{M_i}{M_y}\right|\right)\left(\left|\dfrac{M_i}{M_y}\right| - 1\right)\right)\right)\right\rbrace EI \cdot \dot{\kappa}_i $ (one equation in the system; this relation holds only at the internal joints, which in this case means only i = 2). $ H $ denotes the Heaviside function, $ EI $ is the bending stiffness, $ \kappa $ is the curvature, $ \alpha $ is a coefficient, usually between 12 and 30, and $ M_y $ is the yield moment, which determines whether the response is plastic or elastic (depending on whether $ |M| $ is less than $ M_y $).

I assume the force at $ y_3 $ is applied via the step loading function $ f $. As $ f $ is not differentiable, I manually defined its "derivative" as $ g $. I have therefore differentiated equation 5a) so that $ g $ appears (hidden in each $ V_{Yi} $).

For the Heaviside function $ H $ I used the approximation $ H(x, n) \approx \dfrac{1}{2} + \dfrac{1}{\pi} \arctan(n x) $. For $ n $ set high enough, this is a sufficient approximation.
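As a quick sanity check of this smooth approximation (sketched in Python rather than Mathematica, purely for illustration):

```python
import math

def H(x, n):
    """Smooth arctangent approximation of the Heaviside step function."""
    return 0.5 + math.atan(n * x) / math.pi

# For large n the approximation is close to the exact step:
n = 10000
assert H(0.0, n) == 0.5       # exactly 1/2 at the jump
assert H(1.0, n) > 0.999      # ~1 for x > 0
assert H(-1.0, n) < 0.001     # ~0 for x < 0
```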

My code is below (including all initial and boundary conditions). The length of one segment $ l $ is 1, and I assume there are only two segments, with total length $ 2 l $. The discretization $ \kappa_i \approx \dfrac{\theta_i - \theta_{i-1}}{l} $ is used for the curvature $ \kappa $.

n := 10000;
H[x_, n_] := (1/2) + (1/Pi) ArcTan[n*x]; (*approximation of the Heaviside step function*)
My := 1; (*yield bending moment*)
EI := 0.5; (*Young modulus*)
l := 1; (*length of one segment*)
tlimit := 1; (*for the definition of the loading function*)
Alpha1 := 30; (*coefficient*)
f[t_] := Piecewise[{{-t, t <= tlimit}, {-tlimit, 
     t <= tlimit + 5}, {t - tlimit - 6, 
     t <= tlimit + 6}}]; (*loading function*)
g[t_] := Piecewise[{{-1, t <= tlimit}, {0, t <= tlimit + 5}, {1, 
    t <= tlimit + 
      6}}]; (*derivative of loading function with the 
problematic points defined by hand*)
eq = {x2[t] - x1[t] - l*Cos[theta1[t]] == 0,
  x3[t] - x2[t] - l*Cos[theta2[t]] == 0,
  y2[t] - y1[t] - l*Sin[theta1[t]] == 0,
  y3[t] - y2[t] - l*Sin[theta2[t]] == 
   0, (theta2'[t] - theta1'[t])*(1 - 
      H[(theta2'[t] - theta1'[t])*M2[t]/l, n])*(1 + 
      Tanh[Alpha1*H[1 - Abs[M2[t]/My], n]*(Abs[M2[t]/My] - 1)])*(EI/
      l) == M2'[t], 
  M1'[t] - M2'[t] == 
   g[t]*l*Cos[theta1[t]] - f[t]*l*Sin[theta1[t]]*theta1'[t],
  M2'[t] - M3'[t] == 
   g[t]*l*Cos[theta2[t]] - f[t]*l*Sin[theta2[t]]*theta2'[t],
  theta1[0] == 0,
  theta2[0] == 0,
  M1[0] == 0,
  M2[0] == 0,
  M3[0] == 0,
  y1[0] == 0,
  y2[0] == 0,
  y3[0] == 0,
  x1[0] == 0,
  x2[0] == l,
  x3[0] == 2*l,
  x1[t] == 0,
  y1[t] == 0,
  M3[t] == 0,
  theta1[t] == 0};
sol = NDSolve[eq, {x1, x2, x3, y1, y2, y3}, {t, 0, 100}]
Plot[Evaluate[x3[t] /. sol], {t, 0, 20}, PlotRange -> All]

Unfortunately, the code does not work. I get the error message "Unable to find initial conditions that satisfy the residual function within specified tolerances. Try giving initial conditions for both values and derivatives of functions."

Can anyone help me? Thanks in advance!

Ordinary Differential Equations – Can someone help me solve this ODE?


Let $ a $ be a fixed real constant. Consider the first-order partial differential equation

$ u_t + a u_x = 0, \quad x \in \mathbb R, \; t > 0, $ with the initial data $ u(x, 0) = u_0(x) $, $ x \in \mathbb R $, where $ u_0 $ is a continuously differentiable function.

There is a bounded function $ u_0 $ for which the solution $ u $ is unbounded.

If $ u_0 $ vanishes outside of a compact set, then for each fixed $ T > 0 $ there is a compact set $ K_T \subset \mathbb R $ such that $ u(x, T) $ vanishes for every $ x \notin K_T $.
Please help me prove the two facts above.
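A standard starting point for both statements (a well-known computation for this constant-coefficient transport equation, not specific to this post) is that $ u $ is constant along characteristics:

```latex
\frac{\mathrm d}{\mathrm dt}\, u(x_0 + a t,\, t)
  = u_t + a\, u_x = 0
  \quad\Longrightarrow\quad
  u(x, t) = u_0(x - a t).
```

So at any fixed time $ T $ the solution is just the translate of $ u_0 $ by $ aT $; in particular, if $ u_0 $ vanishes outside a compact set $ K $, then $ u(\cdot, T) $ vanishes outside the compact set $ K_T = \{ x + aT : x \in K \} $.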

differential equations – How to correctly display partial derivatives after a change of variable

I am currently looking for a way to easily nondimensionalize differential equations (such as the Navier–Stokes equations) in Mathematica, in order then to be able to look for an asymptotic expansion of the solution and deduce from it some leading-order equations.

I found, here, a piece of code that displays everything involving partial derivatives much more naturally, much as one would write it on paper. I had some success with it, and I got it to work for a few things. The version of the code below has been modified (not by me, as I am relatively new to Mathematica) in order to remove the arguments of functions not attached to partial derivatives.

pdConv[expr_] := 
 Module[{fns}, 
  fns = DeleteDuplicates[
    Cases[expr, Derivative[__][g__][__] :> g, Infinity]];
  TraditionalForm[
   expr /. {Derivative[inds__][g_][vars__] :> 
      Apply[Defer[D[g, ##]] &, 
       Transpose[{{vars}, {inds}}] /. {{var_, 0} :> 
          Sequence[], {var_, 1} :> {var}}], 
     a_[__] :> a /; MemberQ[fns, a]}]]

Executing the following code takes an equation that resembles the Navier–Stokes equation (with some constants) and nondimensionalizes it:

(image: the dimensional equation and the result of the change of variables)

I am converting the equation from dimensional variables (with a subscript "d") into dimensionless variables (without a subscript). While the desired algebraic procedure is performed, it is exasperating to see that the variables with respect to which the derivatives are taken are not fully simplified.

This also occurs for very simple expressions:

(image: a simple expression in which a factor of 2 remains inside a partial derivative)

The 2 should be pulled outside, not left inside the denominator of the partial derivative.

Can anyone suggest a simple remedy for this, so that either (a) the chain rule is applied, or (b) the partial derivatives are properly formatted once they have been computed?

Coupled differential equations

For example, I have

pde1 = -y1''[x] - (2*y1'[x])/x + (10^-19*(y1[x])^2 + y2[x])*y1[x] == 0;
pde2 = y2''[x] + (2*y2'[x])/x - (10^-25)*(y1[x])^2 == 0;
sol = NDSolve[{pde1, pde2, y1[1] == 0.001, y2[1] == -0.001, 
   y1[0.001] == 0.001, y2[0.001] == 0.001}, {y1, y2}, {x, 0.001, 30}]

{{y1 -> InterpolatingFunction[{{0., 30.}}, <>],
  y2 -> InterpolatingFunction[{{0., 30.}}, <>]}}

I need to plot the integral of the solution y1[x] multiplied by x^2.
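If the goal is to plot $ \int y_1(x) \, x^2 \, \mathrm dx $ from the numerical solution, the idea can be sketched as follows (Python used here as neutral pseudocode; `y1` is a hypothetical stand-in for the interpolating function returned by NDSolve):

```python
# Cumulative trapezoidal integral of y1(x) * x^2 over a grid.
def y1(x):
    return 0.001 / (1.0 + x)  # placeholder for the solver's solution

a, b, m = 0.001, 30.0, 1000
xs = [a + i * (b - a) / m for i in range(m + 1)]
vals = [y1(x) * x**2 for x in xs]

cumulative = [0.0]
for i in range(1, len(xs)):
    step = 0.5 * (vals[i - 1] + vals[i]) * (xs[i] - xs[i - 1])
    cumulative.append(cumulative[-1] + step)
# The pairs (xs[i], cumulative[i]) are what one would plot.
```

In Mathematica itself the analogous step would be something like `Plot[NIntegrate[(y1 /. First[sol])[s] s^2, {s, 0.001, x}], {x, 0.001, 30}]`, assuming `sol` from above.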

differential equations – solve the ODEs with the help of NDSolve

This is my code; I do not understand why it does not work. Please help me.

NDSolve[{Derivative[1][x]

differential equations – An element in a Jacobian divided by zero

I am linearizing the fixed points of a 2D dynamical system using Jacobians and have encountered something I have never seen before. I have one matrix element equal to

-x/(x+.25)

and an element equivalent to

-.5 + (a*x/(x + .25)).

The problem is that one of my fixed points is at (-.25, 0), so both denominators equal 0. Do I just evaluate them at 0 and -.5, respectively? I need to calculate eigenvalues and do not know how to handle this case. Here a is a real-valued parameter.
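As a purely numerical illustration (Python, with $ a $ set to 1 as an arbitrary choice), the two entries do not settle toward finite values as $ x \to -0.25 $; they diverge:

```python
# Evaluate the two Jacobian entries as x approaches the fixed
# point x* = -0.25 (a is a free real parameter; a = 1.0 is an
# arbitrary choice for this illustration).
a = 1.0

def entry1(x):
    return -x / (x + 0.25)

def entry2(x):
    return -0.5 + a * x / (x + 0.25)

for eps in (1e-2, 1e-4, 1e-6):
    x = -0.25 + eps
    print(f"x = {x}: entry1 = {entry1(x):.3g}, entry2 = {entry2(x):.3g}")
# Both entries grow without bound in magnitude as x -> -0.25,
# so they have no finite limit at the fixed point.
```

This suggests the linearization itself is singular there, rather than the entries taking the values 0 and -.5.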

Differential Equations – Can Mathematica Provide a Reliable Estimate of NDSolve's Numerical Error?

We can adapt the MonitorMethod:

Options[MonitorMethod] = {Method -> Automatic, 
   "MonitorFunction" -> 
    Function[{h, state, meth}, 
     Print[{"H" -> h, "SD" -> state@"SolutionData"}]]};
MonitorMethod /: 
  NDSolve`InitializeMethod[MonitorMethod, stepmode_, sd_, rhs_, 
   state_, OptionsPattern[MonitorMethod]] := Module[{submethod, mf},
   mf = OptionValue["MonitorFunction"];
   submethod = OptionValue[Method];
   submethod = 
    NDSolve`InitializeSubmethod[MonitorMethod, submethod, stepmode, 
     sd, rhs, state];
   MonitorMethod[submethod, mf]];
MonitorMethod[submethod_, mf_]["Step"[f_, h_, sd_, state_]] :=
  Module[{res},
   res = NDSolve`InvokeMethod[submethod, f, h, sd, state];
   If[Head[res] === NDSolve`InvokeMethod,
    Return[$Failed]]; (* submethod not valid for monitoring *)
   mf[h, state, submethod];
   If[SameQ[res[[-1]], submethod], res[[-1]] = Null, 
    res[[-1]] = MonitorMethod[res[[-1]], mf]];
   res];
MonitorMethod[___]["StepInput"] = {"Function"[All], "H", 
   "SolutionData", "StateData"};
MonitorMethod[___]["StepOutput"] = {"H", "SD", "MethodObject"};
MonitorMethod[submethod_, ___][prop_] := submethod[prop];

If the submethod implements the "StepError" property, this will return the estimate of the scaled step error. (The only way to know the real error is to know the exact solution and compare against it.) By "scaled", Mathematica means
$$ \text{scaled error}
= \frac{|\text{error}|}{10^{-\text{ag}} + 10^{-\text{pg}} \, |x|} \,, $$

which will be between 0 and 1 when the $ \text{error} $ satisfies the AccuracyGoal $ \text{ag} $ and the PrecisionGoal $ \text{pg} $.
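For concreteness, the same scaling can be written as a tiny helper (sketched in Python; this is an illustration of the formula above, not part of Mathematica's API):

```python
def scaled_error(error, x, ag, pg):
    """Mathematica-style scaled error:
    |error| / (10^-ag + 10^-pg * |x|).
    A value <= 1 means the AccuracyGoal (ag) / PrecisionGoal (pg)
    pair is satisfied at a result of size x."""
    return abs(error) / (10.0**-ag + 10.0**-pg * abs(x))

# An error of 1e-7 on a result of size 1 meets ag = pg = 6 ...
assert scaled_error(1e-7, 1.0, 6, 6) < 1.0
# ... while an error of 1e-3 does not:
assert scaled_error(1e-3, 1.0, 6, 6) > 1.0
```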

MonitorMethod takes a "MonitorFunction" option, which should have the form

Function[{h, state, meth}, <...body...>]

where h is the step size, state is the NDSolve`StateData object, and meth is the method object of the submethod.

Example of use:

{sol, {errdata}} = Reap[
   NDSolveValue[{x''[t] + x[t] == 0, x[0] == 1, x'[0] == 1}, 
    x, {t, 0, 2}, 
    Method -> {MonitorMethod, 
      "MonitorFunction" -> 
       Function[{h, state, meth}, 
        Sow[meth@"StepError", "ScaledStepError"]]}, 
    MaxStepFraction -> 1, WorkingPrecision -> 100, 
    PrecisionGoal -> 23, AccuracyGoal -> 50],
   "ScaledStepError"];

GraphicsRow[{
  ListLinePlot[Transpose@{Flatten@Rest@sol@"Grid", errdata},
   Mesh -> All, PlotRange -> {0, 1}, PlotRangePadding -> Scaled[.05], 
   PlotLabel -> "Scaled error estimate"],
  Show[
   Plot[Sin[t] + Cos[t], {t, 0, 2}, PlotStyle -> Red],
   ListLinePlot[sol, Mesh -> All],
   PlotRangePadding -> Scaled[.05], 
   PlotLabel -> "Steps on top of exact solution"]
  }]

(image: the scaled error estimate at each step, and the steps plotted on top of the exact solution)

In our example, we know the exact solution, so we can check the real error:

Block[{t = Flatten@sol@"Grid", data},
 data = Transpose@{t, (Sin[t] + Cos[t] - sol[t])/(
    10^-50 + 10^-23 Abs[Sin[t] + Cos[t]])};
 ListLinePlot[data,
  Epilog -> {PointSize@Medium, Tooltip[Point[#], N@Last@#] & /@ data},
   PlotRange -> All, PlotLabel -> "Actual scaled error"]
 ]

(image: the actual scaled error at the step points)

Of course, when I am this interested in the error, it is usually because I have reason to wonder whether the error estimates, which are based on discrete approximations of a function assumed to have a certain regularity, are in fact reliable.

differential equations – Can Mathematica print a reliable estimate of NDSolve's numerical error?

In the Mathematica documentation, the Details section for PrecisionGoal says that

"... Even though you can specify PrecisionGoal -> n, the results you get can sometimes have much less than n-digit precision ..."

and that

"... with PrecisionGoal -> p and AccuracyGoal -> a, the Wolfram Language attempts to make the numerical error in a result of size x be less than $ 10^{-a} + |x| \, 10^{-p} $ ..."

According to this, after setting for example WorkingPrecision -> 100, PrecisionGoal -> 23, AccuracyGoal -> 50, NDSolve can give an output with a precision of, say, only $ 19 $ significant figures.

How can we solve this problem? If we cannot, how do we know that the Mathematica output is less accurate than WorkingPrecision -> 100, PrecisionGoal -> 23, AccuracyGoal -> 50 would suggest? Can Mathematica print an estimate of the resulting numerical error?