mathematical optimization – Bug in NMaximize in 12.2?

It seems (clearly!) to be a bug in the preprocessing for the new convex optimizer, as the following trace shows. Use one of the other methods (e.g. "DifferentialEvolution") instead:

Trace[
 NMaximize[E^(-x^2) - 1, x],
 _Optimization`MinimizationProblem,
 TraceForward -> True,
 TraceInternal -> True
 ]

Workaround:

NMaximize[E^(-x^2) - 1, x, Method -> "DifferentialEvolution"]

(*  {0., {x -> -5.45643*10^-9}}  *)

Alternatively, you can turn off the convex minimizer:

Block[{Optimization`UseConvexMinimize = False},
 NMinimize[-(E^(-x^2) - 1), x]
 ]

(*   {0., {x -> -5.45643*10^-9}}  *)

How much Mathematical Knowledge do most people on Mathematics Stack Exchange have?

How much mathematical knowledge do most people on Mathematics Stack Exchange have, in terms of experience and highest level of education (high school, college, post-graduate)?

How can I get better at solving problems on this site? (I'm a 9th grader who has taken the AP Calculus AB Exam.)

(Sorry if my wording is confusing)

Which branch of computer science uses mathematical optimization the most?

I am beginning my master's degree in mathematics and will specialize in optimization theory. My minor is computer science, and I wanted to know which branch of computer science fits my specialization (optimization theory) best.

I know that machine learning involves some optimization theory (though I don't know how much…).

How do I build up mathematical intuition?

I will be a freshman in high school next spring. Recently I have felt that my math skills are weak: I can hardly figure out what to do in problems, and even if I try to look at one from a different angle it still feels very difficult. How do I go about building up mathematical intuition?

mathematical optimization – How to minimize that expression in four variables?

I mean
$\sqrt{w^2+(21-x)^2}+\sqrt{(20-w)^2+z^2}+\sqrt{x^2+(20-y)^2}+\sqrt{y^2+(21-z)^2}.$

The command

Minimize[Sqrt[x^2 + (20 - y)^2] + Sqrt[y^2 + (21 - z)^2] + 
 Sqrt[z^2 + (20 - w)^2] + Sqrt[w^2 + (21 - x)^2], {x, y, z, w}]

has been running on my computer for hours without returning. The numerical optimizations

NMinimize[Sqrt[x^2 + (20 - y)^2] + Sqrt[y^2 + (21 - z)^2] + 
 Sqrt[z^2 + (20 - w)^2] + Sqrt[w^2 + (21 - x)^2], {x, y, z, w}, 
 Method -> "DifferentialEvolution"]

{58., {x -> 11.579, y -> 8.97237, z -> 11.579, w -> 8.97237}}

and the same with Method->"RandomSearch"

{58., {x -> 10.5551, y -> 9.94753, z -> 10.5551, w -> 9.94753}}

and the same with Method->"NelderMead"

{58., {x -> 18.3218, y -> 2.55062, z -> 18.3218, w -> 2.55062}}

suggest that the optimal value under consideration is attained at many points.
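A geometric reading (my observation, not from the post) explains the degeneracy. All three numerical solutions satisfy $x=z$ and $y=w$, and under that symmetry the objective collapses to
\begin{align}
2\left[\sqrt{x^{2}+(20-y)^{2}}+\sqrt{(21-x)^{2}+y^{2}}\right]
&=2\left[\operatorname{dist}\bigl((x,y),(0,20)\bigr)+\operatorname{dist}\bigl((x,y),(21,0)\bigr)\right]\\
&\ge 2\sqrt{21^{2}+20^{2}}=2\cdot 29=58,
\end{align}
with equality for every $(x,y)$ on the segment from $(0,20)$ to $(21,0)$, i.e. on $20x+21y=420$. All three reported minimizers lie on this line, so within the symmetric family the value $58$ is attained on a whole segment of points. (Whether $58$ is also the global minimum over all four variables would need a separate argument.)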

mathematical optimization – Stopping condition for FindMinimum / FindMaximum

This seems like it should be a simple question, but I am looking to use a "home-made" stopping condition with FindMaximum while evaluating a very complex function.

Printing out the successive changes in the results for my problem (which cannot be boiled down to a quick code sample), I see that the relative differences in about 100 parameters are on the order of 10^-3 between steps (and often much smaller), which is more than enough for this application. The search nevertheless keeps dancing around the "right" answer until it hits MaxIterations, and adjusting PrecisionGoal and AccuracyGoal does not seem to help.

Q: Is there a way to implement a “stopping condition” option for these functions?
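There is no built-in option for this, but a common workaround (a sketch on a toy quadratic with my own tolerance tol; the poster's real objective is not available) is to Throw out of StepMonitor once the iterates stop moving, and Catch the result outside:

```mathematica
(* home-made stopping rule: once successive iterates differ by
   less than tol, Throw the current point out of the search *)
tol = 10^-3;
prev = None;
Catch[
 FindMaximum[-(x - 3)^2 - (y + 1)^2, {{x, 0}, {y, 0}},
  StepMonitor :> (
    If[prev =!= None && Norm[{x, y} - prev] < tol, Throw[{x, y}]];
    prev = {x, y})]
 ]
```

Catch returns either the thrown point (early stop) or the normal FindMaximum result if the tolerance is never met.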

bitcoin core – I cannot find a clear mathematical proof, with details and an example, for the claim of a "near-zero chance of generating the same key pair for a wallet"
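The usual argument (my sketch; the question body is missing here) is a birthday bound. A Bitcoin private key is a 256-bit number, so among $n$ independently generated keys the probability that any two collide satisfies
\begin{align}
P(\text{collision}) \approx 1-e^{-n^{2}/(2N)} \le \frac{n^{2}}{2N}, \qquad N=2^{256}.
\end{align}
Even for $n=10^{12}$ keys this bound is below $10^{-53}$, which is the precise sense of "near-zero chance".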



mathematical optimization – Maximize returns unevaluated even though there is a global maximum

I was trying to calculate the maximum of a gaussian plus a line, but Maximize return unevaluated:

In[8]:= Maximize[1/(σ*Sqrt[2*Pi])*Exp[-1/2*(x - μ)^2/σ^2] + s*x, x]
Out[8]= Maximize[s x + E^(-((x - μ)^2/(2 σ^2)))/(Sqrt[2 Pi] σ), x]


So I thought maybe it doesn't have an analytical form, so I removed the linear term, but I still get no answer, even though the maximum is obviously attained at x = μ.

In[9]:= Maximize[1/(σ*Sqrt[2*Pi])*Exp[-1/2*(x - μ)^2/σ^2], x]
Out[9]= Maximize[E^(-((x - μ)^2/(2 σ^2)))/(Sqrt[2 Pi] σ), x]


I'm not really trained in Mathematica, so I know I'm probably just doing something wrong; I just don't know what…

Thanks a lot 🙂
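A likely cause (my reading, not confirmed in the post): Maximize cannot decide the sign of the symbolic parameter σ, so it returns unevaluated. Supplying σ > 0 as a constraint is a sketch of a fix:

```mathematica
(* tell Maximize that σ is a positive parameter; σ > 0 is my
   added assumption about the intended standard deviation *)
Maximize[{1/(σ*Sqrt[2*Pi])*Exp[-1/2*(x - μ)^2/σ^2], σ > 0}, x]
```

This should place the maximum at x -> μ. Note that the version with the s*x term genuinely has no finite maximum for s ≠ 0, since the linear term dominates as x -> ±∞ while the Gaussian vanishes.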

How is the convergence or divergence of a mathematical function used in the design of the body of a program in a compiler such as C?

A function in C returns a value; how does this aspect of C programming relate to the convergence and divergence of a function in mathematics?

gm.general mathematics – Need suggestions for writing an equation in a simple mathematical way

I have an equation given by
\begin{align}
1+F(K_{1},K_{2};\beta)=0.
\end{align}

This equation is to be solved for the $\beta$'s ($\beta_1,\beta_2,\beta_3,\ldots,\beta_n$) for given values of $K_{1}$ and $K_{2}$. Both $K_{1}$ and $K_{2}$ can take the values $\epsilon$ and $\frac{1}{\epsilon}$, so four different equations can be obtained from the combinations. From each of these equations we get a set ($\beta_1,\beta_2,\beta_3,\ldots,\beta_n$), and for each set we can correspondingly construct functions $g(x;\beta_{1}),g(x;\beta_{2}),\ldots,g(x;\beta_{n})$.

Table showing different combinations of K_1 and K_2

Using these $g(x;\beta_{1}),g(x;\beta_{2}),\ldots,g(x;\beta_{n})$ for each set, I am interested in constructing one final equation:
\begin{align}
W(x)=\sum_{i=1}^{n}c_{i}g_{i}(x;\beta_{i})+\sum_{i=1}^{n}d_{i}g_{i}(x;\beta_{i})+\sum_{i=1}^{n}e_{i}g_{i}(x;\beta_{i})+\sum_{i=1}^{n}f_{i}g_{i}(x;\beta_{i})
\end{align}

So the question now is how to write $W(x)$ in an elegant manner, and how to generalize this procedure to the case $1+F(K_{1},K_{2},\ldots,K_{n};\beta)=0$, where I will have $2^{n}$ combinations.
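One conventional compression (my suggestion, not from the post) is to index the $2^{n}$ combinations of the $K$'s by a superscript $j$, so that the four coefficient families $c,d,e,f$ become $c^{(1)},\ldots,c^{(4)}$ and the four sums collapse into one double sum:
\begin{align}
W(x)=\sum_{j=1}^{2^{n}}\sum_{i=1}^{n}c_{i}^{(j)}\,g\bigl(x;\beta_{i}^{(j)}\bigr),
\end{align}
where $\beta_{i}^{(j)}$ denotes the $i$-th root of $1+F(\,\cdot\,;\beta)=0$ for the $j$-th combination of the $K$'s and $c_{i}^{(j)}$ is its coefficient. This form requires no new letters as the number of $K$'s (and hence of combinations) grows.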