mathematical optimization – Numerical Minimization for a function of five variables

I have the function

f[w_, x_, y_, α_, g_] = Sqrt[((w^2 + x^4) (1 + 2 w α^2 + α^4))/(2 x^2 w) - α^2 y]*
  Sqrt[((g w)^2/x^2) + (2 x^2)/w + (2 w (g α - 1)^2)/x^2]

with the restrictions

$$w \geq 1,$$

$$x > 0,$$

$$y, \alpha, g \in \mathbb{R}.$$

and I use NMinimize to find a numerical value for the minimum of the function f[w, x, y, α, g], that is,

NMinimize[{f[w, x, y, α, g], x > 0 && w >= 1},
  {w, x, {α, y, g} ∈ Reals}] // Quiet

and Mathematica shows the result

{2., {w -> 1.78095, x -> 1.33452, α -> -8.73751*10^-9,
  y -> 0.731324, g -> -2.98148*10^-8}}

On the other hand, looking in the documentation I find that I can specify a non-default method that might give a better (more accurate) solution, for example DifferentialEvolution; that is,

NMinimize[{f[w, x, y, α, g], x > 0 && w >= 1},
  {w, x, {α, y, g} ∈ Reals},
  Method -> "DifferentialEvolution"] // Quiet

giving the result

{1.09831, {w -> 1.00016, x -> 0.962037, α -> 0.276323,
  y -> 11.3393, g -> -0.0477925}}

Therefore, I have the question:

What is the best method (in Mathematica) to obtain the most accurate value for the true minimum of the function?

P.S.
I am a novice with the NMinimize command.
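
One workflow I am considering (only a sketch, not a definitive answer; the particular method list, seeds, and working precision are arbitrary choices of mine) is to run NMinimize with several methods and random seeds, keep the best candidate, and then polish it with FindMinimum at higher working precision:

(* try several global methods and seeds, keep the best, then polish locally *)
candidates = Quiet@Flatten[
    Table[
     NMinimize[{f[w, x, y, α, g], x > 0 && w >= 1}, {w, x, α, y, g},
      Method -> {m, "RandomSeed" -> seed}],
     {m, {"NelderMead", "DifferentialEvolution", "SimulatedAnnealing", "RandomSearch"}},
     {seed, {1, 2, 3}}], 1];
best = First[SortBy[candidates, First]];
start = SetPrecision[{w, x, α, y, g} /. Last[best], 25];
FindMinimum[{f[w, x, y, α, g], x > 0 && w >= 1},
 Transpose[{{w, x, α, y, g}, start}], WorkingPrecision -> 25]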

mathematical optimization – FindMaximum::eit: The algorithm does not converge to the tolerance of 4.806217383937354`*^-6 in 500 iterations

This is an extended comment about your pdf not integrating to 1.

I’ve typed in the code from the image you posted and I think it looks exactly like the image:

dist = ProbabilityDistribution[
   (1 - Exp[λ*(1 - Exp[1])])^(-1)*λ*β*(γ/α)*(x/α)^(γ - 1)*
    (1 - Exp[-(x/α)^γ])^(β - 1)*
    Exp[λ*(1 - Exp[-(x/α)^γ])^β - (x/α)^γ + (1 - Exp[-(x/α)^γ])],
   {x, 1, ∞}, Assumptions -> {λ > 0, β > 0, α > 0, γ > 0}]

My result as an image: (screenshot omitted)

Your image of the resulting distribution: (screenshot omitted)

But if a set of parameters is tried, the pdf does not integrate to 1:

parms = {λ -> 2, α -> 1, β -> 1, γ -> 1};
Integrate[PDF[dist /. parms, x], {x, 1, ∞}] // N
(* 9.2468 *)

There is an option for ProbabilityDistribution to normalize the pdf (Method -> "Normalize"), but the proposed pdf seems too complicated for that to work on the symbolic distribution. (In other words, doing so does not return an answer in a reasonable or unreasonable amount of time.)
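
If all you need is a properly normalized pdf at specific parameter values, one workaround (just a sketch using the parms above; it assumes the functional form is otherwise what you intended) is to normalize numerically rather than symbolically:

(* normalize numerically at the fixed parameter values in parms *)
norm = NIntegrate[PDF[dist /. parms, x], {x, 1, ∞}];
pdfNormalized[x_?NumericQ] := PDF[dist /. parms, x]/norm;
NIntegrate[pdfNormalized[x], {x, 1, ∞}]
(* ≈ 1. *)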

numerics – Improving precision of numerical optimization results

My optimization problem contains an equality constraint ($f(x)=0$), but I learned that numerical methods have trouble with this, so I define an interval instead ($-0.1 \leq f(x) \leq 0.1$). However, the results of my code are all over the place; specifically, they do not show the relatively smooth behavior that I expected. I have already played around with decreasing the size of the interval and running more iterations, but it won't work out.

Do you have any idea how I can smooth out my results? (There should be at most one local maximum and one local minimum.)
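
To make the setup concrete, here is a minimal toy sketch of the two options I am weighing; g and f below are placeholders of my own, not the actual model in the code that follows. Relaxing f == 0 to an interval is what I do now; penalizing its violation in the objective would be the alternative:

g[u_, v_] := -(u - 1)^2 - (v - 2)^2;   (* toy objective to maximize *)
f[u_, v_] := u + v - 3;                (* toy equality constraint f == 0 *)
κ = 10^4;                              (* penalty weight *)
NMaximize[{g[u, v] - κ f[u, v]^2, -10 <= u <= 10 && -10 <= v <= 10},
 {u, v}, Method -> "DifferentialEvolution"]
(* ≈ {0., {u -> 1., v -> 2.}} *)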

Clear["Global`*"]
def[s_, cs_] := cs s
main[S_, cm_] := cm S
y[S_, Tp_, tech_] := Sqrt[S (Tp + tech)]
X = {};
delta = 1; M = 2; V = 2; dec = 1; db = 10^8; ts = 0.3; fs = 30; re = 25; l = 0;
For[K = 1, K < 100, K += 100,
 For[cs = 1, cs < 100, cs += 100,
  For[cm = 0, cm < 100, cm += 100,
   For[p = 50, p < 100, p += 50,
    For[pm = 1, pm < 100, pm += 100,
     For[S = 0.7, S < 3, S += 0.4,
      For[cb = 1, cb < 100, cb += 100,
       For[eps = 5, eps < 100, eps += 100,
        For[epsm = 1, epsm < 100, epsm += 100,
         For[tech = 1, tech < 100, tech += 100,
          For[re = 77, re < 78, re += 1,
           For[h = 1, h < 51, h += 50,

            {tc1, s1, C11, b1} = Values[Last[
               NMaximize[{C1 - b db - cm (s + S) - re tc -
                   cb (M + V) +
                   epsm pm Sqrt[(s + dec S) (M + tech + V)] + h M 0,
                  Simplify[
                   And @@ {0 <= tc, tc + 0 ts <= M, C1 >= 0, s >= 0,
                     b >= 0,
                     K + eps p y[S, V + delta tc, tech] - def[s, cs] -
                       main[S, cm] - sch[fs] 0 - C1 + b >= -0.1,
                     K + eps p y[S, V + delta tc, tech] - def[s, cs] -
                       main[S, cm] - fs 0 - C1 + b <= 0.1,
                     C1 >= cb (M + V), (s + dec S) (M + tech + V) >= 0,
                     y[S, V + delta tc, tech]^2 >= 0}]}, {tc, s, C1,
                  b}, Method -> {"DifferentialEvolution",
                   "InitialPoints" -> ({tc, s, C1, b} /.
                      FindInstance[{K + eps p y[S, V + delta tc, tech] -
                          def[s, cs] - main[S, cm] - sch[fs] 0 - C1 + b >= 0,
                         (s + dec S) (M + tech + V) >= 0,
                         y[S, V + delta tc, tech]^2 >= 0}, {tc, s, C1, b},
                        20])}, MaxIterations -> 1*10^3]]];
            AppendTo[X, {S, tc1}]
            ]
           ]]]]]]]]]]]
ListPlot[X]

mathematical optimization – FindMaximum works on desktop but not on laptop

I have some simple code to find maxima, as follows.

myfunc = {-1 + 2/(1 + d), 1/2 (-1 + 1/d), 1 - d, 1 + 1/(-2 + d),
   1 - d, 1 - d, -1 + 1/d, -1 + 1/d, 1/(1 + d), 1/(2 d), 1/d, 1/d,
   d/(1 - d)};
FindMaximum[{##, 0 <= d <= 1}, d] & /@ myfunc

It works well on my desktop and the result is:

{{1., {d -> 0.}}, {∞, {d -> 0.}}, {1., {d -> 0.}}, {0.5, {d -> 0.}},
 {1., {d -> 0.}}, {1., {d -> 0.}}, {∞, {d -> 0.}}, {∞, {d -> 0.}},
 {1., {d -> 0.}}, {∞, {d -> 0.}}, {∞, {d -> 0.}}, {∞, {d -> 0.}},
 {∞, {d -> 1.}}}

However, I get errors with $Failed when I run it on my laptop.

{{1., {d -> 3.54538*10^-8}}, {-$Failed, {d -> 0.}}, {1., {d -> 0.}},
 {0.5, {d -> 0.}}, {1., {d -> 0.}}, {1., {d -> 0.}},
 {-$Failed, {d -> 0.}}, {-$Failed, {d -> 0.}}, {1., {d -> 0.}},
 {∞, {d -> Indeterminate}}, {∞, {d -> 0.}}, {∞, {d -> 0.}},
 {∞, {d -> Indeterminate}}}

Why does this happen? How can I solve this?
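
One workaround I am considering (just a guess on my part; it sidesteps rather than explains the desktop/laptop difference) is to keep the search away from the singular point d == 0 by bounding d away from zero and giving an explicit interior starting point:

FindMaximum[{#, $MachineEpsilon <= d <= 1}, {d, 0.5}] & /@ myfunc // Quiet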

postgresql – query performance and optimization of Postgres RDS DB – ltree

I have an AWS RDS m5.large Postgres 10 database that performs a lot of the following queries:

SELECT "bundles".* FROM "bundles" WHERE "bundles"."version_id" = $1 AND (tree_path ~ ?) LIMIT $2

The problem is the poor performance of the overall system. Via advanced monitoring we see a very high value for current activity (monitoring screenshot omitted), and it seems that the aforementioned query has some sort of impact on the load by waits (wait-events screenshot omitted).

What do you suggest I check? I'm not a DBA, so I can't judge whether those queries are efficient.

mathematical optimization – Minimization of constrained variable

I am trying to perform a minimization of a variable, but NMinimize does not seem to be what I need (it minimizes a function, but my variable is inside a function).

I want to minimize “h” with respect to “P0” and “yp” with the following constraints:

 c1 <= F(h,P0,yp) <= c2
 c1 <= G(h,P0,yp) <= c2
 c1 <= H(h,P0,yp) <= c2
 c1 <= J(h,P0,yp) <= c2
 h>c5
 P0>0
 c3<yp<c4

I tried:

 NMinimize[{h, constraints}, {P0, yp}]

but it does not work.

h, P0, and yp are all variables, not functions.
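
For concreteness, here is a minimal runnable sketch of the kind of setup I mean; F and G are toy placeholder functions, the constants c1, ..., c5 are made up, and h itself is included in the variable list:

F[h_, P0_, yp_] := h P0 - yp;    (* toy placeholder for the real F *)
G[h_, P0_, yp_] := h + P0 + yp;  (* toy placeholder for the real G *)
c1 = 0; c2 = 10; c3 = -1; c4 = 1; c5 = 0.1;
NMinimize[{h,
   c1 <= F[h, P0, yp] <= c2 && c1 <= G[h, P0, yp] <= c2 &&
    h > c5 && P0 > 0 && c3 < yp < c4},
  {h, P0, yp}]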

c++ – Strlen function optimization

This seems like the obvious choice for searching within a string. However, while pcmpistri is very general/powerful, it is also not very fast. On typical Intel processors it consists of 3 µops that all go to execution port p0 (therefore limiting this loop to running at best one iteration every 3 cycles); on AMD Zen (1/2) it is slightly less bad, coming in at 2 µops and executing once every 2 cycles.

There is a far more primitive approach (just using SSE2) based on pcmpeqb and pmovmskb. That leaves you with a mask instead of an index, but for most of the loop that doesn't matter (all that matters is whether the mask is zero or not), and in the final iteration you can use tzcnt (or similar) to find the actual index of the zero byte within the vector.

That technique also scales to AVX2, which pcmpistri does not. Additionally, you could use some unrolling: pminub some successive blocks of 16 bytes together to go through the string quicker at first, at the cost of a trickier final iteration and a more complex pre-loop setup (see the next point).

While an aligned load that contains at least one byte of the string is safe even if it pulls in some data outside the bounds of the string (an aligned load cannot cross a page boundary), that trick is unsafe for unaligned loads. A string that ends near a page boundary could cause this function to read into the next page and possibly trigger an access violation.

There are different ways to fix that. The obvious one is using the usual byte-by-byte loop until a sufficiently aligned address is reached. A more advanced trick is rounding the address down to a multiple of 16 (32 for AVX2) and doing an aligned load. That load contains bytes that are not part of the string, possibly including a zero, so those bytes must be explicitly ignored, for example by shifting the mask that pmovmskb returned to the right by data & 15. If you decide to add unrolling, the address for the main loop should be even more aligned, to guarantee that all the loads in the main loop body are safe.

Different optimal results of SCE-UA optimization algorithm in Julia and MATLAB

I posted this question on Stack Overflow, but some people suggested that I post it here.

I rewrote the SCE-UA optimization algorithm in Julia.

However, when I run my script in Julia, the optimized value is quite different from the result in MATLAB. The global best cost in MATLAB is 2.4598e-63, while the global best cost in Julia is 8.264629809290885e-8. This means something went wrong. The objective function is z = sum(x.^2) for both scripts, and the search range and parameters for the algorithm are also the same. I checked again and again to make sure the algorithms are the same, but I could not find out why the results are so different.

Could someone please check whether these two scripts differ? Please give me any suggestions, thanks a lot.

The SCE-UA code in MATLAB was written by Yarpiz and can be found at https://github.com/smkalami/ypea110-shuffled-complex-evolution/archive/master.zip

The code in Julia is as follows

using UnPack
using Plots
using StatsBase

mutable struct Pop
    position::Vector{Float64}
    cost::Float64
end

# CCE parameters
mutable struct CCE_params
    q::Int64
    alpha::Int64
    beta::Int64
    lb::Vector{Float64}
    ub::Vector{Float64}
    obj_func
end

# SCE parameters
mutable struct SCE_params
    max_iter::Int64
    n_complex::Int64
    n_complex_pop::Int64
    dim::Int64
    lb::Vector{Float64}
    ub::Vector{Float64}
    obj_func
end

import Base.isless
isless(a::Pop, b::Pop) = isless(a.cost, b.cost)

function cost_func(x)
    return sum(x.^2)
end

function uniform_rand(lb::Array{Float64, 1}, ub::Array{Float64, 1})
    dim = length(lb)
    arr = rand(dim) .* (ub .- lb) .+ lb
    return arr
end
# SCE parameters
max_iter = 500
n_complex = 5
n_complex_pop = 10
dim = 10
lb = ones(dim) * -10
ub = ones(dim) * 10
obj_func = cost_func
n_complex_pop = max(n_complex_pop, dim+1) # Nelder-Mead Standard
sce_params = SCE_params(max_iter, n_complex, n_complex_pop, dim, lb, ub, obj_func)

# CCE parameters
cce_q = max(round(Int64, 0.5*n_complex_pop), 2)
cce_alpha = 3
cce_beta = 5

cce_params = CCE_params(cce_q, cce_alpha, cce_beta, lb, ub, obj_func)

function SCE(sce_params, cce_params)
    @unpack max_iter, n_complex, n_complex_pop, dim, lb, ub, obj_func = sce_params

    n_pop = n_complex * n_complex_pop
    I = reshape(1:n_pop, n_complex, :)

    # Step 1. Generate rand_sample
    best_costs = Vector{Float64}(undef, max_iter)

    pops = Pop[]  # typed empty vector; filled by push! below
    for i in 1:n_pop
        pop_position = uniform_rand(lb, ub)
        pop_cost = obj_func(pop_position)
        pop = Pop(pop_position, pop_cost)
        push!(pops, pop)
    end
    complex = Array{Pop}(undef, n_complex_pop, n_complex)

    # Step 2. Rank Points
    sort!(pops)
    best_pop = pops[1]

    # Main loop
    for iter in 1:max_iter

        # Step 3. Partion into complexes
        for j in 1:n_complex
            complex[:, j] = deepcopy(pops[I[j, :]])
            # Step 4. Evolve complex, run CCE
            complex[:, j] = CCE(complex[:, j], cce_params)
            pops[I[j, :]] = deepcopy(complex[:, j])
        end
        # Step 5. Shuffle Complexes

        sort!(pops)
        best_pop = pops[1]

        best_costs[iter] = best_pop.cost

        # Show Iteration Information
        println("Iter = ", iter)
        println("The Best Cost is: ", best_costs[iter])
    end
    best_costs
end

function rand_sample(P, q)
    L = Vector{Int64}(undef, q)
    for i in 1:q
        L[i] = sample(1:length(P), Weights(P))
        # L[i] = sample(1:sizeof(P), weights(P), 1, replace=true)
    end
    return L
end

function not_in_search_space(position, lb, ub)
    return any(position .<= lb) || any(position .>= ub)
end

function CCE(complex_pops, cce_params)
    # Step 1. Initialize
    @unpack q, alpha, beta, lb, ub, obj_func = cce_params
    n_pop = length(complex_pops)

    # Step 2. Assign weights
    P = [2*(n_pop+1-i) / (n_pop*(n_pop+1)) for i in 1:n_pop]

    # Calculate Population Range (Smallest Hypercube)
    new_lb = complex_pops[1].position
    new_ub = complex_pops[1].position
    for i in 2:n_pop
        new_lb = min.(new_lb, complex_pops[i].position)
        new_ub = max.(new_ub, complex_pops[i].position)
    end

    # CCE main loop
    for it in 1:beta
        # Step 3. Select parents
        L = rand_sample(P, q)
        B = complex_pops[L]

        # Step 4. Generate Offspring
        for k in 1:alpha
            # a) Sort population
            sorted_indexs = sortperm(B)
            sort!(B)
            L[:] = L[sorted_indexs]

            # Calculate the centroid
            g = zeros(length(lb))
            for i in 1:q-1
                g .= g .+ B[i].position
            end
            g .= g ./ (q-1)

            # b) Reflection step
            reflection = deepcopy(B[end])
            reflection.position = 2 .* g .- B[end].position # newly generated point using reflection
            if not_in_search_space(reflection.position, lb, ub)
                reflection.position = uniform_rand(new_lb, new_ub)
            end
            reflection.cost = obj_func(reflection.position)

            if reflection.cost < B[end].cost
                B[end] = deepcopy(reflection)
            else # Contraction
                contraction = deepcopy(B[end])
                contraction.position = (g .+ B[end].position) ./ 2
                contraction.cost = obj_func(contraction.position)

                if contraction.cost < B[end].cost
                    B[end] = deepcopy(contraction)
                else
                    B[end].position = uniform_rand(new_lb, new_ub)
                    B[end].cost = obj_func(B[end].position)
                end
            end

        end

        complex_pops[L] = B
    end
    return complex_pops
end
best_costs = SCE(sce_params, cce_params)

plot(best_costs, yaxis=:log, label = "cost")

# savefig("Julia.png")

Both scripts run normally with just one click.

The result of the SCE-UA algorithm from MATLAB is as follows (convergence plot omitted).

The result from Julia is as follows (convergence plot omitted).

Obviously, the convergence curves are quite different. The result from Julia is not accurate enough, but I don’t know why.

Can Mathematica solve matrix-based parametric (convex or semidefinite) constrained optimization problems?

I have gone through Mathematica’s documentation and guides on ConvexOptimization, ParametricConvexOptimization and SemidefiniteOptimization. I am also running the latest version of Mathematica.

The kind of matrix-based, parametric, constrained optimization problems I want to solve is this:

\begin{equation}
\min_{X_j,\, L_{jk}} \text{Tr}(A L) \\
\text{such that } X_j \text{ and } L_{jk} \text{ are } 4\times 4 \text{ Hermitian matrices,} \\
G_k \cdot X_j = \delta_{jk}, \\
L := \begin{bmatrix} L_{11} & L_{12} & L_{13} \\ L_{12} & L_{22} & L_{23} \\ L_{13} & L_{23} & L_{33} \end{bmatrix} \succeq \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} \begin{bmatrix} X_1 & X_2 & X_3 \end{bmatrix}
\end{equation}

where the variables to be optimized over are $X_j$ and $L_{jk}$ ($j$ and $k$ run from 1 to 3), which are themselves matrices! The matrices $G_k$ and $A$ depend on some parameter $\alpha$ (and satisfy additional properties).

I have been able to run this kind of optimization in MATLAB, and also a much simpler version of this in Mathematica, where $j, k=1$ and the parameter value is fixed, using,

ConvexOptimization[
 Tr[ρ0 . v11], {VectorGreaterEqual[{v11, x1}, "SemidefiniteCone"] &&
   Tr[ρ0 . x1] == 0 && Tr[ρ1 . x1] == 1 &&
   Element[x1, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[v11, Matrices[{4, 4}, Complexes, Hermitian]]}, {x1, v11}]

However, I simply cannot get the full problem to run in Mathematica, using either ConvexOptimization (at fixed parameter values), ParametricConvexOptimization, SemidefiniteOptimization, or Minimize.

ConvexOptimization at fixed parameter values for $j, k = 1, 2$ shows the warning ConvexOptimization::ctuc: The curvature (convexity or concavity) of the term X1.X2 in the constraint {{L11, L12}, {L12, L22}} ⪰ {{X1.X1, X1.X2}, {X1.X2, X2.X2}} (with respect to the semidefinite cone) could not be determined.

Minimize shows the error Minimize::vecin: Unable to resolve vector inequalities ...

And ParametricConvexOptimization and SemidefiniteOptimization simply return the input as output.

Has anyone got some experience with running such matrix-based optimizations in Mathematica? Thanks for your help.

EDIT 1:
For the two-dimensional case ($j, k=1, 2$) I tried (with $A$ the identity matrix, and at a fixed parameter value):

ConvexOptimization[
 Tr[Tr[ArrayFlatten[{{L11, L12}, {L12, L22}}]]],
 {VectorGreaterEqual[{ArrayFlatten[{{L11, L12}, {L12, L22}}],
     ArrayFlatten[{{X1 . X1, X1 . X2}, {X1 . X2, X2 . X2}}]},
    "SemidefiniteCone"] &&
   Tr[ρ0 . X1] == 0 && Tr[ρ0 . X2] == 0 &&
   Tr[ρ1 . X1] == 1 && Tr[ρ1 . X2] == 0 &&
   Tr[ρ2 . X1] == 0 && Tr[ρ2 . X2] == 1 &&
   Element[X1, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[X2, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[L11, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[L12, Matrices[{4, 4}, Complexes, Hermitian]] &&
   Element[L22, Matrices[{4, 4}, Complexes, Hermitian]]},
 {X1, X2, L11, L12, L22}]

and for the three-dimensional case ($j, k = 1, 2, 3$) with variable parameter value and $A$ the identity matrix, I tried

ParametricConvexOptimization[
 Tr[Tr[ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}]]],
 {VectorGreaterEqual[{ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}],
    ArrayFlatten[{{X1}, {X2}, {X3}}] .
     Transpose[ArrayFlatten[{{X1}, {X2}, {X3}}]]},
   "SemidefiniteCone"],
  Tr[ρ0 . X1] == 0, Tr[ρ0 . X2] == 0, Tr[ρ0 . X3] == 0,
  Tr[ρ1 . X1] == 1, Tr[ρ1 . X2] == 0, Tr[ρ1 . X3] == 0,
  Tr[ρ2 . X1] == 0, Tr[ρ2 . X2] == 1, Tr[ρ2 . X3] == 0,
  Tr[ρ3 . X1] == 0, Tr[ρ3 . X2] == 0, Tr[ρ3 . X3] == 1},
 {Element[X1, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[X2, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[X3, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L11, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L12, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L13, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L22, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L23, Matrices[{4, 4}, Complexes, Hermitian]],
  Element[L33, Matrices[{4, 4}, Complexes, Hermitian]]},
 {α}]

Here, the $\rho_k$ matrices are the $G_k$ matrices.
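
For what it is worth, one standard reformulation I have been looking at (only a sketch, not verified on the full problem; bigX, bigL, and the cone dimension below are my own names and assumptions) uses a Schur complement: $L \succeq X X^\dagger$ is equivalent to the linear semidefinite constraint $\begin{bmatrix} I & X^\dagger \\ X & L \end{bmatrix} \succeq 0$, which would avoid the quadratic terms like X1.X2 that trigger the ctuc warning:

(* Schur-complement form of L ⪰ X.Xᴴ, written in the same ArrayFlatten style as above *)
bigX = ArrayFlatten[{{X1}, {X2}, {X3}}];                                   (* 12×4 block column *)
bigL = ArrayFlatten[{{L11, L12, L13}, {L12, L22, L23}, {L13, L23, L33}}];  (* 12×12 *)
schur = ArrayFlatten[{{IdentityMatrix[4], ConjugateTranspose[bigX]}, {bigX, bigL}}];
schurConstraint = VectorGreaterEqual[{schur, ConstantArray[0, {16, 16}]}, "SemidefiniteCone"];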

What Are Some Of The Best Practices For Mobile Strategy Optimization?

What are some of the best practices for mobile strategy optimization?