**Background**

I have a function $J$ that I am minimizing, but it is too expensive to minimize computationally. I derived an upper bound on $J$ (denoted by $\overline{J}$) that is not so hard to compute, and I believe the arguments that minimize $\overline{J}$ are “close” to the arguments that minimize $J$. Thus, minimizing $\overline{J}$ seems to be a good approximation for minimizing $J$.

However, there is a catch. The upper bound $\overline{J}$ includes an unknown constant $\theta$. Essentially, $\theta$ is a positive number that controls the tightness of $\overline{J}$ (if $\theta$ is chosen poorly, $\overline{J}$ will no longer be an upper bound), but in this case the tightness of the bound is not important; what matters is the arguments minimizing $\overline{J}$. Based on some numerical experiments, the arguments that minimize $\overline{J}$ do not seem to change “much” as $\theta$ changes. I am trying to see how the arguments minimizing $\overline{J}$ change with respect to $\theta$.

**Question**

Consider a function $f(x_1, x_2, \cdots, x_n, \theta)$ where $\theta$ is a constant; I write $\theta$ as an argument only to emphasize that the function contains this important parameter. Now, consider

$$\text{argmin}_{x_1, x_2, \cdots, x_n} \; f(x_1, x_2, \cdots, x_n, \theta)$$

where $\mathbf{x} \in \mathbb{R}^n$ with $\mathbf{x} = (x_1, x_2, \cdots, x_n)$.

My question is

- How can I show that the minimizing arguments of $f$ (i.e., $x_1, x_2, \cdots, x_n$) are not affected by the choice of $\theta$?
- If the choice of $\theta$ does affect the minimizing arguments (which is most likely the case), how can I determine how much $\theta$ affects the arguments that minimize $f$?
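To make the second question concrete, one crude numerical probe is a finite difference of the minimizer with respect to $\theta$. This is a generic sketch, not my actual setup: `argmin_xy` is a hypothetical stand-in for whatever numerical minimizer is used, and the toy $f$ at the bottom is only for illustration.

```python
# Finite-difference probe of how the minimizer moves with theta.
# `argmin_xy` and `sensitivity` are hypothetical helpers for illustration.

def argmin_xy(f, x0, lr=0.1, steps=5000):
    """Crude gradient descent using central-difference gradients of f."""
    h = 1e-6
    x = list(x0)
    for _ in range(steps):
        g = []
        for i in range(len(x)):
            xp = list(x); xp[i] += h
            xm = list(x); xm[i] -= h
            g.append((f(xp) - f(xm)) / (2 * h))
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def sensitivity(f_of_theta, theta, x0, d=1e-3):
    """Max coordinate-wise rate of change of the minimizer w.r.t. theta."""
    xp = argmin_xy(f_of_theta(theta + d), x0)
    xm = argmin_xy(f_of_theta(theta - d), x0)
    return max(abs(a - b) for a, b in zip(xp, xm)) / (2 * d)

# Toy instance: f(x, y, theta) = (x + theta)**2 + y**2, whose minimizer
# is x = -theta, y = 0, so the sensitivity should be about 1.
f_of_theta = lambda t: (lambda p: (p[0] + t) ** 2 + p[1] ** 2)
print(sensitivity(f_of_theta, 1.0, [0.5, 0.5]))
```

A large value of this quotient flags a $\theta$-sensitive minimizer; for the continuous case it approximates the derivative of the argmin with respect to $\theta$.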

I should note that I am working on both continuous and discrete versions of this problem, so I am trying to understand how to approach it in general. While the specific problem I am working with is not so simple, here are two simple examples to illustrate my question.

**Example 1:** Consider

$$ f(x,y,c) = cx^2 + y^2 $$

For any $c > 0$, the minimizer is $x = 0$ and $y = 0$ regardless of the particular value of $c$. This is simple to see for this function.
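Example 1 can be double-checked numerically. Here is a minimal sketch using plain gradient descent (the helper `minimize_gd` is hypothetical, chosen only because this $f$ is smooth and convex for $c > 0$):

```python
# Numerical check of Example 1: minimize f(x, y) = c*x**2 + y**2
# for several c > 0 and confirm the minimizer stays at (0, 0).
# `minimize_gd` is a hypothetical helper, not part of the original question.

def minimize_gd(grad, x0, lr=0.1, steps=2000):
    """Plain gradient descent; grad maps a point to its gradient list."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

for c in [0.5, 1.0, 10.0]:
    # Gradient of c*x**2 + y**2 is (2*c*x, 2*y).
    grad_f = lambda p, c=c: [2 * c * p[0], 2 * p[1]]
    x_star = minimize_gd(grad_f, [1.0, 1.0], lr=0.05)
    print(c, [round(v, 6) for v in x_star])
```

Each run returns a point numerically indistinguishable from $(0, 0)$, independent of $c$.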

**Example 2:** Consider the following two functions:

$$ f_1(x, y, c) = (x + 0.00001c)^2 + y^2 $$

$$ f_2(x,y,c) = (x+c)^2 + y^2 $$

The value of $c$ in the first equation $f_1$ affects the minimizing arguments far less than the value of $c$ in the second equation $f_2$: the minimizer of $f_1$ is $(x, y) = (-0.00001c,\, 0)$, while the minimizer of $f_2$ is $(-c,\, 0)$.
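The contrast in Example 2 can be seen numerically as well. This sketch minimizes both functions for a few values of $c$ using plain gradient descent (the helper `argmin_gd` is hypothetical, for illustration only):

```python
# Numerical illustration of Example 2: the minimizer of
# f1(x, y, c) = (x + 0.00001*c)**2 + y**2 barely moves with c,
# while the minimizer of f2(x, y, c) = (x + c)**2 + y**2 tracks -c.
# `argmin_gd` is a hypothetical helper, not part of the original question.

def argmin_gd(grad, x0, lr=0.1, steps=5000):
    """Plain gradient descent; grad maps a point to its gradient list."""
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

for c in [0.0, 1.0, 2.0]:
    # Gradients of f1 and f2 with respect to (x, y).
    g1 = lambda p, c=c: [2 * (p[0] + 1e-5 * c), 2 * p[1]]
    g2 = lambda p, c=c: [2 * (p[0] + c), 2 * p[1]]
    x1 = argmin_gd(g1, [1.0, 1.0])
    x2 = argmin_gd(g2, [1.0, 1.0])
    print(f"c={c}: argmin f1 x={x1[0]:.6f}, argmin f2 x={x2[0]:.6f}")
```

The minimizing $x$ of $f_1$ moves by only $10^{-5}$ per unit of $c$, while that of $f_2$ moves one-for-one with $c$, matching the analytic minimizers above.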