complexity theory – Multiple Variables in Asymptotic Notation

I am trying to understand the multiple-variable definition of asymptotic notation, particularly the definition on Wikipedia. It is also discussed in Asymptotic Analysis for two variables?, but I think that answer is wrong; at best it is corrected in the comments, with a reference to a lengthy answer. What I am looking for is just an answer to my confusion about the example given there. Wikipedia says,

Big $O$ (and little $o$, $\Omega$, etc.) can also be used with multiple
variables. To define big $O$ formally for multiple variables, suppose
$f$ and $g$ are two functions defined on some subset of $\mathbb{R}^{n}$.

We say $f(\vec{x})$ is $O(g(\vec{x}))$ as
$\vec{x} \rightarrow \infty$ if and only if $\exists M\, \exists C>0$ such that for all $\vec{x}$ with $x_{i} \geq M$ $\textbf{for some } i$, $|f(\vec{x})| \leq C|g(\vec{x})|$.

… For example, if $f(n, m)=1$ and $g(n, m)=n$,
then $f(n, m)=O(g(n, m))$ if we restrict $f$ and $g$ to $(1, \infty)^{2}$,
but not if they are defined on $(0, \infty)^{2}$. This
is not the only generalization of big $O$ to multivariate functions, and
in practice there is some inconsistency in the choice of definition.

What I don't understand is: if we only look at some $i$, why can't we use the domain $(0, \infty)^{2}$? For example, if I only take the variable $n$ to infinity ($i=0$ in this case), then shouldn't it be fine, and $f(n,m) \in O(g(n,m))$? Shouldn't the definition then read not "for some $i$" but "for all $i$"? Do I understand the notion of "for some" in the wrong way?
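For what it's worth, here is a mechanical check of Wikipedia's counterexample on $(0, \infty)^{2}$ (my own sketch: however large $M$ is and whatever $C$ is, take $m \geq M$ and $n$ tiny):

```python
# Wikipedia's counterexample on (0, inf)^2: f(n, m) = 1, g(n, m) = n.
# For ANY C > 0 and M, the point (n, m) = (1/(2C), M) satisfies the
# "x_i >= M for some i" condition via m, yet |f| = 1 > C*|g| = 1/2.
def bound_violated(C, M):
    n, m = 1.0 / (2 * C), M   # m >= M, so "x_i >= M for some i" holds
    f, g = 1.0, n
    return abs(f) > C * abs(g)

assert all(bound_violated(C, M) for C in (1, 10, 1000) for M in (5, 100, 10**6))
```

So restricting to $(1, \infty)^{2}$ matters precisely because it rules out points where one coordinate is huge while another is arbitrarily small.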

mathematical optimization – Numerical Minimization for a function of five variables

I have the function

f[w_, x_, y_, \[Alpha]_, g_] = Sqrt[((w^2 + x^4) (1 + 2 w \[Alpha]^2 + \[Alpha]^4))/(
  2 x^2 w) - \[Alpha]^2 y]*Sqrt[((g w)^2/x^2) + (2 x^2)/w + (2 w (g \[Alpha] - 1)^2)/x^2]

with the restrictions

$$w \geq 1,$$

$$x > 0,$$

$$y, \alpha, g \in \mathbb{R},$$

and I appeal to NMinimize[] to find a numerical value for the minimum of the function f[w, x, y, \[Alpha], g], that is,

NMinimize[{f[w, x, y, \[Alpha], g], x > 0 && w >= 1}, {w, 
   x, {\[Alpha], y, g} \[Element] Reals}] // Quiet

after which Mathematica shows the result

{2., {w -> 1.78095, x -> 1.33452, \[Alpha] -> -8.73751*10^-9, 
  y -> 0.731324, g -> -2.98148*10^-8}}

On the other hand, looking in the software's help I find that I can specify a non-default method, which could give a better (more accurate) solution, for example DifferentialEvolution; that is,

NMinimize[{f[w, x, y, \[Alpha], g], x > 0 && w >= 1}, {w, 
   x, {\[Alpha], y, g} \[Element] Reals}, 
  Method -> "DifferentialEvolution"] // Quiet

giving the result

{1.09831, {w -> 1.00016, x -> 0.962037, \[Alpha] -> 0.276323, 
  y -> 11.3393, g -> -0.0477925}}

Therefore, I have the question:

What is the best method (with mathematica) to obtain the most accurate value for the real minimum of the function?

I am a novice with the use of the NMinimize command.
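For context, one thing I considered was simply sweeping the built-in methods and comparing the results (a sketch; the method names are taken from the NMinimize documentation, and I have not tuned any of their sub-options):

```mathematica
Table[{m, NMinimize[{f[w, x, y, \[Alpha], g], x > 0 && w >= 1},
     {w, x, {\[Alpha], y, g} \[Element] Reals}, Method -> m]},
   {m, {"NelderMead", "DifferentialEvolution", "SimulatedAnnealing", "RandomSearch"}}] // Quiet
```

I am also aware of options such as MaxIterations and WorkingPrecision, but I do not know which combination is appropriate here.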

How to find Windows environment variables from hard drive without booting?

How can I find what the PATH variable on Windows 10 was on a backup “image” of an old system hard drive?

I returned my computer to field support, they “backed it up”, restored it to a new system, sent me the new system and nuked the old system.

When I asked about getting missing stuff off the old system, they said the only option is to see if I can find what’s missing on the backup image which they keep for seven days.

I’m missing my PATH and environment variables.

Unfortunately, restoring the system (not sure what their process was – I know that they re-installed some apps, so it wasn’t a system mirror) didn’t restore my environment variables and PATH.

I know that simply having the old PATH (and other environment variables as well) won't necessarily "fix" any problems on the new machine.

But, for example, I spent a lot of time setting up my Python environment, and I have no idea what the Python environment variables even were that I used, much less what they were set to (there are several Python instances on the “restored” hard drive).

I have access to the back up that field service made of my boot drive.

Back in the “old” days, I’d just copy the AUTOEXEC.BAT and CONFIG.SYS files. Those haven’t been around (AFAIK) for a long time.
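My understanding is that their modern counterpart is the registry, so I suspect something along these lines would work if the backup can be mounted as a drive (untested sketch; X: and the user folder name are placeholders, and an elevated prompt is required):

```cmd
:: System-wide variables (incl. the system PATH) live in the SYSTEM hive
reg load HKLM\OldSys X:\Windows\System32\config\SYSTEM
reg query "HKLM\OldSys\ControlSet001\Control\Session Manager\Environment"
reg unload HKLM\OldSys

:: Per-user variables (user PATH, PYTHONPATH, ...) live in NTUSER.DAT
reg load HKU\OldUser "X:\Users\<name>\NTUSER.DAT"
reg query HKU\OldUser\Environment
reg unload HKU\OldUser
```

But I don't know whether the field-service "image" preserves those hive files at all, which is part of my question.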

How can I find what the PATH and other environment variables WERE in the backup of the old system that I can no longer boot or use (since it and its hard drive are gone)?

I welcome pointers to other posts. I know Google is my friend, but not today. 99.9% of everything I’ve found is “how do I set my PATH variable”, and so on.

Thank you!


(P.S. I just dumped all my environment variables to a file I named CONFIG.SYS so I never have to consult the answer to this question again)

linear algebra – How can I find the inverse of a matrix with undetermined variables?

Hi all. I am trying to find the inverse of the matrix after taking the derivatives, but the system terminates the calculation during the process. What is the problem? How can I fix it? Many thanks!

The code can be found at the following link.

If you cannot download the code, please see the attached screenshot of the code.

Inverse of matrix

algorithms – Assign few binary variables to make all polynomials identically zero

This is NP-hard. In particular, you can express SAT as an instance of such a problem.

If you have to solve it in practice, there are a few things you can try. You can try using a SAT solver, applying the Tseitin transform to convert to SAT. (You might consider CryptoMiniSat, which has built-in support for xor clauses; this is relevant since your arithmetic is modulo 2, i.e., your addition is xor.) Or you can try using an integer linear programming solver, though I don't expect this to work as well. Or you could try computing Gröbner bases, e.g., with the $F_5$ algorithm. I think the latter are particularly effective over finite fields of large characteristic, but I'm not sure whether they work well over the binary field.
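To make the problem statement concrete, here is a brute-force baseline for tiny instances (my own sketch with a made-up encoding of polynomials; for anything nontrivial you would use the SAT or Gröbner-basis routes above):

```python
from itertools import product

# A polynomial over GF(2) is a list of monomials; each monomial is a tuple
# of variable indices, and the empty tuple () is the constant 1.
def eval_poly(poly, assignment):
    # addition is xor (mod 2); a monomial is 1 iff all its variables are 1
    return sum(all(assignment[v] for v in mono) for mono in poly) % 2

def solve(num_vars, polys):
    # exhaustive search over all 2^num_vars binary assignments
    for assignment in product((0, 1), repeat=num_vars):
        if all(eval_poly(p, assignment) == 0 for p in polys):
            return assignment
    return None  # unsatisfiable

# x0 + x1 = 0 and x0*x1 + 1 = 0 force x0 = x1 = 1
print(solve(2, [[(0,), (1,)], [(0, 1), ()]]))  # -> (1, 1)
```

This is exponential, of course, which is exactly what the NP-hardness above predicts in the worst case.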

However, in general, I expect your problem will be hard and quite possibly not solvable in a realistic amount of time in practice.

cv.complex variables – Context for this discrete Cauchy integral formula

Notation: I will use the following conventions for discrete Fourier transforms (DFT) and discrete time Fourier transforms (DTFT):
$$\mathcal{D}_N(x_j)(k) := \sum_{j=0}^{N-1} e^{-2\pi i j k/N} x_j$$
$$\mathcal{D}^{-1}_N(y_k)(j) := \frac{1}{N}\sum_{k=0}^{N-1} e^{2\pi i j k/N} y_k$$
$$\mathcal{F}(x_j)(k) := \sum_{j=-\infty}^\infty e^{-2\pi i j k} x_j$$
$$\mathcal{F}^{-1}(y(k))(j) := \int_0^1 e^{2\pi i j k} y(k)\, dk$$
Additionally, $\Omega_N$ will denote the set of $N^{\text{th}}$ roots of unity.

Let $f$ be a polynomial of degree $N$ with coefficients $c_j$, $j=0,1,\dots,N$. Consider the list of all coefficients as a vector $c$. By taking a DFT of $c$ you can show that $$c_j = \mathcal{D}_N^{-1}\left(f\left(e^{-2\pi i k/N}\right)\right)(j)\tag{1}$$ and analogously with a DTFT $$c_j = \mathcal{F}^{-1}\left(f\left(e^{-2\pi i k}\right)\right)(j)\tag{2}$$

Actually, the latter formula works perfectly well for analytic functions $f$, where $c$ is the vector of Taylor coefficients, or for meromorphic functions with $c$ the Laurent coefficients.

I realized recently that this is basically just Cauchy's integral formula with a change of variables. To see this, expand out the inverse DTFT:
$$\begin{align}
c_j & = \int_0^1 e^{2\pi i k j} f(e^{-2\pi i k})\, dk \\
& = \oint_{S^1} z^{-j} f(z) \frac{dz}{2\pi i z} \\
& = \frac{1}{j!} \frac{d^j f(0)}{dz^j}
\end{align}$$
where the last line is from Cauchy's formula.

So far this is all familiar. However, from this perspective we see that eq. (1) can also be interpreted as a “discrete” version of Cauchy’s integral formula, valid only for polynomials. Rewriting it slightly, we have
$$\begin{align}
\frac{1}{j!}\frac{d^j f(0)}{dz^j} = c_j & = \frac{1}{N} \sum_{k=0}^{N-1} e^{2\pi i j k/N} f(e^{-2\pi i k/N})\\
& = \frac{1}{N} \sum_{z\in \Omega_N} z^{-j} f(z) \tag{3}
\end{align}$$
This circle of ideas seems like it would be useful, e.g., for efficiently determining the coefficients of a "black box" polynomial that you can only evaluate. More generally, in any situation where you would want to use Cauchy's formula with a polynomial, this might be a convenient alternative. However, I've never seen any of this in textbooks or other literature.

Question: What, if anything, are eqs. (1) and (3) useful for? Citations to literature welcome.

As I wrote this, I vaguely remembered something about "z-transforms". After looking it up, it seems very closely related, though not quite identical. I still haven't found anything exactly like eqs. (1) or (3).

deployment – Tool to Execute SQL Server Scripts and Automatically Recognize and Prompt for Scripting Variables

I have a folder of scripts that contain multiple objects and jobs that I roll out every time I deploy a new SQL Server instance. The scripts utilize scripting variables; as an example, here is an abridged job-creation script:

DECLARE @Owner SYSNAME = (SELECT (name) FROM sys.server_principals WHERE (sid) = 0x01)

EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'Myjob', 
        @category_name=N'Database Maintenance', 
        @notify_email_operator_name=N'$(AlertOperator)', @job_id = @jobId OUTPUT

Note that @notify_email_operator_name will be set to whatever value is passed to $(AlertOperator)

These scripts are usually run through a PowerShell script which loops through the folder and passes a value to the $(AlertOperator) variable.

This approach allows a suite of scripts to be kept which can be rolled out to a new server easily.

I was wondering if there was a GUI tool where I can open one or more .sql files and it would automatically recognize the scripting variables in those files and prompt for their values before running the files against one or more defined servers?
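In case it clarifies what "automatically recognize" means: the detection step itself is trivial, as in this sketch (illustration only; a real tool would also need to honor :setvar directives and quoting rules):

```python
import re

# Find sqlcmd-style scripting variables such as $(AlertOperator) in a script.
sql = """
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Myjob',
        @notify_email_operator_name=N'$(AlertOperator)', @job_id = @jobId OUTPUT
"""
variables = sorted(set(re.findall(r"\$\((\w+)\)", sql)))
print(variables)  # -> ['AlertOperator']
```

What I'm after is a tool that does this scan and then prompts for each value before running the files.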

equation solving – Finding the joint domain for a couple of functions of three variables

I have two functions $f_{1}(x,y,z)$ and $f_{2}(x,y,z)$ defined respectively as

f1 = Sqrt[(2 x)/y + y^2/x + (2 y (-0.04258557948619213` - z)^2)/x] Sqrt[
   x - 17.37121059964452` z + (2 x z^2)/y + (y^2 z^2)/x + (
    y (1 + z^4))/(2 x)];

f2 = (Sqrt[(y^3 + 2 y^2 (-1 + z)^2)/(x y)] Sqrt[(
  x (1 + 2 y (-1 + z)^2 + (-2 + z)^2 z^2))/y])/Sqrt[2];

subject to the restrictions $x >0$, $y\geq 1$, $0<z<1$. Is there a way (using Mathematica) to find the domain of the variables $x,y,z$ that satisfies $f_{1}(x,y,z)<2$ and $f_{2}(x,y,z)<2$ simultaneously? In other words, how can I find (using Mathematica) the allowed values of $x,y,z$ for which both inequalities hold?
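For concreteness, what I have in mind is something like the following (a sketch, assuming the two expressions above are bound to f1 and f2; I don't know whether Reduce can finish on expressions of this size):

```mathematica
Reduce[f1 < 2 && f2 < 2 && x > 0 && y >= 1 && 0 < z < 1, {x, y, z}, Reals]

(* or, for sample points rather than the full region: *)
FindInstance[f1 < 2 && f2 < 2 && x > 0 && y >= 1 && 0 < z < 1, {x, y, z}, Reals, 5]
```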

pr.probability – An Inequality of Expected Value of Random Variables

I encountered the following problem in my research:

Suppose there are $N$ random variables that are independent and identically distributed (IID). The probability density function (PDF) $f(x)$ of these random variables is a unimodal function symmetric about $0$; i.e., $f(x)$ is non-decreasing on $(-\infty,0)$, and $f(x) = f(-x)$ holds for any $x$. For example, the distribution can be a uniform distribution, a normal distribution, a Cauchy distribution centered at $0$, etc.
For a given real number $x_0$, sort these random variables as $X_1, X_2, \dots, X_N$ such that $$|X_1-x_0|\leq |X_2-x_0| \leq \dots \leq |X_N-x_0|.$$
For example, if $N = 3$, the $N$ random variables are drawn as $-0.5, 1.5, 5$, and $x_0 = 1$, then $X_1 = 1.5, X_2 = -0.5, X_3 = 5$.
Let $Y_i = \left|\frac{X_1+X_2+\dots+X_i}{i}-x_0\right|^r$ ($i=1,\dots,N$; $r = 1$ or $2$). Then, for any $x_0$ and $f(x)$, does the inequality
$$EY_1 \leq EY_2 \leq \dots \leq EY_N$$
always hold, where $E$ denotes the expected value?

The inequality above was tested via the Monte Carlo method for cases where the distributions are uniform, normal, and Cauchy. Details can be seen in since I cannot post figures here…
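For reference, a minimal version of that Monte Carlo check for the normal case looks like this (my own sketch; $N=3$, $x_0=1$, $r=2$, and the sample size and seed are arbitrary):

```python
import numpy as np

# Monte Carlo check of E[Y_1] <= E[Y_2] <= E[Y_3] (normal case, x0 = 1, r = 2)
rng = np.random.default_rng(0)
N, x0, r, trials = 3, 1.0, 2, 200_000

X = rng.standard_normal((trials, N))
# within each trial, sort by distance from x0 so column i holds X_{i+1}
order = np.argsort(np.abs(X - x0), axis=1)
X_sorted = np.take_along_axis(X, order, axis=1)

# Y_i = |(X_1 + ... + X_i)/i - x0|^r, then average over trials to estimate E[Y_i]
running_means = np.cumsum(X_sorted, axis=1) / np.arange(1, N + 1)
EY = (np.abs(running_means - x0) ** r).mean(axis=0)
print(EY)  # empirically non-decreasing
```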

Moreover, is it possible to derive the PDF of $Y_i$?

Answers or ideas for either $r=1$ or $r=2$ would be greatly appreciated!

Confusions about definition of the sum of two or several random variables

Please see the following pictures for detailed descriptions: