oc.optimization and control – How can I analyze the effect of a constant on the arguments that minimize a function?

Background

I have a function $J$ that I am minimizing, but the function is too expensive to minimize computationally. I derived an upper bound on $J$ (denoted by $\overline{J}$) that is not so hard to compute, and I believe the arguments that minimize $\overline{J}$ are “close” to the arguments that minimize $J$. Thus, minimizing $\overline{J}$ seems to be a good approximation for minimizing $J$.

However, there is a catch. The upper bound $\overline{J}$ includes a constant $\theta$ that is unknown. Essentially, the parameter $\theta$ is a positive number that changes the tightness of $\overline{J}$ (if $\theta$ is chosen poorly, $\overline{J}$ will no longer be an upper bound), but in this case, the tightness of the bound is not important. Instead, the arguments minimizing $\overline{J}$ are what matter. Based on some numerical experiments, the arguments that minimize $\overline{J}$ do not seem to change “much” as $\theta$ changes. I am trying to see how the arguments minimizing $\overline{J}$ change with respect to $\theta$.

Question

Consider a function $f(x_1, x_2, \cdots, x_n, \theta)$ where $\theta$ is a constant. I just wrote $\theta$ as an argument to emphasize that the function contains an important parameter $\theta$. Now, consider

$$\text{argmin}_{x_1, x_2, \cdots, x_n} \; f(x_1, x_2, \cdots, x_n, \theta)$$

where $\mathbf{x} \in \mathbb{R}^n$ such that $\mathbf{x} = \{x_1, x_2, \cdots, x_n\}$.

My question is

  1. How can I show that the minimizing arguments of $f$ (i.e., $x_1, x_2, \cdots, x_n$) are not affected by the choice of $\theta$?
  2. If the choice of $\theta$ does affect the minimizing arguments (which is most likely the case), how can I determine how much the parameter $\theta$ affects the arguments that minimize $f$?

I should note that I am working on continuous and discrete versions of this problem. Thus, I am trying to understand how to approach this problem in general. While the specific problem I am working with is not so simple, here is a simple example to illustrate my question.

Example 1: Consider
$$ f(x,y,c) = cx^2 + y^2 $$
The minimum of this function is at $x=0$ and $y=0$ regardless of the value of $c$ (for any $c > 0$). This is simple to see for this function.

Example 2: Consider the following equations,
$$ f_1(x,y,c) = (x + 0.00001c)^2 + y^2 $$
$$ f_2(x,y,c) = (x+c)^2 + y^2 $$

The value of $c$ in the first equation $f_1$ does not affect the minimizing arguments as much as the value of $c$ in the second equation $f_2$.
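
One way to make question 2 precise, at least locally (a sketch that assumes $f$ is smooth, the minimizer $\mathbf{x}^*(\theta)$ is an interior point, and the Hessian there is nonsingular), is to differentiate the first-order condition $\nabla_{\mathbf{x}} f(\mathbf{x}^*(\theta), \theta) = 0$ with respect to $\theta$, which gives the sensitivity

$$ \frac{d\mathbf{x}^*}{d\theta} = -\left[\nabla^2_{\mathbf{x}\mathbf{x}} f(\mathbf{x}^*,\theta)\right]^{-1} \frac{\partial}{\partial \theta}\nabla_{\mathbf{x}} f(\mathbf{x}^*,\theta). $$

Applied to the examples above, this gives $dx^*/dc = -0.00001$ for $f_1$ and $dx^*/dc = -1$ for $f_2$, matching the observation that $c$ barely moves the minimizer of $f_1$ but shifts the minimizer of $f_2$ one-for-one.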

st.statistics – Do I apply a one-way ANOVA to analyze this data?

[image of the data]

What I see in the data are four groups: standing/congruent time, standing/incongruent time, sitting/congruent time, and sitting/incongruent time. I want to test whether there are any statistically significant differences between the groups. Is a one-way ANOVA the best test to use, or is there something else I can do in addition to that?
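
If you do treat them as four independent groups, a one-way ANOVA is straightforward to run with SciPy; here is a minimal sketch using made-up numbers (not the data from the image), just to show the mechanics:

```python
# A minimal sketch of a one-way ANOVA on four groups, assuming SciPy is installed.
# The times below are placeholders, not real data.
from scipy.stats import f_oneway

standing_congruent   = [0.52, 0.48, 0.55, 0.50]
standing_incongruent = [0.61, 0.66, 0.58, 0.63]
sitting_congruent    = [0.49, 0.47, 0.53, 0.51]
sitting_incongruent  = [0.64, 0.60, 0.65, 0.62]

f_stat, p_value = f_oneway(standing_congruent, standing_incongruent,
                           sitting_congruent, sitting_incongruent)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```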

Reindex or Analyze Tables on PostgreSQL after import

I was using PostgreSQL 9.5 on a server and ended up migrating the data to a new server running PostgreSQL 12.

Should I perform a REINDEX for all tables or just ANALYZE for all tables?
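
In case it is useful, here is a minimal sketch (assuming the psycopg2 driver; the connection string is a placeholder) of running a database-wide ANALYZE on the new server so the planner has fresh statistics after the import:

```python
# Sketch: refresh planner statistics on the new PostgreSQL 12 server after the import.
# The DSN below is a placeholder; adjust it for your environment.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres host=localhost")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("ANALYZE;")  # database-wide ANALYZE of all tables
conn.close()
```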

controllers – Is it a good idea to analyze and compensate for gamepad joystick deadzones?

For example, the player may want to be able to aim precisely in a top-down shooter. However, if the player is using a junky controller that has axis-independent deadzones, they will not be able to aim just a few degrees off of top, bottom, left, or right. Is it a good idea to jump through hoops to sense this limitation and compensate for it? Are there any best practices for that?

Here is an example of what I mean by axis independent deadzones:

[diagram of the axis-independent deadzones]

The red area represents the deadzone. The horizontal red area would give y=0, and the vertical red area would give x = 0.

I imagine that I could find these deadzones if I ask the user to calibrate their gamepad and do a sweep along the perimeter of the joystick. Then I could compensate for them by squashing all values between 0 and (1 or -1) into that grey quadrant. That said, I can see some issues arising with the player performing the calibration sloppily or too quickly, or otherwise sensing the deadzones inaccurately.
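
For what it's worth, the "squashing" step can be a simple per-axis remap once the thresholds are known; here is a minimal sketch (the function names and thresholds are mine, not from any particular input library):

```python
def compensate_axis(value, deadzone):
    """Remap one raw axis reading in [-1, 1] so values just outside the
    calibrated deadzone start near 0 and full deflection still reaches +/-1."""
    if abs(value) <= deadzone:
        return 0.0
    sign = 1.0 if value > 0 else -1.0
    # Squash the usable range (deadzone, 1] back onto (0, 1].
    return sign * (abs(value) - deadzone) / (1.0 - deadzone)

def compensate(x, y, deadzone_x, deadzone_y):
    """Apply the per-axis compensation using thresholds found during calibration."""
    return compensate_axis(x, deadzone_x), compensate_axis(y, deadzone_y)

# Example: with a calibrated horizontal deadzone of 0.15, a raw x of 0.2
# maps to roughly 0.06 instead of jumping straight from 0 to 0.2.
print(compensate(0.2, -0.5, 0.15, 0.10))
```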

Is this a bad idea? Should the player just get a better controller?

bitcoincore development – How can I analyze the test coverage of the Bitcoin Core codebase?

Marco Falke has a site that analyzes the current line, function and branch coverage for unit tests, functional tests and fuzz tests.

Alternatively, vasild runs clang’s tools and then a script to highlight which lines in the coverage report have been modified by a particular patch.

formulas – In Google Sheets, analyze cell content, and if certain values are present, return a response in the next column

In the example below, I want to analyze each cell in column A and, if the cell’s contents match certain parameters, return a value in column B accordingly.
So if the value in a col A cell = apple, banana or pineapple, it should return a value of Fruit in col B for that item.
Or if the value of a cell in col A = truck or motorcycle, it should return Vehicle in col B.

I haven’t been able to figure this one out.


calculus and analysis – How to use dimensions to analyze physical equations in polar coordinates

I already know that the geometric equations of an elastic body in polar coordinates are as follows:

$$\begin{aligned}
&\varepsilon_{\rho}=\frac{\partial u_{\rho}}{\partial \rho} \\
&\varepsilon_{\theta}=\frac{u_{\rho}}{\rho}+\frac{1}{\rho} \frac{\partial u_{\theta}}{\partial \theta} \\
&\gamma_{\rho \theta}=-\frac{u_{\theta}}{\rho}+\frac{\partial u_{\theta}}{\partial \rho}+\frac{1}{\rho} \frac{\partial u_{\rho}}{\partial \theta}
\end{aligned}$$

I find that in $\gamma_{\rho \theta}$ there is a coefficient $\frac{1}{\rho}$ in front of $\frac{\partial u_{\rho}}{\partial \theta}$ and not in front of $\frac{\partial u_{\theta}}{\partial \rho}$.

I want to use MMA to analyze the dimensions of individual terms, such as $\frac{\partial u_{\theta}}{\partial \rho}$, $\varepsilon_{\rho}$, and $\frac{\partial u_{\rho}}{\partial \theta}$.
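
For reference, the dimensional bookkeeping can also be written out by hand (assuming $u_{\rho}$ and $u_{\theta}$ have dimensions of length $L$, $\rho$ is a length, and $\theta$ is a dimensionless angle):

$$ \left[\frac{\partial u_{\rho}}{\partial \rho}\right]=\frac{L}{L}=1, \qquad \left[\frac{1}{\rho}\frac{\partial u_{\theta}}{\partial \theta}\right]=\frac{1}{L}\cdot L=1, \qquad \left[\frac{\partial u_{\theta}}{\partial \rho}\right]=\frac{L}{L}=1, \qquad \left[\frac{1}{\rho}\frac{\partial u_{\rho}}{\partial \theta}\right]=\frac{1}{L}\cdot L=1, $$

so every term in the strains is dimensionless, and the $\frac{1}{\rho}$ is needed exactly in front of the $\theta$-derivatives, which by themselves carry a dimension of length.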

postgresql – Do I need to ANALYZE table after DROP INDEX?

In our Postgres database we historically have many similar indexes (e.g. on is_deleted boolean columns) with quite poor selectivity. Worse than that, sometimes the query planner misbehaves and Postgres starts to use some of them instead of much more efficient indexes, so we have to run ANALYZE manually to restore performance. We have therefore decided to delete most of them.

Do we need to run ANALYZE on each table after deleting an index?

According to the answer in “Is it necessary to ANALYZE a table after an index has been created?”, Postgres collects statistics about the actual values in the table (for simple indexes) without any index-related info, but I couldn’t find confirmation of this in the docs.

network – Snort analyze reply based on request

I’m trying to write a Snort rule which detects whether certain binary files were requested via HTTP, based on a regex matching their names.
But it should only send an alert if the file exists (i.e., an HTTP 200 OK reply).

Is it possible to have this kind of “stateful” scan?
What other technique could I use, given that the files contain no reliable information I could search for?

My rule currently looks like this:

alert TCP $EXTERNAL_NET any -> $HOME_NET $HTTP_PORTS (pcre:"/\d{6}-\d.\d.pdf$/U"; sid:90000512; classtype:patent-access;)

When should I run the ANALYZE TABLE statement in MySQL?

Hello DBAs, I also have some questions:
1/ Why should we never run ANALYZE TABLE?
2/ How do I know for sure that a table/db needs to be analyzed? It seems that MySQL doesn’t store when the stats were last updated?

Just as an example, assume I have run a query and got the following index information:

+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 |  PRIMARY |            1 |          id |         A |     7765796 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            1 |     shop_id |         A |       14523 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            2 | create_date |         A |      168168 |     NULL |   NULL |  YES |      BTREE |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)
So, from this output, how do I know whether I need to run ANALYZE TABLE on this table?
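
For what it's worth, if InnoDB persistent statistics are enabled, the mysql.innodb_table_stats table does record when a table's statistics were last refreshed; here is a minimal sketch of checking it (assuming the PyMySQL driver; host, credentials, and the "mydb" schema name are placeholders):

```python
# Sketch: check when InnoDB last refreshed the persistent statistics for a table.
# Assumes innodb_stats_persistent is enabled; connection details are placeholders.
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret", database="mysql")
with conn.cursor() as cur:
    cur.execute(
        "SELECT table_name, last_update, n_rows "
        "FROM mysql.innodb_table_stats "
        "WHERE database_name = %s AND table_name = %s",
        ("mydb", "goods"),
    )
    print(cur.fetchall())
conn.close()
```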