Calculating a maximum or minimum with symmetry, and when it goes wrong

I’ve heard of a way to calculate a maximum or minimum, and it’s quite weird.

For instance: $x>0$, $y>0$, $x+2y+2xy=8$; what is the minimum value of $x+2y$?

Set $u=x=2y$; then $u+u+u^2=8$, so $u=2$, which gives $x=2$, $y=1$, and a minimum value of $4$.

However, this doesn’t work in the following situation.

$a+b=4$, what is the maximum of $\left(\frac{1}{a^2+1}\right)+\left(\frac{1}{b^2+1}\right)$?

So I wonder whether this way of calculating is correct, and if it is, in which situations it works and in which it doesn’t.

I thought it might come from $F(x,y,t)=x+2y+t(x+2y+2xy-8)$:

$\frac{\partial F}{\partial x}=0,\ \frac{\partial F}{\partial y}=0 \Rightarrow x=2y$
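
Writing out the partial derivatives explicitly (my own working, so please check): $\frac{\partial F}{\partial x}=1+t(1+2y)=0$ and $\frac{\partial F}{\partial y}=2+t(2+2x)=0$, so $t=-\frac{1}{1+2y}$ and $t=-\frac{2}{2+2x}=-\frac{1}{1+x}$, hence $1+2y=1+x$, i.e. $x=2y$.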

However, if $G(a,b,t)=\left(\frac{1}{a^2+1}\right)+\left(\frac{1}{b^2+1}\right)+t(a+b-4)$,

still, I get $a=b$, which is not correct.
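
For instance (my own check of why this fails): at $a=b=2$ the expression equals $\frac{1}{5}+\frac{1}{5}=\frac{2}{5}$, but at $a=0$, $b=4$ it equals $1+\frac{1}{17}=\frac{18}{17}>\frac{2}{5}$, so the symmetric point cannot be the maximum.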

opengl – Calculating Camera View Frustum Corner for Directional Light Shadow Map

I’m trying to calculate the 8 corners of the view frustum so that I can use them to calculate the orthographic projection and view matrix needed to calculate shadows based on the camera’s position. However, I’m not sure how to convert the frustum corners from local space into world space. So far, I have calculated the frustum corners in local space as follows (correct me if I’m wrong):

// Half-extents of the near and far planes (assuming FOV is the vertical field of view, in radians).
float tanHalfFov = std::tan(m_Camera->FOV * 0.5f);
float nearHeight = tanHalfFov * m_Camera->Near;
float nearWidth = nearHeight * m_Camera->Aspect;
float farHeight = tanHalfFov * m_Camera->Far;
float farWidth = farHeight * m_Camera->Aspect;

// Centers of the near and far planes along the camera's forward axis.
Vec3 nearCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Near;
Vec3 farCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Far;

Vec3 frustumCorners[8] = {
    nearCenter - m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // Near bottom left
    nearCenter + m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // Near top left
    nearCenter + m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // Near top right
    nearCenter - m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // Near bottom right

    farCenter - m_Camera->Up * farHeight - m_Camera->Right * farWidth, // Far bottom left
    farCenter + m_Camera->Up * farHeight - m_Camera->Right * farWidth, // Far top left
    farCenter + m_Camera->Up * farHeight + m_Camera->Right * farWidth, // Far top right
    farCenter - m_Camera->Up * farHeight + m_Camera->Right * farWidth, // Far bottom right
};

How do I move these corners into world space?
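
For reference, one alternative I’ve seen (not sure if it’s what I need here) is to skip the explicit corner construction and instead unproject the eight corners of the NDC cube through the inverse of the view-projection matrix. A rough sketch of that idea in Python/numpy, assuming OpenGL-style column-vector matrices and a made-up helper name:

import numpy as np

# Sketch only: world-space frustum corners obtained by unprojecting the
# 8 corners of the NDC cube through inverse(proj @ view).
# 'view' and 'proj' are assumed to be 4x4 OpenGL-style column-vector matrices.
def frustum_corners_world(view, proj):
    inv = np.linalg.inv(proj @ view)
    corners = []
    for x in (-1.0, 1.0):
        for y in (-1.0, 1.0):
            for z in (-1.0, 1.0):                  # OpenGL NDC z runs from -1 to 1
                p = inv @ np.array([x, y, z, 1.0])
                corners.append(p[:3] / p[3])       # perspective divide
    return np.array(corners)                       # shape (8, 3), world space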

Tool for calculating profitable coin trading

I want to know that if I buy $200 worth of BTC, then when I trade it for another coin, I’m not trading for less than $200 worth of that coin. That way I can make sure I’m never losing money.

I’m doing this manually right now, but it takes forever and is not productive. Is there a tool out there to do this faster, where I could input the amount of BTC I have and see how many of the other trading coins I would get and their value in $?

I’m sure there’s a term for what I’m looking for but I don’t know it. Hopefully you’ll get my example :]

python – Calculating the total daily amount of confirmed cases of Coronavirus

So it seems like you have a pandas DataFrame you are working with.
At the moment, it looks like this section of code does the following:

worldCases = []                          # list that will hold one total per date

for i in range(0, len(dd)):              # iterate through all dates
    count = 0                            # set up a counter
    for j in range(0, len(dd)):          # iterate through all dates again
        if dd[j] == dd[i]:
            count += dc[i]               # add that day's cases whenever the dates match
    worldCases.append(count)             # store the accumulated total for date i

You are effectively binning your data, computing a total for each unique date. Pandas offers more efficient and convenient tools for doing this; specifically, look into the groupby method.

A much faster way to get the same output for worldCases would be to do the following:

# group the daily cases data by the date and then compute 
# the sum of cases within each date group

dc = 'Daily confirmed cases (cases)'
worldCases = cases.loc[:, dc].groupby(cases['Date']).sum()

Also, if you find yourself writing lots of loops like the above, you may want to check out the user guide for “group by” operations in the pandas documentation. It is a very powerful way of handling situations like this, and more, but can take a bit of getting used to. It can feel like a whole different way of thinking about your data.
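
For instance, a tiny self-contained example of the same pattern (made-up numbers, same column names as above):

import pandas as pd

# Toy data with the same column names as above (the values are made up).
cases = pd.DataFrame({
    'Date': ['2020-03-01', '2020-03-01', '2020-03-02'],
    'Daily confirmed cases (cases)': [10, 5, 7],
})

# Sum the cases within each date group.
worldCases = cases.groupby('Date')['Daily confirmed cases (cases)'].sum()
print(worldCases)
# Date
# 2020-03-01    15
# 2020-03-02     7
# Name: Daily confirmed cases (cases), dtype: int64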

dnd 5e – Calculating the XP threshold for a party with “monster” companions

Encounter difficulty is not an exact science.

(Companions from spells and class features are considered in the power level of the characters. Do I include animal companions when calculating difficulty of an encounter?)

NPC allies or allied monsters that are gained in the story, i.e. not granted by the characters’ own powers, might be relevant.
This analysis concluded that there are very large differences between monsters of the same CR:

5e monster manual on a business card

The DMG (p. 83) says that the Encounter Building rules assume three to five PCs. Therefore, a few mounts that are much less powerful than the players will not significantly alter the difficulty. Even one or two NPCs can fall in this range if they are not of a much higher level than the PCs.

If there are more than one or two NPCs of a power level similar to the PCs’, or at least one that is significantly more powerful than a single PC, that influence should be taken into account.

To gauge the power level, rather than calculate a level for the monster, you can calculate the CR of your player characters (DMG p. 274) and compare. I do this routinely in such situations.

If you use multiple monsters in an encounter, you can simply increase their number; e.g., if you planned to use four goblins, adding a fifth will make the encounter a bit harder, but not too much. Note that multiple monsters are more powerful than the sum of their hit points and damage output (due to the action economy; see also DMG p. 82). If you add one NPC and one opponent, however, this particular effect is balanced out. I generally build encounters using groups of monsters, which works well and can easily be adjusted. It is easier with self-built monsters, however, which I will not go into detail on here.

If you want to use a single, stronger monster instead, you need to be careful, since it can potentially be more lethal than anticipated (see the Challenge Rating sidebar, DMG p. 82). The same considerations apply to monstrous allies such as golems. If the ally can easily slaughter all opponents, the players can feel outclassed. Therefore, I use powerful allies sparingly.

When using more dangerous monsters, I prefer erring on the safe side since a slightly underwhelming encounter will have less of a negative impact on my game than dead player characters.

Excel: Calculating GPAs with XLOOKUP

I am making an Excel spreadsheet to calculate grade-point averages (GPAs). In column A, I have the course names; in column B, the corresponding numbers of credits; and in column C, the letter grades.

Course I     4.00      A
Course II    4.00      A
Course III   4.00      B
Course IV    4.00      B

I have the grade system on a separate sheet with the letter grades in column A and their corresponding grade points in column B.

A     4.00
A–    3.67
B+    3.33
B     3.00
B–    2.67
C+    2.33
C     2.00
D     1.00
F     0.00
S     —
AU    —
I     —
W     —
NGR   —

In the simplest case, I calculate GPA as the ratio of the sum of the products of the credits and grade points to the sum of the credits. In the example above, that would be (4.00*4.00+4.00*4.00+4.00*3.00+4.00*3.00)/(4.00+4.00+4.00+4.00). In Excel, I can make a column D that has the weighted grade points: =B1*XLOOKUP(C1,Grades,GradePoints), where Grades and GradePoints are A1:A9 and B1:B9, respectively, on Sheet 2. Each cell in column D takes the grade from column C, looks up the corresponding grade points, and multiplies by the number of credits. Then I can sum column D (=SUM(D1:D4)) and divide it by the number of credits (=SUM(B1:B4)), and voilà, we have a GPA.
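
In symbols (just restating the arithmetic above): $\text{GPA}=\frac{\sum_i \text{credits}_i \cdot \text{points}_i}{\sum_i \text{credits}_i}$, which for the first table is $\frac{4(4.00)+4(4.00)+4(3.00)+4(3.00)}{4.00+4.00+4.00+4.00}=\frac{56.00}{16.00}=3.50$.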

BUT…

I have a problem when there’s an S (satisfactory), a W (withdrawn), an I (incomplete), or other designations that don’t have grade points. Let’s replace the first example with this:

Course I     4.00     A
Course II    0.50     S
Course III   4.00     A
Course IV    4.00     B
Course V     4.00     B

When I’m calculating the grade points, I want to exclude course II. Naturally, I would want to use SUMIF or SUMIFS, but I can’t figure out what the criteria would be to sum only the credits that have a corresponding grade in the named range Grades. If I have =ISNUMBER(XMATCH(C1,Grades)), I get TRUE or FALSE appropriately, but I don’t know how to use that in SUMIF(S). I have also tried using INDEX/MATCH without success. For the grade points in column D, I could have XLOOKUP return zero if the grade isn’t in the named range Grades, e.g., =B2*XLOOKUP(C2,Grades,GradePoints,0) would return zero since S is not in Grades. Finally, I don’t actually want to have an explicit column D or hidden cells either. I just want to show the GPA. Any help would be greatly appreciated!

equation solving – Calculating and displaying intersection of cylinder and line

I am trying to compute and display the intersection of a line defined by two points and a cylinder centered around the $z$-axis defined by length and radius. So far I have

cyl = Cylinder[{{0, 0, -1}, {0, 0, 1}}, 1]
line = InfiniteLine[{{0, 0, 0}, {1, 0, 0}}]
pts = Solve[{x, y, z} ∈ cyl && {x, y, z} ∈ line, {x, y, z}, Reals]

But this returns

y -> ConditionalExpression[0, -1 <= x <= 1], z -> ConditionalExpression[0, -1 <= x <= 1]

instead of a single solution. Any hint why this is the case, and how to display the solution with the intersection in a nice way?

SQL Server: calculating memory requirements for Full db backups

In 2018, we inherited a production SQL Server 2012 FCI running on Windows 2012 with 32 GB RAM. SQL Server max server memory was set at 23.6 GB, and things were running fine.

However, in 2019, we migrated these databases to a SQL Server 2016 FCI. After this migration, our Full backups began intermittently failing due to SQL Server restarts. The log seemed to indicate these restarts were due to low memory.

I noticed all of these SQL Server restarts only happened when a full backup was running for our biggest (~80 GB) db. (Incidentally, in case this matters, this db is set to simple recovery model. I have 4 other dbs in full recovery model on this instance: 10 GB, 110 MB, 100 MB, and 50 MB.)

Each time these “low memory restarts” occur, I have been incrementally increasing RAM and max memory. Currently, I’m at 56 GB RAM and max memory is at 45 GB.

From your experience, does it seem unusual for an 80 GB database to require 45 GB max memory during full backups? Can you please share any ideas how I can better identify how much memory my full backup truly needs? Unfortunately, I don’t have a non-production system with similar specs as this one.

mysql – Condition filtering estimate calculates a double-dipped, inflated value without performing range reduction

In a query like the one below:

select  
e.emp_no,  
concat(e.first_name, ' ', e.last_name),  
d.dept_name,  
t.title_name,  
es.salary,  
es.insurance,  
es.pf,  
ea.city,  
ea.state,  
ea.phone  
from employee e  
join emp_salary es on e.emp_no = es.emp_no  
join emp_title et on et.emp_no = e.emp_no  
join title t on t.title_no = et.title_no  
join emp_address_phone ea on ea.emp_no = e.emp_no  
join emp_dept ed on e.emp_no = ed.emp_no  
join department d on d.dept_no = ed.dept_no  
where  
(e.hire_date > '2020-01-01' or e.hire_date < '1972-01-01' or e.hire_date < '1971-01-01') 
and (t.title_created > '2006-01-01') 
and (ea.country = 'Spain' or ea.country = 'Samoa' or ea.country = 'India');

Access happens through the criteria on ea.country, and when e is added to the join order, condition filtering is applied for the criteria on e.hire_date.

When there is a histogram:

"filtering_effect": [
  {
    "condition": "(`e`.`hire_date` > DATE'2020-01-01')",
    "histogram_selectivity": 0.0085
  },
  {
    "condition": "(`e`.`hire_date` < DATE'1972-01-01')",
    "histogram_selectivity": 0.0426
  },
  {
    "condition": "(`e`.`hire_date` < DATE'1971-01-01')",
    "histogram_selectivity": 0.0204
  }
],
"final_filtering_effect": 0.0701

The value of final_filtering_effect (0.0701) comes from the inclusion-exclusion probability formula for the OR of independent conditions, i.e.

final_filtering_effect = 0.0085 + 0.0426 + 0.0204 - (0.0085 * 0.0426) - (0.0426 * 0.0204) - (0.0085 * 0.0204) + (0.0085 * 0.0426 * 0.0204)
final_filtering_effect = 0.0701

It combines the probabilities of all the ranges without trying to reduce them first. This double-counts the rows whose values are less than ‘1971-01-01’, since those rows satisfy both of the ‘<’ conditions and are therefore included twice.

If final_filtering_effect were calculated using only the reduced ranges, the value would have been:

final_filtering_effect = 0.0085 + 0.0426 - (0.0085 * 0.0426)
final_filtering_effect = 0.0507

There are a total of 10000 rows in the relation, so the condition should filter down to around 507 rows, and the real count seems to align with that: it is 506, not 701.

mysql > select count(*) from employee where hire_date > '2020-01-01' or hire_date < '1972-01-01' or hire_date < '1971-01-01';
+----------+
| count(*) |
+----------+
|      506 |
+----------+
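
A quick sanity check of the two calculations above (my own script; the selectivities are taken from the trace):

# Reproduce the optimizer's arithmetic for the OR-ed hire_date conditions.
# Selectivities taken from the optimizer trace above.
s1, s2, s3 = 0.0085, 0.0426, 0.0204   # > 2020-01-01, < 1972-01-01, < 1971-01-01
total_rows = 10000

# Treating all three ranges as independent events (what the trace shows):
all_three = s1 + s2 + s3 - s1*s2 - s2*s3 - s1*s3 + s1*s2*s3
print(round(all_three, 4), round(all_three * total_rows))   # 0.0701 701

# Using only the reduced ranges ('< 1971-01-01' is contained in '< 1972-01-01'):
reduced = s1 + s2 - s1*s2
print(round(reduced, 4), round(reduced * total_rows))       # 0.0507 507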

When there is no histogram, and the optimizer just uses the default heuristic of 33.33 percent for a col (non-equality operator) val criterion, the value is double-dipped in the same way (presumably 1 - (1 - 0.3333)^3 ≈ 0.7037):

"filtering_effect": [
],
"final_filtering_effect": 0.7037

Is there a reason why no range reduction is performed at this stage?