## convex optimization – proof of the min-max theorem, equivalence of the dual problems

I'm trying to prove the min-max theorem. As part of this, I have to prove the equivalence of the following linear programs (each is the dual of the other):

$$\max \{ x_0 \mid \mathbf{1} x_0 - A^T x \leq 0,\ \textstyle\sum_i x_i = 1,\ x \geq 0 \}$$

$$\min \{ y_0 \mid \mathbf{1} y_0 - A y \geq 0,\ \textstyle\sum_i y_i = 1,\ y \geq 0 \}$$

How can I do this?
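For context, these are the standard LPs for the value of a zero-sum game with payoff matrix $$A$$: the primal maximizes the guaranteed payoff $$x_0$$ of a mixed row strategy, the dual minimizes the guaranteed loss $$y_0$$ of a mixed column strategy, and strong LP duality makes the two optima coincide. A minimal numeric sketch with scipy (the matrix is an arbitrary example, not from the question):

```python
import numpy as np
from scipy.optimize import linprog

# Example payoff matrix (matching pennies); any m x n matrix works here.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
m, n = A.shape

# Primal: max x0  s.t.  1*x0 - A^T x <= 0,  sum(x_i) = 1,  x >= 0.
# linprog minimizes, so the objective is -x0; variables are [x0, x_1..x_m].
res_p = linprog(
    c=np.r_[-1.0, np.zeros(m)],
    A_ub=np.hstack([np.ones((n, 1)), -A.T]),  # x0 - (A^T x)_j <= 0
    b_ub=np.zeros(n),
    A_eq=np.r_[0.0, np.ones(m)].reshape(1, -1),
    b_eq=[1.0],
    bounds=[(None, None)] + [(0, None)] * m,
)

# Dual: min y0  s.t.  1*y0 - A y >= 0,  sum(y_i) = 1,  y >= 0.
res_d = linprog(
    c=np.r_[1.0, np.zeros(n)],
    A_ub=np.hstack([-np.ones((m, 1)), A]),    # (A y)_i - y0 <= 0
    b_ub=np.zeros(m),
    A_eq=np.r_[0.0, np.ones(n)].reshape(1, -1),
    b_eq=[1.0],
    bounds=[(None, None)] + [(0, None)] * n,
)

game_value_primal = -res_p.fun   # optimal x0
game_value_dual = res_d.fun      # optimal y0
```

For matching pennies both values come out to 0 with the optimal mixed strategy (1/2, 1/2); the equality of the two optima is exactly the statement the question asks to prove.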

## hash – Is the current hashcash proof-of-work system vulnerable to quantum attacks?

It has been some time since I got caught up in the post-quantum cryptography scene, and I understand that the RSA or ECC cryptography inherent in a cryptocurrency network can be easily broken by a sufficiently large quantum computer. I have, however, a more specific question …

Is the hashcash proof of work algorithm used by modern crypto-currencies vulnerable to quantum attacks?
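For reference, the core of hashcash-style proof of work as used by Bitcoin (finding a nonce whose double SHA-256 hash falls below a target) can be sketched as follows; the header bytes and the 12-bit toy difficulty are illustrative assumptions, not real consensus parameters:

```python
import hashlib

def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
    """Search for a nonce making double-SHA256(header || nonce) fall below target."""
    for nonce in range(max_nonce):
        inner = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        digest = hashlib.sha256(inner).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    raise RuntimeError("no nonce found in range")

# Toy difficulty: require the top 12 bits of the hash to be zero.
TARGET = 1 << (256 - 12)
nonce, digest = mine(b"toy-block-header", TARGET)
```

The usual starting point for the quantum question is that this is a brute-force preimage search, against which Grover's algorithm gives only a quadratic speedup, unlike the exponential speedup Shor's algorithm gives against RSA/ECC.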

## proof verification – Deriving Bachet's duplication formula

Let $$y^2 - x^3 = c$$ be Bachet's equation and suppose $$(x, y)$$ is a solution.

The tangent to the Bachet curve at $$(x, y)$$ will cross it in a single new point, whose coordinates are supposed to give the "duplication formula":

$$\left( \frac{x^4 - 8cx}{4y^2},\ \frac{-x^6 - 20cx^3 + 8c^2}{8y^3} \right)$$

I have not been able to derive this formula, no matter how much effort I put in. My attempt:

Let $$f(x, y) = y^2 - x^3 - c$$; then the curve is implicitly given by $$f(x, y) = 0$$.
The gradient of $$f$$ at $$(x, y)$$, which is $$(-3x^2, 2y)$$, is orthogonal to the curve at $$(x, y)$$.

Therefore, the equation of the tangent is $$-3x^2 (X - x) + 2y (Y - y) = 0$$

(I am looking at the unique line orthogonal to the gradient at $$(x, y)$$ that passes through $$(x, y)$$.)

So if $$(X, Y)$$ are the coordinates of the intersection point I am looking for, it should be the unique solution (different from $$(x, y)$$) of the following conditions:

$$-3x^2 (X - x) + 2y (Y - y) = 0 \quad \text{and} \quad Y^2 - X^3 - c = 0$$

together, of course, with the hypothesis that $$y^2 - x^3 = c$$.

Is my reasoning correct so far?

Mathematica does not seem to agree with me, but it could be that FullSimplify is not sophisticated enough (it cannot reduce the result to the Bachet formula):

    FullSimplify[
     Solve[{2*y*(Y - y) - 3*x^2*(X - x) == 0, Y^2 - X^3 - c == 0}, {X, Y}],
     y^2 - x^3 == c]

I do not expect anyone to do the tedious calculations. However, if someone manages to force Mathematica or other software to spit out the formula all by itself, I would be happy, since I could then program an algorithm that takes a curve as input and finds the formula automatically.
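For what it's worth, here is a sympy sketch that does recover the formula symbolically. It swaps in Vieta's formulas instead of the question's Solve approach: substituting the tangent line $$Y = y + m(X - x)$$ into $$Y^2 = X^3 + c$$ gives a cubic in $$X$$ whose three roots sum to $$m^2$$, and two of those roots equal $$x$$ by tangency, so the third is $$m^2 - 2x$$:

```python
import sympy as sp

x, y, c = sp.symbols('x y c')

m = 3 * x**2 / (2 * y)        # tangent slope: dy/dx = 3x^2/(2y) on y^2 = x^3 + c
# The cubic in X obtained by substitution has X^2-coefficient -m^2, so by
# Vieta the three roots sum to m^2; two of them are x (tangency), hence:
X3 = m**2 - 2 * x             # x-coordinate of the third intersection point
Y3 = y + m * (X3 - x)         # its y-coordinate, read off the tangent line

# Claimed duplication formula, with c eliminated via c = y^2 - x^3:
c_val = y**2 - x**3
Xf = ((x**4 - 8 * c * x) / (4 * y**2)).subs(c, c_val)
Yf = ((-x**6 - 20 * c * x**3 + 8 * c**2) / (8 * y**3)).subs(c, c_val)

print(sp.simplify(X3 - Xf), sp.simplify(Y3 - Yf))   # 0 0
```

So the tangent-line reasoning in the question is sound; the algebra does close up once $$c$$ is eliminated using the curve equation.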

## geometric topology – proof that any two monotone polygons can be separated with a single translation


## dg.differential geometry – Proof of the Poincaré conjecture: an unofficial erratum

We have read and verified the detailed proof of the Poincaré conjecture. The article (Ricci Flow and the Poincaré Conjecture, by Morgan and Tian) can be found on arXiv. Since the proof contains gaps and mistakes, it took us six months to go through the book. The detailed proof runs to more than 500 pages, which invites problems in the writing. We filled all the gaps and corrected all the errors we found. It took us over a hundred hours. In the end we were tired and checked only 99.99% of the book, but we are sure that Perelman has solved the problem. The Clay Institute, arXiv and the authors have not given us an answer (maybe we are not well enough known), but an incorrect proof is still incorrect. So we decided to publish our notes (109 comments) here, for anyone who wants to understand the proof; this will save them time. Our notes are at https://www.dropbox.com/s/73i5wz5390o1lx3/Final_Version_2.pdf?dl=0.

If you think we have got something wrong, please let us know. Of course, some comments are only accessible to readers of the book/article, but we have tried to make the obvious mistakes clear to graduate students. The most difficult error to correct was number 97. We invite you to check and comment on our notes after reading them! Of course, some people will not take us seriously. We only show that the article/book is not perfect. Note that page numbers refer to the book, but our work can also be used for the arXiv version.

Wembley and his friends

We are not interested in self-promotion; we just want to help readers.
We all have PhDs.

Here is the question: can you find more errors? Have we made mistakes?

## proof of work – Finding the target based on difficulty calculation

I'm trying to find the current target value, but it is not listed anywhere. So far I have found that the only way to get the target is to calculate it from the "difficulty" value, which is currently 6,379,265,451,411
(see https://bitcoinwisdom.com/bitcoin/difficulty).
However, I am not sure how to convert this number so that I can divide the maximum target by it to get the current target.
Could anyone explain how, given the number 6,379,265,451,411, I can arrive at the
hexadecimal version of the current target?
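A minimal sketch of the usual conversion, assuming the "bdiff" convention in which difficulty 1 corresponds to the target 0x00000000FFFF followed by 52 hex zeros (the difficulty below is the value quoted in the question and is long out of date):

```python
# Difficulty-1 target in Bitcoin's "bdiff" convention: 0x00000000FFFF << 208.
MAX_TARGET = 0xFFFF * 2**208

difficulty = 6_379_265_451_411        # the value quoted in the question
target = MAX_TARGET // difficulty     # current target = max target / difficulty

print(f"{target:064x}")               # 256-bit target, zero-padded hexadecimal
```

Integer division discards a tiny fraction, but this is the defining relation: difficulty is max target divided by current target, so dividing back recovers the target.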

## Combinatorial proof for $\sum_{i=0}^{n-1} 2^i 3^{n-1-i} = 3^n - 2^n$ [duplicate]

• Combinatorial proof of $3^n - 2^n = \sum_{i=0}^{n-1} 2^i 3^{n-1-i}$

I'm trying to write a combinatorial proof for $$\sum_{i=0}^{n-1} 2^i 3^{n-1-i} = 3^n - 2^n$$

My attempt so far:
Let $$S$$ be a set such that $$S = \{a, b, c\}$$.
The RHS resembles the difference between the number of strings over $$S$$ of length $$n$$ and the number of strings over $$S \setminus \{a\}$$ of length $$n$$. Thus, the RHS counts strings over $$S$$ of length $$n$$ that contain at least one $$a$$. However, I do not know how to apply this to the LHS.

I've verified by induction that this equation holds for all n.
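Not a combinatorial argument, but a brute-force check that the "at least one $$a$$" interpretation also matches the LHS term by term, classifying strings by the index of their first $$a$$ (so each term $$2^i 3^{n-1-i}$$ counts the strings whose first $$a$$ sits at position $$i$$):

```python
from itertools import product

for n in range(1, 8):
    lhs = sum(2**i * 3**(n - 1 - i) for i in range(n))
    rhs = 3**n - 2**n
    strings = list(product("abc", repeat=n))
    # RHS interpretation: strings over {a, b, c} of length n with at least one 'a'.
    assert lhs == rhs == sum(1 for s in strings if "a" in s)
    # LHS interpretation: 2^i * 3^(n-1-i) counts the strings whose FIRST 'a'
    # is at index i (2^i choices of b/c before it, 3^(n-1-i) free symbols after).
    for i in range(n):
        assert 2**i * 3**(n - 1 - i) == sum(
            1 for s in strings if "a" in s and s.index("a") == i
        )
```

The per-term check is exactly the bridge the question asks for: partitioning the "at least one $$a$$" strings by the position of the first $$a$$ turns the RHS count into the LHS sum.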

## complex analysis – a question related to the proof of Schottky's theorem

$$\textbf{Theorem (Schottky).}$$ Let $$M > 0$$ and $$r \in (0, 1)$$ be given. Then
there is a constant $$C > 0$$ such that:
if $$F$$ is holomorphic in the unit disk $$\mathbb{D}$$, omits $$0$$ and $$1$$ from its range, and if $$|F(0)| \leq M$$, then $$|F(z)| \leq C$$ for all $$z$$ with $$|z| \leq r$$.

I'm trying to understand the proof, and here is how far I've got: we define

$$A = \frac{\log F}{2 \pi i}, \qquad B = \sqrt{A} - \sqrt{A - 1}, \qquad H = \log B$$

We can show that these are well defined, and that both logarithms are chosen to have argument in $$[-\pi, \pi]$$.
We see that the definition $$\exp(H) = \sqrt{\frac{\log F}{2 \pi i}} - \sqrt{\frac{\log F}{2 \pi i} - 1}$$
implies that
$$\frac{\exp(H(z)) + \exp(-H(z))}{2} = \sqrt{\frac{\log F}{2 \pi i}}.$$
From there, we get the estimate
$$|F(z)| \leq \exp\left( \pi e^{2 |H(z)|} \right).$$
In addition, it can be shown that
$$|H(z)| \leq |H(0)| - 130 \log(1 - r)$$
which shows that $$|F(z)| \leq C$$, where $$C$$ depends only on $$|H(0)|$$ and $$r$$.

It remains to show that $$|H(0)| \leq C_1$$, where $$C_1$$ depends only on $$M$$. We have $$|\mathrm{Im}(H(0))| \leq \pi$$ by construction of $$H$$.

We can assume that $$|F(0)| \geq \frac{1}{2}$$ (otherwise we work with $$1 - F$$).
Since $$|F(0)|$$ is bounded, we get that
$$C_2 \geq \left| \sqrt{\frac{\log F(0)}{2 \pi i}} \right| = \left| \frac{\exp(H(0)) + \exp(-H(0))}{2} \right| \geq \sinh(\mathrm{Re}(H(0)))$$
so that $$\mathrm{Re}(H(0)) \leq \sinh^{-1}(C_2)$$ for a constant $$C_2$$ that depends only on $$M$$.

To complete the proof, we need a lower bound for $$\mathrm{Re}(H(0))$$, which is supposed to work the same way, but I do not see how. Can someone help?

## mg.metric geometry – Proof of the basic proportionality theorem

Note: I posted the proof for verification on Math StackExchange a few days ago, but I did not reach any conclusion and the discussion is dormant. I am just curious whether my approach is wrong. I'm posting my proof here.

Prove: $$\frac{\overline{A'B'}}{\overline{AB}} = \frac{\overline{B'C'}}{\overline{BC}} = \frac{\overline{A'C'}}{\overline{AC}} = K,$$ where $$K$$ is a constant.

Proof: In the figure above, $$\Delta A'B'C'$$ is inscribed in the larger circle of radius $$R'$$ and $$\Delta ABC$$ is inscribed in the smaller circle of radius $$R$$. Here, $$\Delta A'B'C'$$ is the enlarged version of $$\Delta ABC$$.

A well-known result of elementary geometry relates the length $$l$$ of an arc of a circle, the angle $$\theta$$ subtended by the arc, and the radius $$R$$ joining the ends of the arc, through the equation $$l = R \theta$$.

From the figure and the result above, we see that $$\mathrm{arc}\, A'B' = R' \theta$$ and $$\mathrm{arc}\, AB = R \theta$$, where $$\theta$$ is the common central angle. Taking ratios we get $$\frac{\mathrm{arc}\, A'B'}{\mathrm{arc}\, AB} = \frac{R'}{R}$$

Similarly, $$\frac{\mathrm{arc}\, B'C'}{\mathrm{arc}\, BC} = \frac{R'}{R}$$ and $$\frac{\mathrm{arc}\, A'C'}{\mathrm{arc}\, AC} = \frac{R'}{R}$$

The diameter, whose length is $$D$$, is the longest chord of a circle. The other chords are scaled-down versions of the diameter. Mathematically, $$P = B \cdot D$$,

where $$B$$ is a scaling factor whose value is between $$0$$ and $$1$$, and where $$P$$ is the length of the chord.

Figure 2

Let $$P$$ in Figure 2 be the length of the diameter. The length of the diameter changes from $$D$$ to $$D'$$. It is obvious that the scale factor is the same ($$B = 1$$) in the larger and the smaller circle: the ratio of chord length to diameter in each circle is equal to $$1$$.

Figure 3

Now let $$P$$ be the length of a chord that subtends an angle of $$\frac{\pi}{3}$$ radians at the center of a circle (see Figure 3). The length of the chord is equal to the radius of the circle, because the chord is a side of an equilateral triangle. As the chord length goes from $$P$$ to $$P'$$, it is again obvious that the scale factor is the same ($$B = \frac{1}{2}$$) in the larger and the smaller circle: the chord length in each circle is equal to half the diameter.

We see a pattern. The pattern tells us that, for a fixed $$\theta$$, the scale factor is the same in all concentric circles. In other words, the scale factor is independent of the radius and is a constant for a given $$\theta$$.

Following the pattern we have $$P \propto D$$.

This implies $$P \propto R$$.

This result and the result $$l = R \theta$$ (for constant $$\theta$$) together imply $$P \propto l$$.

Returning to Figure 1 we see $$\overline{A'B'} = C \cdot \mathrm{arc}\, A'B'$$

In the same manner, $$\overline{AB} = C \cdot \mathrm{arc}\, AB$$

where $$C$$ is a constant. Taking ratios we get $$\frac{\mathrm{arc}\, A'B'}{\mathrm{arc}\, AB} = \frac{\overline{A'B'}}{\overline{AB}}$$

Similarly, $$\frac{\mathrm{arc}\, B'C'}{\mathrm{arc}\, BC} = \frac{\overline{B'C'}}{\overline{BC}}$$

and

$$\frac{\mathrm{arc}\, A'C'}{\mathrm{arc}\, AC} = \frac{\overline{A'C'}}{\overline{AC}}$$

The results above imply $$\frac{\overline{A'B'}}{\overline{AB}} = \frac{\overline{B'C'}}{\overline{BC}} = \frac{\overline{A'C'}}{\overline{AC}} = \frac{R'}{R}$$

where $$\frac{R'}{R} = K$$, a constant. This concludes the proof.
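Not part of the proof, but a quick numeric check of the claimed pattern: writing the chord explicitly as $$P = 2R \sin(\theta / 2)$$ (a standard formula, not used in the proof above) confirms that for a fixed central angle the scale factor $$B = P / D$$ is independent of the radius, and that $$P \propto l$$:

```python
import math

def chord(R, theta):
    """Length of the chord subtending central angle theta in a circle of radius R."""
    return 2 * R * math.sin(theta / 2)

theta = math.pi / 3                        # the angle from the Figure 3 example
for R in (1.0, 2.5, 10.0):
    P = chord(R, theta)
    D = 2 * R
    assert math.isclose(P / D, 0.5)        # B = 1/2 for theta = pi/3, any radius
    # chord/arc ratio 2*sin(theta/2)/theta is also radius-independent: P ∝ l
    assert math.isclose(P / (R * theta), 2 * math.sin(theta / 2) / theta)
```

This only checks the pattern for concentric circles, of course; it does not by itself validate the geometric argument above.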

## information theory – Understanding the absolute values and inequalities in Shannon's entropy proof

I am another person reading Shannon's entropy proof (Appendix 2, page 28). I am trying to understand the unstated intermediate steps, similar to this question, but it seems its asker worked out more than I have.

I will annotate the proof with numbered markers (like (1)) and explain why I do not understand each part.

We can choose $$n$$ arbitrarily large and find an $$m$$ satisfying
$$s^m \leq t^n \lt s^{(m+1)}.$$
Then, taking logarithms and dividing by $$n \log s$$,
$$\frac{m}{n} \leq \frac{\log t}{\log s} \leq \frac{m}{n} + \frac{1}{n} \ \text{(1)} \quad \text{or} \quad \left\lvert \frac{m}{n} - \frac{\log t}{\log s} \right\rvert \lt \epsilon \ \text{(2)}$$

Therefore, dividing by $$n A(s)$$,
$$\frac{m}{n} \leq \frac{A(t)}{A(s)} \leq \frac{m}{n} + \frac{1}{n} \quad \text{or} \quad \left\lvert \frac{m}{n} - \frac{A(t)}{A(s)} \right\rvert \lt \epsilon$$
$$\left\lvert \frac{A(t)}{A(s)} - \frac{\log t}{\log s} \right\rvert \lt 2 \epsilon \ \text{(3)}$$

1. Why, when he takes logarithms and divides by $$n \log s$$, does one of the inequality signs change from $$\lt$$ to $$\leq$$?
2. Why does he take the absolute value of the result? And if he changed the inequality sign in (1), why does it go back to $$\lt$$ in (2)?
3. How did he obtain (3)?
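Not an answer, but a numeric illustration of the sandwich in (1)–(2), using (as an arbitrary example) $$s = 2$$, $$t = 3$$ and $$m = \lfloor n \log t / \log s \rfloor$$, with $$\epsilon = 1/n$$:

```python
import math

s, t = 2, 3
for n in (10, 100, 10_000):
    # the unique m with s^m <= t^n < s^(m+1)
    m = math.floor(n * math.log(t) / math.log(s))
    assert s**m <= t**n < s**(m + 1)
    # (1): m/n <= log t / log s <= m/n + 1/n
    ratio = math.log(t) / math.log(s)
    assert m / n <= ratio <= m / n + 1 / n
    # (2): the gap |m/n - log t / log s| is below epsilon = 1/n
    assert abs(m / n - ratio) < 1 / n
```

Making $$n$$ larger squeezes both $$\log t / \log s$$ and $$A(t)/A(s)$$ into the same interval of width $$1/n$$ around $$m/n$$, which is what the proof exploits.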