In CLRS, an approach is given to prove the optimal substructure and the correctness of the greedy algorithm for the activity-selection problem. In the lecture-hall assignment (interval-partitioning) problem, we sort the classes by start time and assign them to halls in that order, opening a new hall whenever a class conflicts with every existing hall. Is there a way to prove optimal substructure for this problem? And why do we sort the courses by start time rather than by end time?

How to prove that there is always an optimal solution that makes the greedy choice?
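For reference, the greedy described above can be sketched with a min-heap keyed on end times. This is a common implementation of interval partitioning, not necessarily the exact one in CLRS:

```python
import heapq

def min_halls(classes):
    """Greedy interval partitioning: process classes sorted by start
    time, reuse the hall that frees up earliest, and open a new hall
    only when the class conflicts with every hall in use."""
    ends = []  # min-heap of current end times, one entry per hall in use
    for start, end in sorted(classes):
        # A class starting exactly when another ends is treated as compatible.
        if ends and ends[0] <= start:
            heapq.heapreplace(ends, end)  # reuse the earliest-finishing hall
        else:
            heapq.heappush(ends, end)     # conflicts with all halls: open one
    return len(ends)
```

Sorting by start time is what makes the heap test sound: when a class starts, every hall whose earliest end time exceeds that start is genuinely busy at that moment, so a new hall really is forced.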

# Tag: Prove

## computational complexity – Would anyone be willing to help me figure out how to prove a new conjecture on computational theory?

I invented a new theory of computation which I named "the theory of self-reproducing machines", and I discovered a very interesting infinite family of self-reproducing machines, as well as a new set of mathematical laws that govern these kinds of systems, which extends cellular automata and John Conway's Game of Life by using my new "computer virus techniques" to write ordinary, non-malicious computer programs. As an inventor I have had a lot of crazy ideas, but this is the real one and I'm going with it; I desperately need the help of experts, and that is the purpose of my question.

Very briefly, here is my conjecture:

Conjecture 1: The theory of self-reproducing Turing machines (the theory of self-reproducing machines) is a Turing-complete theory of computation which generalizes the previous theory of computation. It makes it possible to solve problems in the "complexity class" NP-complete (a term I will define in my next conjecture). This new computational theory involves adding a new axiom to the previous theory of computation, which will violate the Church–Turing thesis and allow for an algorithm that I have already written in JavaScript and tested in my debugger, namely that Turing machines have a magical axiomatic capability to replicate themselves.

Conjecture 2: According to my new theory of computation, the class (in the sense of von Neumann class theory) of NP problems can be partitioned into distinct subsets, which I call "complexity classes", of which NP-complete is the lowest level.

These complexity classes can be mapped one-to-one to other problems by a kind of isomorphism that I call "complexity class homomorphisms", and we can understand the different exponential growth rates in Big-O complexity estimates through this kind of operation.

In addition, the various Big-O efficiency notations must be extended to include infinite orders, following Cantor's theory of infinite sets, using a new axiom which declares that there is no infinite set with complexity class strictly between the integers and the real numbers.

Furthermore, the equivalence relation on the types I have just conjectured can be mapped one-to-one, by a complexity class homomorphism, to the infinite orders I have just described.

In addition, the complexity class of the integer factorization problem is of the same order as the chess-engine problem; in other words, what makes chess difficult is that its strategy involves the mathematical problem of prime factorization.

Note 1: These conjectures are the result of 15 years of work on the theory of self-reproducing machines invented by John von Neumann, which inspired three fields: computer science, biology (via the discovery of DNA, which Watson and Crick were able to pursue because of the theory of self-reproducing machines), and game theory. His method for inventing entirely new mathematical fields was to study nature, evolution, and the human nervous system. It is also the basis of my new discovery, which results from analyzing the energy in the work done by the cells of the human body as a consequence of cellular self-replication in the human nervous system; this has tended to show that it is possible to obtain an exponential energy gain for computing purposes by understanding the computing power of the human nervous system from a theoretical-physics point of view.

Note 2: The purpose of my question is not an attempt to publish a result; I will do that soon on a new website I have for this purpose. My question is that I need help from an expert because I am in over my head, so what should I do?

Note 3: I also invented Math Inspector and I have a math education channel on YouTube and I was recently featured on Mathologer. https://mathinspector.com/

Note 4: If these types of questions are not appropriate, please let me know and I will stop. If they are on topic, let me know and I will delete Note 4.

## nt.number-theory – Prove that $\sum\limits_{m=1}^{n} \varphi^{-1}(\gcd(m,k)) \neq 0$ for $n = p_i$ for all $k$, where $p_i$ is a prime number

I'm trying to answer my own question here, but I'm stuck on what amounts to most of the question:

Let:

$$\varphi^{-1}(n) = \sum\limits_{d \mid n} d \cdot \mu(d) \tag{1}$$

Prove that: $$\sum\limits_{m=1}^{n} \varphi^{-1}(\gcd(m,k)) \neq 0 \tag{2}$$ for $n = p_i$ for all $k$, where $p_i$ is a prime number.
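Before attempting a proof, conjecture $(2)$ is easy to probe numerically using the definition in $(1)$. A minimal sketch (all function names are my own):

```python
from math import gcd

def mobius(n):
    """Möbius function mu(n), computed by trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # a squared prime factor forces mu = 0
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def phi_inv(n):
    """The function in (1): sum of d * mu(d) over the divisors d of n."""
    return sum(d * mobius(d) for d in range(1, n + 1) if n % d == 0)

def conjecture_sum(n, k):
    """The left-hand side of (2)."""
    return sum(phi_inv(gcd(m, k)) for m in range(1, n + 1))
```

Scanning small primes $n$ against a range of $k$ with `conjecture_sum` is a quick way to test the claim before investing in a proof.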

Given this conjecture, one could prove that:

$$\text{lower bound}\left(\sum_{n=1}^{N}\left(\sum_{k \mid n} \mu(k)\, H_{n/k} - 1\right)\right) < 1 - H_N \tag{3}$$

where $H_N$ is a harmonic number; this says that whatever the lower bound of the parenthesized expression in $(3)$ is, it should be smaller (more negative) than the right-hand side.

## real analysis – Prove that a function is in $L^p(\mathbb{R}^n)$ and/or $\mathcal{S}(\mathbb{R}^n)$

I know that a function is in $L^p$ if $\left(\int_{\mathbb{R}^n} |f(x)|^p\, dx\right)^{\frac{1}{p}} < \infty$, and is in $\mathcal{S}(\mathbb{R}^n)$ if $\sup_{x \in \mathbb{R}^n} (1+|x|^N)\,|\partial^\alpha f(x)| < \infty$ for all $N$ and all multi-indices $\alpha$,

and I have to prove this for the functions:

1) $f(x) = e^{-|x|^2}$

2) $f(x) = \frac{1}{1+|x|^2}$

but I don't understand how to carry out the computations.
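As a sketch of the kind of computation involved (assuming the first function is the Gaussian $e^{-|x|^2}$), passing to polar coordinates handles the $L^p$ claims:

$$\int_{\mathbb{R}^n} e^{-p|x|^2}\, dx \;=\; \sigma(S^{n-1}) \int_0^\infty e^{-pr^2}\, r^{n-1}\, dr \;<\; \infty \quad \text{for every } p \ge 1,$$

so the Gaussian lies in every $L^p(\mathbb{R}^n)$; and since each derivative is a polynomial times the Gaussian, it decays rapidly and lies in $\mathcal{S}(\mathbb{R}^n)$ too. For the second function, $|f(x)|^p \sim |x|^{-2p}$ as $|x| \to \infty$, and the same radial test gives $f \in L^p(\mathbb{R}^n)$ exactly when $2p > n$; it is smooth but decays only polynomially, so it is not in $\mathcal{S}(\mathbb{R}^n)$.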

## complexity theory – Prove the undecidability of $HALT_{TM}$ by reduction

Sipser, in his book *Introduction to the Theory of Computation*, gives a proof of the undecidability of $HALT_{TM}$. He uses a contradiction: he assumes that $HALT_{TM}$ is decidable and builds a decider for $A_{TM}$; since $A_{TM}$ has already been proven undecidable by the diagonalization method, a contradiction occurs and $HALT_{TM}$ is undecidable. The Turing machine involved is simple and straightforward, and I will not go into the details.

What confuses me is this sentence:

We prove the undecidability of $HALT_{TM}$ by a reduction from $A_{TM}$ to $HALT_{TM}$

and I would like to know: in which part of the proof does the reduction actually occur?

From what we know of the concept of reduction, reducing $A$ to $B$ means: we have two problems $A$ and $B$, we know how to solve $B$ but are stuck on $A$; if we reduce $A$ to $B$, then solving an instance of $B$ yields a solution to an instance of $A$.

Coming back to the proof, Sipser says:

We prove the undecidability of $HALT_{TM}$ by a reduction from $A_{TM}$ to $HALT_{TM}$

Therefore $A = A_{TM}$ and $B = HALT_{TM}$. But we don't know how to solve $HALT_{TM}$ (in fact, that is the very problem at hand), and moreover the proof strategy is based on contradiction, which seems to have nothing to do with the concept of reduction. So why does Sipser use the term **reduction** in this proof?
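To see where the reduction lives, it may help to write the construction as code. The sketch below is my own paraphrase, not Sipser's text: the body of `accepts` is exactly the map that turns an $A_{TM}$ question into a $HALT_{TM}$ question, and that map is the reduction.

```python
def make_atm_decider(halts):
    """Build a decider for A_TM from a hypothetical decider
    `halts(M, w)` for HALT_TM.  No such `halts` can actually exist;
    it is the assumption to be contradicted."""
    def accepts(M, w):
        if not halts(M, w):   # M loops on w, so it certainly does not accept
            return False
        return M(w)           # M is guaranteed to halt, so simulation is safe
    return accepts
```

The contradiction is layered on top of this: since `accepts` would decide $A_{TM}$, which is known to be undecidable, the assumed `halts` cannot exist. In other words, the reduction is the construction, and the contradiction is how the construction gets used.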

## group theory – Prove that if H and K are Sylow p-subgroups, then H = K. Complete question in the body.

Let G be a finite abelian group and let p be a positive prime which divides the order of G. Show that if H and K are Sylow p-subgroups, then H = K.

I first assume that H and K are Sylow p-subgroups, which tells me $|H| = p^n$ and $|K| = p^n$… I don't even know if that's correct, haha. I'm trying not to hate group theory, please help me (RA > GT). I just need a push in the right direction. Clearly I know I'm going to use the fact that G is abelian to show H = K at some point, haha. 🙂
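One standard route, sketched (the abelian hypothesis is used exactly once, to make $HK$ a subgroup). Recall the product formula

$$|HK| = \frac{|H|\,|K|}{|H \cap K|}.$$

Since $G$ is abelian, $HK$ is a subgroup of $G$; by the formula its order is a power of $p$, and it contains $H$. But $H$ is a Sylow $p$-subgroup, i.e. a $p$-subgroup of maximal order, so $|HK| \le |H|$. This forces $|H \cap K| = |K|$, i.e. $K \subseteq H$, and since $|H| = |K|$ we conclude $H = K$.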

## Proof of proof of proof… (Proof that you can't prove a proof)

Let's say we have a proof $p_1$ of a certain claim $X$. But we have to check that the proof is valid, so let $p_2$ be a proof that $p_1$ is valid. Likewise we must verify that the proof that the proof is valid is valid: let $p_3$ be a proof that $p_2$ is valid. But we also need to check that one, so define $p_4$ as a proof that $p_3$ is valid, and as before we must verify $p_4$ too…

Therefore, we cannot prove any proof.

So my question is: where is my Fields Medal?

## functional analysis – Prove that for the defined $\langle\cdot,\cdot\rangle$ there are $0 < a \le b$ with $a\|x\| \le \|x\|_\ast \le b\|x\|$

Let $H$ be a Hilbert space over $\mathbb{R}$ with an inner product $(\cdot,\cdot)$ and the norm $\|x\| = \sqrt{(x,x)}$. Let $A$ be a strictly positive definite bounded linear operator on $H$ with $A^\ast = A$.

For $x, y \in H$, let $\langle x, y\rangle = (Ax, y)$ and $\|x\|_\ast = \sqrt{\langle x, x\rangle}$.

Prove that $\langle\cdot,\cdot\rangle$ is an inner product on the vector space $H$, and that there are constants $0 < a \le b$ such that $a\|x\| \le \|x\|_\ast \le b\|x\|$ for all $x \in H$.

$\text{My attempt}$:

- For the first part, showing that it is an inner product:

1- $\langle x, x\rangle = (Ax, x) \ge 0$, and $(Ax, x) = 0$ if and only if $x = 0$, since $A$ is strictly positive definite

2- $\langle x, y\rangle = (Ax, y) = (x, Ay) = (Ay, x) = \langle y, x\rangle$, using $A^\ast = A$

3- $\langle x+z, y\rangle = (Ax + Az, y) = (Ax, y) + (Az, y) = \langle x, y\rangle + \langle z, y\rangle$

4- $\langle \alpha x, y\rangle = (\alpha Ax, y) = \alpha(Ax, y) = \alpha\langle x, y\rangle$

- For the equivalence of the norms:

1- Since $A$ is strictly positive definite, $(Ax, x) \ge \beta\|x\|^2$ for some $\beta > 0$, so $\|x\|_\ast = \sqrt{\langle x, x\rangle} = \sqrt{(Ax, x)} \ge \sqrt{\beta\|x\|^2} = \sqrt{\beta}\,\|x\| \equiv a\|x\|$

2- Let $\|A\| = \alpha$. Since $A$ is bounded, Cauchy–Schwarz gives $(Ax, x) \le \|Ax\|\,\|x\| \le \|A\|\,\|x\|^2$, so $\|x\|_\ast = \sqrt{(Ax, x)} \le \sqrt{\alpha\|x\|^2} = \sqrt{\alpha}\,\|x\| \equiv b\|x\|$
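A finite-dimensional sanity check of the two-sided bound, with a concrete symmetric positive definite matrix. The matrix, the test vectors, and the constants are my own choices; in this $2 \times 2$ case $a$ and $b$ are the square roots of the extreme eigenvalues:

```python
import math

# A = [[2, 1], [1, 2]] is symmetric positive definite on R^2,
# with eigenvalues 1 and 3, so a = sqrt(1) and b = sqrt(3).
A = [[2.0, 1.0], [1.0, 2.0]]

def inner(u, v):                     # standard inner product on R^2
    return u[0]*v[0] + u[1]*v[1]

def inner_A(u, v):                   # <u, v> = (Au, v)
    Au = [A[0][0]*u[0] + A[0][1]*u[1],
          A[1][0]*u[0] + A[1][1]*u[1]]
    return inner(Au, v)

def norms(x):
    """Return (||x||, ||x||_*)."""
    return math.sqrt(inner(x, x)), math.sqrt(inner_A(x, x))

a, b = 1.0, math.sqrt(3.0)           # sqrt of min/max eigenvalue
for x in [(1, 0), (1, 1), (-2, 3), (0.5, -0.25)]:
    n, n_ast = norms(x)
    assert a * n <= n_ast + 1e-12 and n_ast <= b * n + 1e-12
```

The vector $(1,1)$ is an eigenvector for the eigenvalue $3$, so it attains the upper bound exactly: $\|x\|_\ast = \sqrt{3}\,\|x\|$.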

## model categories – Is $l(r(F))$ the smallest (weakly) saturated class containing $F$? How to prove it?

Here $F$ is a class of morphisms of $\widehat{A}$, the category of presheaves on $A$, where $A$ is a small category (so $\widehat{A}$ has all small colimits). "Saturated class" means a class stable under retracts, pushouts, and transfinite composition. I know that the smallest saturated class containing $F$ is contained in $l(r(F))$, but I don't know how to prove the other inclusion.

## complexity theory – How to prove the NP-completeness of MOD-PARTITION

MOD-PARTITION: Given a set of integers $A = \{a_1, \dots, a_n\}$, their weights $w: \{w_1, w_2, \dots, w_n\}$, and a number $k$, is there a subset $X$ of $A$ such that $\left(\sum_{x \in X} w(x)\right) \bmod k \;=\; \left(\sum_{a \in A \setminus X} w(a)\right) \bmod k$?

Can someone give an idea of how to prove the NP-completeness of this problem? I have seen the proof for the SET-PARTITION problem (using a reduction from SUBSET-SUM), and I suspect this proof needs only a small modification.
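To pin down the problem definition before working on the reduction, here is a brute-force (exponential-time) decider for MOD-PARTITION; the function name is mine, and this is only a reference checker, not part of any hardness proof:

```python
from itertools import combinations

def mod_partition(weights, k):
    """Brute-force check: is there a subset X of indices whose weight
    sum is congruent, mod k, to the weight sum of its complement?
    Tries all 2^n subsets, so this is for small instances only."""
    total = sum(weights)
    n = len(weights)
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            s = sum(weights[i] for i in combo)
            if s % k == (total - s) % k:
                return True
    return False
```

Note that the empty subset is allowed here; if the intended problem requires a proper nontrivial partition, the ranges would need adjusting, which is exactly the kind of detail the reduction from SET-PARTITION has to respect.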