linear algebra – If $A$ is a rotation matrix, then $\|Ax\| = \|x\|$

Attempt:

Let
$$
A =
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix},
\qquad
x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
$$

Then,
$$ Ax =
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \cos\theta \, x_1 - \sin\theta \, x_2 \\ \sin\theta \, x_1 + \cos\theta \, x_2 \end{bmatrix} $$

$$
\begin{align}
\|Ax\|^2 & = (\cos\theta \, x_1 - \sin\theta \, x_2)^2 + (\sin\theta \, x_1 + \cos\theta \, x_2)^2 \\
& = (\cos^2\theta \, x_1^2 - 2\cos\theta \, x_1 \sin\theta \, x_2 + \sin^2\theta \, x_2^2) + (\sin^2\theta \, x_1^2 + 2\sin\theta \, x_1 \cos\theta \, x_2 + \cos^2\theta \, x_2^2) \\
& = x_1^2 + x_2^2 = \|x\|^2
\end{align}
$$


However, my textbook seems to suggest that this only holds for $0 \leq \theta \leq \pi$.


So where does my attempt go wrong?
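For what it's worth, a quick numerical sanity check of the computation above (a throwaway numpy sketch; the particular $\theta$ and $x$ are made up, with $\theta$ deliberately outside $[0, \pi]$):

```python
import numpy as np

theta = 4.0                      # deliberately outside [0, pi]
x = np.array([1.7, -0.4])        # arbitrary test vector
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The two printed norms agree (up to floating-point error).
print(np.linalg.norm(A @ x), np.linalg.norm(x))
```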

Linear Algebra – Finding more than one basis for a column space

Let's say the matrix $A$ is defined as follows:
$$ A = \begin{pmatrix} 6 & 3 & -1 & 0 \\ 1 & 1 & 0 & 4 \\ -2 & 5 & 0 & 2 \end{pmatrix} $$

Its RRE (reduced row echelon) form is:
$$ A_{RRE} = \begin{pmatrix} 1 & 0 & 0 & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \end{pmatrix} $$

Then I can write a basis for the column space of $A$ using its three linearly independent pivot columns:
$$ B_1 = \left\{ \begin{pmatrix} 6 \\ 1 \\ -2 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \\ 5 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix} \right\} $$

However, what if I want to find a different basis for $\operatorname{col} A$, one in which no vector is a scalar multiple of any vector in $B_1$? Is the following method valid?
1. Rearrange the columns of the matrix, which does not change their span:
$$ A' = \begin{pmatrix} 0 & 6 & 3 & -1 \\ 4 & 1 & 1 & 0 \\ 2 & -2 & 5 & 0 \end{pmatrix} $$
2. Put it in RRE form:
$$ A'_{RRE} = \begin{pmatrix} 1 & 0 & 0 & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \end{pmatrix} $$

3. Preliminary basis:
$$ B_2' = \left\{ \begin{pmatrix} 0 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 6 \\ 1 \\ -2 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \\ 5 \end{pmatrix} \right\} $$

4. Take linear combinations of the vectors in the preliminary basis:
$$ B_2 = \left\{ \begin{pmatrix} 0 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 6 \\ 5 \\ 0 \end{pmatrix}, \begin{pmatrix} 9 \\ 2 \\ 3 \end{pmatrix} \right\} $$

I know that a simpler approach would be to take linear combinations of the existing basis $B_1$ directly, but I want to know whether this method works and, if not, why not.
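Not part of the question, but here is a small numpy sketch of the kind of check one could run on the proposed $B_2$ (it only verifies ranks, not the reasoning behind the method):

```python
import numpy as np

A  = np.array([[ 6, 3, -1, 0],
               [ 1, 1,  0, 4],
               [-2, 5,  0, 2]])
B1 = np.column_stack(([6, 1, -2], [3, 1, 5], [-1, 0, 0]))   # original basis vectors as columns
B2 = np.column_stack(([0, 4, 2], [6, 5, 0], [9, 2, 3]))     # proposed new basis vectors as columns

# rank(A) = 3 means col(A) is all of R^3, so any three linearly
# independent vectors in R^3 form a basis for col(A).
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B1), np.linalg.matrix_rank(B2))
```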

pr.probability – Linear combination of independent Bernoulli random variables

Assume $x_1, x_2, \ldots, x_n$ are i.i.d. Bernoulli random variables with success probability $p$, i.e. $x_i = 1$ with probability $p$ and $x_i = 0$ with probability $1-p$ for each $i$. Let $a = (a_1, a_2, \ldots, a_n) \in \mathbb{R}^n$ be a fixed vector. Does the quantity $\sum_{i=1}^n a_i x_i$ have a known (or easy to derive) distribution?
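Not an answer, but a brute-force Python sketch (with made-up weights $a$ and made-up $p$) that tabulates the exact distribution of $\sum_i a_i x_i$ by enumerating all $2^n$ outcomes; in general the support can have up to $2^n$ distinct points:

```python
import itertools
import numpy as np

a = np.array([0.5, 1.3, -2.0])   # hypothetical weights
p = 0.3                          # hypothetical success probability

dist = {}
for bits in itertools.product([0, 1], repeat=len(a)):
    value = round(float(np.dot(a, bits)), 12)
    prob = p ** sum(bits) * (1 - p) ** (len(a) - sum(bits))
    dist[value] = dist.get(value, 0.0) + prob

# Each key is a possible value of sum_i a_i x_i, each entry its probability.
for value, prob in sorted(dist.items()):
    print(value, prob)
```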

linear algebra – A spectral theorem for complex symmetric matrices

Let $M$ be a complex square matrix satisfying $M = M^T$ (here $M^T$ is the transpose of $M$). Such an $M$ is not necessarily self-adjoint or normal.

Is it true that there exist a complex orthogonal matrix $O$ (not necessarily unitary) and a diagonal matrix $D$ such that $M = ODO^T$?
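Not an answer, but a numerical experiment one can run (a numpy sketch for a random, hence generic, complex symmetric $M$; it says nothing about degenerate cases such as non-diagonalizable symmetric matrices or eigenvectors with $v^T v = 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = B + B.T                                # complex symmetric, generically diagonalizable

w, V = np.linalg.eig(M)
# Normalize each eigenvector v so that v^T v = 1 (bilinear form, not the Hermitian norm).
O = V / np.sqrt(np.sum(V * V, axis=0))

print(np.allclose(O.T @ O, np.eye(n)))         # O is complex orthogonal
print(np.allclose(O @ np.diag(w) @ O.T, M))    # M = O D O^T
```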

assignment problem – Explain this linear-time approach to graph matching

I would like a better explanation of the paper on computing optimal assignments in linear time for approximate graph matching.

The graph edit distance is approximated by optimal assignments computed in linear time.

In summary, there is an embedding of the optimal assignment costs into a Manhattan metric: $\varphi_c(A) = (A_{uv}^{\leftarrow} \cdot w(uv))_{uv \in E(T)}$. The Manhattan distance between these vectors equals the optimal assignment cost between the sets.

The problem is that the paper does not fully explain how to find $A_{uv}^{\leftarrow}$, nor how to use Weisfeiler-Lehman labelling on the vertices of a tree such as the one in the figure.


Please explain how to find $A_{uv}^{\leftarrow}$ and how to label such a tree.
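I cannot speak for the paper's exact construction of $A_{uv}^{\leftarrow}$, but here is a minimal sketch of one round of standard Weisfeiler-Lehman label refinement (on a made-up toy tree with made-up initial labels), which is the operation such labelling schemes build on:

```python
from collections import defaultdict

# Hypothetical toy tree and initial vertex labels (not the tree from the figure).
edges = [(0, 1), (0, 2), (1, 3), (1, 4)]
labels = {0: "a", 1: "a", 2: "b", 3: "b", 4: "b"}

adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def wl_round(labels):
    # New label = (old label, sorted multiset of neighbour labels), compressed to an integer.
    sigs = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v]))) for v in labels}
    compress = {sig: i for i, sig in enumerate(sorted(set(sigs.values())))}
    return {v: compress[sigs[v]] for v in labels}

print(wl_round(labels))   # -> {0: 0, 1: 1, 2: 2, 3: 2, 4: 2}
```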

at.algebraic-topology – Homotopy type of linear isometric self-isomorphisms of $\mathbb{R}^\infty$

In the paper "Orbispaces, Orthogonal Spaces and the Universal Compact Lie Group", Stefan Schwede studies (spaces with an action of) the topological monoid $\mathbf{L}(\mathbb{R}^\infty, \mathbb{R}^\infty)$ of linear isometric self-embeddings of $\mathbb{R}^\infty$, equipped with the subspace topology from $\operatorname{maps}(\mathbb{R}^\infty, \mathbb{R}^\infty)$ with the compact-open topology. The underlying space of this monoid is contractible (Remark A.12). What is known about the homotopy type of the group of invertible elements of this monoid, i.e. the linear isometric isomorphisms of $\mathbb{R}^\infty$? In particular, is its underlying space still contractible?

Intersection of some linear ideals of $K[[X_1,\ldots,X_{np}]]$ for $\mathrm{ch}(K) = p > 0$

Assume $\mathrm{ch}(K) = p > 0$ and consider the ring of formal power series $K[[X_1, \ldots, X_{np}]]$ over $K$ in the $np$ variables $X_1, \ldots, X_{np}$. Let $\Lambda$ be defined as follows:
\begin{align*}
& \Lambda := \{\text{all sets } \{(i_1, \ldots, i_p), (i_{p+1}, \ldots, i_{2p}), \ldots, (i_{(n-1)p+1}, \ldots, i_{np})\}\}, \\
& \text{where } \{1, 2, \ldots, np\} = \{i_1, i_2, \ldots, i_{np}\} \text{ such that } i_k \neq i_l \text{ for } k \neq l.
\end{align*}

Namely, $\Lambda$ is the set of all partitions of $(1, \ldots, np)$ into $n$ '$p$-tuples'.

For $\lambda = \{(i_1, \ldots, i_p), (i_{p+1}, \ldots, i_{2p}), \ldots, (i_{(n-1)p+1}, \ldots, i_{np})\} \in \Lambda$, we associate the following ideal $I_\lambda$ of $K[[X_1, \ldots, X_{np}]]$:
\begin{equation*}
I_\lambda := (X_{i_1} + \ldots + X_{i_p},\; X_{i_{p+1}} + \ldots + X_{i_{2p}},\; \ldots,\; X_{i_{(n-1)p+1}} + \ldots + X_{i_{np}}).
\end{equation*}

We define the ideal $S_n$ of the ring $K[[X_1, \ldots, X_{np}]]$ as follows:
\begin{equation*}
S_n := \bigcap_{\lambda \in \Lambda} I_\lambda.
\end{equation*}

In addition, we write the generators of $S_n$ as follows:
\begin{equation*}
S_n = (\theta, s_2, \ldots, s_{m(n)}),
\end{equation*}

where $\theta := X_1 + \ldots + X_{np}$.

Conjecture. The degrees $\deg(s_2), \ldots, \deg(s_{m(n)})$ diverge as $n \to \infty$.
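As an aside, the combinatorial setup is easy to make concrete. Below is a small Python sketch (for the toy case $n = 2$, $p = 2$) that enumerates $\Lambda$ and prints the generators of each $I_\lambda$; actually computing the intersection $S_n$ and the degrees of its generators would need a Gröbner basis system, which this sketch does not attempt:

```python
from itertools import combinations

def partitions_into_p_blocks(elems, p):
    # Yield all partitions of `elems` into unordered blocks of size p
    # (the set Lambda, with blocks playing the role of the p-tuples).
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for others in combinations(rest, p - 1):
        block = (first,) + others
        remaining = tuple(e for e in rest if e not in others)
        for tail in partitions_into_p_blocks(remaining, p):
            yield (block,) + tail

n, p = 2, 2                                   # toy parameters
for lam in partitions_into_p_blocks(tuple(range(1, n * p + 1)), p):
    gens = [" + ".join(f"X_{i}" for i in block) for block in lam]
    print(lam, "-> I_lambda = (", ", ".join(gens), ")")
```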

linear algebra – Angles between normal vectors and angles between faces

I have to prove two different formulas, but I do not understand why their values differ.

This is a problem about a tetrahedron, specifically about its angles.

First, I am supposed to prove

$A^2 = B^2 + C^2 + D^2 + 2BC\cos\angle(b,c) + 2CD\cos\angle(c,d) + 2BD\cos\angle(b,d)$

with $A, B, C, D$ being the faces of a tetrahedron and $a, b, c, d$ their respective normal vectors. Then I have to prove

$A^2 = B^2 + C^2 + D^2 - 2BC\cos\angle(B,C) - 2CD\cos\angle(C,D) - 2BD\cos\angle(B,D)$

What is the difference between the angles between the normal vectors and the angles between the faces of a tetrahedron?

Thank you.
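Not part of the original problem statement, but a sketch of where the first identity usually comes from, assuming $A, B, C, D$ are the face areas and $a, b, c, d$ unit outward normals: the vector areas of a closed polyhedron sum to zero, $Aa + Bb + Cc + Dd = 0$, so

$$
A^2 = \|Bb + Cc + Dd\|^2 = B^2 + C^2 + D^2 + 2BC\cos\angle(b,c) + 2BD\cos\angle(b,d) + 2CD\cos\angle(c,d).
$$

The sign change in the second formula then comes down to how the angle between two faces is related to the angle between their outward normals.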

Linear algebra – The inverse of the sum of an identity and a Kronecker product after adding a column or deleting a row

Let $Q = \alpha\mathbb{I} + (S \otimes S)^T (S \otimes S) = \alpha\mathbb{I} + S^TS \otimes S^TS$, where $\mathbb{I}$ is the $n^2 \times n^2$ identity matrix, $S$ is an $m \times n$ binary matrix, and $\otimes$ denotes the Kronecker product. I have a couple of questions:

First, we can write $S^TS = \sum_{i=1}^m s_i^T s_i$, which would let us use rank-one update properties. What would the update of $Q^{-1}$ be if we remove the $i$-th row of $S$? Can the Sherman-Morrison formula be extended to this case? For example, if we have $M = (\alpha\mathbb{I} + S^TS)^{-1}$, then the updated inverse after removing the $i$-th row would be
$$ M_{-i} = M - \frac{M s_i^T s_i M}{s_i M s_i^T - 1} $$

Second, what about adding a column to the matrix $S$: how does the inverse of $Q$ change? Again, can we use a similar partitioned-matrix property?
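For the first question, here is a small numpy sketch (with made-up $\alpha$, $S$, and $i$) that numerically compares the proposed rank-one downdate of $M = (\alpha\mathbb{I} + S^TS)^{-1}$ against recomputing the inverse from scratch; it does not address the Kronecker-structured $Q$ itself:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, m, n, i = 2.0, 6, 4, 3                         # made-up test sizes
S = rng.integers(0, 2, size=(m, n)).astype(float)     # random binary matrix

M = np.linalg.inv(alpha * np.eye(n) + S.T @ S)
s = S[i:i + 1, :]                                     # the row to delete, kept as a 1 x n matrix

# Proposed Sherman-Morrison style downdate after deleting row i of S.
M_downdate = M - (M @ s.T @ s @ M) / (s @ M @ s.T - 1.0)

# Recompute the inverse directly with the row removed, for comparison.
S_del = np.delete(S, i, axis=0)
M_direct = np.linalg.inv(alpha * np.eye(n) + S_del.T @ S_del)

print(np.allclose(M_downdate, M_direct))
```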

algebraic geometry – How to prove that the automorphisms of $\mathbb{P}^n$ come from linear maps on $\mathbb{A}^{n+1}$?

Let $U_n$ be the space obtained by removing the origin from $\mathbb{A}^{n+1}$. Of course, $k^\star$ acts multiplicatively on $U_n$, and the orbit space $U_n / k^\star$ is the usual projective space $\mathbb{P}^n$.

I would like to say that every automorphism $\varphi: \mathbb{P}^n \to \mathbb{P}^n$ comes from an automorphism $\tilde\varphi: U_n \to U_n$ that respects the orbits. If I were working with nice Hausdorff manifolds, I could say something like:

Note that $U_n$ is simply connected because it retracts onto a sphere. So if we lift $\varphi$ to an automorphism of some covering space of $U_n$, that covering space turns out to be $U_n$ itself.

But in the land of algebraic geometry our spaces are topologically horrible, so this does not work. Is there even a covering space theory to start from? What can I do?