Let $e_i \wedge e_j$ ($i < j$) be a basis for the $\mathbb{Z}$-module $\wedge^2 \Gamma$, where $\Gamma = \mathbb{Z}^n$.

Clearly $S_n$ acts naturally on the module $\wedge^2 \Gamma$ via

$$\pi(e_i \wedge e_j) = e_{\pi(i)} \wedge e_{\pi(j)} \quad \forall \pi \in S_n.$$

Which (non-trivial) cyclic subgroups of $S_n$ have the maximum number of orbits in this action?

The answer seems to be the subgroups generated by the transpositions $\pi = (i\,j)$. But can there be other permutations $\pi$ which are not transpositions but have the same number of orbits?
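For small $n$ the guess can be checked by brute force. Here is a minimal Python sketch (my own check, not from the question) that counts orbits of $\langle \pi \rangle$ on the unordered pairs $\{i, j\}$ indexing the basis; signs are ignored, since $\pi$ only permutes the basis lines up to sign:

```python
from itertools import combinations, permutations

def orbit_count(perm, n):
    # Orbits of the cyclic group <perm> on unordered pairs {i, j},
    # which index the basis e_i ^ e_j of wedge^2 Z^n (signs ignored:
    # perm permutes the basis lines up to sign).
    pairs = list(combinations(range(n), 2))
    seen, orbits = set(), 0
    for p in pairs:
        if p in seen:
            continue
        orbits += 1
        q = p
        while True:
            q = tuple(sorted((perm[q[0]], perm[q[1]])))
            if q == p:
                break
            seen.add(q)
    return orbits

n = 5
counts = {perm: orbit_count(perm, n)
          for perm in permutations(range(n))
          if perm != tuple(range(n))}      # skip the identity
m = max(counts.values())
maximizers = [p for p, c in counts.items() if c == m]
# transpositions move exactly two points
transpositions = [p for p in maximizers
                  if sum(p[i] != i for i in range(n)) == 2]
print(m, len(maximizers), len(transpositions))
```

For $n = 5$ this prints `7 10 10`: the maximum orbit count 7 is attained exactly by the 10 transpositions, consistent with the guess (at least for this small case).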

# Tag: symmetric

## matrix – Something like Outer, but symmetric?

(I am simplifying for ease of explanation; in reality I work with much larger vectors.)

If I have a vector $A = \begin{pmatrix} a \\ b \end{pmatrix}$ and a vector $B = \begin{pmatrix} c \\ d \end{pmatrix}$, and I have a function of two arguments, I want to be able to use a function that generates the matrix

$$\begin{pmatrix} f(a, c) & f(a, d) \\ f(d, a) & f(b, d) \end{pmatrix}$$

At first I tried to use Outer, but that gives me

$$\begin{pmatrix} f(a, c) & f(a, d) \\ f(b, c) & f(b, d) \end{pmatrix}$$

Is there a built-in function that can do this, or do I have to write my own? I'm still getting used to Mathematica, so if I have to write my own, any additional help on how I could get there would be great. Ideally, this should be able to handle large vectors (50 or 100 elements) of unequal sizes.
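For what it's worth, here is a short Python sketch of the idea: a plain analogue of `Outer`, plus one hypothetical "symmetrized" variant. The rule used on and below the diagonal (swap the arguments of `f`) is my own guess at the intended symmetry and is easy to swap out:

```python
def outer(f, A, B):
    """Python analogue of Mathematica's Outer[f, A, B]:
    entry (i, j) is f(A[i], B[j])."""
    return [[f(a, b) for b in B] for a in A]

def outer_swapped(f, A, B):
    """One hypothetical 'symmetrized' variant: above the diagonal use
    f(A[i], B[j]); on and below it, use the swapped arguments
    f(B[j], A[i]). Adjust the rule to whatever symmetry is wanted."""
    return [[f(A[i], B[j]) if j > i else f(B[j], A[i])
             for j in range(len(B))]
            for i in range(len(A))]

f = lambda u, v: (u, v)          # placeholder f that records its arguments
A, B = ['a', 'b'], ['c', 'd']
print(outer(f, A, B))
print(outer_swapped(f, A, B))
```

The nested list comprehensions work for vectors of unequal sizes, since the row length comes from `B` and the column count from `A`.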

## differential geometry – Symmetric version of the eikonal equation for the pairwise distance function?

Assume $\mathcal{M} \subseteq \mathbb{R}^n$ is a submanifold of $\mathbb{R}^n$. Let $d(\cdot, \cdot): \mathcal{M} \times \mathcal{M} \to \mathbb{R}_+$ be the pairwise geodesic distance function, so $d(x, y)$ is the length of the shortest path from $x \in \mathcal{M}$ to $y \in \mathcal{M}$.

Holding one coordinate fixed and varying the other, wherever $d(\cdot, \cdot)$ is differentiable it satisfies the eikonal equation $\|\nabla f(x)\|_2 = 1$. In particular, we can write two conditions satisfied by $d(\cdot, \cdot)$ almost everywhere: $$\|\nabla_x d(x, y)\|_2 = 1 \qquad \textrm{and} \qquad \|\nabla_y d(x, y)\|_2 = 1.$$

In a sense, these two conditions are redundant. If $d(\cdot, \cdot)$ satisfies the first condition, it "looks like" a distance function from $y$ at all other points $x$, and the other condition follows by the symmetry of $d$.

**Is there a single, more symmetric PDE, satisfied by $d(\cdot, \cdot)$ on the product manifold $\mathcal{M} \times \mathcal{M}$?**

In other words, if the eikonal equation is the PDE behind the single-source, all-destinations geodesic distance problem, is there a different canonical PDE that governs the pairwise geodesic distance problem? I hope to identify a condition that does not require applying the eikonal condition in the $x$ and $y$ coordinates individually.
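As a sanity check of the two conditions above in the trivial case $\mathcal{M} = \mathbb{R}^2$, where $d$ is just the Euclidean distance, a short numerical sketch with finite differences:

```python
import numpy as np

def d(x, y):
    """Geodesic distance on the flat 'manifold' M = R^2 (a sanity-check
    stand-in for a general submanifold): the Euclidean distance."""
    return np.linalg.norm(x - y)

def grad(fun, p, h=1e-6):
    """Central finite-difference gradient of fun at p."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p); e[i] = h
        g[i] = (fun(p + e) - fun(p - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)

gx = grad(lambda p: d(p, y), x)   # |grad_x d| should be ~1
gy = grad(lambda p: d(x, p), y)   # |grad_y d| should be ~1 as well
print(np.linalg.norm(gx), np.linalg.norm(gy))
```

Both norms come out numerically equal to 1, illustrating that the two eikonal conditions hold simultaneously, and redundantly, away from the diagonal.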

## ag.algebraic geometry – Regarding $\mathbb{C}(s_1, s_2, s_3, y) = \mathbb{C}(x, y)$, where $s_1, s_2, s_3$ are symmetric

Perhaps the following question is not at the MO level, but it has not received any comments on MSE, so I am asking it here too:

Let $\beta: \mathbb{C}(x, y) \to \mathbb{C}(x, y)$ be the involution on $\mathbb{C}(x, y)$ defined by $(x, y) \mapsto (x, -y)$.

Let $s_1, s_2, s_3 \in \mathbb{C}(x, y)$ be three symmetric elements with respect to $\beta$.

It is not difficult to see that a symmetric element w.r.t. $\beta$ has the following form:

$a_{2n} y^{2n} + a_{2n-2} y^{2n-2} + \cdots + a_2 y^2 + a_0$, where $a_{2d} \in \mathbb{C}(x)$.

Suppose that the following two conditions are met:

**(1)** Every two of $\{s_1, s_2, s_3\}$ are algebraically independent over $\mathbb{C}$.

Note that the three elements $s_1, s_2, s_3$ are algebraically dependent over $\mathbb{C}$, since the transcendence degree of $\mathbb{C}(x, y)$ over $\mathbb{C}$ is two.

**(2)** $\mathbb{C}(s_1, s_2, s_3, y) = \mathbb{C}(x, y)$; here $\mathbb{C}(s_1, s_2, s_3, y)$ and $\mathbb{C}(x, y)$ denote the fraction fields of $\mathbb{C}[s_1, s_2, s_3, y]$ and $\mathbb{C}[x, y]$, respectively.

**Example:**

$s_1 = x^2 + x^5 + A(y)$, $s_2 = x^5 y^2 + B(y)$, $s_3 = x^3 y^2 + C(y)$, where $A(y), B(y), C(y) \in \mathbb{C}(y^2)$.
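A quick numerical sanity check that elements of this shape are indeed fixed by $\beta$, with sample choices $A(y) = y^2$, $B(y) = 1/(1+y^2)$, $C(y) = y^4$ (my own picks from $\mathbb{C}(y^2)$):

```python
def A(y): return y**2           # sample choices in C(y^2)
def B(y): return 1 / (1 + y**2)
def C(y): return y**4

def s1(x, y): return x**2 + x**5 + A(y)
def s2(x, y): return x**5 * y**2 + B(y)
def s3(x, y): return x**3 * y**2 + C(y)

# beta sends (x, y) to (x, -y); a beta-symmetric element is unchanged,
# since it involves only even powers of y.
for x in (0.5, 1.3, -2.0):
    for y in (0.7, -1.1, 2.5):
        for s in (s1, s2, s3):
            assert abs(s(x, y) - s(x, -y)) < 1e-12
print("all sample elements are beta-symmetric")
```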

**Question 1:** Is it possible to find a "specific" form of at least one of $\{s_1, s_2, s_3\}$? A plausible answer might be: each of $\{s_1, s_2, s_3\}$ has the form $\lambda x^n y^{2m} + D(y)$ for some $D(y) \in \mathbb{C}(y^2)$, $\lambda \in \mathbb{C}^{\times}$, $n \geq 1$, $m \geq 0$. Is it possible to find a counterexample to my plausible answer?

Perhaps it is best to first consider two (easier) questions, replacing conditions **(1)** and **(2)** by:

**(1')** $\{s_1, s_2\}$ are algebraically independent over $\mathbb{C}$ +

**(2')** $\mathbb{C}(s_1, s_2, y) = \mathbb{C}(x, y)$; call this **Question 1'**.

**(1'')** $s_1 \neq 0$ +

**(2'')** $\mathbb{C}(s_1, y) = \mathbb{C}(x, y)$; call this **Question 1''**.

My guess for the answer to Question 1'' is: $s_1 = \lambda x E(y) + F(y)$, where $\lambda \in \mathbb{C}^{\times}$ and $E(y), F(y) \in \mathbb{C}(y^2)$.

**Notes:**

**(i)** In the example above, we already have $\mathbb{C}(s_2, s_3, y) = \mathbb{C}(x, y)$ and $\mathbb{C}(s_1, s_2, y) = \mathbb{C}(x, y)$.

**(ii)** We can write $x = \frac{u(s_1, s_2, s_3, y)}{v(s_1, s_2, s_3, y)}$ for some $u, v \in \mathbb{C}(X, Y, Z, W)$. Then, if I am not mistaken, taking $y = 0$ (if possible?) we get that $x = \frac{u(s_1(x,0), s_2(x,0), s_3(x,0))}{v(s_1(x,0), s_2(x,0), s_3(x,0))}$, hence $\mathbb{C}(s_1(x,0), s_2(x,0), s_3(x,0)) = \mathbb{C}(x)$.

**Question 2:** Is there an example where all of $s_1, s_2, s_3$ are needed to obtain $\mathbb{C}(s_1, s_2, s_3, y) = \mathbb{C}(x, y)$? Namely, it is not possible to omit one of $\{s_1, s_2, s_3\}$ and still get $\mathbb{C}(x, y)$.

I guess the answer is yes.

Thank you so much!

## reference request – Invariants of symmetric forms with respect to the symplectic group

Take a 6-dimensional vector space $V$ (over $\mathbb{C}$, for simplicity) and play the following game (for example, using the LiE program online): consider the 21-dimensional space $S^2 V^*$ of symmetric two-forms on $V$ and decompose the space $S^k(S^2 V)$ of degree-$k$ homogeneous polynomials on $S^2 V^*$ into irreducible $\mathsf{SL}_6$-modules and, simultaneously, into irreducible $\mathsf{Sp}_6$-modules, for $k = 1, 2, 3, 4, 5, 6$. The number of one-dimensional components you will get is as follows:

- For $\mathsf{SL}_6$ there is a unique one-dimensional constituent $\langle d \rangle$, which appears when $k = 6$;
- For $\mathsf{Sp}_6$ the first one-dimensional constituent $\langle p \rangle$ appears at $k = 2$, then a second one $\langle q \rangle$ at $k = 4$, along with $\langle p^2 \rangle$, and finally for $k = 6$ there are three one-dimensional components: $\langle p^3 \rangle$, $\langle pq \rangle$, and $\langle d \rangle$.
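The ambient dimensions in this game are quick to reproduce: $\dim S^2 V^* = \binom{n+1}{2} = 21$ for $n = 6$, and $\dim S^k(W) = \binom{N+k-1}{k}$ for $N = \dim W$. A small Python check:

```python
from math import comb

n = 6                      # dim V
N = comb(n + 1, 2)         # dim S^2 V* = 21 symmetric two-forms
print(N)

# dim S^k(S^2 V) = multiset coefficient C(N + k - 1, k), k = 1..6
dims = {k: comb(N + k - 1, k) for k in range(1, 7)}
print(dims)
```

These are the total dimensions being decomposed; the irreducible constituents themselves still require LiE or similar software.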

Now, it is well known that $ d $ is the determinant.

**QUESTION:** what are the $\mathsf{Sp}_6$-invariants $p$ and $q$ of a symmetric two-form $\alpha$ on $V$? Can we read them off from the characteristic polynomial of an appropriate endomorphism of $V$ associated with $\alpha$? Does anyone know where specifically in the literature this is discussed? (It should be classical.)

In particular, I am interested in the **normal forms** of elements $\alpha \in S^2 V^*$ **with respect to the symplectic group**: in the case of the linear group, the normal form of $\alpha$ is simply a diagonal matrix with as many 1s on the diagonal as the rank of $\alpha$ – but if the group is smaller, I expect a more involved result.

## linear algebra – Counterexample to Lagrange's theorem on symmetric bilinear forms

I am learning linear algebra, and today I read the proof of the following result:

**Theorem:** Let $\xi: V \times V \to \mathbb{k}$ be a symmetric bilinear function. If $\operatorname{char}(\mathbb{k}) \neq 2$, then there is a canonical basis of $V$ for $\xi$, that is to say, the matrix of $\xi$ in this basis is diagonal.

This is a classical result of linear algebra called Lagrange's theorem.

And the proof certainly uses the fact that the characteristic of the field $\mathbb{k}$ is $\neq 2$, i.e. $2 \neq 0$. But I spent a bit of time and found the following counterexample:

Consider the map $f: V \times V \to \mathbb{Z}_2$ with $\dim V = 2$ defined by $f(\mathbf{x}, \mathbf{y}) = x_1 y_2 + x_2 y_1$, where $\mathbf{x} = (x_1, x_2)$, $\mathbf{y} = (y_1, y_2)$. It is easy to verify that $f$ is a symmetric bilinear function with matrix $$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

Suppose there is a basis for $V$ such that the matrix of $f$ in this basis is diagonal, $A' = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}$. In matrix notation, this means that $A' = C^{T} A C$, where $C = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $ad - bc \neq 0$, i.e. $ad - bc = 1$. Comparing the entries $(A')_{12}$ and $(C^T A C)_{12}$ on both sides, it follows that $bc + ad = 0$, i.e. $bc = -ad = ad$ because we are in $\mathbb{Z}_2$, but that contradicts $ad - bc \neq 0$.

Is my reasoning correct?

Would be very grateful for any comments!

## co.combinatorics – On the permanent dominance conjecture for the symmetric group

Lieb's permanent dominance conjecture states that the inequality $$\frac{d_{\chi}^H(A)}{\chi(e)} \le \operatorname{per}(A)$$ holds for all positive semidefinite matrices $A$, where $d_{\chi}^H(A) = \sum_{\sigma \in H} \chi(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}$, with $\{a_{ij}\} = A$ and $H \le S_n$, is the immanant of the matrix $A$, and $\operatorname{per}(A)$ is the permanent of $A$.

Now I think that for $H = S_n$, the symmetric group on $n$ letters, it suffices to show that $\chi(e)$, the irreducible character at the identity, i.e. the dimension of the representation, dominates all the other irreducible character values; that is to say, $\chi(e) \ge \chi(\sigma)$ for all $\sigma \in S_n$. Now, $\chi(e)$ is given directly by the hook length formula. The other irreducible characters can be found by the Murnaghan–Nakayama rule, or by the determinantal formula. Since the Murnaghan–Nakayama formula involves recursion over rim hooks (border-strip tableaux), and the length of each of them does not exceed the corresponding hook in the hook length formula, it seems intuitive that the conjecture could be proven in the case $H = S_n$. Is it possible to rigorously compare the character counts in the Murnaghan–Nakayama rule with the hook length formula? Thanks in advance.
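As a sanity check of the inequality itself (not of the argument above), one can evaluate all three immanants of $S_3$ on a random positive semidefinite matrix; the character table is hardcoded from the standard table, and the trivial-character immanant is the permanent:

```python
import numpy as np
from itertools import permutations

def cycle_type(sigma):
    """Sorted cycle lengths of a permutation given as a tuple."""
    seen, lens = set(), []
    for i in range(len(sigma)):
        if i in seen:
            continue
        j, L = i, 0
        while j not in seen:
            seen.add(j); j = sigma[j]; L += 1
        lens.append(L)
    return tuple(sorted(lens))

# Character table of S_3, indexed by cycle type:
# trivial, sign, and the 2-dimensional standard representation.
chars = {
    "trivial":  {(1, 1, 1): 1, (1, 2): 1,  (3,): 1},
    "sign":     {(1, 1, 1): 1, (1, 2): -1, (3,): 1},
    "standard": {(1, 1, 1): 2, (1, 2): 0,  (3,): -1},
}

def immanant(A, chi):
    return sum(chi[cycle_type(s)] * np.prod([A[i, s[i]] for i in range(3)])
               for s in permutations(range(3)))

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
A = M @ M.T                          # a random positive semidefinite matrix

per = immanant(A, chars["trivial"])  # the permanent
for name, chi in chars.items():
    dim = chi[(1, 1, 1)]             # chi(e)
    # permanent dominance: d_chi(A) / chi(e) <= per(A)
    assert immanant(A, chi) / dim <= per + 1e-9
print("permanent dominance holds for this sample")
```

The conjecture is known to hold for small $n$, so this only illustrates the statement, not the open part.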

## What is behind this symmetric coma effect, and the frankly strange bokeh, with a triplet lens?

The example image, taken with a vintage triplet (Staeble Kata 45mm f/2.8, wide open, on APS-C, no additional cropping), shows a curious coma-like pattern in the sculptures on the right and on the left – symmetric, very different from the double-Gauss / oblique spherical coma (which tends to be harsher, to my eye), and it could be useful visually.

The bokeh is also… a bit strange, with a bright center inside a sharp Trioplan-like outer ring (cropped image of the landscape)…

What is the actual combination of aberrations at play here, and what construction details should one look for when hunting for a triplet with this type of character (but perhaps better overall IQ than this sample…)?

## ct.category theory – The pants algebra $M_n$ as a dagger special symmetric Frobenius algebra

I'm looking at the paper *Categorical quantum mechanics II: classical-quantum interaction* by Coecke and Kissinger (arxiv link), and I'm having trouble with one aspect in particular.

Throughout the paper, quantum wires are defined as "doubled" wires, whereas the classical world is represented by single wires. In particular, quantum spiders are simply doubled spiders. My understanding of this is that we use the canonical $\dagger$-Frobenius algebra on $A \otimes A^*$, given $\dagger$-Frobenius algebras on $A$ and $A^*$, to generate quantum spiders on systems of the form $A \otimes A^*$. This seems sufficient to treat classical and quantum operations "on the same footing": each object in our category may have an associated Frobenius algebra, built from those specified on the ground objects, and if a process is doubled, it is quantum. Encoding/decoding maps allow conversion between the two. On this understanding there is no real difference between what happens at the classical and quantum levels, other than that the quantum side is doubled – so whether we interpret a particular morphism as a single "thick/doubled" quantum wire or as two classical wires is immaterial.

However, Definition 3.20 indicates that there is a second canonical Frobenius algebra structure on any object of the form $A \otimes A^*$, namely the "pants" algebra $M_n$. In addition, the following paragraph – and my reading on the $CP^*$ construction – seems to suggest that the correct embedding of completely positive quantum processes into the category $CP^*$ of mixed classical/quantum processes is actually given by regarding quantum processes as acting on objects of the form $A \otimes A^*$ with the pants algebra as the associated Frobenius algebra – while each component $A$ or $A^*$ may carry a completely independent Frobenius algebra structure. If this is correct, does it imply that there are actually two Frobenius algebra structures associated with $A \otimes A^*$ – the pants algebra, and the canonical "doubled" Frobenius algebra described above? Does this mean that the "doubled" Frobenius algebra on $A \otimes A^*$ actually represents classical communication, while the pants algebra on the same object is quantum communication?

I must be pretty confused here, because these two "understandings" cannot both be correct!
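For concreteness, the pants algebra structure on $M_n$ (multiplication = matrix product, unit = identity, counit = trace) can be poked at numerically; here is a small numpy sketch checking associativity, the invariance of the induced pairing $\langle a, b \rangle = \operatorname{tr}(ab)$, and its nondegeneracy:

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)

# The "pants" algebra on C^n (x) C^n* is M_n: multiplication is the
# matrix product, the unit is the identity, the counit is the trace.
a, b, c = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
           for _ in range(3))

# Associativity of the multiplication
assert np.allclose((a @ b) @ c, a @ (b @ c))

# Invariance of the pairing <a, b> = tr(ab): <ab, c> = <a, bc>
assert np.isclose(np.trace((a @ b) @ c), np.trace(a @ (b @ c)))

# Nondegeneracy: the Gram matrix of the pairing on the matrix units
# E_ij satisfies tr(E_ij E_kl) = [j == k][i == l], a permutation
# matrix, hence invertible.
E = [np.eye(n)[:, [i]] @ np.eye(n)[[j], :]
     for i in range(n) for j in range(n)]
G = np.array([[np.trace(x @ y) for y in E] for x in E])
assert np.isclose(abs(np.linalg.det(G)), 1.0)
print("M_n with matrix product and trace pairing passes the sample checks")
```

This only verifies the algebra-and-pairing data, of course; the full (dagger special symmetric) Frobenius conditions also involve the comultiplication, which the paper treats diagrammatically.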

## linear algebra – Find the symmetric bilinear form $f$ corresponding to each $q$. Find the matrix in the standard ordered basis. Indicate the non-degenerate forms.

The following expressions define quadratic forms $q$ on $\mathbb{R}^2$. Find the symmetric bilinear form $f$ corresponding to each $q$. Find the matrix in the standard ordered basis. Indicate the non-degenerate forms.

i) $ax^2$

ii) $2x^2 - \frac13 xy$

i) $$q = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = a x x_1 = f_A(X, Y) \neq 0$$

And the matrix in the standard ordered basis is $\begin{bmatrix} a & 0 \\ 0 & 0 \end{bmatrix}$, and it is degenerate because $b^2 - 4ac = 0$.

ii) $$q = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 2 & -\frac{1}{6} \\ -\frac{1}{6} & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = 2 x x_1 - \frac{1}{6} x_1 y - \frac16 x y_1 = f_A(X, Y) \neq 0$$

And the matrix in the standard ordered basis is $\begin{bmatrix} 2 & -\frac{1}{6} \\ -\frac{1}{6} & 0 \end{bmatrix}$, and it is non-degenerate because $b^2 - 4ac \neq 0$.

I have doubts, and I would be glad to know whether what I have done is right.
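One way to check part ii) is to recover $f$ from $q$ by polarization, $f(u, v) = \frac12\big(q(u+v) - q(u) - q(v)\big)$, and compare with the claimed matrix; a short numpy sketch:

```python
import numpy as np

def q(v):
    """Quadratic form (ii): q(x, y) = 2x^2 - (1/3)xy."""
    x, y = v
    return 2 * x**2 - x * y / 3

def polarize(q, u, v):
    """Symmetric bilinear form recovered from q by polarization."""
    return (q(u + v) - q(u) - q(v)) / 2

M = np.array([[2, -1/6], [-1/6, 0]])   # claimed matrix in the standard basis

rng = np.random.default_rng(3)
for _ in range(5):
    u, v = rng.normal(size=2), rng.normal(size=2)
    assert np.isclose(polarize(q, u, v), u @ M @ v)

print("matrix matches the polarization; det =", np.linalg.det(M))
```

The determinant is $-1/36 \neq 0$, which agrees with the non-degeneracy conclusion for ii).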