linear algebra – Solving a system of equations with boolean variables in Z3

I have a problem that reduces to a system of $2n+1$ linear congruences in $4n$ variables over $\mathbb{Z}_3$, where the variables are constrained to lie in $\{0,1\}$, i.e.:

$x_1 + x_2 + \dots + x_k \equiv a \pmod 3$

$x_2 + x_3 + \dots + x_l \equiv b \pmod 3$

$x_m + \dots + x_{4n} \equiv c \pmod 3$

The right-hand sides ($a, b, c, \dots$) are known, but they lie in $\{0,1,2\}$, not necessarily in $\{0,1\}$.

I have found no better way of solving this than brute force: with Gaussian elimination I can get an upper triangular system, but that would only give me a solution if my variables were allowed to range over all of $\{0,1,2\}$.

So my question is whether there is a more efficient way to solve this system that I may have missed?
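For reference, the brute-force baseline described above can be sketched as follows (the instance data here is invented for illustration; in the real problem there would be $2n+1$ congruences over $4n$ variables):

```python
from itertools import product

def solve_mod3(equations, num_vars):
    """Brute-force search for a 0/1 assignment satisfying congruences mod 3.

    `equations` is a list of (indices, rhs) pairs: the sum of x_i over the
    given indices must be congruent to rhs modulo 3.
    """
    for assignment in product((0, 1), repeat=num_vars):
        if all(sum(assignment[i] for i in idx) % 3 == rhs
               for idx, rhs in equations):
            return assignment
    return None

# Toy instance (4 variables, 2 congruences), made up for illustration.
eqs = [([0, 1, 2], 2), ([1, 2, 3], 1)]
sol = solve_mod3(eqs, 4)
```

This is exponential in the number of variables, which is exactly why a smarter encoding (e.g. handing the 0/1 constraints and the mod-3 equations to a SAT/SMT solver) may pay off.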

linear algebra – Eigenvalues of block matrix

Given a positive definite matrix $A \in \mathbb{R}^{n\times n}$ and a general matrix $B \in \mathbb{R}^{m\times n}$, can I say something about the eigenvalues of

$$T = \begin{bmatrix} \alpha A & \alpha B^T \\ \beta B & 0 \end{bmatrix},$$

with $\alpha, \beta \in \mathbb{R}$? Can I maybe give bounds on the eigenvalues of $T$ as a function of $\alpha, \beta$?
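For intuition, in the scalar case $n=m=1$ (so $A=a>0$, $B=b$) the characteristic polynomial of $T$ is $\lambda^2 - \alpha a\,\lambda - \alpha\beta b^2 = 0$, so the dependence on $\alpha,\beta$ is explicit. A small sketch (the sample values are chosen arbitrarily):

```python
import cmath

def scalar_block_eigs(alpha, beta, a, b):
    """Eigenvalues of T = [[alpha*a, alpha*b], [beta*b, 0]] for n = m = 1.

    Characteristic polynomial: lam^2 - alpha*a*lam - alpha*beta*b^2 = 0,
    solved by the quadratic formula (cmath handles the complex case,
    which occurs when alpha*beta < 0 and the discriminant goes negative).
    """
    disc = cmath.sqrt((alpha * a) ** 2 + 4 * alpha * beta * b ** 2)
    return ((alpha * a + disc) / 2, (alpha * a - disc) / 2)

# Arbitrary sample values; a > 0 plays the role of the positive definite A.
l1, l2 = scalar_block_eigs(alpha=2.0, beta=-1.0, a=3.0, b=1.0)
```

By Vieta's formulas the two eigenvalues satisfy $\lambda_1+\lambda_2=\alpha a$ and $\lambda_1\lambda_2=-\alpha\beta b^2$, which already suggests what kind of bounds in $\alpha,\beta$ one might hope for in the general block case.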

abstract algebra – Solve the equation in $\mathbb{Z}_{7}$

The problem is taken from my textbook in algebra and is given as:

Solve $x+4=3$ in $\mathbb{Z}_{7}$

From the Euclidean algorithm I found that $2=4^{-1} \bmod 7$.

Now I'm not completely sure how to continue, but I tried:
$$x+4+4^{-1}=3+4^{-1} $$
$$x=3+4^{-1}=3+2=5 $$

However, the correct answer is $x=6$ and I'm not sure where I went wrong. The book doesn't have any examples, so I would appreciate some help!
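A quick numeric sanity check: the equation $x+4=3$ calls for the *additive* inverse of $4$ (i.e. subtracting $4$, since $-4\equiv 3 \bmod 7$), whereas $4^{-1}=2$ is the *multiplicative* inverse, which would solve $4x=3$ instead:

```python
# Solve x + 4 = 3 in Z_7: subtract 4 (the additive inverse), working mod 7.
x = (3 - 4) % 7          # -1 = 6 (mod 7)

# The multiplicative inverse 4^{-1} = 2 solves a different equation, 4*x = 3:
y = (3 * 2) % 7          # and indeed 4*6 = 24 = 3 (mod 7)
```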

linear algebra – Trying to determine the determinant of an abstract matrix

I'm trying to write about linear homogeneous recurrence relations and I've come across the following matrix: $$A=\begin{pmatrix}
c_1 & c_2 & \cdots & c_{k-1} & c_k\\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \ddots & 0 & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & \cdots & 1 & 0
\end{pmatrix},$$
where $c_1,\dots,c_k$ are real numbers. I need to find the eigenvalues of this matrix. So far I've got that $$c_A(r)=\begin{vmatrix}
c_1-r & c_2 & \cdots & c_{k-1} & c_k\\
1 & -r & \cdots & 0 & 0\\
0 & 1 & \ddots & 0 & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & \cdots & 1 & -r
\end{vmatrix}.$$
I need to express it in polynomial form but I can't. I've done it for $k=3$ and found $c_A(r)=-r^3+c_1r^2+c_2r+c_3$. I want to show that $c_A(r)=(-1)^k\left(r^k - \sum_{i=1}^{k}c_{i}r^{k-i}\right)$.
Any help?
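A quick numerical check of the claimed closed form, using a small pure-Python determinant (the coefficients below are arbitrary test values, not from the recurrence):

```python
def det(M):
    """Determinant by Laplace expansion along the first row (small matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def companion_char_poly_value(cs, r):
    """Evaluate det(A - r*I) for the companion-type matrix A built from cs."""
    k = len(cs)
    # First row is (c_1, ..., c_k); below it, 1s on the subdiagonal.
    A = [list(cs)] + [[1 if j == i else 0 for j in range(k)] for i in range(k - 1)]
    M = [[A[i][j] - (r if i == j else 0) for j in range(k)] for i in range(k)]
    return det(M)

# Check det(A - r*I) = (-1)^k (r^k - sum_i c_i r^{k-i}) at several integer points.
cs = [2, -1, 3, 5]          # arbitrary coefficients, k = 4
k = len(cs)
ok = all(
    companion_char_poly_value(cs, r)
    == (-1) ** k * (r ** k - sum(c * r ** (k - 1 - i) for i, c in enumerate(cs)))
    for r in range(-3, 4)
)
```

The sign $(-1)^k$ is the usual discrepancy between $\det(A-rI)$ and $\det(rI-A)$; your $k=3$ computation already shows it.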

linear algebra – Principal minors for a matrix where all off-diagonal entries are negative

Let $n$ be a positive integer and $A\in\mathbb{R}^{n\times n}$ be a matrix that satisfies:

  1. All the leading principal minors are positive;
  2. All the off-diagonal entries are negative.

Prove that all the principal minors of $A$ are positive.

I have a very indirect solution to this problem, and I’m still looking for a direct approach that works.
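Not a proof, but a quick numerical illustration on a sample matrix (chosen ad hoc here) that satisfies both hypotheses:

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row (small matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def principal_minors(A):
    """All principal minors det(A[S, S]) over nonempty index sets S."""
    n = len(A)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            yield det([[A[i][j] for j in S] for i in S])

# Sample matrix: all off-diagonal entries negative,
# leading principal minors 3, 8, 16 (all positive).
A = [[3, -1, -1],
     [-1, 3, -1],
     [-1, -1, 3]]
all_positive = all(m > 0 for m in principal_minors(A))
```

Every principal minor of this example is indeed positive, as the statement predicts.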

linear algebra – Spearman correlation

This is the scaling factor necessary to obtain a distance that can be directly converted to the Spearman correlation. You could of course just divide by the standard deviations, but then you would have to do the scaling downstream, so it is done here instead.

How do we derive that? Could you show it? Thank you.

$$\frac{1}{n-1}\sum_i (x_i-\bar{x})^2, \qquad \frac{x-\mu}{\sigma}$$
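If I read the quoted comment right, the claim is: after rank-transforming and standardizing with the sample standard deviation $s=\sqrt{\frac{1}{n-1}\sum_i(x_i-\bar x)^2}$, the squared Euclidean distance between two vectors equals $2(n-1)(1-\rho_s)$, where $\rho_s$ is the Spearman correlation. A sketch of that identity (my reconstruction, not the original code; ties are ignored for simplicity):

```python
from math import sqrt

def ranks(xs):
    """Ranks starting at 1 (assumes no ties, for simplicity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def standardize(xs):
    """Center and scale by the sample standard deviation (divisor n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    s = sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return [(x - mean) / s for x in xs]

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    n = len(xs)
    u, v = standardize(ranks(xs)), standardize(ranks(ys))
    return sum(a * b for a, b in zip(u, v)) / (n - 1)

# Identity: ||u - v||^2 = 2 (n - 1) (1 - rho_s) for standardized rank vectors.
xs, ys = [3.1, 0.2, 5.5, 2.0, 4.4], [10, 2, 8, 3, 7]
u, v = standardize(ranks(xs)), standardize(ranks(ys))
d2 = sum((a - b) ** 2 for a, b in zip(u, v))
rho = spearman(xs, ys)
```

Since each standardized vector satisfies $\sum_i u_i^2 = n-1$, expanding $\|u-v\|^2 = \sum u_i^2 + \sum v_i^2 - 2\sum u_iv_i$ gives $2(n-1) - 2(n-1)\rho_s$, which is the derivation being asked for.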

linear algebra – Practical use of $G\cdot \mathrm{vec}(\widetilde{A})\leq d$

I have no idea how the author of this thesis has applied the condition in the title, taken from Pachamanova's lemma (see p. 29, Theorem 3.4.1), to construct the inequality matrix (4.2) on page 34.
Indeed, this inequality is needed to pass from the robust counterpart (4.14) to the matrix inequality in (4.15) (see pages 36-37).

What I know is that Pachamanova, in her theorem, assumes an uncertainty set for $\widetilde{A}$ such that
$$U=\left\{\widetilde{A} \,\middle|\, \underline{A}\leq \widetilde{A}\leq \overline{A}\right\} \;\Rightarrow\; P^A=\left\{\mathrm{vec}(\widetilde{A}) \,\middle|\, G\cdot \mathrm{vec}(\widetilde{A})\leq d\right\}.$$

So, as far as I understood, the author of this thesis considers the problem (4.14) as a dual that can be transformed (by the two-sided relationship between primal and dual problems) into a primal with constraints $G\cdot \mathrm{vec}(\widetilde{A})\leq d$, but I don't understand how to apply this formula in practice in the passage (4.14)$\rightarrow$(4.15). For example, I really don't understand how to construct the matrix $G$. Pachamanova says, I quote, in Theorem 3.4.1 that "$\widehat{x}_i\in \mathbb{R}^{(m \times n)\times1}$ contains $\widehat{x}$ in entries $(i-1)\cdot n+1$ through $i \times n$ and $0$ everywhere else" (see also here, page 26). Well, what does that mean? And how should I use this fact to construct my matrix $G$, both in (4.2) and in (4.15)?
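For what it's worth, for a box uncertainty set $\underline{A}\le \widetilde A\le \overline A$ the standard way (I can't check the thesis's exact convention) to reach the form $G\cdot\mathrm{vec}(\widetilde A)\le d$ is to stack identity blocks: $G=\begin{pmatrix}I\\-I\end{pmatrix}$ and $d=\begin{pmatrix}\mathrm{vec}(\overline A)\\-\mathrm{vec}(\underline A)\end{pmatrix}$. A sketch of this construction:

```python
def vec(A):
    """Column-stacking vectorization of a matrix given as a list of rows."""
    m, n = len(A), len(A[0])
    return [A[i][j] for j in range(n) for i in range(m)]

def box_to_Gd(A_low, A_high):
    """Build G = [I; -I] and d = [vec(A_high); -vec(A_low)], so that
    G @ vec(A) <= d holds iff A_low <= A <= A_high entrywise."""
    lo, hi = vec(A_low), vec(A_high)
    N = len(lo)
    eye = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
    G = eye + [[-x for x in row] for row in eye]
    d = hi + [-x for x in lo]
    return G, d

def satisfies(G, d, v):
    return all(sum(g * x for g, x in zip(row, v)) <= di
               for row, di in zip(G, d))

# Toy 2x2 bounds, invented for illustration.
G, d = box_to_Gd([[0, -1], [0, 0]], [[1, 1], [2, 3]])
inside = satisfies(G, d, vec([[0.5, 0.0], [1.0, 2.0]]))
outside = satisfies(G, d, vec([[2.0, 0.0], [1.0, 2.0]]))
```

The rows of $G$ come in pairs: row $j$ encodes the upper bound on the $j$-th entry of $\mathrm{vec}(\widetilde A)$, and row $N+j$ the (negated) lower bound.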

In the end, I've understood the concept in theory but I can't put it into practice. Could you please explain to me, or even show me, the steps to construct (4.15)? I'm really stuck.

Thanks in advance for any help!

linear algebra – Are there any results in generalizing matrix theory to multidimensional arrays?

In matrix theory (2-dimensional arrays), we can define addition, multiplication, rank, determinant, etc. I'm working on generalizing these notions to multidimensional arrays as far as possible. Are there any results in this direction? I'd really appreciate it if you could provide some references.

Lie algebra cohomology and De Rham cohomology of a compact Lie group

I know that another question with the same statement has already been posted, but I want to ask something else.

In particular, Background for Lie Algebra cohomology and de Rham cohomology of compact Lie Groups explains how to prove that the cohomology of left-invariant forms on a compact connected Lie group $G$ is isomorphic to its de Rham cohomology. So if we denote by $\mathfrak{g}$ the Lie algebra of $G$, then, by the isomorphism between the complex of left-invariant forms and the exterior algebra on $\mathfrak{g}$, we can study the cohomology of $G$ using $\mathfrak{g}$. I would like to understand this last statement: how does the problem pass to the Lie algebra?

My principal goal is to prove the characterization of compact connected semisimple Lie groups by the first de Rham cohomology group. In Weibel, Corollary 7.8.6 solves the problem if we use Lie algebra cohomology. Why is this isomorphic to de Rham cohomology?

Maybe I don't understand the construction of Lie algebra cohomology well.
If we consider the Lie algebra $\mathfrak{g}$ of $G$, a $\mathfrak{g}$-module $M$, and the left exact functor $(-)^{\mathfrak{g}}$ that sends $M$ to $M^{\mathfrak{g}}=\{m\in M : xm=0 \quad \forall x\in \mathfrak{g}\}$, we can define the cohomology $H^n(\mathfrak{g}, M)$ as the right derived functor of $(-)^{\mathfrak{g}}$. So it seems that it depends on $M$, and not only on the Lie algebra or on $G$. Where am I wrong? How can this cohomology be isomorphic to the exterior algebra on $\mathfrak{g}$?
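A sketch of the standard link (not a full answer): the derived-functor cohomology with trivial coefficients $M=\mathbb{R}$ is computed by the Chevalley–Eilenberg complex $\mathrm{Hom}(\Lambda^\bullet\mathfrak{g},\mathbb{R})\cong\Lambda^\bullet\mathfrak{g}^*$, with differential

```latex
% Chevalley–Eilenberg differential on \Lambda^n \mathfrak{g}^*
% (trivial coefficients; hats denote omitted arguments):
(d\omega)(x_0,\dots,x_n)
  = \sum_{i<j} (-1)^{i+j}\,
    \omega\bigl([x_i,x_j],\,x_0,\dots,\widehat{x_i},\dots,\widehat{x_j},\dots,x_n\bigr).
```

Under the identification of $\Lambda^\bullet\mathfrak{g}^*$ with left-invariant forms, this is exactly the de Rham differential restricted to invariant forms, which is why $H^n(\mathfrak{g};\mathbb{R})\cong H^n_{dR}(G)$ for compact connected $G$. For general $M$ the cohomology genuinely does depend on $M$; it is only the choice $M=\mathbb{R}$ (trivial module) that recovers de Rham cohomology.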

Thanks to everyone for any answer or consideration.

linear algebra – Rewriting $\max \log\det (I+ X + Y + Y^T)$ as a max-det problem

The following optimization problem is convex:
\begin{align}
\max_{Y,\,X\succ0}\ & \log\det (I + X + Y + Y^T)\\
& \text{s.t.}\ \begin{pmatrix} X & Y \\ Y^T & Z \end{pmatrix}\succeq 0,\quad \mathbf{Tr}(X)\le P,
\end{align}

where $Z\succ0$ and $P>0$ are given.

My question is whether we can restrict the decision variable $Y$ to be positive semidefinite. My motivation is to write the optimization as a standard max-det problem, or in a nicer form, in order to show that the maximizer is unique.