## linear algebra – Does the eigenvalue equality hold for my expression?

Let

$$g(\boldsymbol{\theta},\boldsymbol{\theta_0}) = \operatorname{trace}\left( \boldsymbol{\Omega}(\boldsymbol{\theta})^{-1} \boldsymbol{\Omega}(\boldsymbol{\theta_0})\right)-\ln\left(\det(\boldsymbol{\Omega}(\boldsymbol{\theta_0}))/\det(\boldsymbol{\Omega}(\boldsymbol{\theta}))\right)-N$$

where $$\boldsymbol{\theta} \in \boldsymbol{\Theta}$$ with $$\boldsymbol{\Theta}$$ a compact subset of $$\mathbb{R}^{n}$$, $$n$$ and $$N$$ are fixed numbers, and $$\boldsymbol{\theta_0}$$ belongs to the interior of $$\boldsymbol{\Theta}$$.

Denote the eigenvalues of the symmetric matrices $$\boldsymbol{\Omega}(\boldsymbol{\theta_0})$$ and $$\boldsymbol{\Omega}(\boldsymbol{\theta})$$ by $$\lambda_{0s}$$ and $$\lambda_s$$ $$(s=1,2,\ldots,N)$$ respectively, where $$\lambda_{0s}>0$$ and $$\lambda_s>0$$ for all $$s$$.

Given the above, does the following hold, or is a further condition required, and if so, which one?

$$g(\boldsymbol{\theta},\boldsymbol{\theta_0}) = \sum_{s=1}^N \left((\lambda_{0s}/\lambda_{s})-\ln(\lambda_{0s}/\lambda_{s})-1\right)?$$
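One way to probe this numerically: compare the matrix expression for g against the eigenvalue expression. The sketch below is illustrative only (dimensions, seed, and sample spectra are made up), and it pairs the two spectra by sorting, which implicitly assumes the eigenvalues are matched through a shared eigenbasis. When the two matrices share eigenvectors the two expressions agree; for a generic pair you can compare the printed values yourself.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(Om, Om0):
    """g computed directly from the matrices."""
    N = Om.shape[0]
    return (np.trace(np.linalg.solve(Om, Om0))          # trace(Om^{-1} Om0)
            - np.log(np.linalg.det(Om0) / np.linalg.det(Om)) - N)

def g_eig(Om, Om0):
    """The candidate eigenvalue expression, pairing spectra by sorted order."""
    r = np.linalg.eigvalsh(Om0) / np.linalg.eigvalsh(Om)
    return np.sum(r - np.log(r) - 1)

# shared eigenbasis (the matrices commute): the two expressions agree
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Om  = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T
Om0 = Q @ np.diag([0.5, 1.5, 2.5, 3.5]) @ Q.T
assert np.isclose(g(Om, Om0), g_eig(Om, Om0))

# different eigenbases: compare the two values for yourself
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Om0_nc = P @ np.diag([0.5, 1.5, 2.5, 3.5]) @ P.T
print(g(Om, Om0_nc), g_eig(Om, Om0_nc))
```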

## linear algebra – Find the slope of the line described in the attached figure.


While doing my quarantine package, I came across the following question. I tried to find the area of the triangle using the determinant formula, but it didn’t work, so I would like your help. The question is:

```
Suppose that k > 0 and that the line with equation y = 3kx + 4k^2 intersects the parabola y = x^2 at points P and Q, as shown. If O is the origin and the area of triangle OPQ is 80, find the slope of the line y = 3kx + 4k^2.
```

The answer is among 4, 3, 15/4, 6, and 21/4.
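The setup can be checked numerically without the figure: intersect the line with the parabola, take the triangle area with the origin via the cross-product formula, and solve area = 80 for k. This is a sketch of one possible approach, not necessarily the intended synthetic solution.

```python
import math

def triangle_area(k):
    # abscissae where y = 3kx + 4k^2 meets y = x^2:  x^2 - 3kx - 4k^2 = 0
    disc = math.sqrt(9 * k * k + 16 * k * k)
    x1, x2 = (3 * k + disc) / 2, (3 * k - disc) / 2
    # area of triangle OPQ with O at the origin (cross-product formula)
    return abs(x1 * x2**2 - x2 * x1**2) / 2

# the area (10 k^3 in closed form) is increasing in k > 0, so bisect
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if triangle_area(mid) < 80:
        lo = mid
    else:
        hi = mid

k = (lo + hi) / 2
print(3 * k)   # the slope 3k; lands at about 6
```

The factorization x² − 3kx − 4k² = (x − 4k)(x + k) gives P = (4k, 16k²) and Q = (−k, k²) in closed form, so the area is 10k³ and k = 2.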

## abstract algebra – What is the structure or metric of this space? Patchwork

Imagine a “patchwork”. Pieces of the patchwork have colors (red, orange, yellow, green… and shades) and some other properties.

Green piece A and blue piece B may have similar properties, but still be far away from each other on the patchwork.

But if that is the case, then there must be another green-shaded piece close to the blue piece B, or (vice versa) another blue-shaded piece close to the green piece A, or perhaps two other pieces, shaded green and blue, close together.


So there are two distance metrics (distance on the patchwork and distance in the color space), and they are somehow entwined.

Do you know any spaces with similar structure/metric?

Can you describe that as a category? For example:

If obj. A has properties similar to obj. B’s, there is a morphism from A to B (morphism f).

If obj. C is close to obj. B, there is a morphism from B to C (morphism g).

If obj. A has a color similar to obj. C’s color, there is a morphism from A to C (but that is also the composition of the two previous morphisms! Voilà!).

May it have something to do with kernel methods?

## linear algebra – Jordan matrix form and polynomial proof.

Let $$f\in F[x]$$ be a polynomial, and prove that the matrix $$f\left(J_{n}(\lambda)\right)$$ satisfies

$$\left(f\left(J_{n}(\lambda)\right)\right)_{ij}=\begin{cases} \frac{1}{(j-i)!}\,f^{(j-i)}(\lambda) & 1\leq i\leq j\leq n\\ 0 & \text{else} \end{cases}$$

where $$f^{(j-i)}$$ is the $$(j-i)$$-th derivative of $$f$$.

Here’s what I tried:

Step 1: I proved that

$$\left(\left(J_{n}(0)\right)^{k}\right)_{ij}=\begin{cases} 1 & j=i+k\\ 0 & \text{else} \end{cases}$$

Step 2: using the binomial formula, I proved that

$$\left(J_{n}(\lambda)\right)^{k}=\sum_{i=0}^{k}\binom{k}{i}\lambda^{k-i}\left(J_{n}(0)\right)^{i}$$

Now assume $$f(x)=\sum_{j=0}^{k}a_{j}x^{j}$$; then

$$f\left(J_{n}(\lambda)\right)=\sum_{j=0}^{k}a_{j}\left(J_{n}(\lambda)\right)^{j}=\sum_{j=0}^{k}a_{j}\sum_{i=0}^{j}\binom{j}{i}\lambda^{j-i}\left(J_{n}(0)\right)^{i}=\sum_{j=0}^{k}\sum_{i=0}^{j}\binom{j}{i}a_{j}\lambda^{j-i}\left(J_{n}(0)\right)^{i}$$

I’m not sure how to recognize the $$(j-i)$$-th derivative in that expression, and I’m not sure how to continue. Any thoughts will help.
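Before pushing the algebra further, the target formula can be sanity-checked symbolically. The polynomial below is an arbitrary example, and the code indexes entries 0-based (so the condition reads j ≥ i rather than 1 ≤ i ≤ j ≤ n):

```python
import sympy as sp

n = 4
lam, x = sp.symbols('lam x')
p = x**5 + 3*x**2 + 1                                  # a sample polynomial f

# Jordan block J_n(lam): lam on the diagonal, 1 on the superdiagonal
J = sp.Matrix(n, n, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))
fJ = sp.expand(J**5 + 3*J**2 + sp.eye(n))              # f(J) evaluated directly

# the claimed closed form: entry (i, j) is f^{(j-i)}(lam) / (j-i)!
F = sp.Matrix(n, n, lambda i, j:
              sp.diff(p, x, j - i).subs(x, lam) / sp.factorial(j - i)
              if j >= i else 0)

assert sp.expand(fJ - F) == sp.zeros(n, n)
```

The check passing for one polynomial is of course not a proof, but it confirms that the inner sum over j should collapse to a derivative of f divided by a factorial.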

## abstract algebra – Every irreducible is a product of irreducibles

I’m studying some commutative algebra and learning some of the basics. While looking at the proof of Theorem 1, part 2, I stumbled upon the following: “Suppose $$d$$ is not a product of irreducible elements. Hence, $$d$$ isn’t irreducible.” I’m trying to understand this statement; it seems pretty trivial, but my head is just not working anymore. If anyone could help, I’d appreciate it.

Theorem 1: let $$D$$ be an integral domain.

1. If $$D$$ is a UFD and $$\pi \in D$$ is an element such that $$\pi \neq 0$$ and $$\pi \notin D^{\times}$$, then $$\pi$$ is prime if, and only if, $$\pi$$ is irreducible;
2. Suppose that

(i) Every ideal of $$D$$ is finitely generated (i.e. $$D$$ is Noetherian);

(ii) Every irreducible element is a prime element.

Then, $$D$$ is a UFD.
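For orientation: the quoted step is the contrapositive of a triviality, namely that an irreducible element is itself a product of irreducibles (a product with a single factor). A concrete illustration in the UFD of integers, where the irreducibles are the primes (the trial-division helper below is my own toy sketch, not part of the theorem):

```python
def irreducible_factors(n):
    """Factor an integer n >= 2 into irreducibles (primes) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # whatever remains has no smaller divisor,
        factors.append(n)     # so it is itself irreducible
    return factors

assert irreducible_factors(60) == [2, 2, 3, 5]
assert irreducible_factors(97) == [97]   # an irreducible IS a one-factor product
```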

## linear algebra – Transform matrix into block form

Consider a self-adjoint matrix of the following form

$$T=\begin{pmatrix} 0 & k & a & b\\ k^* & 0 & c & d\\ d^* & c^* & 0 & k\\ b^* & a^* & k^* & 0 \end{pmatrix}.$$

I would like to know: does there exist an invertible matrix $$S$$ such that

$$S^{-1}TS = \begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix},$$

where $$A$$, $$B$$, and $$0$$ are $$2\times 2$$ blocks?
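One cheap necessary condition you can test on any concrete numeric instance: conjugating a block anti-diagonal matrix by diag(I, −I) flips its sign, so any matrix similar to that form must have a spectrum symmetric about zero. A small sketch (the sample matrix below is made up for illustration):

```python
import numpy as np

def spectrum_symmetric(T, tol=1e-9):
    """Necessary condition: diag(I,-I) conjugates [[0,A],[B,0]] to its
    negative, so a matrix similar to that form has eigenvalues symmetric
    about 0. T is assumed self-adjoint here (eigvalsh)."""
    ev = np.sort(np.linalg.eigvalsh(T))
    return np.allclose(ev, -ev[::-1], atol=tol)

# sanity check on a matrix that already has the anti-block-diagonal form
A = np.array([[1.0, 2.0], [0.5, -1.0]])
M = np.block([[np.zeros((2, 2)), A], [A.T, np.zeros((2, 2))]])
assert spectrum_symmetric(M)
```

If a numeric T of the given structure fails this test, no such S can exist for that instance; passing it is necessary but not, by itself, sufficient.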

## linear algebra – In any vector space, $(-1+1)\cdot\vec{v} = 0\cdot\vec{v} = \vec{0}$?

My work:

Let us write “$$\#$$” for the vector addition “$$+$$” and “$$*$$” for the scalar multiplication “$$\cdot$$”, so that the ordinary symbols can keep their usual meanings:

$$a \;\#\; b = 2a + b$$

$$a * b = a^2 + b^2$$

According to the book (*Linear Algebra* by Jim Hefferon, third edition), item $$(2)$$:

$$(-1 \cdot \vec{v}) + \vec{v} = \color{red}{(-1+1)} \cdot \vec{v} = 0 \cdot \vec{v} = \vec{0}$$

But $$(-1 \;\#\; 1) = 2\cdot (-1) + 1 = -2 + 1 = -1$$. So it is not zero!?
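Two observations worth checking here. First, in Hefferon’s identity the red $$(-1+1)$$ is addition of *scalars* in the field $$\mathbb{R}$$, which is always the ordinary one, not the redefined vector addition. Second, the proposed operations cannot form a vector space in the first place, since the redefined addition is not even commutative. A quick check (the function names are mine):

```python
def vec_add(a, b):        # the proposed "#" operation
    return 2 * a + b

def scal_mul(a, b):       # the proposed "*" operation
    return a ** 2 + b ** 2

# "#" fails commutativity, so the vector-space axioms never get off the ground
assert vec_add(1, 2) == 4
assert vec_add(2, 1) == 5
assert vec_add(1, 2) != vec_add(2, 1)
```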

## linear algebra – Help with vectors and matrices problem

I have a very specific question; I hope someone can help me. We are given a matrix $$A \in \mathbb{R}^{n\times d}$$. Suppose we have two collections of pairwise orthogonal unit vectors $$\{a_{1},\ldots,a_{k}\}$$ and $$\{b_{1},\ldots,b_{k}\}$$ such that $$\mathrm{span}(\{a_{1},\ldots,a_{k}\})=\mathrm{span}(\{b_{1},\ldots,b_{k}\})$$. Show that:
$$\sum_{i=1}^k\|Aa_{i}\|^2=\sum_{i=1}^k\|Ab_{i}\|^2.$$
I really have no idea how to answer this; any help is appreciated.
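A numeric experiment consistent with the claim (the dimensions and seed below are arbitrary): any second orthonormal basis of the same span is an orthogonal rotation of the first, and the sum of squared norms is unchanged under that rotation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 5, 3
A = rng.standard_normal((n, d))

# a_1..a_k: orthonormal basis of a random k-dimensional subspace of R^d
a, _ = np.linalg.qr(rng.standard_normal((d, k)))

# b_1..b_k: a different orthonormal basis of the SAME subspace,
# obtained by rotating the a_i with an arbitrary k x k orthogonal Q
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
b = a @ Q

sum_a = sum(np.linalg.norm(A @ a[:, i]) ** 2 for i in range(k))
sum_b = sum(np.linalg.norm(A @ b[:, i]) ** 2 for i in range(k))
assert np.isclose(sum_a, sum_b)
```

The experiment suggests the proof route: both sums equal the trace of $$P^{\mathsf T}A^{\mathsf T}AP$$ for the orthogonal projection onto the common span, which depends only on the span, not on the basis chosen.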

## linear algebra – Taylor Expansion of Logarithm of Determinant near Identity for Non-Diagonalizable Matrix

I have been working on a problem where I need to Taylor expand an expression of the form $$\log \det(I-A)$$ in terms of traces of the matrices $$A^m$$ for $$m \in \mathbb{N}$$, where $$A$$ is a general $$n \times n$$ matrix.

I did notice that if the eigenvalues of $$A$$ are $$\lambda_1, \ldots, \lambda_n$$, then those of $$I-A$$ are exactly $$1 - \lambda_1, \ldots, 1 - \lambda_n$$, so (assuming $$|\lambda_i| < 1$$ so that the series converge) we may write
$$\log \det(I-A) = \sum_{i=1}^n \log (1 - \lambda_i) = -\sum_{i=1}^n \sum_{m=1}^\infty \frac{\lambda_i^m}{m} = -\sum_{m=1}^\infty \frac{1}{m} \sum_{i=1}^n \lambda_i^m \hspace{10mm} \cdots (1)$$

At this point, I noticed that if $$A$$ were diagonalizable then I could say that $$P^{-1}AP = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$$ (the diagonal matrix with entries $$\lambda_1, \ldots, \lambda_n$$ along the principal diagonal), so for every $$m \geq 1$$ I could write $$P^{-1}A^mP = \mathrm{diag}(\lambda_1^m, \ldots, \lambda_n^m)$$ and $$\operatorname{tr}(A^m) = \operatorname{tr}(P^{-1}A^mP) = \sum_{i=1}^n \lambda_i^m$$, and (1) would then give us
$$\log \det(I-A) = -\sum_{m=1}^\infty \frac{1}{m} \operatorname{tr}(A^m),$$

which is what I want. But I couldn’t get around the case where $$A$$ is non-diagonalizable. I was wondering what happens in that case. Can we still give the same (or a similar) expansion? Would the Smith normal form, rational canonical form, etc. be of any help?

P.S.: I didn’t find any standard reference containing the kind of expansion I wanted. I would appreciate knowing the best one can say beyond the diagonalizable case, and/or being pointed to some reference containing these kinds of results.
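For what it’s worth, diagonalizability should not be needed for the trace step: over $$\mathbb{C}$$ every matrix is similar to an upper-triangular (Schur or Jordan) form, and $$\operatorname{tr}(A^m)$$ is still the sum of the $$m$$-th powers of the eigenvalues. A quick numeric sanity check of the trace expansion with a deliberately non-diagonalizable Jordan block (the size, eigenvalue, and truncation below are arbitrary; the eigenvalue is kept inside the unit disc so the series converges):

```python
import numpy as np

# non-diagonalizable A: a single 3x3 Jordan block with eigenvalue 0.3
lam = 0.3
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

lhs = np.log(np.linalg.det(np.eye(3) - A))
# truncated trace series log det(I-A) = -sum_m tr(A^m)/m
rhs = -sum(np.trace(np.linalg.matrix_power(A, m)) / m for m in range(1, 300))
assert np.isclose(lhs, rhs)
```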

## simplifying expressions – step by step evaluation of rearranging algebra to desired form

Are there any functions in Mathematica that can rearrange equations into a desired form and show the steps? It would output the working required to transform one equation into the desired form.

for example:

After implicitly differentiating this:

I get this (where y = f(x)):

I would like Mathematica to convert the previous form to this form while showing the steps:

Is there any function that would show a step-by-step algebraic rearrangement from one form to another desired form?