## linear algebra – Proof of an Identity using Bilinear Maps

I am attempting to prove the following statement: if $\phi : V_1 \times V_2 \to W$ is a bilinear map, where $V_1$ and $V_2$ are vector spaces of dimensions $l_1$ and $l_2$ respectively, and $\phi(v_1,v_2) \in W$ is non-zero for every non-zero $v_1 \in V_1$ and non-zero $v_2 \in V_2$, then the image of $\phi$ spans a subspace of $W$ with dimension at least $l_1 + l_2 - 1$.

My idea to prove this is to consider that the rank-$1$ tensors $\{ v_1 \otimes v_2 \}$ form a subvariety of dimension $l_1 + l_2 - 1$ in $V_1 \otimes V_2$, and that the kernel of the induced linear map $\phi: V_1 \otimes V_2 \rightarrow W$ intersects this subvariety only at $0$.
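A hedged sketch of the dimension count this idea suggests (it assumes the ground field is algebraically closed, so that projective dimension counting applies): if $\ker \phi$ meets the cone of rank-one tensors only at $0$, then the projectivized kernel in $\mathbb{P}(V_1 \otimes V_2)$ misses the Segre variety, whose dimension is $l_1 + l_2 - 2$. Since two projective subvarieties of $\mathbb{P}^{l_1 l_2 - 1}$ must intersect when their dimensions sum to at least $l_1 l_2 - 1$, non-intersection forces

$$\dim \ker \phi \leq l_1 l_2 - (l_1 + l_2 - 1), \qquad \text{so} \qquad \dim \operatorname{im} \phi = l_1 l_2 - \dim \ker \phi \geq l_1 + l_2 - 1.$$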

## linear algebra – Does the eigenvalue equality hold for my expression?

Let

$$g(\boldsymbol{\theta},\boldsymbol{\theta}_0) = \operatorname{trace}\left( \boldsymbol{\Omega}(\boldsymbol{\theta})^{-1} \boldsymbol{\Omega}(\boldsymbol{\theta}_0)\right)-\ln\left(\det(\boldsymbol{\Omega}(\boldsymbol{\theta}_0))/\det(\boldsymbol{\Omega}(\boldsymbol{\theta}))\right)-N$$

where $\boldsymbol{\theta} \in \boldsymbol{\Theta}$ with $\boldsymbol{\Theta}$ a compact subset of $\mathbb{R}^{n}$, $n$ and $N$ are fixed numbers, and $\boldsymbol{\theta}_0$ belongs to the interior of $\boldsymbol{\Theta}$.

Denote the eigenvalues of the symmetric matrices $\boldsymbol{\Omega}(\boldsymbol{\theta}_0)$ and $\boldsymbol{\Omega}(\boldsymbol{\theta})$ by $\lambda_{0s}$ and $\lambda_s$ $(s=1,2,\ldots,N)$ respectively, where $\lambda_{0s}>0$ and $\lambda_s>0$ for all $s$.

Given the above, does the following hold, or is a further condition required (and if so, which one)?

$$g(\boldsymbol{\theta},\boldsymbol{\theta}_0) = \sum_{s=1}^N \left(\frac{\lambda_{0s}}{\lambda_{s}}-\ln\frac{\lambda_{0s}}{\lambda_{s}}-1\right)?$$
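For what it's worth, a quick numerical sanity check (pure Python, with made-up numbers): when $\boldsymbol{\Omega}(\boldsymbol{\theta})$ and $\boldsymbol{\Omega}(\boldsymbol{\theta}_0)$ are simultaneously diagonalizable, e.g. both diagonal, the identity reduces to arithmetic on the eigenvalues and checks out; for symmetric matrices that do not commute, $\operatorname{trace}(\boldsymbol{\Omega}(\boldsymbol{\theta})^{-1}\boldsymbol{\Omega}(\boldsymbol{\theta}_0))$ is not determined by the two eigenvalue lists alone, so some such condition appears to be needed.

```python
import math

# Diagonal (hence commuting) example: Omega(theta) = diag(lam),
# Omega(theta0) = diag(lam0).  All numbers here are illustrative.
lam = [2.0, 3.0, 5.0]
lam0 = [1.0, 4.0, 2.5]
N = len(lam)

# g from the matrix definition: for diagonal matrices the trace is a
# sum of ratios and the determinants are products of diagonal entries
trace_term = sum(l0 / l for l0, l in zip(lam0, lam))
logdet_term = math.log(math.prod(lam0) / math.prod(lam))
g_matrix = trace_term - logdet_term - N

# g from the claimed eigenvalue form
g_eigen = sum(l0 / l - math.log(l0 / l) - 1 for l0, l in zip(lam0, lam))

print(abs(g_matrix - g_eigen) < 1e-12)  # True in the commuting case
```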

## machine learning – O(m+n) Algorithm for Linear Interpolation

Given data consisting of $n$ coordinates $\left((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\right)$ sorted by their $x$-values, and $m$ query points $(q_1, q_2, \ldots, q_m)$, find the linearly interpolated values of the query points according to the data. We assume $q_i \in (\min_j x_j, \max_j x_j)$.

I heard off-hand that this problem can be solved in $O(m+n)$ time, but I can only think of an $O(m \log n)$ algorithm. I can't seem to find this particular problem in any of the algorithm textbooks.

```python
import bisect

def interpolate(xs, ys, qs):
    interpolated = []
    for q in qs:
        # binary search for i such that xs[i] <= q <= xs[i+1]: O(log n) per query
        i = bisect.bisect_right(xs, q) - 1
        t = (q - xs[i]) / (xs[i + 1] - xs[i])
        interpolated.append(ys[i] * (1 - t) + ys[i + 1] * t)
    return interpolated
```

This gives us a runtime of $O(m \log n)$; it's unclear to me how to get this down to $O(m + n)$, since the search for $x_i$ must be done for every query point.
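For what it's worth, bounds like $O(m+n)$ usually come from processing the queries in sorted order and sweeping a single pointer through the data, so that the pointer only ever moves forward and all of its moves together cost $O(n)$. This yields $O(m+n)$ only under the extra assumption that the queries are already sorted (otherwise sorting them first costs $O(m \log m)$). A sketch under that assumption:

```python
def interpolate_sorted(xs, ys, qs):
    """Linear interpolation assuming both xs and qs are sorted ascending."""
    out = []
    i = 0
    for q in qs:
        # advance the data pointer; it only moves forward across all queries,
        # so the total work in this loop over the whole run is O(n)
        while xs[i + 1] < q:
            i += 1
        t = (q - xs[i]) / (xs[i + 1] - xs[i])
        out.append(ys[i] * (1 - t) + ys[i + 1] * t)
    return out

# e.g. data y = 2x sampled at three points, queried between samples
print(interpolate_sorted([0.0, 1.0, 2.0], [0.0, 2.0, 4.0], [0.5, 1.5]))  # [1.0, 3.0]
```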

## linear algebra – Find the slope of the line described in the attached figure.


While doing my quarantine package, I came across the following question and tried to find the area of the triangle using the determinant formula, but it didn't work, so I would like to have your help. The question is:

> Suppose that $k>0$ and that the line with equation $y=3kx+4k^2$ intersects the parabola $y=x^2$ at points $P$ and $Q$, as shown. If $O$ is the origin and the area of triangle $OPQ$ is $80$, find the slope of the line $y=3kx+4k^2$.

The answer is among $4$, $3$, $15/4$, $6$, and $21/4$.

## linear algebra – Jordan matrix form and polynomial proof.

Let $f\in F[x]$ be a polynomial, and prove that the matrix $f\left(J_{n}(\lambda)\right)$ satisfies

$$\left(f\left(J_{n}(\lambda)\right)\right)_{ij}=\begin{cases} \frac{1}{(j-i)!}\,f^{(j-i)}(\lambda) & 1\leq i\leq j\leq n\\ 0 & \text{else} \end{cases}$$

where $f^{(j-i)}$ is the $(j-i)$-th derivative of $f$.

Here's what I tried:

Step 1: I proved that

$$\left(\left(J_{n}(0)\right)^{k}\right)_{ij}=\begin{cases} 1 & j=i+k\\ 0 & \text{else} \end{cases}$$

Step 2: Using the binomial formula, I proved that

$$\left(J_{n}(\lambda)\right)^{k}=\sum_{i=0}^{k}\binom{k}{i}\lambda^{k-i}\left(J_{n}(0)\right)^{i}$$

Now assume $f(x)=\sum_{j=0}^{k}a_{j}x^{j}$; then

$$f\left(J_{n}(\lambda)\right)=\sum_{j=0}^{k}a_{j}\left(J_{n}(\lambda)\right)^{j}=\sum_{j=0}^{k}a_{j}\sum_{i=0}^{j}\binom{j}{i}\lambda^{j-i}\left(J_{n}(0)\right)^{i}=\sum_{j=0}^{k}\sum_{i=0}^{j}\binom{j}{i}a_{j}\lambda^{j-i}\left(J_{n}(0)\right)^{i}$$

I'm not sure how to recognize the $(j-i)$-th derivative in this expression, and I'm not sure how to continue. Any thoughts will help.
Thanks in advance.
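One possible way to finish (a sketch, worth double-checking): swap the order of summation so that the power of $J_n(0)$ becomes the outer index, and use $\binom{j}{i} = \frac{1}{i!}\cdot\frac{j!}{(j-i)!}$:

$$\sum_{j=0}^{k}\sum_{i=0}^{j}\binom{j}{i}a_{j}\lambda^{j-i}\left(J_{n}(0)\right)^{i}=\sum_{i=0}^{k}\frac{1}{i!}\left(\sum_{j=i}^{k}\frac{j!}{(j-i)!}\,a_{j}\,\lambda^{j-i}\right)\left(J_{n}(0)\right)^{i}=\sum_{i=0}^{k}\frac{f^{(i)}(\lambda)}{i!}\left(J_{n}(0)\right)^{i},$$

since the inner sum is exactly the $i$-th derivative of $f$ at $\lambda$. Step 1 then places the coefficient $f^{(i)}(\lambda)/i!$ on the $i$-th superdiagonal, which is the claimed entry formula.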

## linear algebra – Transform matrix into block form

Consider a self-adjoint matrix of the following form

$$T=\begin{pmatrix} 0 & k & a & b\\ k^* & 0 & c & d\\ d^* & c^* & 0 & k\\ b^* & a^* & k^* & 0 \end{pmatrix}.$$

I would like to know: does there exist an invertible matrix $S$ such that

$$S^{-1}TS = \begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix}$$

where $A$, $B$, and $0$ are $2\times 2$ block matrices?

## linear algebra – In any vector space, $(-1+1) \cdot \vec{v} = 0 \cdot \vec{v} = \vec{0}$?

My work:

Let us denote the vector addition "$+$" by "$\#$" and the scalar multiplication "$\cdot$" by "$*$", so that I can use "$+$" and "$\cdot$" for ordinary addition and multiplication. Define:

$$a \;\#\; b = 2\cdot a + b$$

$$a \;*\; b = a^2 + b^2$$

According to the book (*Linear Algebra* by Jim Hefferon, third edition), item $(2)$:

$$(-1 \cdot \vec{v}) + \vec{v} = \color{red}{(-1+1)} \cdot \vec{v} = 0 \cdot \vec{v} = \vec{0}$$

But $(-1 \;\#\; 1) = 2\cdot (-1) + 1 = -2 + 1 = -1$. So it is not zero!?
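One way to see what is going on (a sketch with illustrative names): Hefferon's identity is derived from the vector-space axioms, so it only applies when "$\#$" and "$*$" actually satisfy them, and a quick check shows they do not:

```python
def vec_add(a, b):          # the exotic "addition":  a # b = 2a + b
    return 2 * a + b

def scalar_mul(a, b):       # the exotic "multiplication":  a * b = a^2 + b^2
    return a ** 2 + b ** 2

# The axioms fail, so this structure is not a vector space:
print(vec_add(-1, 1))                    # -1, not 0, as observed above
print(vec_add(1, 2) == vec_add(2, 1))    # False: "#" is not even commutative
print(scalar_mul(1, 3))                  # 10, but the axioms require 1 * v = v
```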

## unity – How to convert linear lerp to non-linear using animation curve?

I have some code I run when I fell a tree in my game. Basically it picks a random direction and then the tree falls. However, at the moment the fall is linear and doesn't look very realistic. What I'd want is for it to start slow, then speed up the further it has fallen.

I’ve been experimenting with animation curves but none of the material I can find works with my code:

```csharp
public void FellTree()
{
    // pick a random point on the circle to match the up vector
    Vector2 pointOnCircle = UnityEngine.Random.insideUnitCircle * transform.localScale.y;

    // find the fall point, assuming the pivot of the object is at the bottom
    Vector3 fallPoint = transform.position +
        pointOnCircle.x * transform.right +
        pointOnCircle.y * transform.forward;

    // find the target up vector
    Vector3 updatedUpVector = Vector3.Normalize(fallPoint - transform.position);

    // Start the coroutine to tilt the up vector to the desired target
    StartCoroutine(UpdateUpVector(updatedUpVector, 1, 0.001f));
}

IEnumerator UpdateUpVector(Vector3 upVector, float speed, float threshold = 0.001f)
{
    // the target vector and up vector get closer to each other until the threshold is hit
    while (Vector3.Distance(upVector, transform.up) > threshold)
    {
        transform.up = Vector3.Lerp(transform.up, upVector, speed * Time.deltaTime);
        yield return new WaitForEndOfFrame();
    }
}
```

Using `animationCurve.Evaluate(float t)` in the lerp should be what I need to do, but I can't get it working. How can I add an animation curve to my lerp?

## linear algebra – Help with vectors and matrices problem

I have a very specific question; I hope someone can help me. We are given a matrix $A \in \mathbb{R}^{n\times d}$. Suppose we have two collections of pairwise orthogonal unit vectors $\{a_{1},\ldots,a_{k}\}$ and $\{b_{1},\ldots,b_{k}\}$ in $\mathbb{R}^d$ such that $\operatorname{span}(\{a_{1},\ldots,a_{k}\})=\operatorname{span}(\{b_{1},\ldots,b_{k}\})$. Show that:
$$\sum_{i=1}^k\|Aa_{i}\|^2=\sum_{i=1}^k\|Ab_{i}\|^2$$
I really have no idea how to approach this; any help is appreciated.
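A small numerical sanity check (pure Python, with an arbitrary made-up matrix) using two orthonormal bases of the same span in $\mathbb{R}^2$: both sums come out equal, consistent with the fact that passing between orthonormal bases of the same subspace is an orthogonal change of coordinates, so the quantity is the squared Frobenius norm of $A$ restricted to that subspace.

```python
import math

def matvec(A, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

def sq_norm(v):
    return sum(x * x for x in v)

A = [[1.0, 2.0], [3.0, 4.0]]           # arbitrary illustrative matrix
r = 1 / math.sqrt(2)
a_basis = [[1.0, 0.0], [0.0, 1.0]]     # standard orthonormal basis of R^2
b_basis = [[r, r], [r, -r]]            # rotated orthonormal basis, same span

s_a = sum(sq_norm(matvec(A, v)) for v in a_basis)
s_b = sum(sq_norm(matvec(A, v)) for v in b_basis)
print(abs(s_a - s_b) < 1e-9)  # True: both equal the squared Frobenius norm of A
```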

## linear algebra – Taylor Expansion of Logarithm of Determinant near Identity for Non-Diagonalizable Matrix

I have been working on a problem where I need to Taylor expand an expression of the form $\log \det(I-A)$ in terms of the traces of the matrices $A^m$ for $m \in \mathbb{N}$, where $A$ is a general $n \times n$ matrix.

I did notice that if the eigenvalues of $A$ are $\lambda_1, \cdots, \lambda_n$ then those of $I-A$ are exactly $1-\lambda_1, \cdots, 1-\lambda_n$, so (assuming the eigenvalues are small enough for the series to converge, using $\log(1-x) = -\sum_{m\geq 1} x^m/m$) we may write
$$\log \det(I-A) = \sum_{i=1}^n \log (1 - \lambda_i) = -\sum_{i=1}^n \sum_{m=1}^\infty \frac{\lambda_i^m}{m} = -\sum_{m=1}^\infty \frac{1}{m} \sum_{i=1}^n \lambda_i^m \hspace{10mm} \cdots (1)$$

At this point, I noticed that if $A$ were diagonalizable then I could say that $P^{-1}AP = \operatorname{diag}(\lambda_1, \cdots, \lambda_n)$ (the diagonal matrix with entries $\lambda_1, \cdots, \lambda_n$ along the principal diagonal), so for every $m \geq 1$ I could write $P^{-1}A^mP = \operatorname{diag}(\lambda_1^m, \cdots, \lambda_n^m)$ and $\operatorname{tr}(A^m) = \operatorname{tr}(P^{-1}A^mP) = \sum_{i=1}^n \lambda_i^m$, and (1) would then give us
$$\log \det(I-A) = -\sum_{m=1}^\infty \frac{\operatorname{tr}(A^m)}{m}$$

which is what I want. But I couldn't get around the case when $A$ is non-diagonalizable. I was wondering what happens in that case. Can we still give the same (or maybe a similar) expansion? Would the Smith normal form, rational canonical form, etc. be of any help?
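For what it's worth, a quick numerical check (pure Python, illustrative) with a non-diagonalizable Jordan block is consistent with the expansion $\log\det(I-A) = -\sum_{m\geq 1}\operatorname{tr}(A^m)/m$ continuing to hold, which matches the fact that $\operatorname{tr}(A^m)=\sum_i \lambda_i^m$ for any square matrix (e.g. by Schur triangularization), with eigenvalues counted with algebraic multiplicity:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Non-diagonalizable Jordan block with eigenvalue 1/2 (spectral radius < 1,
# so the series converges)
A = [[0.5, 1.0], [0.0, 0.5]]

# Left side: log det(I - A), computed directly for this 2x2 matrix
I_minus_A = [[0.5, -1.0], [0.0, 0.5]]
det = I_minus_A[0][0] * I_minus_A[1][1] - I_minus_A[0][1] * I_minus_A[1][0]
lhs = math.log(det)

# Right side: truncated series -sum_m tr(A^m)/m
rhs, P = 0.0, [row[:] for row in A]
for m in range(1, 200):
    rhs -= (P[0][0] + P[1][1]) / m   # tr(A^m)
    P = matmul(P, A)

print(abs(lhs - rhs) < 1e-9)  # True: the expansion matches numerically
```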

P.S.: I didn't find any standard reference containing the kind of expansion I wanted. I would appreciate knowing the best thing one can say in the non-diagonalizable case, and/or being pointed to a reference containing these kinds of results.