graphs – Power of adjacency matrix

Let $G$ be a weighted graph with a weight function $w : E(G) \longrightarrow \mathbb{R}^{+}$. Let $G'$ denote the weighted graph with adjacency matrix

$$A_{G'} = \sum_{i=0}^{k} (xA)^{i}$$

where $k$ is an integer and $x$ is a variable.

I do not understand what the matrix $A_{G'}$ is. Does it contain all walks of length at most $k$, or is it something else?
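A small numeric check may help. In an unweighted graph (all weights $1$), entry $(u,v)$ of $A^i$ counts the walks of length $i$ from $u$ to $v$, so the coefficient of $x^i$ in entry $(u,v)$ of $A_{G'}$ is that walk count. A sketch in Python/NumPy, using a hypothetical path graph on 3 vertices as the example:

```python
import numpy as np

# Path graph on 3 vertices: 0 -- 1 -- 2 (unweighted, so all weights are 1).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Entry (u, v) of A^i counts the walks of length i from u to v, so these
# matrices are the coefficient matrices of x^0, x^1, ..., x^k in A_{G'}.
k = 3
powers = [np.linalg.matrix_power(A, i) for i in range(k + 1)]

# Walks of length 2 from vertex 0 back to itself: only 0-1-0, so one walk.
print(powers[2][0, 0])   # -> 1
# Walks of length 3 from 0 to 1: 0-1-0-1 and 0-1-2-1, so two walks.
print(powers[3][0, 1])   # -> 2
```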

How to get non-zero entries of a sparse 3D matrix?

I have a sparse 3D matrix and I want to get its non-zero values. Is there any way to project it into some other space where it would be easy to get those non-zero entries? Any answers or sources to read are highly appreciated.
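One minimal sketch, assuming the tensor fits in memory as a dense NumPy array: `np.nonzero` returns the index arrays of all non-zero entries, which is effectively a COO ("coordinate") representation of the sparse 3D tensor.

```python
import numpy as np

# A mostly-zero 3D array with two non-zero entries.
T = np.zeros((4, 4, 4))
T[0, 1, 2] = 5.0
T[3, 3, 0] = -1.0

idx = np.nonzero(T)      # tuple of three index arrays (one per axis)
values = T[idx]          # the non-zero values themselves

print(list(zip(*idx)))   # -> [(0, 1, 2), (3, 3, 0)]
print(values)            # -> [ 5. -1.]
```

For tensors too large to hold densely, a dictionary keyed by `(i, j, k)` tuples gives the same COO idea without ever materializing the zeros.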

plotting – I want to solve a differential equation in matrix form

I tried something like the following; Z is my Hamiltonian:

n = 5;
σ2 = 0.1;
RR = RandomReal[{-Sqrt[3*σ2], Sqrt[3*σ2]}, n];
Z = Table[
    KroneckerDelta[i - j + 1] + KroneckerDelta[i - j - 1],
    {i, 1, n}, {j, 1, n}] + DiagonalMatrix[RR];
usol = NDSolveValue[{I D[ψ[x, t], t] == Z.ψ[x, t],
    ψ[0, t] == 0, ψ[n, t] == 0}, ψ, {t, 0, 1}]

I originally solved the problem by finding the eigenstates and eigenvalues, then choosing one site and evolving it in time; now I want to solve the differential equation directly to check my results.
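Not the asker's Mathematica, but a sketch in Python/SciPy of the intended vector ODE $i\,\partial_t \psi = Z\psi$, assuming $Z$ is the already-discretized $n \times n$ Hamiltonian, so $\psi(t)$ is a length-$n$ vector and no separate spatial boundary conditions are needed:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Tridiagonal hopping matrix plus random diagonal disorder, mirroring the
# construction of Z in the question (assumption: that is the intended model).
rng = np.random.default_rng(0)
n = 5
sigma2 = 0.1
RR = rng.uniform(-np.sqrt(3 * sigma2), np.sqrt(3 * sigma2), n)
Z = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) + np.diag(RR)

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                      # start localized on the first site

# i dpsi/dt = Z psi  <=>  dpsi/dt = -i Z psi
sol = solve_ivp(lambda t, psi: -1j * (Z @ psi), (0.0, 1.0), psi0,
                rtol=1e-9, atol=1e-9)

# Since Z is Hermitian, the norm of psi is conserved under the evolution.
print(np.linalg.norm(sol.y[:, -1]))   # -> ~1.0
```

The result can then be compared site by site against the eigenstate-expansion solution.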

linear algebra – A problem about determinant and matrix

Suppose $a_{0},a_{1},a_{2}\in\mathbb{Q}$ are such that the following determinant is zero, i.e.

$$\left|\begin{array}{ccc}
a_{0} & a_{1} & a_{2} \\
a_{2} & a_{0}+a_{1} & a_{1}+a_{2} \\
a_{1} & a_{2} & a_{0}+a_{1}
\end{array}\right| = 0.$$

Show that $a_{0}=a_{1}=a_{2}=0$.

I think it’s equivalent to show that the rank of the matrix is 0, and it’s easy to show the rank cannot be 1.

But I have no idea how to show that the case of rank 2 is impossible. So is there any better idea? Thanks.
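Not a proof, but a quick way to explore the rank-2 case is to expand the determinant symbolically and see how it factors over $\mathbb{Q}$. A SymPy sketch (SymPy is an assumption here, not part of the question):

```python
from sympy import symbols, Matrix, factor

a0, a1, a2 = symbols('a0 a1 a2')

# The matrix from the problem statement.
M = Matrix([[a0,      a1,      a2],
            [a2, a0 + a1, a1 + a2],
            [a1,      a2, a0 + a1]])

# Inspecting the factorization of det(M) over Q can suggest why it only
# vanishes at a0 = a1 = a2 = 0 for rational arguments.
print(factor(M.det()))
```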

linear algebra – Matrix for rotation around a vector without using Rodrigues' rotation formula

For example, how can I write the rotation matrix for a rotation around the vector $\vec v = (1,1,1)$ through an angle of 90°?

I searched all the other questions and they all say that Rodrigues' rotation formula is the way to go, but I'm wondering if there's an easier way.
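One practical alternative to writing out the formula by hand, sketched with SciPy (an assumption; the question itself names no library): build the rotation from an axis-angle "rotation vector" description and read off the matrix.

```python
import numpy as np
from scipy.spatial.transform import Rotation

axis = np.array([1.0, 1.0, 1.0])
axis /= np.linalg.norm(axis)        # unit rotation axis

# Rotation vector = angle * unit axis; 90 degrees = pi/2 radians.
R = Rotation.from_rotvec(np.pi / 2 * axis).as_matrix()

print(np.round(R, 6))
# Sanity checks: R is a proper rotation and leaves the axis fixed.
print(np.allclose(R @ axis, axis))  # -> True
```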


linear algebra – How can I find the inverse of a matrix with undetermined variables?

Hi all. I am trying to find the inverse of the matrix after taking the derivatives, but the system terminates the calculation during the process. What is the problem, and how can I fix it? Many thanks!

The code can be found at the following link.

If you cannot download the code, please see the attached screenshot of the code.

Inverse of matrix
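The asker's code is not reproduced here, but the general task (inverting a matrix that contains undetermined symbolic variables) can be sketched with SymPy; applying `simplify` entrywise after the inverse often keeps intermediate expression swell under control. The matrix below is purely hypothetical:

```python
from sympy import symbols, Matrix, simplify

a, b = symbols('a b')

# A small matrix with undetermined variables as entries.
M = Matrix([[a, 1],
            [1, b]])

Minv = M.inv()               # valid wherever det(M) = a*b - 1 != 0
print(simplify(Minv * M))    # -> Matrix([[1, 0], [0, 1]])
```

For larger symbolic matrices, `M.inv(method='LU')` or inverting only after substituting numeric values can be far cheaper than a fully symbolic inverse.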

matrix – Invert parent transform (doesn’t work for combination of rotation and scale)

My problem

I’m working with Qt3D and my problem is almost exactly like this one:

Suggested solution

A solution is suggested here:

Understanding the solution

I have a problem understanding the suggested solution. Specifically:

The problem is that the QTransform node does not store the transformation as a general 4x4 matrix. Rather, it decomposes the matrix into three transformations that are applied in a fixed order:

S – a diagonal scaling matrix

R – the rotation matrix

T – the translation matrix

and then applies them in the order T * R * S * X to a point X.

So when the transformation on the parent is M = T * R * S, then the inverse on the child will be M^-1 = S^-1 * R^-1 * T^-1. Setting the inverse on the child QTransform will attempt to decompose it in the same way:

M^-1 = T_i * R_i * S_i = S^-1 * R^-1 * T^-1

That doesn't work, in particular because S and R don't commute like this.

I don't understand the above assertions. Can anyone explain them to me?
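A quick numeric illustration of the non-commutation claim (a NumPy sketch, done in 2D for brevity): for a non-uniform scale $S$ and a rotation $R$, $S^{-1}$ and $R^{-1}$ do not commute, so $S^{-1} R^{-1} T^{-1}$ cannot simply be reshuffled into the $T' R' S'$ order the transform node expects.

```python
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 0.5])                 # non-uniform scale

lhs = np.linalg.inv(S) @ np.linalg.inv(R)
rhs = np.linalg.inv(R) @ np.linalg.inv(S)
print(np.allclose(lhs, rhs))            # -> False: they don't commute

# With a *uniform* scale the problem disappears, since s*I commutes
# with every matrix.
S_uni = np.diag([2.0, 2.0])
print(np.allclose(np.linalg.inv(S_uni) @ np.linalg.inv(R),
                  np.linalg.inv(R) @ np.linalg.inv(S_uni)))  # -> True
```

This is exactly why decomposing $M^{-1}$ back into scale-then-rotate-then-translate yields different factors than $S^{-1}$, $R^{-1}$, $T^{-1}$ themselves.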

matrices – Applying repeated doubling to an update Matrix

Given a rule to obtain a new position $(x, y)$ from an initial $(x_0, y_0)$.

The rule is
$$\binom{x'}{y'} = \begin{pmatrix} v_x & -v_y \\ v_y & v_x \end{pmatrix} \binom{x}{y}$$

$x' = v_x x - v_y y$

$y' = v_y x + v_x y$

$f(x,y) = (v_x x – v_y y, v_y x + v_x y)$

Now let’s say I wanted to use this rule to update a position $x_0,y_0$

How can I use repeated doubling to calculate the position after updating it $n$ times, without having to manually update the position $n$ times?

For instance, let's say we started at position $(-10, 0)$ and we wanted to update this position 5 times, with the given value $(v_x,v_y) = (\frac{1}{2},\frac{1}{2})$.

After the first update ($n=1$) the position would be $(-5,-5)$, and after the 5th update ($n=5$) the position would be $(\frac{5}{4}, \frac{5}{4})$.

How could I calculate the position $(\frac{5}{4}, \frac{5}{4})$ without computing $n = 2$, $n = 3$, or $n = 4$, simply by using the concept of repeated doubling?
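One way to sketch this: the update is multiplication by the fixed matrix $M = \begin{pmatrix} v_x & -v_y \\ v_y & v_x \end{pmatrix}$, so $n$ updates are $M^n$ applied once, and $M^n$ can be computed by exponentiation by squaring in $O(\log n)$ multiplications. A pure-Python sketch with exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(M, n):
    """M^n by repeated squaring: O(log n) multiplications instead of n."""
    result = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]  # identity
    while n > 0:
        if n & 1:                 # current binary digit of n is 1: fold M in
            result = mat_mul(result, M)
        M = mat_mul(M, M)         # square for the next binary digit
        n >>= 1
    return result

vx = vy = Fraction(1, 2)
M = [[vx, -vy], [vy, vx]]
Mn = mat_pow(M, 5)

x0, y0 = Fraction(-10), Fraction(0)
x, y = Mn[0][0]*x0 + Mn[0][1]*y0, Mn[1][0]*x0 + Mn[1][1]*y0
print(x, y)   # -> 5/4 5/4, matching the example in the question
```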

linear algebra – Eigenvalues of scaling of matrix

Let $A$ be a (real or complex) square matrix, and let $\alpha \neq 0$ be a scalar.

Is it true that $\lambda$ is an eigenvalue of $A$ if and only if $\alpha \lambda$ is an eigenvalue of $\alpha A$?

I think yes; here is why I suppose so: $\lambda$ is an eigenvalue of $A \iff \lambda I - A$ is non-injective $\iff \alpha \lambda I - \alpha A$ is non-injective $\iff \alpha \lambda$ is an eigenvalue of $\alpha A$.

Is this correct?
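The argument checks out (note $\alpha \lambda I - \alpha A = \alpha(\lambda I - A)$, and scaling by $\alpha \neq 0$ preserves injectivity). A quick numerical sanity check with NumPy, using an arbitrary random matrix and a positive $\alpha$ so that sorting the spectra keeps them aligned:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4))
alpha = 3.5

# Eigenvalues of alpha*A should be exactly alpha times those of A.
# Sorting (by real, then imaginary part) aligns the two spectra, since
# multiplying by a positive real alpha preserves that ordering.
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_aA = np.sort_complex(np.linalg.eigvals(alpha * A))

print(np.allclose(alpha * eig_A, eig_aA))   # -> True
```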

pr.probability – Distribution and Expectation of Inverse of a Random Bernoulli Matrix

This question cropped up as part of my research. Let us assume an $n \times n$ random matrix $\mathbf{M}$ with elements i.i.d. Bernoulli, taking values in $\{0,1\}$ each with probability $p = 1/2$.

What I want to know is: what sort of distribution would $\mathbf{M}^{-1}$ have, and what could its expectation be?

My guess so far has been that the distribution remains the same. I have posted this question on math.stackexchange too.
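One empirical way to probe the guess is a Monte Carlo sketch (NumPy, with hypothetical sample sizes): sample Bernoulli$(1/2)$ matrices, keep the invertible ones, and inspect the empirical distribution of the inverse's entries. Note that $\mathbf{M}$ is singular with positive probability, so any expectation of $\mathbf{M}^{-1}$ only makes sense conditional on invertibility.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 2000
inverses = []

for _ in range(trials):
    M = rng.integers(0, 2, size=(n, n))       # i.i.d. Bernoulli(1/2) entries
    if np.linalg.matrix_rank(M) == n:         # keep only invertible samples
        inverses.append(np.linalg.inv(M))

entries = np.concatenate([inv.ravel() for inv in inverses])
print(len(inverses) / trials)   # empirical probability of invertibility
print(entries.mean())           # empirical mean entry of M^-1
```

The inverse's entries are rationals with widely varying denominators, so the histogram of `entries` already shows the distribution is quite unlike the $\{0,1\}$ entries of $\mathbf{M}$ itself.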