## linear algebra – Jordan matrix form and polynomial proof

Let $$f \in F[x]$$ be a polynomial, and prove that the matrix $$f\left(J_{n}\left(\lambda\right)\right)$$ satisfies

$$\left(f\left(J_{n}\left(\lambda\right)\right)\right)_{ij}=\begin{cases} \frac{1}{\left(j-i\right)!}f^{(j-i)}\left(\lambda\right) & 1\leq i\leq j\leq n\\ 0 & \text{else} \end{cases}$$

where $$f^{(j-i)}$$ is the $$(j-i)$$-th derivative of $$f$$.

Here’s what I tried:

Step 1:
I proved that

$$\left(\left(J_{n}\left(0\right)\right)^{k}\right)_{ij}=\begin{cases} 1 & j=i+k\\ 0 & \text{else} \end{cases}$$

Step 2:
Using the binomial formula, I proved that

$$\left(J_{n}\left(\lambda\right)\right)^{k}=\sum_{i=0}^{k}\binom{k}{i}\lambda^{k-i}\left(J_{n}\left(0\right)\right)^{i}$$

Now assume $$f\left(x\right)=\sum_{j=0}^{k}a_{j}x^{j}$$. Then,

$$f\left(J_{n}\left(\lambda\right)\right)=\sum_{j=0}^{k}a_{j}\left(J_{n}\left(\lambda\right)\right)^{j}=\sum_{j=0}^{k}a_{j}\sum_{i=0}^{j}\binom{j}{i}\lambda^{j-i}\left(J_{n}\left(0\right)\right)^{i}=\sum_{j=0}^{k}\sum_{i=0}^{j}\binom{j}{i}a_{j}\lambda^{j-i}\left(J_{n}\left(0\right)\right)^{i}$$

I’m not sure how to recognize the $$(j-i)$$-th derivative in this expression, and I’m not sure how to continue. Any thoughts will help.
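A possible way forward (my sketch, not part of the attempt above): swap the order of summation in the last expression, so that the inner sum becomes a Taylor coefficient of $$f$$:

```latex
f\left(J_{n}(\lambda)\right)
  = \sum_{i=0}^{k}\left(\sum_{j=i}^{k}\binom{j}{i}a_{j}\lambda^{j-i}\right)\left(J_{n}(0)\right)^{i}
  = \sum_{i=0}^{k}\frac{f^{(i)}(\lambda)}{i!}\left(J_{n}(0)\right)^{i}
```

since $$f^{(i)}(x)=\sum_{j=i}^{k} j(j-1)\cdots(j-i+1)\,a_{j}x^{j-i}=i!\sum_{j=i}^{k}\binom{j}{i}a_{j}x^{j-i}$$. Combining this with Step 1 gives exactly the claimed $$(i,j)$$ entries.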

## performance tuning – Efficiently populate sparse matrix with band structure

I’m trying to efficiently populate the elements of a very large (2^20 × 2^20) symmetric matrix with 1s – luckily the matrix is very sparse, with <0.1% filling. Furthermore, the matrix has a very well-defined periodic banded structure, as shown here:

[figure: MatrixPlot showing the periodic banded sparsity pattern]

In reality, this matrix is the result of a series of KroneckerProducts of 2×2 matrices, which is what gives it that characteristic banded structure. I’m hoping to find a way to speed up the generation without using Kronecker products, because even with sparse matrices the computation can take several seconds or minutes, depending on the final dimensionality.

My first question relates to creating this sparse matrix efficiently. I’ve toyed with lots of different ways of generating even simple bands for the sparse array. For simply populating the diagonal, the quickest method clearly seems to be the {i_, i_} pattern, as shown here:

dim = 15;

aa = SparseArray[{i_, i_} -> 1, {2^dim, 2^dim}] // RepeatedTiming;
bb = SparseArray[Band[{1, 1}] -> 1, {2^dim, 2^dim}] // RepeatedTiming;
cc = SparseArray[Table[{ii, ii} -> 1, {ii, 2^dim}], {2^dim, 2^dim}] // RepeatedTiming;
dd = SparseArray[Normal[AssociationThread[Table[{ii, ii}, {ii, 2^dim}] -> Table[1, {ii, 2^dim}]]], {2^dim, 2^dim}] // RepeatedTiming;

Column[{aa[[1]], bb[[1]], cc[[1]], dd[[1]]}]

aa[[2]] == bb[[2]] == cc[[2]] == dd[[2]]

0.000309
0.031
0.081
0.054

True


However, when we try to do off-diagonal entries, this gets much worse, presumably because the pattern condition has to be checked for every element:

dim = 15;

aa = SparseArray[{i_, j_} /; j - i == 1 -> 1., {2^dim, 2^dim}] // RepeatedTiming;
bb = SparseArray[Band[{1, 2}] -> 1, {2^dim, 2^dim}] // RepeatedTiming;
cc = SparseArray[Table[{ii, ii + 1} -> 1, {ii, 2^dim - 1}], {2^dim, 2^dim}] // RepeatedTiming;
dd = SparseArray[Normal[AssociationThread[Table[{ii, ii + 1}, {ii, 2^dim - 1}] -> Table[1, {ii, 2^dim - 1}]]], {2^dim, 2^dim}] // RepeatedTiming;

Column[{aa[[1]], bb[[1]], cc[[1]], dd[[1]]}]

aa[[2]] == bb[[2]] == cc[[2]] == dd[[2]]

0.185
0.031
0.095
0.052

True


From those two examples, it seems like Band is our best choice, but Band is still painfully slow, especially when compared to the {i_, i_} pattern for the diagonal. This is all the more frustrating because in MATLAB the same problem can be solved an order of magnitude faster (~1.4 ms).

But the fact that the original {i_,i_} case for the diagonal was so fast suggests that there is a more efficient way to do this.

So then my first question is: given all of that, is there a more efficient way to populate the bands of our sparse matrix, so that the speed can at least rival the MATLAB equivalent?

And my second question, somewhat predicated on the first: with whichever method is most efficient, what is the best way to generate the periodic banding structure present in the final matrix (see above)? You can accomplish it with Band by manually inserting runs of 0s, but that can’t be the most efficient way.

Finally, because of the period-2 banded structure of the final matrix, where each quadrant is a recursive block of ever-smaller diagonal matrices with side lengths shrinking by a factor of 2, maybe you could generate all the smaller diagonal blocks and then just place them in the final matrix – I’m not sure how this would be accomplished, however. Of course, remember that the matrix is symmetric, which should help with efficient generation: really only one triangle has to be generated and then mirrored.
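For comparison, here is a sketch of the index-vector approach (the MATLAB `sparse(i,j,v)` style) in Python/SciPy. The function name and the choice of offsets are illustrative, not from the original post; the point is that building all index vectors up front and calling the sparse constructor once avoids any per-element pattern matching:

```python
import numpy as np
from scipy import sparse

def band_matrix(n, offsets):
    """Sparse n x n matrix with 1s on the given superdiagonals (offsets > 0),
    built from explicit (row, col) index vectors in a single constructor call."""
    rows, cols = [], []
    for k in offsets:
        r = np.arange(n - k)   # rows 0 .. n-k-1 carry the k-th superdiagonal
        rows.append(r)
        cols.append(r + k)
    rows = np.concatenate(rows)
    cols = np.concatenate(cols)
    vals = np.ones(rows.size)
    m = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    return m + m.T  # symmetrize: build one triangle, then mirror it

m = band_matrix(2**15, [1])  # single superdiagonal plus its mirror
```

The same idea carries over to the periodic structure: compute the row/column index vectors of every band first, then hand the full rule list to the constructor in one shot.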

## matrix – Set: in part assignment is not a symbol

This is probably a basic question but I am new to Mathematica, so please help me out here.

I have the following piece of code:

lmax = 2;
arrlen = lmax + 1;
xarr[x_] = Array[x^# &, arrlen, 0];
parr[p_] = Array[p^# &, arrlen, 0];
arr[x_, p_] = Join[xarr[x], parr[p]];
mboot[x_, p_] := Outer[NonCommutativeMultiply, arr[x, p], arr[x, p]]


Now, I want to set a few particular elements of mboot[x,p] to 0. But when I try, for example:

mboot[x,p][[1,1]]=0


I get an error saying Set::setps: mboot[x,p] in the part assignment is not a symbol.

It would be great if someone could help me figure out what is going wrong and how I can fix it.

## functions – How to construct this matrix in Mathematica

I create a symbolic matrix using the following:

mat = ToExpression[Table[StringJoin[{"s", ToString[i], ToString[j]}], {i, 1, d}, {j, 1, d}]]


for arbitrary $$d$$. Notice that for $$d=2$$, this gives

$$\begin{pmatrix} s11 & s12 \\ s21 & s22 \end{pmatrix}.$$

I do this since an analytic expression is important. However, I would also like to get numerics. How can I update the above definition of mat to get a function of the form:

matNEW[s11_, s12_, s21_, s22_]


such that I can evaluate the matrix later in the script for specific values of the parameters?
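As a language-neutral illustration of the same idea (this is Python/SymPy, not the original Mathematica code), one can build the symbolic matrix and then turn it into a numeric function of its entries with lambdify, which plays the role of the hypothetical matNEW:

```python
import sympy as sp

d = 2
# symbolic d x d matrix with entries s11, s12, ..., like the Mathematica `mat`
syms = [[sp.Symbol(f"s{i}{j}") for j in range(1, d + 1)] for i in range(1, d + 1)]
mat = sp.Matrix(syms)

# numeric function of the flattened entries, analogous to matNEW[s11_, s12_, ...]
flat = [s for row in syms for s in row]
mat_new = sp.lambdify(flat, mat, "numpy")

mat_new(1, 2, 3, 4)  # evaluates the matrix at s11=1, s12=2, s21=3, s22=4
```

The symbolic mat stays available for analytic work, while mat_new gives fast numerical evaluation later in the script.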

## linear algebra – Transform matrix into block form

Consider a self-adjoint matrix of the following form

$$T=\begin{pmatrix} 0 & k & a & b\\ k^* & 0 & c & d\\ d^* & c^* & 0 & k\\ b^* & a^* & k^* & 0 \end{pmatrix}.$$

I would like to know: does there exist an invertible matrix $$S$$ such that

$$S^{-1}TS = \begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix}$$

where $$A$$, $$B$$, and $$0$$ are $$2\times 2$$ block matrices?

## matrix – Pade approximation of vector or operator functions

PadeApproximant is a very useful Mathematica function that starts from a truncated Taylor series

$$f(x)\approx\sum_{k=0}^{l} c_k (x-x_0)^k,$$

and represents it in rational form

$$f(x)\approx\frac{\sum_{i=0}^{m} a_i (x-x_0)^i}{\sum_{j=0}^{n} b_j (x-x_0)^j}.$$

Thus, the Padé approximation maps $$c_k \rightarrow (a_i,b_j)$$.

Recently, I faced the problem of performing such an approximation for vector (or even operator) functions $$\hat f(x)$$. The requirement, however, is that there be a common denominator for all components:

$$\hat f(x)\approx\frac{\sum_{i=0}^{m} \hat a_i (x-x_0)^i}{\sum_{j=0}^{n} b_j (x-x_0)^j}.$$

In other words, the $$b_j$$ are scalars. Because of this, PadeApproximant cannot be used directly. I know that such methods exist and are used, for instance, to speed up computation of matrix exponentials. In fact, the whole of Section 8 in Padé Approximants by Baker and Graves-Morris is devoted to this problem. However, the book is written in so formalized a way that I cannot even extract working formulas. I hope that someone here has experience with this method.

The basic example in the documentation is

PadeApproximant[Exp[x], {x, 0, {2, 3}}]


Here the desired syntax is

MatrixPadeApproximant[MatrixExp[{{1, x}, {x, x^2}}], {x, 0, {2, 3}}]


Thus, for matrix Padé approximation, the map $$\hat c_k \rightarrow (\hat a_i,b_j)$$ is required.

## dg.differential geometry – Compute the connection matrix

The (pseudo-)Riemannian metric and the connection in $$\mathbb{R}^3$$ with coordinates $$x=u_1$$, $$y=u_2$$, $$z=u_3$$ are given on the basis vector fields by

$$(\partial_{u_i}, \partial_{u_j})=\frac{\partial^3 f}{\partial u_1\,\partial u_i\,\partial u_j},$$
$$(\nabla_{\partial_{u_i}}\partial_{u_j}, \partial_{u_k})=\frac{\partial^3 f}{\partial u_i\,\partial u_j\,\partial u_k},$$
where $$f=0.5(x^2 z+x y^2)+a y^2 z^2+b z^5$$.

How do I compute the connection matrix?
I need to find the relations between the constants $$a, b$$ for which the connection is flat.
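As a starting point (a sketch with symbol names of my own choosing, shown in Python/SymPy rather than any particular CAS), the metric coefficients $$(\partial_{u_i},\partial_{u_j})$$ and the connection coefficients can be tabulated directly from the third derivatives of $$f$$:

```python
import sympy as sp

u1, u2, u3, a, b = sp.symbols("u1 u2 u3 a b")
u = [u1, u2, u3]
# f = 0.5 (x^2 z + x y^2) + a y^2 z^2 + b z^5, with x=u1, y=u2, z=u3
f = sp.Rational(1, 2) * (u1**2 * u3 + u1 * u2**2) + a * u2**2 * u3**2 + b * u3**5

# metric: g[i, j] = d^3 f / (du1 dui duj)
g = sp.Matrix(3, 3, lambda i, j: sp.diff(f, u1, u[i], u[j]))

# lowered connection coefficients: C[i][j][k] = d^3 f / (dui duj duk)
C = [[[sp.diff(f, u[i], u[j], u[k]) for k in range(3)]
      for j in range(3)] for i in range(3)]
```

For this particular $$f$$, the metric g comes out as the constant anti-diagonal matrix, so the connection matrix is obtained by raising one index of C with $$g^{-1}$$; the relations on $$a, b$$ would then follow from requiring the corresponding curvature to vanish.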

## algorithms – Maximum sum path in a matrix

As Pal has commented, this is a question in dynamic programming. I will now propose a dynamic-programming solution, but I highly recommend trying it on your own first and checking this later.

The algorithm:

1. Create a new empty matrix $$V$$ of size $$N$$ by $$N$$.
2. For $$1 \le i \le N$$:
    1. For $$1 \le j \le N$$:
        1. Set $$V(i, j) = M(i,j) + \max(V(i-1, j), V(i, j-1))$$ (out-of-range entries are treated as zero: $$V(0,j) = V(i,0) = 0$$).
3. Find $$m = \max_{i,j} V(i,j)$$.
4. Count the number of times $$m$$ appears in $$V$$.
5. Return the maximum sum $$m$$ and the number of times it occurs (as counted).

Complexity:

The algorithm takes $$O(N^2)$$ time (which is $$O(k)$$ if we let $$k$$ be the input size), since it fills a new $$N$$ by $$N$$ matrix, doing $$O(1)$$ operations per cell.

Notice that the algorithm’s space complexity is also $$O(N^2)$$, as we were required to create a whole new matrix.
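The steps above can be sketched as follows (an illustrative Python version, assuming paths start at the top-left region and move only right or down, as the recurrence implies):

```python
def max_path_sum(M):
    """V[i][j] = M[i][j] + max(value from above, value from the left);
    returns the maximum sum found anywhere in V and how often it occurs."""
    n = len(M)
    V = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            up = V[i - 1][j] if i > 0 else 0
            left = V[i][j - 1] if j > 0 else 0
            V[i][j] = M[i][j] + max(up, left)
    best = max(max(row) for row in V)
    count = sum(row.count(best) for row in V)
    return best, count
```

For example, max_path_sum([[1, 2], [3, 4]]) fills V = [[1, 3], [4, 8]] and returns the pair (8, 1).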

## matrix – Are My Answers to This Hamming Code Example Correct?


## matrix – Multiplying two complex SparseArray objects, yielding empty SparseArray crashes kernel

When I multiply together two sparse matrices that should give back the 0 matrix, where at least one element among the two is complex and at least one is not an exact number, the kernel crashes unexpectedly with no messages generated.

test1 = SparseArray[DiagonalMatrix[{1., 0}]]
test2 = SparseArray[DiagonalMatrix[{0, I}]]
test1.test2 (* Crashes kernel with no messages generated *)


Note that at least one element must be complex, at least one must not be an exact number, and the final result must have no non-zero elements.

Can anyone reproduce this behavior? Even better, does anyone have a workaround? This problem shows up for me deep inside a complicated differential equation involving $$64\times64$$ very sparse matrices, and using non-sparse operations gives a $$\sim 20\times$$ slowdown.

I’ll report to Wolfram as well, thanks!

Version: 12.0.0 for Linux x86 (64-bit) (April 7, 2019)