linear algebra – Efficiently populate a SparseArray for a set of rules for a constrained basis

I want to populate quite a large SparseArray (of size $10^6 \times 10^6$) efficiently. It is basically a spin-system Hamiltonian with a constrained Hilbert space. Unlike the examples I have looked at in this forum, this introduces a problem: when BitXor is used to emulate terms like $\sigma_x$, in many cases it yields a value which is not in the Hilbert space, which means I have to search the basis to find the position it corresponds to before storing it in the SparseArray. Let me first show the code which I have written.

First, to generate the constrained Hilbert space. I am not going to explain the constraint in detail because that may be confusing, but the following code generates it efficiently enough for me, so one can assume this is the Hilbert space.

N1 = 24;
X1 = Select[Range[0, 2^N1 - 1], BitAnd[#1, BitShiftLeft[#1, 1]] == 0 &];
X1 = Select[X1, BitGet[#1, 0]*BitGet[#1, N1 - 1] == 0 &];
len = Length[X1];

X1 stores the allowed basis vectors and len stores the length of X1. Now the Hamiltonian term which I want to calculate is $\sigma_x$, or in terms of creation and annihilation operators, $\tau \sum_{l=1}^{N_1} (d_l^{\dagger}+d_l)$.
To generate this I wrote the following code,

SetAttributes[f, Listable];
Needs["Combinatorica`"];
g[j_, k_] := BinarySearch[X1, BitXor[X1[[j]], 2^k]];
f[i_] := (s1 = Mod[i, len] + 1; s2 = IntegerPart[i/len]; k1 = g[s1, s2];
    If[IntegerQ[k1], {s1, k1} -> Tau]);
Print[AbsoluteTiming[Q1 = SparseArray[DeleteCases[f[Range[0, len*N1 - 1]],
        Null]];]];

This takes approximately 217 seconds on my i7 machine, whereas similar code written with ordinary loop constructs in FORTRAN takes about a second to evaluate. I expect Mathematica to take more than a second, of course, because it is not a low-level programming language, but a gap of almost two orders of magnitude suggests there is a faster way of doing this. Can anyone help me find it?

I actually need to go to sizes an order of magnitude larger than in this example, hence I need a speedup. While I could generate this data in FORTRAN and import it into Mathematica from a file, I want to do that only as a last resort, after exhausting all possibilities for a speedup here.

Edit: In the slow code snippet I do the following. I take a basis vector and apply BitXor to each bit position in turn, i.e. from 0 to N1-1. Each application gives a new integer, which may or may not be present in the Hilbert space. I check whether it is present, and find its position in the basis array, using BinarySearch; if it is present, I create a rule {x1, x2} -> Tau, where x1 is the position of the basis vector I started with and x2 is the position of the basis vector obtained after the operation. If it is not present, Null is returned, which I delete before building the SparseArray. I try to make this operation listable through the function f: with Range I generate a single index that encodes both the basis-vector position and the bit position, so the code can be vectorized, and the Mod and IntegerPart operations inside f split that index back into the basis-vector position and the bit position.
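
For reference, here is a minimal sketch of the same rule construction that replaces Combinatorica's BinarySearch with a hash-based position lookup built with AssociationThread (the names pos, rules and Q2 are mine; I have not benchmarked this against the code above, and it avoids loading the legacy Combinatorica package):

pos = AssociationThread[X1 -> Range[len]]; (* basis state -> position in X1 *)
rules = Flatten@Table[
    With[{p = Lookup[pos, BitXor[X1[[j]], 2^k]]},
      If[MissingQ[p], Nothing, {j, p} -> Tau]], (* keep only flipped states still in the Hilbert space *)
    {j, len}, {k, 0, N1 - 1}];
Q2 = SparseArray[rules, {len, len}];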

linear algebra – Symmetry constraints

Suppose we have $n$ points on the unit sphere,

$X = (\pmb{r}_{1}, \pmb{r}_{2}, \ldots, \pmb{r}_{n})$.

I am interested in knowing to what extent the rotational symmetry of $X$ is determined by the following constraint. Every triple of points is related to at least one other triple of points by some rotation about the origin,

$(\pmb{r}_{j_{1}}, \pmb{r}_{j_{2}}, \pmb{r}_{j_{3}}) = (R\pmb{r}_{l_{1}}, R\pmb{r}_{l_{2}}, R\pmb{r}_{l_{3}})$,

where $R$ is a rotation (not the identity) depending on $j_{1}, j_{2}, j_{3}$.

Any advice would be much appreciated.

ag.algebraic geometry – A question regarding base change of a smooth algebra via completion

Let $(R,m)$ be an excellent Noetherian local ring. Let $S$ be a smooth Noetherian $R$-algebra. Let $T$ be the $mS$-adic completion of $S$. Then by the universality of the tensor product construction, there is a natural map $\hat{R} \otimes_R S \rightarrow T$. My question is: does this map have to be flat?

function spaces – Has this Banach algebra been studied?

Given $\Omega$ as $[0,1]^n$ or the closed unit ball in $\mathbb{R}^n$, we can consider the algebra of complex-valued polynomials and its closure with respect to the norm

$$
\|p\| = \|p\|_\infty + \|\nabla p\|_1.
$$

I am wondering if this Banach algebra has been studied. If so, does it have a common name, and what are some resources that I can read about it?

linear algebra – A problem about determinant and matrix

Suppose $a_{0},a_{1},a_{2}\in\mathbb{Q}$ are such that the following determinant is zero, i.e.

$$
\begin{vmatrix}
a_{0} & a_{1} & a_{2} \\
a_{2} & a_{0}+a_{1} & a_{1}+a_{2} \\
a_{1} & a_{2} & a_{0}+a_{1}
\end{vmatrix}
= 0
$$

Show that $a_{0}=a_{1}=a_{2}=0$

I think it’s equivalent to show that the rank of the matrix is 0, and it’s easy to show the rank cannot be 1.

But I have no idea how to show that the case of rank 2 is impossible. So is there any better idea? Thanks.
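
For reference, expanding the determinant along the first row (my own computation, worth double-checking) gives the cubic form

$$a_{0}^{3}+a_{1}^{3}+a_{2}^{3}+2a_{0}^{2}a_{1}+a_{0}a_{1}^{2}-a_{1}^{2}a_{2}-a_{0}a_{2}^{2}-3a_{0}a_{1}a_{2}=0.$$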

abstract algebra – Suppose $F ⊂ K$ are fields. Let $f(x) ∈ F[x] ⊂ K[x]$. Suppose that $f(x)$ is irreducible in $K[x]$. Prove that $f(x)$ is also irreducible in $F[x]$.

Suppose $F ⊂ K$ are both fields. Let $f(x) ∈ F[x] ⊂ K[x]$. Suppose that $f(x)$ is irreducible in $K[x]$.

$a)$ Prove that $f(x)$ is also irreducible in $F[x]$.

$b)$ Is it true that if $f(x)$ is irreducible in $F[x]$, then it is irreducible in $K[x]$? If not, give an example.

My attempt:

$a)$ Since $f(x)$ is irreducible over $K$, $K[x]/(f(x))$ is a field (I have previously proven this).

But since $F ⊂ K$, we have $F[x] ⊂ K[x]$ and thus $F[x]/(f(x)) ⊂ K[x]/(f(x))$.

Since $F[x]/(f(x))$ is a subfield, it is a field, and so $f(x)$ is irreducible over $F$.

Is my attempt correct?

And I don’t think this still holds if $f(x)$ is only irreducible over $F$. Can someone please clarify part $(b)$ and give an example? Thank you.

linear algebra – Matrix for rotation around a vector without using the Rodrigues rotation formula

For example, how can I write the rotation matrix for a rotation around the vector $\vec v = (1,1,1)$ by the angle $90^{\circ}$?

I searched all the other questions, and they all say that the Rodrigues rotation formula is the way to go, but I'm wondering if there's an easy way for me.
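
For reference, one standard alternative to the Rodrigues formula is to conjugate the elementary rotation about the $z$-axis by an orthonormal change of basis; here is a sketch with one possible choice of basis (my own, not from the question):

$$u_{3}=\tfrac{1}{\sqrt{3}}(1,1,1),\qquad u_{1}=\tfrac{1}{\sqrt{2}}(1,-1,0),\qquad u_{2}=u_{3}\times u_{1}=\tfrac{1}{\sqrt{6}}(1,1,-2),$$

$$P=\begin{pmatrix}u_{1}&u_{2}&u_{3}\end{pmatrix},\qquad R=P\,R_{z}(90^{\circ})\,P^{\mathsf{T}},\qquad R_{z}(90^{\circ})=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&1\end{pmatrix}.$$

Since $P$ is orthogonal, $P^{-1}=P^{\mathsf{T}}$, and $R$ is the rotation by $90^{\circ}$ about $\vec v/\lVert\vec v\rVert$. Multiplying out (worth re-checking) gives

$$R=\begin{pmatrix}\tfrac{1}{3}&\tfrac{1}{3}-\tfrac{1}{\sqrt{3}}&\tfrac{1}{3}+\tfrac{1}{\sqrt{3}}\\\tfrac{1}{3}+\tfrac{1}{\sqrt{3}}&\tfrac{1}{3}&\tfrac{1}{3}-\tfrac{1}{\sqrt{3}}\\\tfrac{1}{3}-\tfrac{1}{\sqrt{3}}&\tfrac{1}{3}+\tfrac{1}{\sqrt{3}}&\tfrac{1}{3}\end{pmatrix}.$$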

thanks.

linear algebra – How can I find the inverse of a matrix with undetermined variables?

Hi all. I want to find the inverse of the matrix after taking the derivatives, but the system terminates the calculation during the process. What is the problem, and how can I fix it? Many thanks!

The code can be found from the following link.
https://www.wolframcloud.com/download/ellgan101/Published/InverseOfMatrix.nb

If you cannot download the code, please see the attached screen-shot of the code.

[Screenshot: Inverse of matrix code]

linear algebra – Zero is an eigenvalue => there exists a non-zero vector b such that Ab = 0b = 0

Let $f: V \to V$ be a linear map with $A$ as its transformation matrix over the field $K$.

If 0 is an eigenvalue of a matrix $A$, can’t we conclude that every vector is an eigenvector, since every vector $b$ of $V$ satisfies $Ab = 0b = 0$?
I just read a proof where they made clear that there exists one non-zero vector satisfying this, but what if $V$ is the $\{0\}$ vector space?

linear algebra – Does the derivative of a function affect its linear independence?

I have a (maybe too simple) question: I have to prove whether different sets of functions are or aren’t linearly independent. The case which is giving me trouble is $\{\sin x, \cos x, 1\}$. I have found out about the Wronskian, so I guess this implies that if a combination of functions (i.e. $a\sin x+b\cos x+c=0$) is linearly independent, then so is its derivative? Is that approach right? My idea was to differentiate twice, and then try to solve the resulting system of equations for $a, b, c$ to prove that $a=b=c=0$, and thus that the three functions are linearly independent. Is this also right?
Thank you very much.
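
For what it's worth, here is a short worked version of the differentiate-twice approach described above (my own computation). Suppose $a\sin x+b\cos x+c=0$ for all $x$. Differentiating once and twice gives

$$a\cos x-b\sin x=0,\qquad -a\sin x-b\cos x=0\quad\text{for all }x.$$

Adding the second equation to the original identity gives $c=0$; evaluating the first equation at $x=0$ and at $x=\pi/2$ gives $a=0$ and $b=0$. Hence $a=b=c=0$, and $\{\sin x,\cos x,1\}$ is linearly independent.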