matrices – Constructive proof that a linear relation between columns implies a linear relation between rows

Let $M = \left(M_{ij}\right)$ be a matrix and let $v = (v_{i}) \neq 0$ be such that $Mv = 0$. This can be rephrased as requiring that $\sum_{i} v_{i} M_{i} = 0$, where $\{M_{i}\}$ are the column vectors of $M$, i.e. $(v_{i})$ gives a linear relation on the columns of $M$. I can think of multiple proofs that the existence of $v$ implies the existence of a $w=(w_{j})$ such that $M^{t}w = 0$, but none of them are constructive.

How can we write down the coordinates of such a $w$ in terms of the coordinates of $v$? In other words, how can we translate a linear relation among the columns into a linear relation among the rows?

machine learning – Learn a system of linear inequalities given solutions

Instead of finding a solution to a given system of linear inequalities ($Ax + b \ge 0$), I want to find a system of linear inequalities that is satisfied by a given set of feasible points and violated by a given set of infeasible points. As the image shows, I am given points (either all at once or one by one) labelled with a binary attribute (in/out, yes/no, or similar) indicating whether each point should satisfy the system, and I want to find lines that separate them.

[Image: labelled feasible and infeasible points to be separated by lines]

There should be some inductive method to fit any number of linear inequalities to these points. It’s important that the output is a system of linear inequalities, so it can be fed into linear programming solvers to find new solutions. I’ve looked into SVMs, but that model seems to be just a single hyperplane; maybe it can still be of use. A one-layer neural network should do the trick, but then the number of lines must be fixed from the start.
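One possible way to make the inductive idea concrete (a sketch under assumptions, not a canonical method): repeatedly fit one separating hyperplane with a linear SVM, keep it as one inequality of the system, drop the infeasible points it already excludes, and repeat until all infeasible points are cut off. A minimal sketch with scikit-learn, where feasible and infeasible are placeholder point arrays:

import numpy as np
from sklearn.svm import LinearSVC

def learn_inequalities(feasible, infeasible, max_planes=10):
    """Greedily collect halfspaces a.x + b >= 0 that contain every feasible
    point and together exclude as many infeasible points as possible."""
    feasible = np.asarray(feasible, dtype=float)
    remaining = np.asarray(infeasible, dtype=float)
    A, b = [], []
    for _ in range(max_planes):
        if len(remaining) == 0:
            break
        X = np.vstack([feasible, remaining])
        y = np.hstack([np.ones(len(feasible)), -np.ones(len(remaining))])
        clf = LinearSVC(C=1e3).fit(X, y)        # one separating hyperplane
        a, c = clf.coef_[0], float(clf.intercept_[0])
        # Shift the plane outward so every feasible point satisfies a.x + c >= 0.
        c -= min(0.0, float(np.min(feasible @ a + c)))
        survivors = remaining[remaining @ a + c >= 0]
        if len(survivors) == len(remaining):
            break                               # no progress; stop adding planes
        A.append(a)
        b.append(c)
        remaining = survivors
    return np.array(A), np.array(b)             # system: A x + b >= 0

Each pass adds one inequality, so the number of lines does not need to be fixed in advance, and the output $(A, b)$ can be handed to an LP solver as the constraint set $Ax + b \ge 0$.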

linear algebra – Performing a matrix tensor product and getting the sum of its negative entries

If it is only about the sums of negative entries, then yes.

This is how we can do it without even building the tensor/Kronecker products:

M = RandomReal[{-1, 1}, {3, 3}];
ClearAll[sumPositive];
ClearAll[sumNegative];
sumPositive[1] = Total[Ramp[M], 2];
sumNegative[1] = Total[Ramp[-M], 2];
sumNegative[n_] := sumNegative[n] = Plus[
    sumPositive[n - 1] sumNegative[1],
    sumNegative[n - 1] sumPositive[1]
    ];
sumPositive[n_] := sumPositive[n] = Plus[
    sumPositive[n - 1] sumPositive[1],
    sumNegative[n - 1] sumNegative[1]
    ];

Test for n = 8:

n = 8;
result1 = sumNegative[n]; // MaxMemoryUsed // AbsoluteTiming
result2 = Total[Ramp[-KroneckerProduct @@ ConstantArray[M, n]], 2]; // MaxMemoryUsed // AbsoluteTiming

result1 == result2

{0.00007, 4080}

{1.63739, 688748008}

True

Notice that sumNegative[n] needs $O(n)$ memory and FLOPs while the brute-force approach needs $O(9^n)$ of both.

The key idea is that the sum of all entries of a tensor product is just the product of the sums:

Total[KroneckerProduct[A, A], 2] == Total[A, 2]^2

Now we only have to keep track of the negative and the nonnegative parts of the tensors. Combined with recursion and memoization, this leads to sumPositive and sumNegative.
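Spelled out: write $s_+(X)$ for the sum of the positive entries of $X$ and $s_-(X)$ for the sum of the absolute values of its negative entries (which is what Total[Ramp[-X], 2] computes). Every entry of $A\otimes B$ is a product $a_{ij}b_{kl}$, and such a product is negative exactly when the two factors have opposite signs, so

$$s_-(A\otimes B) = s_+(A)\,s_-(B) + s_-(A)\,s_+(B), \qquad s_+(A\otimes B) = s_+(A)\,s_+(B) + s_-(A)\,s_-(B),$$

which is exactly the recursion implemented by sumNegative and sumPositive with $A = M^{\otimes(n-1)}$ and $B = M$.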

The definition of a rational representation of a linear algebraic group

I’m reading De Concini–Procesi’s “The invariant theory of matrices”. My question concerns their setup in the introduction, and its relation/translation into the language of modern algebraic geometry.

The setup

Here is the setup they use in §2.3:

Let $F$ be an algebraically closed field. We view $GL_n(F)$ as the affine subvariety of $F^{n^2+1} = M_n(F)\times F$ defined by the pairs $(A,u)$ satisfying $\det(A)\,u = 1$. Thus its coordinate ring is $F[x_{i,j}][\det^{-1}]$ (the polynomial ring in $n^2$ coordinates with the determinant inverted).

They define a linear algebraic group $G$ over $F$ to be a subgroup of $GL_n(F)$ cut out by polynomial equations. Its coordinate ring, denoted $A(G)$, is defined to be the restriction of the coordinate ring of $GL_n(F)$ to the subvariety $G$. Thus $A(G)$ is a quotient of $F[x_{i,j}][\det^{-1}]$ by some ideal.

They define an $N$-dimensional rational representation of $G$ to be a homomorphism
$$\rho : G \longrightarrow GL_N(F)$$
such that the matrix elements $\rho(g)_{h,k}$ for $h,k = 1,\ldots,N$ belong to $A(G)$.

My question

Here, I assume they mean to take $\rho(g)_{h,k}$ as functions on $G$? Is this the same as saying that $\rho$ defines a morphism of affine group schemes?

Wikipedia defines a representation to be rational if $\rho$ defines a rational map of algebraic groups $G \rightarrow GL_N$, so e.g. the morphism need only be defined on a dense open subset.

Are these two definitions equivalent?

linear algebra – How to eliminate « a priori » all vectors in a list of vectors whose scalar product with a given vector is zero without calculating the product

How can one eliminate « a priori » all vectors in a list of vectors whose scalar product with a given vector is zero, without actually calculating the product?

One solution would be to store the components (as rows) in a relational database table and then to discard all vectors (with a SELECT statement) whose components are NULL in the columns where the components of the given vector are not NULL.

However that’s probably overkill.

I was thinking of associating with each array a second array of bits, a sort of mask, so that the vectors’ « masks » can be compared quickly to implement a kind of pre-selection.

Is there a known straightforward solution to this problem?
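A minimal sketch of the bitmask pre-selection idea (Python, with placeholder names; it is only a pre-filter, since vectors whose support does overlap the given vector’s may still have a zero scalar product):

def support_mask(vec):
    # Integer bitmask with bit i set iff component i of vec is nonzero.
    mask = 0
    for i, x in enumerate(vec):
        if x != 0:
            mask |= 1 << i
    return mask

def possibly_nonzero(vectors, given):
    # Discard every vector whose support is disjoint from the given vector's:
    # its scalar product is certainly zero, and only a single AND is needed.
    g = support_mask(given)
    return [v for v in vectors if support_mask(v) & g]

In practice the masks would be computed once and stored alongside the vectors, so each query costs one AND per vector instead of a full dot product; the trick only pays off when the vectors are sparse.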

linear algebra – $A$ strictly upper triangular $n \times n$ over $\mathbb{R}$. Show that $I-A$ is invertible and express the inverse of $I-A$ as a function of $A$.

Question: Let $A$ be a strictly upper triangular $n \times n$ matrix with real entries. Show that $I-A$ is invertible and express the inverse of $I-A$ as a function of $A$.

Attempt (and thoughts): $A$ is strictly upper triangular, so the diagonal is all $0$, everything below the diagonal is $0$, and the only possibly nonzero entries are above the diagonal, say $a_{1,2}$ as the entry in row $1$, column $2$. If we consider $I-A$, then $I-A$ has $1$ in every diagonal entry, only $0$ below the diagonal, and the negative of the corresponding entry of $A$ above the diagonal. So $\det(I-A)=1^n=1$, and $I-A$ is invertible. Consider $X=I+A+A^2+\dots +A^{n-1}$. Then I want to show that $(I-A)X=X(I-A)=I$. I can see why this “might” work, since we would have $I$, then add all the powers of $A$, then subtract all the powers of $A$… but are there any unexpected issues I should be careful of here? Thank you much!
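One step worth making explicit: a strictly upper triangular matrix is nilpotent, since each multiplication by $A$ pushes the nonzero entries one diagonal further from the main diagonal, so $A^{n}=0$. The telescoping sum then closes with no surprises:

$$(I-A)\left(I+A+\cdots+A^{n-1}\right) = I - A^{n} = I = \left(I+A+\cdots+A^{n-1}\right)(I-A).$$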

terminology – What does “linear” in Linear Temporal Logic refer to?

Consider the term linear temporal logic (in the meaning of linear-time temporal logic). In linear temporal logic, what does linear refer to:

  1. to temporal or

  2. to logic?

If I interpret http://en.wikipedia.org/wiki/Linear_logic and http://en.wikipedia.org/wiki/Linear_temporal_logic correctly, LTL is not an instance of linear logic and is not related to it in any way (except that both are, well, logics), is it? So, linear must refer to temporal. Shouldn’t we then write

linear-temporal logic

(i.e., with a hyphen) to avoid misinterpretation?

Showing that the integral defines a linear functional, as follows

Let $C((a,b))$ be the set of all continuous functions on $(a,b)$. Let $J:C((a,b)) \to \Bbb R$ be a functional. Then the integral
\begin{equation*}
J(h) = \int_a^b h(x)\, dx
\end{equation*}

defines a linear functional on $C((a,b))$ for every function $h(x) \in C((a,b))$.

Attempts:

  1. Let $\alpha \in \Bbb R$ be arbitrary. Then,
    \begin{equation*}
    J(\alpha h) = \int_a^b (\alpha h)(x)\, dx = \alpha \int_a^b h(x)\, dx = \alpha J(h).
    \end{equation*}
  2. For any $h_1,h_2 \in C((a,b))$,
    \begin{equation*}
    J(h_1+h_2) = \int_a^b (h_1+h_2)(x)\, dx = \int_a^b \left(h_1(x) + h_2(x)\right) dx = \int_a^b h_1(x)\, dx + \int_a^b h_2(x)\, dx = J(h_1) + J(h_2).
    \end{equation*}
  3. Want to show: $J$ is continuous at $\hat{h} \in C((a,b))$.
    Given any $\varepsilon > 0$, choose $\delta = \min\{1,\frac{\varepsilon}{b-a}\}$ such that for all $h \in C((a,b))$ with $\| h - \hat{h} \| < \delta$,
    \begin{align*}
    |J(h) - J(\hat{h})| &= \left|\int_a^b h(x)\, dx - \int_a^b \hat{h}(x)\, dx\right| \\
    &= \left|\int_a^b \left(h(x) - \hat{h}(x)\right) dx \right| \\
    &\le \int_a^b |h-\hat{h}|\, dx \\
    &\le \int_a^b \max_{a\le x \le b} |h-\hat{h}|\, dx \\
    &= \int_a^b \| h-\hat{h} \|\, dx \\
    &< \varepsilon.
    \end{align*}

Hence, by definition, the above integral defines a linear functional on $C((a,b))$.

My question is: is the third part correct? I am still a little confused about choosing $\delta$: when do we use the minimum, and when can we directly use, say, $\delta = \frac{\varepsilon}{b-a}$?

Thanks in advance.
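A remark on the choice of $\delta$ (standard bookkeeping, not anything specific to this problem): the chain of estimates above never uses $\delta \le 1$, so $\delta = \frac{\varepsilon}{b-a}$ already works on its own, since

$$|J(h) - J(\hat{h})| \le (b-a)\,\| h - \hat{h} \| < (b-a)\,\delta = \varepsilon.$$

Taking a minimum with $1$ is only needed when some intermediate step requires $\| h - \hat{h} \|$ itself to be small, for example to control a product of two differences; that does not happen here.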

linear algebra – Is there an efficient algorithm to project a vector onto the eigenbasis of a symmetric matrix?

Let $H$ be a symmetric matrix over $\mathbb{R}^n$. Given some vector $u$, I would like to express $u$ in the eigenbasis for $H$. Can this be done efficiently, perhaps using some kind of iterative method? I know there are iterative methods for computing the eigenbasis itself, but computing the entire eigenbasis would represent quadratically more data than I actually need ($n^2$ reals rather than only $n$), so I was hoping I might be able to get a speed up by using a more targeted method.

I only have implicit access to $H$ – i.e. I can compute matrix-vector products $Hx$. I would like to avoid having to actually compute $H$ if possible, although I would be happy with anything faster than computing $H$ and then diagonalizing it directly.
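If only the components of $u$ along a few extreme eigenvectors are needed, a Krylov method such as Lanczos does exactly this using matrix-vector products alone. A minimal sketch with SciPy, in which n, k and the body of matvec are placeholders rather than anything from the question:

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 1000
u = np.random.default_rng(0).standard_normal(n)   # vector to expand

diag = np.linspace(1.0, 2.0, n)                   # placeholder spectrum

def matvec(x):
    # Placeholder for the implicit action of the symmetric matrix H;
    # replace this diagonal scaling with the real product H @ x.
    return diag * x

H = LinearOperator((n, n), matvec=matvec, dtype=float)

# Lanczos (via ARPACK) returns the k largest-algebraic eigenpairs using only matvecs.
vals, vecs = eigsh(H, k=10, which="LA")

coords = vecs.T @ u                               # components of u along those eigenvectors

As far as I know, recovering all $n$ coordinates this way is not cheaper than computing the full eigenbasis; the saving comes only when a partial expansion along a few eigenvectors suffices.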

linear algebra – I want to know the relation between a word phrase and a mathematical symbolic phrase

Explanation: some examples.

Example 1:

Speed of car = 60 km/h

Speed of car in 1 hour = 60 km

I know the first one is correct, but is the second one correct or not? If yes, why, and if not, why not?

Another example:

Price of 6 bottles = 6 dollars

Price of 6 bottles = 6 dollars/bottle

What about the second statement in the example above?

I want to know the relation between the units in a word phrase and in a symbolic phrase: when to write only one unit, and when to write the second unit as well.

Another example is mass in chemistry.

Mass of 1 mole of carbon-12 = 12 g

Molar mass of carbon-12 = 12 g/mol

Isn’t molar mass also saying the mass of one mole? Then why the different units?
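For what it is worth, the pattern behind all three examples is that a quantity with a ratio unit is a rate, and multiplying it by an amount of the denominator unit cancels that unit:

$$60\ \frac{\text{km}}{\text{h}} \times 1\ \text{h} = 60\ \text{km}, \qquad 12\ \frac{\text{g}}{\text{mol}} \times 1\ \text{mol} = 12\ \text{g}.$$

So “molar mass of carbon-12 = 12 g/mol” states a rate, while “mass of 1 mole of carbon-12 = 12 g” states the result of applying that rate to one mole; both are correct, and the units differ precisely because one mole has already been multiplied in. By the same rule, “price of 6 bottles” should carry the unit dollars, not dollars/bottle.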