## nonlinear optimization – Confusion about the optimal parameter value in nonlinear least squares

The normal equations for nonlinear least squares are written as

$$\boldsymbol{\Delta} \boldsymbol{\beta}=\left(\mathbf{J}^{\mathrm{T}} \mathbf{J}\right)^{-1}\mathbf{J}^{\mathrm{T}} \boldsymbol{\Delta} \mathbf{y}$$

where

$$\beta_{j} \approx \beta_{j}^{k+1}=\beta_{j}^{k}+\Delta \beta_{j}$$

To perform the nonlinear least-squares fit I have to choose starting values for $$\beta$$.

However, once I have done that and computed $$\Delta\beta$$, how do I obtain the optimal value of the parameter?

I do not fully understand the Gauss-Newton method. Could anyone please explain the procedure for finding the optimal parameter?
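As I currently understand it, the iteration simply repeats the update $$\beta^{k+1}=\beta^{k}+\Delta\beta$$ until $$\Delta\beta$$ becomes negligible, and the final $$\beta$$ is taken as the optimal parameter. Here is a minimal sketch of that understanding; the exponential model and all the numbers are my own toy assumptions, not from any particular source:

```python
import numpy as np

def gauss_newton(f, jac, x, y, beta0, n_iter=20):
    """Gauss-Newton sketch: repeatedly solve the normal equations
    (J^T J) dbeta = J^T (y - f(x, beta)) and update beta <- beta + dbeta."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, beta)                        # residual (the "Delta y")
        J = jac(x, beta)                          # Jacobian of f w.r.t. beta
        dbeta = np.linalg.solve(J.T @ J, J.T @ r)
        beta = beta + dbeta                       # beta^{k+1} = beta^k + dbeta
    return beta

# Toy model (an assumption for illustration): y = b1 * exp(b2 * x)
f = lambda x, b: b[0] * np.exp(b[1] * x)
jac = lambda x, b: np.column_stack([np.exp(b[1] * x),
                                    b[0] * x * np.exp(b[1] * x)])

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)                         # noiseless data with b = (2, 1.5)
beta = gauss_newton(f, jac, x, y, beta0=[1.0, 1.0])
```

In practice one would also add a stopping criterion on the size of `dbeta` rather than a fixed iteration count.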

## Background

I have a set $$S$$ (that is possibly infinite) and a correspondence between functions $$c : S^3 \to \mathbb{R}$$ (I will write $$c(i,j,k)$$ as $$c_{ijk}$$) and matrices $$M$$ with rows indexed by $$(i,j,k)\in S^3$$ and columns indexed by $$(m,n)\in S^2$$, defined as follows:

$$M_{(i,j,k),(m,n)}=\delta_{im}c_{njk}+\delta_{jm}c_{nik}-\delta_{kn}c_{ijm}$$

where $$\delta_{ij}$$ is the Kronecker delta.

## Question

I want to determine some necessary and sufficient conditions on $$c$$ so that the above matrix has trivial null space.

Any references, ideas, techniques, etc. would be appreciated. Of course, I’m not looking for a complete solution to the problem, I just don’t know what sort of methods might even be used to attack this (despite it appearing at first to be a simple linear algebra problem).

## Progress

For finite $$S$$ with cardinality $$N$$, this is an $$N^3\times N^2$$ matrix, so it should be true that most choices of $$c$$ yield a matrix with trivial null space. However, I’m still not sure what “most” would even mean in this context.

I’ve also proven the result for $$c_{ijk}=\delta_{ijk}$$ (that is, $$c_{iii}=1$$ and $$c_{ijk}=0$$ otherwise), but otherwise have no further results.
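For finite $$S$$ the question can at least be explored numerically. Below is a sketch that builds $$M$$ for $$S=\{0,\dots,N-1\}$$, with the third term taken as $$\delta_{kn}c_{ijm}$$ (the undefined index $$l$$ in the definition looks like a typo for $$k$$; that reading is my assumption), and checks the $$c_{ijk}=\delta_{ijk}$$ case:

```python
import numpy as np
from itertools import product

def build_M(c):
    """Rows indexed by (i, j, k), columns by (m, n); the third term is
    read as delta_{kn} c_{ijm} (an assumption about the intended index)."""
    N = c.shape[0]
    M = np.zeros((N**3, N**2))
    for row, (i, j, k) in enumerate(product(range(N), repeat=3)):
        for col, (m, n) in enumerate(product(range(N), repeat=2)):
            M[row, col] = ((i == m) * c[n, j, k]
                           + (j == m) * c[n, i, k]
                           - (k == n) * c[i, j, m])
    return M

# The c_{ijk} = delta_{ijk} case, for which trivial null space is proven
N = 3
c = np.zeros((N, N, N))
for i in range(N):
    c[i, i, i] = 1.0
rank = np.linalg.matrix_rank(build_M(c))  # full column rank N**2
# means trivial null space
```

Sampling random `c` the same way would give a quick empirical feel for what “most” might mean here.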

## linear algebra – Visualizing the geometric interpretation of the powers of a matrix with complex eigenvalues

I can understand the geometric meaning of $$A^n$$ (here $$A \in \mathbb{R}^{n \times n}$$) when the eigenvalues of $$A$$ are all real: you scale each eigenvector $$v_i$$ along its direction by $$\lambda_i$$.

But what happens if the eigenvalues are complex? In that case the eigenvectors have complex entries too. I can sort of guess that $$\lambda_i$$ then contributes a rotation. But how do I define the direction of an eigenvector with complex entries?
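The simplest case to experiment with is a $$2\times 2$$ rotation-scaling matrix, where the modulus of the complex eigenvalue is the scaling and its argument is the rotation angle. A small numerical sketch (the values of $$r$$ and $$\theta$$ are arbitrary choices):

```python
import numpy as np

r, theta = 1.1, np.pi / 6
# Real 2x2 matrix with complex eigenvalues r * exp(+/- i*theta):
# a rotation by theta combined with a uniform scaling by r
A = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

lam = np.linalg.eigvals(A)
print(np.abs(lam), np.angle(lam))  # moduli give the scaling, arguments the angle

# A^n scales by r^n and rotates by n*theta; after 12 steps
# (12 * 30 degrees = 360 degrees) the direction returns to the start
x = np.array([1.0, 0.0])
x12 = np.linalg.matrix_power(A, 12) @ x
```

Iterating `A` on a point traces out a spiral: the trajectory rotates by $$\theta$$ and grows by the factor $$r$$ at each step.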

## linear algebra – Using dimension formula to prove subspace dimension formula

I’ve always suspected the formulas $$\dim(F+G) = \dim F + \dim G - \dim(F\cap G)$$ and $$\dim V = \dim \mathrm{Ker}\, T + \dim \mathrm{Im}\, T$$ were related somehow, so I tried proving the former using the latter.

It is easy to see that the direct product $$F\times G$$ has dimension $$\dim F + \dim G$$. Define $$\pi: F\times G \twoheadrightarrow F+G$$ by $$\pi(f,g) = f+g$$; it is clearly surjective. I claim that the kernel of $$\pi$$ is the subspace $$H$$ of $$(F\cap G)^2$$ consisting of elements of the form $$(f, -f)$$, and is therefore of dimension equal to that of $$F\cap G$$. The former formula then follows by applying the dimension formula to $$\pi$$.

Is this correct?
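The argument can also be sanity-checked numerically: with bases as columns of $$B_F$$ and $$B_G$$, the map $$\pi$$ corresponds to the block matrix $$[B_F \mid B_G]$$, whose rank is $$\dim(F+G)$$ and whose nullity is $$\dim\ker\pi$$. A small check with an illustrative pair of subspaces of $$\mathbb{R}^4$$ (my own example):

```python
import numpy as np

# F = span{e1, e2}, G = span{e2, e3} in R^4, so F ∩ G = span{e2}
BF = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
BG = np.array([[0, 0], [1, 0], [0, 1], [0, 0]], dtype=float)

# pi(f, g) = f + g corresponds to the block matrix [BF | BG]
P = np.hstack([BF, BG])
dim_sum = np.linalg.matrix_rank(P)   # dim(F + G) = 3
nullity = P.shape[1] - dim_sum       # dim ker(pi) = 1 = dim(F ∩ G)
```

Here $$\dim F + \dim G - \dim(F+G) = 2 + 2 - 3 = 1$$, matching the nullity, as the claimed kernel description predicts.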

## linear algebra – Demonstrating that the sets S1 and S2 are bases of R3, and finding their change-of-basis matrix

Could someone explain and show me how you would demonstrate that the sets S1 = {(1,1,1), (2,4,3), (-3,2,3)} and S2 = {(1,2,1), (-1,-1,0), (2,9,8)} are bases of $$\mathbb{R}^3$$?

Secondly, how would you determine the change-of-basis matrix from S1 to S2?

Thank you very much.
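Both parts reduce to matrix computations: three vectors form a basis of $$\mathbb{R}^3$$ exactly when the matrix having them as columns is invertible, and one common convention gives the change-of-basis matrix as $$P = B_2^{-1}B_1$$. A numerical sketch (note that some texts use the opposite convention, calling $$B_1^{-1}B_2$$ the matrix from S1 to S2):

```python
import numpy as np

# S1 and S2 vectors as the columns of B1 and B2
B1 = np.array([[1, 2, -3],
               [1, 4,  2],
               [1, 3,  3]], dtype=float)
B2 = np.array([[1, -1, 2],
               [2, -1, 9],
               [1,  0, 8]], dtype=float)

# Three vectors form a basis of R^3 iff their matrix has nonzero determinant
print(np.linalg.det(B1), np.linalg.det(B2))  # both nonzero here

# Change-of-basis matrix P with [v]_{S2} = P [v]_{S1}, i.e. P = B2^{-1} B1
P = np.linalg.solve(B2, B1)
```

By hand, the same determinants can be computed by cofactor expansion, and $$P$$ by solving $$B_2 P = B_1$$ column by column.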

## Operators "building" linearly independent sets

Let $$E$$ be a separable Banach space and let $$T\in L(E,E)$$.

Is there a condition on $$T$$ ensuring that:
$$\{x_n\}_{n=1}^N\subseteq E \text{ linearly independent} \;\Rightarrow\; \{T(x_n)\}_{n=1}^N\cup \{x_n\}_{n=1}^N \text{ linearly independent in } E\,?$$

Is $$T$$ being chaotic or mixing enough for this?

## A question about functional analysis: the linear functional is surjective

The question says:

Let $$\varphi: X \rightarrow \mathbb{C}$$ be linear. If $$\varphi$$ is not identically zero, then $$\varphi$$ is surjective.

I have no idea how to do this.
The only thing I know is that, since $$\varphi$$ is not identically zero, there exists $$\xi \in X$$ such that $$\varphi(\xi) = a + ib \neq 0$$.
But I can’t see how this implies that for every $$a+ib \in \mathbb{C}$$ there exists $$x \in X$$ satisfying $$\varphi(x) = a + ib$$.
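For what it's worth, the standard missing step here is a one-line scaling argument using linearity (a sketch, writing $$c = \varphi(\xi)$$ for the nonzero value already found):

```latex
% Given \varphi(\xi) = c \neq 0 and any target w \in \mathbb{C},
% linearity of \varphi lets us scale \xi to hit w:
\varphi\!\left(\frac{w}{c}\,\xi\right) \;=\; \frac{w}{c}\,\varphi(\xi) \;=\; \frac{w}{c}\,c \;=\; w .
```

So a single nonzero value of a linear functional already forces every complex number to be attained.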

## nonlinear – System of Nonlinear Equations

I have a system of 7 equations in 7 unknowns with 11 parameters. I wish to get closed-form solutions for each of the 7 unknowns. However, the system is nonlinear in the parameters, which makes the solution even more complicated. Is there any way to solve this? Or should I attempt to linearize the system of equations? If linearizing is the option, how should I do it in Mathematica?
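Closed forms for a 7-equation nonlinear system often simply do not exist; in Mathematica the natural tools to try are `Solve` and `NSolve`, with numerical root-finding as the fallback. As a language-agnostic illustration of that fallback, here is a plain Newton iteration on a hypothetical 2-equation stand-in (the actual 7 equations are not given in the question, so the system below is invented):

```python
import numpy as np

# Hypothetical stand-in system; a and b play the role of parameters
def F(v, a=2.0, b=1.0):
    x, y = v
    return np.array([x**2 + y**2 - a,   # f1(x, y; a) = 0
                     x - y - b])        # f2(x, y; b) = 0

def J(v):
    x, y = v
    return np.array([[2 * x, 2 * y],    # Jacobian of F
                     [1.0,  -1.0]])

v = np.array([1.0, 0.0])                # starting guess
for _ in range(30):                     # Newton: v <- v - J(v)^{-1} F(v)
    v = v - np.linalg.solve(J(v), F(v))
```

Note that the solution found depends on the starting guess, which is the usual price of giving up on closed forms.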

## linear algebra – Show T is an isomorphism

Define $$T:V\to W$$ by $$T(a_1 + a_2 t + a_3 t^2 + a_4 t^3) = \begin{pmatrix} \frac{1}{\sqrt{1}}a_1 & \frac{1}{\sqrt{2}}a_2 \\ \frac{1}{\sqrt{4}}a_4 & \frac{1}{\sqrt{3}}a_3 \end{pmatrix}$$. Show that $$T$$ is an isomorphism such that for any $$p(t), q(t) \in V$$, $$\langle Tp(t), Tq(t)\rangle_W = \langle p(t), q(t)\rangle_V$$, where $$\langle\cdot,\cdot\rangle_W$$ is the Frobenius inner product on $$W$$, i.e. $$\langle A, B\rangle_W = \mathrm{tr}(A^T B)$$.

This is a homework problem, and I know that a linear transformation between finite-dimensional spaces of the same dimension is an isomorphism if its kernel contains only the zero vector. I’m not entirely sure how to proceed with this specific problem.

## linear algebra – Computing the exponential of a $2 \times 2$ matrix using trace $0$ matrices

It is an easily proved fact that for a $$2\times 2$$ traceless matrix $$A$$,

$$e^A = \cos\left(\sqrt{\det(A)}\right)I + \frac{\sin\left(\sqrt{\det(A)}\right)}{\sqrt{\det(A)}}A$$

Problem 2.7 of *Lie Groups, Lie Algebras, and Representations* by Brian Hall asks to use this fact to compute $$\exp(X)$$, where

$$X = \begin{pmatrix} 4 & 3\\ -1 & 2 \end{pmatrix}$$

In other words, I have to write $$X$$ in terms of traceless matrices, and employ the above fact. My question is: is there a systematic way to do this?

My idea to solve this problem is to write $$X = X_1 + X_2$$, where $$X_1$$ is traceless, $$X_2$$ is diagonal or nilpotent, and $$[X_1, X_2] = 0$$, and then compute the exponential using $$e^{X_1 + X_2} = e^{X_1}e^{X_2}$$. For example, I tried the most obvious thing:
$$X = \begin{pmatrix} -2 & 3\\ -1 & 2 \end{pmatrix} + \begin{pmatrix} 6 & 0\\ 0 & 0 \end{pmatrix},$$

but the two matrices above do not commute.
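One split that always works is $$X = \frac{\operatorname{tr}X}{2}I + X_0$$ with $$X_0$$ traceless: a scalar multiple of $$I$$ commutes with everything, so $$e^X = e^{\operatorname{tr}X/2}\,e^{X_0}$$ and the traceless formula applies to $$X_0$$. A numerical sketch of that reading of "systematic" (checked against diagonalization):

```python
import numpy as np

X = np.array([[4.0, 3.0],
              [-1.0, 2.0]])

# Systematic decomposition: X = (tr X / 2) I + X0 with X0 traceless.
# (tr X / 2) I commutes with everything, so e^X = e^{tr X / 2} * e^{X0}.
s = np.trace(X) / 2.0                   # here s = 3
X0 = X - s * np.eye(2)                  # [[1, 3], [-1, -1]], det(X0) = 2
d = np.emath.sqrt(np.linalg.det(X0))    # complex-safe sqrt(det X0)
expX = np.exp(s) * (np.cos(d) * np.eye(2) + (np.sin(d) / d) * X0)
```

For this $$X$$ one gets $$e^X = e^{3}\bigl(\cos(\sqrt{2})\,I + \tfrac{\sin(\sqrt{2})}{\sqrt{2}}\,X_0\bigr)$$; when $$\det X_0 < 0$$ the complex square root turns the sine and cosine into their hyperbolic counterparts.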