# Why is the square root of an invertible complex symmetric matrix a complex symmetric matrix?

Chapter XI Theorem 3 from here implicitly claims that the square root of an invertible complex symmetric matrix is complex symmetric. Why is this true?

It’s clear that a square root exists, by appealing to the Jordan normal form and the fact that the matrix is invertible. But why should some square root also be symmetric?
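As a sanity check (not a proof), the claim can be tested numerically. The sketch below, assuming NumPy is available, builds a random complex symmetric matrix (which is almost surely invertible and diagonalisable), takes a square root through its eigendecomposition using the principal branch on each eigenvalue, and checks that the result is again complex symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)

# B + B^T is complex symmetric (equal to its transpose, not its conjugate transpose).
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.T

# A random complex symmetric matrix is almost surely diagonalisable,
# so a square root can be formed from the eigendecomposition A = V D V^{-1}.
w, V = np.linalg.eig(A)
S = V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)

assert np.allclose(S @ S, A)  # S is a square root of A
assert np.allclose(S, S.T)    # and S is complex symmetric
```

The symmetry of `S` here is no accident: since each eigenvalue gets the same branch of the square root, `S` is a function of `A` alone, but of course this numerical check says nothing about the non-diagonalisable case.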

Some thoughts: by adapting the proof of the Spectral Theorem to complex symmetric matrices, one can show that “most” complex symmetric matrices are diagonalisable by complex orthogonal matrices. There is an obstruction, though: some nonzero vectors $$v$$ satisfy $$v^T v = 0$$. Call such a vector a null vector. If a matrix has a null eigenvector, then the spectral argument breaks down and the matrix may not be diagonalisable via orthogonal matrices (for example, take $$\left(\begin{matrix}1 + i & 1\\ 1 & 1 - i\end{matrix}\right)$$). Returning to the square root problem, this shows that “most” complex symmetric matrices have a complex symmetric square root. I’m tempted therefore to extend this argument to all invertible complex symmetric matrices using some kind of density argument, but I’m not sure how that would work.
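For concreteness, the $$2 \times 2$$ example above can be checked directly. It has the null eigenvector $$v = (1, -i)^T$$ for the (double) eigenvalue $$1$$, and it is not diagonalisable at all, since $$N = A - I$$ is a nonzero nilpotent. Yet it still has a symmetric square root: because $$N^2 = 0$$, the matrix $$I + N/2$$ squares to $$A$$. A sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1 + 1j, 1], [1, 1 - 1j]])

# Null eigenvector: v = (1, -i) satisfies A v = v and v^T v = 0.
v = np.array([1, -1j])
assert np.allclose(A @ v, v)
assert np.isclose(v @ v, 0)   # v^T v = 0 (the bilinear form, not the Hermitian inner product)

# A has the double eigenvalue 1, but A - I is a nonzero nilpotent,
# so A is a single 2x2 Jordan block and not diagonalisable.
N = A - np.eye(2)
assert np.allclose(N @ N, 0)  # N^2 = 0

# Nevertheless A = (I + N/2)^2, and I + N/2 is complex symmetric.
S = np.eye(2) + N / 2
assert np.allclose(S @ S, A)
assert np.allclose(S, S.T)
```

So at least this particular “bad” matrix does have a complex symmetric square root, even though the orthogonal-diagonalisation argument fails for it.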