reference request – higher-order inner products of orthonormal bases

Let $\pi$ be a probability measure on a space $\mathcal{X}$, and let $\Phi = \{\phi_k\}_{k \geqslant 0}$ be a (possibly complex-valued) orthonormal basis for $L^2(\pi)$, with $\phi_0 \equiv 1$. Let $f \in L^2(\pi)$ be expanded in this basis as $f = \sum_{k \geqslant 0} f_k \phi_k$.

In some calculations, it has become relevant for me to work with quantities (and related ones) of the form

\begin{align}
Q^k = \int \pi(dx) \, |f(x)|^2 \, |\phi_k(x)|^2 \quad \text{for } k \geqslant 1,
\end{align}

and ideally I would like a bound of the form $Q^k \leqslant c^k \sum_{j \geqslant 0} |f_j|^2$ for an explicit sequence $\{c^k\}_{k \geqslant 0}$.

In principle, $Q^k$ is a quadratic form in the $\{f_k\}_{k \geqslant 0}$, so this structure should be useful. However, the nature of this quadratic form is generally somewhat mysterious; it ends up involving quantities like

\begin{align}
Q^k_{ij} &= \int \pi(dx) \, \phi_i(x) \overline{\phi}_j(x) \, |\phi_k(x)|^2 \\
&= \int \pi(dx) \left( \phi_i(x) \overline{\phi}_k(x) \right) \cdot \overline{\left( \phi_j(x) \overline{\phi}_k(x) \right)} \quad \text{for } i, j \geqslant 0,
\end{align}

and these should depend quite strongly on the properties of the basis $\Phi$.
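
For concreteness, expanding $f$ in the basis and interchanging summation and integration (which I will assume is justified here) expresses $Q^k$ as a quadratic form in exactly these quantities:

\begin{align}
Q^k = \sum_{i, j \geqslant 0} f_i \overline{f_j} \, Q^k_{ij}.
\end{align}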

In the end, I suspect that a solution to this problem will only be possible once a basis has been fixed, and that it will then be relatively tractable. In the case where $\Phi$ is a Fourier basis, for example, things are quite nice, and we can take $c^k \equiv 1$. I think it could also be possible in other cases where the products $\phi_a \overline{\phi_b}$ can be written as linear combinations of other $\phi_c$; beyond that, it could be quite tricky.
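
To spell out the Fourier case (a sketch, assuming $\pi$ is the uniform measure on the torus and $\phi_k(x) = e^{2 \pi i k x}$, indexing details aside): there $|\phi_k(x)|^2 \equiv 1$, so

\begin{align}
Q^k = \int \pi(dx) \, |f(x)|^2 = \sum_{j \geqslant 0} |f_j|^2
\end{align}

by Parseval, and the bound indeed holds with $c^k \equiv 1$.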

My question is: are there other orthonormal bases for which bounds of this form are tractable? I would be particularly happy if there were families of classical orthogonal polynomials for which this is possible, but I don't know where to look for such results. Any relevant references would also be gratefully received.

orthogonality – Show that a finite set $B = \left\{1, x, x^2\right\}$ is an orthonormal system with respect to the inner product

Here's my problem:

Show that the finite set $B = \left\{1, x, x^2\right\}$ is an orthonormal system with respect to the inner product $\left\langle f, g \right\rangle = \int_{-1}^{1} f(t) \cdot g(t) \, dt$ for all $f, g \in L^2(-1, 1)$.

And then there is a hint: evaluate $\left\langle 1, x \right\rangle$, $\left\langle 1, x^2 \right\rangle$, and $\left\langle x, x^2 \right\rangle$. As I understand it, these are the indicators of orthogonality (if they are all zero, then the set is orthogonal). My results are, respectively, $2x$, $2x^2$, and $2x^3$.

I don't understand – these are only zero if $x = 0$. Does this mean that the set is not orthogonal – and therefore cannot be orthonormal? How should I proceed with this problem?
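
For reference, here is a sketch of the evaluation the hint asks for; these are definite integrals, so no $x$ should remain in the result:

\begin{align}
\left\langle 1, x \right\rangle &= \int_{-1}^{1} t \, dt = \left[ \frac{t^2}{2} \right]_{-1}^{1} = 0, \\
\left\langle 1, x^2 \right\rangle &= \int_{-1}^{1} t^2 \, dt = \left[ \frac{t^3}{3} \right]_{-1}^{1} = \frac{2}{3}, \\
\left\langle x, x^2 \right\rangle &= \int_{-1}^{1} t^3 \, dt = \left[ \frac{t^4}{4} \right]_{-1}^{1} = 0.
\end{align}

Since $\left\langle 1, x^2 \right\rangle \neq 0$, the set $B$ as stated is in fact not orthogonal with respect to this inner product.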

Math – Find a rotation quaternion from an orthonormal basis?

Given three unit 3D vectors $a$, $b$, $c$ such that:

$a \times b = c$

$b \times c = a$

$c \times a = b$

(that is to say, $a, b, c$ form an orthonormal basis)

How do you compute a unit quaternion $q$ such that the sandwich product (i.e. rotation) by $q$ maps $(1,0,0)$ to $a$, $(0,1,0)$ to $b$, and $(0,0,1)$ to $c$?
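
Not an authoritative answer, but here is a minimal sketch of the standard matrix-to-quaternion conversion, assuming the convention $q = (w, x, y, z)$ acting by the sandwich product $v \mapsto q v q^{-1}$, so that the sought rotation matrix is simply $R = [a \mid b \mid c]$ with $a, b, c$ as columns (the function name and branch structure are my own):

```python
import numpy as np

def quaternion_from_basis(a, b, c):
    """Quaternion (w, x, y, z) whose rotation sends the standard basis
    e1, e2, e3 to a, b, c. Assumes a, b, c form a right-handed
    orthonormal triple, so R = [a | b | c] is a rotation matrix."""
    R = np.column_stack([a, b, c])
    t = np.trace(R)
    if t > 0:
        s = 2.0 * np.sqrt(1.0 + t)
        w = 0.25 * s
        x = (R[2, 1] - R[1, 2]) / s
        y = (R[0, 2] - R[2, 0]) / s
        z = (R[1, 0] - R[0, 1]) / s
    else:
        # Pick the largest diagonal element for numerical stability.
        i = int(np.argmax(np.diag(R)))
        j, k = (i + 1) % 3, (i + 2) % 3
        s = 2.0 * np.sqrt(1.0 + R[i, i] - R[j, j] - R[k, k])
        q = np.empty(3)
        q[i] = 0.25 * s
        q[j] = (R[j, i] + R[i, j]) / s
        q[k] = (R[k, i] + R[i, k]) / s
        w = (R[k, j] - R[j, k]) / s
        x, y, z = q
    return np.array([w, x, y, z])

# Sanity check: the identity basis should give the identity quaternion.
# quaternion_from_basis([1, 0, 0], [0, 1, 0], [0, 0, 1]) -> [1., 0., 0., 0.]
```

The branch on the trace is the usual numerical-stability trick: when $1 + \operatorname{tr} R$ is close to zero, the formula for $w$ degenerates, so one solves for the largest quaternion component instead.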

functional analysis – Orthonormal basis in the spectral theorem for compact operators on Hilbert spaces

I have started studying compact operators on Hilbert spaces. I read the proof of the spectral theorem (taken from page 12 of https://www.iith.ac.in/~rameshg/NITKworkshop.pdf) and wondered whether it is possible to go further by proving that the system of orthonormal vectors that appears is, in fact, an orthonormal basis.

Could someone help me? I do not see how.

linear algebra – Decomposition of a symmetric matrix: an orthonormal basis of $V$ is $\{XD^{-\frac{1}{2}}e_1, \ldots, XD^{-\frac{1}{2}}e_n\}$?

I came across this question:

Let $V = \mathbb{R}^n$ be a vector space and let $Q$ be a symmetric positive definite matrix. Its decomposition $Q = XDX^T$ gives an orthonormal basis for $V$ consisting of the rescaled columns of $X$, namely $\{XD^{-\frac{1}{2}}e_1, \ldots, XD^{-\frac{1}{2}}e_n\}$.

My attempt:
I know that if $Q$ is symmetric then it is diagonalizable, so there is a basis consisting of eigenvectors of $Q$; but I do not know how to work with $\big\{XD^{-\frac{1}{2}}e_1, \ldots, XD^{-\frac{1}{2}}e_n\big\}$, which is very general. Any help with the proof would be appreciated; thanks in advance.
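
Not a proof, but a quick numerical check can make the claim concrete. A minimal sketch in Python, assuming (my reading of the question) that orthonormality is meant with respect to the inner product $\langle u, v \rangle_Q = u^T Q v$ induced by $Q$:

```python
import numpy as np

# Check that the columns of X D^{-1/2} are orthonormal for <u, v>_Q = u^T Q v.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = A @ A.T + 4 * np.eye(4)          # symmetric positive definite

eigvals, X = np.linalg.eigh(Q)        # Q = X D X^T with X orthogonal
D_inv_sqrt = np.diag(eigvals ** -0.5)
B = X @ D_inv_sqrt                    # candidate basis vectors as columns

# B^T Q B should be the identity: <b_i, b_j>_Q = delta_ij.
print(np.allclose(B.T @ Q @ B, np.eye(4)))   # True
```

Symbolically, the same one-line computation $(XD^{-1/2})^T Q \, (XD^{-1/2}) = D^{-1/2} (X^T X) D (X^T X) D^{-1/2} = I$, using $X^T X = I$, is essentially the proof.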

fa.functional analysis – Reproducing kernel and orthonormal basis of a multidimensional Sobolev space of different orders

Let $\Omega$ be an open subset of $\mathbb{R}^d$. Under regularity conditions, we know that the order-$s$ Sobolev space $H^s(\Omega)$ with $s \geq d/2$ is a reproducing kernel Hilbert space. On the other hand, $H^s(\Omega)$ with $s < d/2$ is a Hilbert space without the reproducing property.

My question concerns the construction of an orthonormal basis of $H^s(\Omega)$.

For $s \geq d/2$, the eigenfunctions of the reproducing kernel give us an orthonormal basis which, up to a rescaling of magnitudes, is also an orthonormal basis of $L^2(\Omega)$ (which can equivalently be written as $H^0(\Omega)$).
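
To make the rescaling explicit (a standard Mercer-type computation, assuming the reproducing kernel $K$ of $H^s(\Omega)$ admits an eigendecomposition in $L^2(\Omega)$): if

\begin{align}
K(x, y) = \sum_{k \geq 0} \lambda_k \, e_k(x) \, e_k(y), \qquad \lambda_k > 0, \quad \{e_k\} \text{ orthonormal in } L^2(\Omega),
\end{align}

then $\{\sqrt{\lambda_k} \, e_k\}_{k \geq 0}$ is an orthonormal basis of $H^s(\Omega)$, so the $H^s$ basis and the $L^2$ basis differ exactly by the scalings $\sqrt{\lambda_k}$.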

For $s < d/2$, how can we construct an orthonormal basis in the same way, given that there is no longer a reproducing kernel? Moreover, for $0 \leq s_1, s_2 < d/2$, is it possible to align the orthonormal bases of $H^{s_1}(\Omega)$ and $H^{s_2}(\Omega)$ so that they differ only up to a rescaling of magnitudes?