**The context**

As a way to understand the growth of structure in the universe, I am interested in characterizing the curvature of random fields such as this one:

To do this, I start from the joint PDF of the eigenvalues of the second derivative of the field (a measure of the local curvature). For a Gaussian random field, this PDF reads

```
Pdf1 = 2 Sqrt[2/\[Pi]] (x1 - x2) Exp[1/2 (-x1 (3 x1 - x2) - x2 (3 x2 - x1))]
```

and looks like this:

```
rg = {{x1, -Infinity, Infinity}, {x2, -Infinity, x1}};  (* integration ranges *)
rgn = {{x1, -2, 2}, {x2, -2, 2}};  (* finite ranges for plotting; adjust as needed *)
ContourPlot[Pdf1, Sequence @@ rgn // Evaluate, ImageSize -> Small]
```
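(As a quick sanity check, not part of the original computation: the PDF should integrate to 1 over the domain x2 < x1.)

```
Integrate[Pdf1, Sequence @@ rg // Evaluate]  (* should return 1 *)
```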

whereas when the field becomes non-Gaussian, the PDF can be very different:

My goal is to use orthogonal polynomials to represent this non-Gaussian PDF.

**Attempt**

I have defined a scalar product

```
Clear[int]; int[a_, b_] :=
 Integrate[a b Pdf1, Sequence @@ rg // Evaluate]
```

and a numerical-integration version of it

```
Clear[nint]; nint[a_, b_] :=
 NIntegrate[a b Pdf1, Sequence @@ rg // Evaluate]
```
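(A higher-working-precision variant could be defined along the same lines; the name `nintWP` and the option values below are only illustrative.)

```
Clear[nintWP]; nintWP[a_, b_] :=
 NIntegrate[a b Pdf1, Sequence @@ rg // Evaluate,
  WorkingPrecision -> 30, PrecisionGoal -> 15]
```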

I define my basis of 2D monomials

```
p = 2; pol0 =
 Table[Table[x1^i x2^(p1 - i), {i, 0, p1}] // Flatten // Union,
   {p1, 0, p}] // Flatten
```

{1, x1, x2, x1^2, x1 x2, x2^2}

```
pol = Orthogonalize[pol0, int[#1, #2] &]
```
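(As a quick check, not in the original code: the Gram matrix of these polynomials under the scalar product should come out as the identity.)

```
Chop[Table[nint[a, b], {a, pol}, {b, pol}], 10^-6]
(* should be the 6 x 6 identity matrix, up to integration error *)
```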

They look like this:

```
Map[ContourPlot[# Pdf1 , Sequence @@ rgn // Evaluate,
PlotPoints -> 15, PlotRange -> All] &, pol]
```

If I do the same thing numerically

```
pol = Orthogonalize[pol0, nint[#1, #2] &, Method -> "Reorthogonalization"];
```

I get the same answer.
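(To make the comparison explicit, something like the following can be used; `polSym` and `polNum` are throwaway names so that the two results do not overwrite each other.)

```
polSym = Orthogonalize[pol0, int[#1, #2] &];
polNum = Orthogonalize[pol0, nint[#1, #2] &, Method -> "Reorthogonalization"];
Chop[Expand[N[polSym] - polNum], 10^-6]  (* a list of zeros if both agree *)
```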

**But**

If I try to find **higher order** polynomials numerically

```
p = 6; pol0 =
 Table[Table[x1^i x2^(p1 - i), {i, 0, p1}] // Flatten // Union,
   {p1, 0, p}] // Flatten;
pol = Orthogonalize[pol0, nint[#1, #2] &, Method -> "Reorthogonalization"];
```

I get

(* {1., 1.81473 x1 - 0.804133, -1.53835 x1 + 2.37903 x2 + 1.73585,
   2.33148 x1^2 - 2.23663 x1 + 0.170409 x2 - 0.0991482,
   -2.84065 x1^2 + 4.36636 x1 x2 + 4.97902 x...,
   1.84722 x1^2 - 5.47403 x1 x2 - 4.89841 x1 + 4.11307 x2^2 + 6.90647 x2 + 2.25076,
   0, 3.15107 x1^2 x2 + 1.83935 x1^2 - 3.44839 x1 x2 - 3.23326 x1 + 0.212753 x2... + 0.676981,
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} *)

In other words, the *loss of precision* in the orthogonalization leads to *zero* polynomials of higher order.
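I suspect the underlying problem is that the Gram matrix of the raw monomials is very ill-conditioned; a rough check (my addition, reusing `nint` from above) would be:

```
gram0 = Table[nint[a, b], {a, pol0}, {b, pol0}];  (* Gram matrix of the monomials *)
sv = SingularValueList[gram0, Tolerance -> 0];
Max[sv]/Min[sv]  (* if this approaches 10^16, machine precision cannot resolve the last polynomials *)
```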

I have also tried Gram-Schmidt by hand (taken from somewhere on SE)

```
(* projection of v1 onto v2 w.r.t. the inner product ip (helper assumed by gs) *)
projection[v1_, v2_, ip_] := ip[v1, v2]/ip[v2, v2] v2;
gs[vecs_, ip_] := Module[{ovecs = vecs},
  Do[ovecs[[i]] -= projection[ovecs[[i]], ovecs[[j]], ip],
   {i, 2, Length[vecs]}, {j, 1, i - 1}];
  ovecs];
pol1 = gs[pol0, Function[
    NIntegrate[## Pdf1, Sequence @@ rg // Evaluate]]];
```

but that gives the same *loss of precision*.

I have also tried the eigenvectors of the matrix of scalar products

```
mat = ParallelTable[int[i, j], {i, pol0}, {j, pol0}];
eigs = Eigensystem[mat // N];
```

But the orthogonal polynomials are not of increasing order:

```
Map[ContourPlot[# Pdf1, Sequence @@ rgn // Evaluate,
   PlotPoints -> 15, PlotRange -> All] &,
 pol = eigs[[2]].pol0/Sqrt[eigs[[1]]] // Chop]
```

Note that, although orthogonal, they are not orthogonal "in the same direction": unlike the Gram-Schmidt construction, each of these polynomials mixes monomials of all orders.
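(Since `mat` is exact, a partial workaround is to carry out the diagonalization at higher precision rather than at machine precision; the 40 digits and the names `eigsHP`, `polHP` below are only illustrative.)

```
eigsHP = Eigensystem[N[mat, 40]];  (* 40-digit precision instead of machine precision *)
polHP = eigsHP[[2]].pol0/Sqrt[eigsHP[[1]]] // Chop;
```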

**Question**

How can I calculate orthogonal polynomials of higher order precisely?

**Annex question**

One option would be to stick to the symbolic evaluation of the scalar product, but it seems to take forever for higher-order polynomials.

Is it possible to tell `Orthogonalize` not to normalize the polynomials, since the normalization is what seems to take the most time symbolically?