pr.probability – Change variables in Gaussian integral over subspace $S$

I have been thinking about a problem and I have an intuition about it, but I don't seem to know how to address it properly mathematically, so I'm sharing it here hoping to get help. Suppose I have two $n\times n$ real matrices $C$ and $M$ and consider the Gaussian integral:
$$I = N\int e^{-\frac{1}{2}\langle x, C^{-1} x\rangle} e^{\langle x, M x\rangle}\,dx$$
where $N$ is a normalizing constant and I'm writing:
$$\langle x, A x \rangle = \sum_{i,j}x_{i}A_{ij}x_{j}$$
for the inner product of $x$ and $Ax$ on $\mathbb{R}^{n}$. Here $C$ is the covariance of the Gaussian measure; moreover, suppose $M$ is not invertible and has $1 \le k < n$ linearly independent eigenvectors associated to the eigenvalue $\lambda = 0$. All other eigenvectors of $M$ are also linearly independent, but associated to distinct nonzero eigenvalues.

This is my problem. I'd like to know how the formula for the Gaussian integral $I$ changes if I were to integrate over the subspace $S$ spanned by the eigenvectors $v_{1},\dots,v_{k}$ associated to $\lambda = 0$. Intuitively, this integral wouldn't have the $e^{\langle x, Mx \rangle}$ factor, because $M\equiv 0$ on this subspace. In addition, since $S$ is a $k$-dimensional subspace, I'd expect the integral to become some sort of Gaussian integral over a new variable $y$ with $k$ entries.

I would like to know if my intuition is correct and if it is possible to explicitly write this new integral over $S$, which I was not able to do by myself. Thanks!
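To make the intuition concrete, here is a minimal numerical sketch (plain NumPy; the matrices, sizes, and variable names are my own choices for illustration). It builds an $M$ with a $k$-dimensional null space $S$, checks that the $e^{\langle x, Mx\rangle}$ factor is trivial on $S$, and evaluates the restricted integral as a $k$-dimensional Gaussian integral with precision matrix $V^{T}C^{-1}V$, where the columns of $V$ are an orthonormal basis of $S$ (so the change of variables $x = Vy$ has unit Jacobian):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2

# A random symmetric positive-definite covariance C.
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)

# A matrix M whose null space S has dimension k:
# M = W diag(0, ..., 0, nonzero eigenvalues) W^{-1}.
W = rng.standard_normal((n, n))
eigs = np.concatenate([np.zeros(k), rng.uniform(1.0, 2.0, n - k)])
M = W @ np.diag(eigs) @ np.linalg.inv(W)

# Orthonormal basis V of S (QR of the k null eigenvectors).
V, _ = np.linalg.qr(W[:, :k])

# On S the factor exp(<x, M x>) is identically 1, since M x = 0 there:
assert np.allclose(M @ V, 0.0, atol=1e-8)

# Substituting x = V y (y in R^k) leaves a k-dimensional Gaussian
# integral with precision matrix V^T C^{-1} V:
P = V.T @ np.linalg.solve(C, V)
I_S = (2 * np.pi) ** (k / 2) / np.sqrt(np.linalg.det(P))
print(I_S)  # value of the restricted integral, without the constant N
```

So, at least in this finite-dimensional sketch, the intuition seems right: the restricted integral is an ordinary $k$-dimensional Gaussian integral in $y$, with the covariance replaced by the compression of $C^{-1}$ to $S$.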

pr.probability – Finding a function that converts distribution A into distribution B

Variable $x$ is from distribution $p(x)$.

And variable $y$ is from distribution $q(y)$.

The objective is to find a function $f$ such that $f(y)=x$.

Given a set of samples of $x$ and a sample $y$, how can I find an $f$ that converts $y$ to the domain of $x$?

Or, without having an explicit function $f$, can I find the result of $f(y)$, which could be interpreted as mapping a value of $y$ into the distribution $p(x)$?
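In one dimension the standard answer is the probability integral transform $f = F_p^{-1} \circ F_q$ (the monotone, optimal-transport map when both CDFs are continuous), and with only samples one can use its empirical version, known as quantile mapping. A sketch (NumPy; the distributions and names below are my own illustrative choices):

```python
import numpy as np

def quantile_map(y_values, x_samples, y_samples):
    """Map y into the domain of x via the empirical probability
    integral transform f = F_x^{-1} . F_y."""
    y_values = np.atleast_1d(y_values)
    # Empirical CDF of y evaluated at y_values.
    u = np.searchsorted(np.sort(y_samples), y_values, side="right") / len(y_samples)
    u = np.clip(u, 0.0, 1.0)
    # Empirical quantile function of x.
    return np.quantile(x_samples, u)

rng = np.random.default_rng(1)
x_samples = rng.normal(0.0, 1.0, 100_000)      # x ~ p = N(0, 1)
y_samples = rng.exponential(1.0, 100_000)      # y ~ q = Exp(1)

# The Exp(1) median log(2) should map (approximately) to the N(0,1) median 0.
mapped = float(quantile_map(np.log(2.0), x_samples, y_samples)[0])
print(mapped)
```

Beyond one dimension there is no canonical monotone map, and one has to pick among transport maps (e.g. the Brenier/optimal-transport map), which is a genuinely harder problem.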

pr.probability – Contiguity of uniform random regular graphs and uniform random regular graphs which have a perfect matching

Let us consider $\mathcal{G}_{n,d}$ as the uniform probability space of $d$-regular graphs
on the $n$ vertices $\{1, \ldots, n\}$ (where $dn$ is even). We say that an event $H_{n}$ occurs a.a.s. (asymptotically almost surely) if $\mathbf{P}_{\mathcal{G}}(H_{n}) \to 1$ as $n \to \infty$.

Also, suppose $(\mathcal{G}_{n})_{n \ge 1}$ and $(\hat{\mathcal{G}}_{n})_{n \ge 1}$ are two sequences of probability spaces such that $\mathcal{G}_{n}$ and $\hat{\mathcal{G}}_{n}$ differ only in their probability measures. We say that these sequences are contiguous if a sequence of events $A_{n}$ is a.a.s. true in $\mathcal{G}_{n}$ if and only if it is a.a.s. true in $\hat{\mathcal{G}}_{n}$, in which case we write
$$\mathcal{G}_{n} \approx \hat{\mathcal{G}}_{n}.$$

Theorem (Bollobás). For any fixed $d \geq 3$, $G_{n} \in \mathcal{G}_{n,d}$ a.a.s. has a perfect matching.

Using $\mathcal{G}^p_{n,d}$ to denote the uniform probability space of $d$-regular graphs on the $n$ vertices $\{1, \ldots, n\}$ which have a perfect matching, can one conclude from the above theorem that $\mathcal{G}_{n,d} \approx \mathcal{G}^p_{n,d}$?
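This is not evidence either way about contiguity, but the theorem itself is easy to probe empirically. The sketch below (assuming `networkx` is available; the parameters $d=3$, $n=20$ and the trial count are my own choices) estimates the fraction of uniform random $d$-regular graphs that have a perfect matching:

```python
import networkx as nx

def has_perfect_matching(G):
    # A matching is perfect iff it covers all vertices.
    M = nx.max_weight_matching(G, maxcardinality=True)
    return 2 * len(M) == G.number_of_nodes()

d, n, trials = 3, 20, 200
hits = sum(
    has_perfect_matching(nx.random_regular_graph(d, n, seed=s))
    for s in range(trials)
)
print(hits / trials)  # close to 1, consistent with the a.a.s. statement
```

Of course, $\mathbf{P}(H_n) \to 1$ alone only says the two spaces agree up to an $o(1)$ total-variation error from conditioning; whether that suffices for the formal definition of contiguity above is exactly the question.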

pr.probability – Bounds for the beta CDF

This question is closely related to a previous question that I asked here:
An inequality involving the beta distribution

Let $a,b$ be strictly positive integers, and let $F_{a,b}(x)$ denote the CDF of a Beta distribution with parameters $a$ and $b$: $$F_{a,b}(x) = \frac{1}{B(a,b)}\int_0^x t^{a-1} (1-t)^{b-1}\, dt, \qquad B(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.$$
I am trying to understand the behavior of the function $$g(x)=\int_0^x\sqrt{F_{a,b}(t)}\,dt.$$
Since there is no closed-form expression for the integrand, I am wondering whether there are any existing bounds on $F_{a,b}(\cdot)$ that would help me bound $g(x)$ for all $a$, $b$, and $x$.
In my previous question, it was shown that $$\frac{b}{a+b}\leq g(1)\leq\frac{2b}{a+b}.$$
As one example, an easy observation is that if we fix $b=pa$ for some constant $p$ and let $a$ become large, then $F_{a,b}$ looks like a step function that jumps from $0$ to $1$ at the point $t = a/(a+b)$, and my integral inherits this same behavior.
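While looking for analytic bounds, it is easy to evaluate $g$ numerically and sanity-check the bounds above. A sketch using SciPy (the parameter choices are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

def g(x, a, b):
    """g(x) = int_0^x sqrt(F_{a,b}(t)) dt, computed by quadrature."""
    val, _ = quad(lambda t: np.sqrt(beta.cdf(t, a, b)), 0.0, x)
    return val

for a, b in [(1, 1), (2, 5), (10, 3)]:
    lo, hi = b / (a + b), 2 * b / (a + b)
    print(a, b, lo, g(1.0, a, b), hi)  # lo <= g(1) <= hi in each case
```

For $a=b=1$ one can check by hand: $F_{1,1}(t)=t$, so $g(1)=\int_0^1\sqrt{t}\,dt=2/3$, which indeed lies in $[1/2, 1]$. The lower bound $b/(a+b)$ also follows directly from $\sqrt{F}\ge F$ and $\int_0^1 F_{a,b}(t)\,dt = b/(a+b)$.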

pr.probability – consequence of “the best coupling” of two SDEs with different diffusion matrices

My question comes from a portion of a long review paper, which was attached below as an image.

In the set-up, $\sigma_1$ and $\sigma_2$ are possibly different, constant diffusion matrices. To my knowledge, if we take the ("cheap") synchronous coupling $B^1_t = B^2_t$, then we can estimate $\mathbb{E}(|X^1_t - X^2_t|^2)$ using, for instance, the Itô isometry or the BDG inequality. However, what is the error estimate for $\mathbb{E}(|X^1_t - X^2_t|^2)$ under this new ("best") coupling? The reference paper FH16, published in The Annals of Probability, is overwhelmingly long and too technical for me. Thanks for any help!
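To illustrate why the synchronous coupling is "cheap" in the simplest setting (zero drift and my own toy constant matrices, not the paper's set-up): with $B^1=B^2$ one gets $X^1_t - X^2_t = (\sigma_1-\sigma_2)B_t$, so $\mathbb{E}|X^1_t-X^2_t|^2 = t\,\|\sigma_1-\sigma_2\|_F^2$, a floor that grows linearly in $t$ and that a better coupling tries to beat. A Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, n_paths = 3, 1.0, 20_000

sigma1 = np.diag([1.0, 1.0, 1.0])
sigma2 = np.diag([1.0, 1.5, 0.5])

# Synchronous coupling: both SDEs are driven by the SAME Brownian motion.
# With zero drift and constant sigma_i, X^i_T = sigma_i B_T, so
#   X^1_T - X^2_T = (sigma1 - sigma2) B_T.
B_T = np.sqrt(T) * rng.standard_normal((n_paths, d))
diff = B_T @ (sigma1 - sigma2).T
mc = np.mean(np.sum(diff**2, axis=1))

exact = T * np.linalg.norm(sigma1 - sigma2, "fro") ** 2
print(mc, exact)  # Monte Carlo estimate vs. t * ||sigma1 - sigma2||_F^2
```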

pr.probability – Conditional Independence in Measure theoretic terms

Let $\Omega$ be a compact Hausdorff space in $\mathbb{C}^n$. Let $\sigma_\Omega$ be the Borel sigma-algebra on $\Omega$. Let $\zeta: \Omega\to\partial \mathbb{D}$ be a non-constant continuous function, where $\partial\mathbb{D}$ is the unit circle in the complex plane, and let $\sigma_{\partial \mathbb{D}}$ be the Borel sigma-algebra on $\partial \mathbb{D}$. Now consider the sigma-algebra $\sigma_\zeta=\{\zeta^{-1}(A): \;A\in \sigma_{\partial \mathbb{D}}\}\subset \sigma_\Omega$.

Now let $f\in L^1(\Omega, \sigma_\Omega, \mu)$ and let's define a new measure $f_\mu$ on $(\Omega,\sigma_\zeta)$ by $f_{\mu}(A)=\int_A f\, d\mu$. It is easy to see that for $A\in \sigma_\zeta$, $\mu(A)=0$ implies $f_{\mu}(A)=0$, i.e., $f_{\mu}$ is absolutely continuous with respect to the restriction of $\mu$ to $\sigma_\zeta$, so by the Radon–Nikodym theorem there exists a $g\in L^1(\Omega, \sigma_\zeta, \mu)$ such that
$\int_A f\, d\mu =\int_A g\, d\mu$ for every $A\in \sigma_\zeta$. Let's call this $g$ the conditional expectation of $f$ and denote it by $E(f|\sigma_\zeta)$.

Can anyone explain to me the conditional independence of two functions $h,k\in L^1(\Omega, \sigma_\Omega, \mu)$ given $\zeta$?

I need to understand this result in a measure-theoretic sense; any reference for the same would be really appreciated.
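On a finite $\Omega$ the whole construction reduces to averaging over the fibers of $\zeta$, which makes the definitions easy to experiment with. A sketch (all names and the finite set-up are mine; $\zeta$ takes finitely many values, so $\sigma_\zeta$ is generated by its fibers):

```python
import numpy as np

rng = np.random.default_rng(3)
n_omega = 12
mu = rng.dirichlet(np.ones(n_omega))      # probability measure on a finite Omega
zeta = rng.integers(0, 3, n_omega)        # zeta: Omega -> {0,1,2}

def cond_exp(f):
    """E(f | sigma_zeta): average f over each fiber zeta^{-1}(c), weighted by mu."""
    g = np.empty_like(f, dtype=float)
    for c in np.unique(zeta):
        fiber = (zeta == c)
        g[fiber] = np.sum(f[fiber] * mu[fiber]) / np.sum(mu[fiber])
    return g

h = rng.standard_normal(n_omega)
k = rng.standard_normal(n_omega)

# Defining property: int_A f dmu = int_A g dmu for every A in sigma_zeta.
A = (zeta == 0)
assert np.isclose(np.sum(h[A] * mu[A]), np.sum(cond_exp(h)[A] * mu[A]))

# Conditional independence of h and k given zeta means
#   E(hk | sigma_zeta) = E(h | sigma_zeta) * E(k | sigma_zeta)   mu-a.e.
lhs = cond_exp(h * k)
rhs = cond_exp(h) * cond_exp(k)
print(np.allclose(lhs, rhs))  # True exactly when h, k are cond. independent given zeta
```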

pr.probability – To prove a relation involving a probability distribution

I'm reading a book and have encountered a relation which seems to me impossible to prove, and I would like to be sure whether this is the case. The author gives a probability function as
$$p_n = \frac{e^{-c_1 n - c_2/n}}{Z},$$
where $c_1$ and $c_2$ are constants, $Z$ is a normalization factor, and $n \geq 3$. Then, defining $\alpha = \sum_{n = 3}^{\infty} p_n (n - 6)^2$, the author claims one can show that

$$\alpha + p_6 = 1, \qquad 0.66 < p_6 < 1,$$

$$\alpha\, p_6^2 = \frac{1}{2\pi}, \qquad 0.34 < p_6 < 0.66.$$

How is such a thing possible in the first place, given that these relations do not even depend on $c_1$ and $c_2$?
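One way to probe whether the relations can hold exactly is just to compute $\alpha$ and $p_6$ for several values of $c_1, c_2$ and see how they move. A quick scan (plain NumPy; the sum is truncated at a large `n_max`, and the sample constants are arbitrary):

```python
import numpy as np

def stats(c1, c2, n_max=2000):
    n = np.arange(3, n_max + 1)
    w = np.exp(-c1 * n - c2 / n)
    p = w / w.sum()                      # p_n = e^{-c1 n - c2/n} / Z
    alpha = np.sum(p * (n - 6) ** 2)     # alpha = sum_n p_n (n - 6)^2
    p6 = p[n == 6][0]
    return alpha, p6

# Compare each (c1, c2) against the two claimed relations.
for c1, c2 in [(0.5, 1.0), (0.2, 5.0), (1.0, 0.1)]:
    alpha, p6 = stats(c1, c2)
    print(c1, c2, alpha + p6, alpha * p6**2, 1 / (2 * np.pi))
```

If the printed values of $\alpha + p_6$ and $\alpha\, p_6^2$ vary with $(c_1, c_2)$, then the relations cannot be identities; at best they could hold approximately along a one-parameter family of $(c_1, c_2)$, which is presumably what the author intends.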

pr.probability – Existence of measures with given 1d marginals

This is a question about marginals of probability measures, which seems unrelated to previous questions.

Let $\mathbb{S}^{d-1}\subset \mathbb{R}^d$ be the unit sphere. Assume that for each $\theta\in \mathbb{S}^{d-1}$ there is an associated probability measure $\mu_\theta$ on $\mathbb{R}$.

Question: Under what conditions does there exist a probability measure $mu$ on $mathbb{R}^d$ such that

$$\mbox{if }X\sim \mu,\mbox{ then }\langle \theta,X\rangle \sim \mu_\theta\mbox{ for all }\theta\in \mathbb{S}^{d-1}.$$

(Here $\langle\cdot,\cdot\rangle$ is the Euclidean inner product and $A\sim \nu$ means the object $A$ has probability law $\nu$.)

A necessary condition is that the map $\theta\mapsto \mu_\theta$ be continuous under the weak topology on probability measures on $\mathbb{R}$. Is this condition sufficient?

(Motivation comes from the study of the Sliced Wasserstein distance. If the answer to the above question is “yes”, then in principle one can “easily compute” barycenters for SW.)
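For a concrete consistent family: if $\mu = N(0,\Sigma)$, then $\langle\theta, X\rangle \sim N(0, \theta^{T}\Sigma\theta)$ for every $\theta$, so the family $\mu_\theta = N(0,\theta^{T}\Sigma\theta)$ is realized by $\mu$ (and $\theta\mapsto\mu_\theta$ is weakly continuous). A Monte Carlo check (NumPy, with an arbitrary $\Sigma$ of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 3, 200_000

# X ~ N(0, Sigma) implies <theta, X> ~ N(0, theta^T Sigma theta).
A = rng.standard_normal((d, d))
Sigma = A @ A.T
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)

proj = X @ theta
print(proj.var(), theta @ Sigma @ theta)  # the two variances agree
```

The interesting part of the question is the converse direction: which families $\{\mu_\theta\}$ arise this way at all, since the candidate characteristic function $\varphi(t\theta) = \hat{\mu}_\theta(t)$ must be positive-definite on $\mathbb{R}^d$, which is a much stronger constraint than weak continuity in $\theta$.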

pr.probability – Summation of all subsets of Laplacian sampled variables

Say I have a collection $L = (L_1, \dots, L_N)$, where $L_i \sim \operatorname{Laplace}(0, b)$. I am interested in two sums:

  1. The sum of all of $L$: $\sum_{i=1}^{N} L_i$
  2. The alternating sum over all subsets: $\sum_{k=1}^{N} (-1)^{k+1}\sum_{1 \leq i_1 < \cdots < i_k \leq N}\ \sum_{j \in \{i_1, \dots, i_k\}} L_j$.

I am specifically interested in what these summations will look like as $N$ approaches infinity, or just any really large $N$.
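Assuming the $L_i$ are independent, both sums can be understood directly; a sketch (NumPy, with my own choices of $b$ and $N$):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
b, N, n_trials = 1.0, 1000, 2000

# Sum 1: Var(L_i) = 2 b^2, so Var(sum_i L_i) = 2 b^2 N and, by the CLT,
# sum_i L_i / sqrt(2 b^2 N) is approximately N(0, 1) for large N.
L = rng.laplace(0.0, b, size=(n_trials, N))
S = L.sum(axis=1)
print(S.var(), 2 * b**2 * N)

# Sum 2 (reading the inner index set as all k-subsets i_1 < ... < i_k):
# each L_j lies in binom(N-1, k-1) subsets of size k, so its net coefficient
# is sum_k (-1)^{k+1} binom(N-1, k-1) = (1 - 1)^{N-1} = 0 for N >= 2,
# i.e. the whole alternating sum vanishes identically.
N_small = 6
vals = rng.laplace(0.0, b, N_small)
total = sum(
    (-1) ** (k + 1) * sum(vals[j] for j in sub)
    for k in range(1, N_small + 1)
    for sub in combinations(range(N_small), k)
)
print(total)  # zero up to floating-point round-off
```

So, under this reading, the first sum is asymptotically Gaussian with variance $2b^2N$ (it never converges to a Laplace law), while the second sum is identically zero for every $N \ge 2$, independent of the samples.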

pr.probability – What does a barycenter look like on an injective metric space?

On page 13 of these notes it is said that if $X$ is an injective metric space, in the sense of Isbell, then there is a $1$-Lipschitz map $\beta: P_1(X)\to X$ which is a left inverse of the canonical isometric embedding $x\mapsto \delta_x$. Is there an explicit description of such a $\beta$?

For comparison, what I mean by “explicit” is as follows. For a large class of similar “barycentric” metric spaces, explicit descriptions of $\beta$ are known. For example:

  • For Banach spaces, such a $\beta$ sends any $\mu \in P_1(X)$ to its Bochner integral $\beta(\mu)=\int_{x \in X} x\,d\mu(x)$.
  • If $X$ is a Cartan–Hadamard manifold, then $\beta(\mu)=\operatorname{argmin}_{x\in X} \int_{z\in X}d^2(x,z)\,d\mu(z)$ is the Fréchet mean.
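For what it's worth, the Fréchet-mean description in the Cartan–Hadamard case can be checked numerically in the flat example $X=\mathbb{R}^d$, where the minimizer is just the ordinary mean. A minimal sketch (NumPy, discrete $\mu$, gradient descent on the Fréchet functional; all parameters are my own):

```python
import numpy as np

rng = np.random.default_rng(6)
pts = rng.standard_normal((50, 2))   # support of a discrete mu in R^2
w = rng.dirichlet(np.ones(50))       # weights (sum to 1)

# In R^d with d(x, z) = |x - z|, the Frechet functional
#   F(x) = sum_i w_i |x - pts_i|^2
# has gradient 2 (x - mean) and is minimised at the weighted mean.
mean = w @ pts

x = np.zeros(2)
for _ in range(200):
    x -= 0.1 * 2 * (x - w @ pts)     # gradient descent on F
print(x, mean)                       # the iterate converges to the mean
```

In a general injective space no such variational formula is available a priori, which is exactly why an explicit description of Isbell's $\beta$ would be interesting.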