pr.probability – Why do the three points follow from the two assumptions about the conditional intensity function?

The intensity function is defined as

$$\lambda^*(t)=\frac{f(t\mid H_{t_n})}{1-F(t\mid H_{t_n})},$$

where $f$ is the density function, $F$ is the distribution function, and $H_{t_n}$ is the history of all previous points up to $t_n$.
Moreover, it is shown that $F(t\mid H_{t_n})$ is also given by

$$F(t\mid H_{t_n})=1-e^{-\int_{t_n}^{t}\lambda^*(s)\,ds}.$$

Two assumptions are then made:

  1. $\lambda^*(t)$ is non-negative and integrable on every interval after $t_n$.

  2. $\int_{t_n}^{t}\lambda^*(s)\,ds \to \infty$ as $t \to \infty$.

It is then stated that the following three properties follow:

  1. $0 \leq F(t\mid H_{t_n}) \leq 1$,
  2. $F(t\mid H_{t_n})$ is a non-decreasing function of $t$,
  3. $F(t\mid H_{t_n}) \to 1$ as $t \to \infty$.

Can someone explain to me how these three points follow given the two assumptions above? Thank you.
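
For reference, here is a minimal sketch of how the three properties usually follow (my own reconstruction, not part of the original question). Write $\Lambda(t)=\int_{t_n}^{t}\lambda^*(s)\,ds$, so that

$$F(t\mid H_{t_n})=1-e^{-\Lambda(t)}.$$

By assumption 1, $\Lambda(t)\ge 0$ and $\Lambda$ is non-decreasing in $t$ (its integrand is non-negative), so $e^{-\Lambda(t)}\in(0,1]$ and is non-increasing; this gives properties 1 and 2. By assumption 2, $\Lambda(t)\to\infty$ as $t\to\infty$, so $e^{-\Lambda(t)}\to 0$ and $F(t\mid H_{t_n})\to 1$, which is property 3.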

pr.probability – Convergence of the expectation of a random variable conditioned on its sum with another, independent but not identically distributed, random variable

Suppose that for all $n \in \mathbf{N}$, $X_n$ and $Y_n$ are independent random variables with
$$X_n \sim \mathtt{Binomial}(n, 1-q),$$
and
$$Y_n \sim \mathtt{Poisson}(n(q+\epsilon_n)),$$
where $q \in (0,1)$ and $(\epsilon_n)$ is a deterministic sequence such that $\epsilon_n \to 0$ as $n \to \infty$.


Goal:

I'm looking for a way to solve the following "signal extraction / estimation" problem, namely:

For a sequence $s_n \geq 0$ with $n s_n \in \mathbf{N}$, show that as $n \to \infty$,

$$\frac{\mathbf{E}(X_n \mid X_n + Y_n = n s_n)}{n} = 1 - q + O(|s_n - 1|) + O(\epsilon_n).$$


Heuristic:

Here's why I think it's true. We know that $n^{-1}X_n$ and $n^{-1}Y_n$ are both approximately Gaussian, and moreover, if $Z_1, Z_2$ are independent Gaussians with means $\mu_1$ and $\mu_2$ and variances $\sigma_1^2$ and $\sigma_2^2$ respectively, then $Z_1 \mid Z_1 + Z_2 = s$ is also Gaussian and
$$\mathbf{E}(Z_1 \mid Z_1 + Z_2 = s) = \mu_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}(s - \mu_1 - \mu_2),$$
that is, the difference between the observed statistic and the expectation of the sum is apportioned according to the ratio of the variances.

If we naively assume that this property carries over to the limits of $n^{-1}X_n$ and $n^{-1}Y_n$, then we can believe that

\begin{align}
\frac{\mathbf{E}(X_n \mid X_n + Y_n = n s_n)}{n} &= \mathbf{E}(n^{-1}X_n \mid n^{-1}X_n + n^{-1}Y_n = s_n) \\
&\approx 1 - q + \frac{q(1-q)}{q(1-q) + q + \epsilon_n}\bigl(s_n - (1-q) - (q + \epsilon_n)\bigr) \\
&= 1 - q + O(|s_n - 1|) + O(\epsilon_n).
\end{align}
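
Not part of the original question, but a quick numerical sanity check of this heuristic (with made-up illustrative values $q = 0.3$, $\epsilon_n = n^{-1/2}$, $s_n = 1$), computing $\mathbf{E}(X_n \mid X_n + Y_n = n s_n)$ exactly from the two probability mass functions and comparing the rescaled value with $1 - q$:

    # Numerical sanity check of the heuristic (illustrative parameters only).
    import numpy as np
    from scipy.stats import binom, poisson

    q = 0.3
    for n in (100, 1000, 10000):
        eps = 1.0 / np.sqrt(n)
        s = 1.0                          # chosen so that n * s is an integer
        m = int(round(n * s))
        k = np.arange(0, n + 1)
        # P(X_n = k | X_n + Y_n = m) is proportional to P(X_n = k) * P(Y_n = m - k)
        w = binom.pmf(k, n, 1 - q) * poisson.pmf(m - k, n * (q + eps))
        cond_mean = np.sum(k * w) / np.sum(w)
        print(n, cond_mean / n, 1 - q)   # the rescaled value should approach 1 - q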


Attempt(s):

  1. Via the local limit theorem: my main attempt was a brute-force approach, trying to prove the result directly by approximating the probability mass functions of $X_n$ and $Y_n$ by Gaussian densities using the local limit theorem; that is, we can write
    $$\frac{\mathbf{E}(X_n \mid X_n + Y_n = n s_n)}{n} = \frac{1}{n}\sum_{k=0}^{n} k\,\frac{\mathbf{P}(X_n = k)\,\mathbf{P}(Y_n = n s_n - k)}{\mathbf{P}(X_n + Y_n = n s_n)}.$$
    Each of the probabilities in the sum can be approximated by a Gaussian density with an error term which is $O(n^{-1/2})$ uniformly in $k$ (a small numerical illustration of this approximation appears after this list). Carrying this out is, however, extremely tedious, and one has to be very careful about how accurately the resulting Riemann sums approximate their corresponding integrals.

  2. Searching for relevant tricks / results under the theme "signal extraction / estimation": essentially, the problem here is to estimate / extract a signal from an observation corrupted by independent additive (and approximately Gaussian) noise. It seems to me that this should be a well-studied problem, but searches on various rephrasings of my question above only turn up the standard undergraduate results for sums of i.i.d. random variables.
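
Again, not from the original question: a small illustration of the local limit theorem step in attempt 1 under assumed parameters ($q = 0.3$), comparing the $\mathtt{Binomial}(n, 1-q)$ pmf with a Gaussian density of matching mean and variance and printing the worst-case pointwise gap, which shrinks as $n$ grows:

    # Local limit theorem illustration: Binomial(n, 1-q) pmf vs. a Gaussian
    # density with the same mean and variance (illustrative parameters only).
    import numpy as np
    from scipy.stats import binom, norm

    q = 0.3
    for n in (100, 400, 1600, 6400):
        k = np.arange(0, n + 1)
        mean, var = n * (1 - q), n * q * (1 - q)
        exact = binom.pmf(k, n, 1 - q)
        gauss = norm.pdf(k, loc=mean, scale=np.sqrt(var))
        print(n, np.max(np.abs(exact - gauss)))  # maximum pointwise gap over k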


Specific questions:

  1. Is there a clever way to use the approximate Gaussian behavior of $X_n$ and $Y_n$ to prove this result without the brute-force local limit theorem approach?
  2. Are there keywords that can lead me to similar results in the signal extraction / estimation literature?

Conditional probability conditioned on several random variables

Is $P(X = x \mid Y, Z_1, Z_2)$ simplified as $P(X = x, Y = y \mid Z_1, Z_2) / P(Y = y \mid Z_1, Z_2)$, or as $P(X = x, Y = y, Z_1, Z_2) / P(Y = y, Z_1, Z_2)$? I have seen the former used in a paper, but I don't see how $Y$ was "moved" from being conditioned on to the joint probability $X = x, Y = y$.

Can anyone explain how a conditional probability with conditioning on several random variables is simplified? I am trying to understand it for the conditional expectation formulas.
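
For orientation only (my own sketch, not from the question, and assuming all conditioning events have positive probability): applying the definition of conditional probability twice shows that the two forms agree,

$$P(X = x \mid Y = y, Z_1, Z_2) = \frac{P(X = x, Y = y, Z_1, Z_2)}{P(Y = y, Z_1, Z_2)} = \frac{P(X = x, Y = y \mid Z_1, Z_2)}{P(Y = y \mid Z_1, Z_2)},$$

where the last equality comes from dividing both the numerator and the denominator by $P(Z_1, Z_2)$; this is how $Y$ moves from the conditioning side into the joint event.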

Linear algebra – How to tell whether a matrix is ill-conditioned or singular using the Eigensystem function (or LUDecomposition)?

I am using the Eigensystem function and trying to determine whether a matrix is singular or ill-conditioned. I call the functions as follows:

Eigensystem[A]
LUDecomposition[A]

This returns a list of eigenvalues and eigenvectors, with the condition number last. Should the condition number be high or low for the corresponding matrix to be considered ill-conditioned?

For one matrix the condition number is $\infty$, and I'm fairly sure that one is ill-conditioned, but the others are numbers like 14.555555, 120.4, etc.
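
Not from the question, and using NumPy rather than Mathematica purely to illustrate the idea: the larger the condition number, the more ill-conditioned the matrix, and an exactly singular matrix has an infinite (or, in floating point, astronomically large) condition number.

    # Condition-number illustration in NumPy: small values mean well-conditioned,
    # huge values mean ill-conditioned, and a singular matrix gives infinity
    # (or an astronomically large number in floating point).
    import numpy as np

    well = np.array([[2.0, 1.0],
                     [1.0, 3.0]])                     # comfortably invertible
    nearly_singular = np.array([[1.0, 1.0],
                                [1.0, 1.0 + 1e-12]])  # rows almost identical
    singular = np.array([[1.0, 1.0],
                         [1.0, 1.0]])                 # exactly singular

    for name, m in [("well", well),
                    ("nearly singular", nearly_singular),
                    ("singular", singular)]:
        print(name, np.linalg.cond(m))                # small, huge, infinite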

stochastic processes – Hitting times of conditioned diffusions

I have a question about conditioned diffusion processes.
This question is somewhat related to an argument that appears in this article: P.

Let $D = \{z = (x, y) \in \mathbb{R}^2 \mid |y| < 1\}$ and $K = \{(x, y) \in D \mid x < 1\}$. We denote by $X = (X_t, P_z)$ the absorbed Brownian motion on $D$ conditioned to hit $K$. We set $T_{K} = \inf\{t \ge 0 \mid X_t \in K\}$.

My question

We set $S = \inf\{t \ge 0 \mid \text{the second coordinate of } X_{t} = -1/2\}$.

Can we prove the following?
\begin{align*}
(1) \quad \lim_{x \to +\infty}\ \inf_{\substack{z = (x, y) \in D \\ \text{with } -1/2 < y < 1}} P_{z}(T_{K} < S) = 0.
\end{align*}

A claim similar to (1) should hold for more general conditioned diffusions on $D$. Can we prove (1) with a universal argument?