pr.probability – What is the Bruss-Yor notion of no-information?

A few years ago, a question related to a paper by Thomas Bruss and Marc Yor on the so-called "last arrival problem" attracted attention on this forum.

What I would like to know now is:

What are the assumptions underlying Theorem 5.1 of the Bruss-Yor paper?

In the last arrival problem, as I first understood it, an unknown number $N$ of items arrive at independent times drawn from the uniform distribution on the interval $[0,1]$, and a selector (who sees the arrival times in increasing order) tries to pick, "online", the very last one.

The way I first thought of the problem (in the context of a variant of the secretary problem), there is no prior distribution on $N$, the number of items. Instead, the task is to find a policy that maximizes the probability of success in the worst case (an adversarial setting). In other words, we want a policy that succeeds with (at least) some fixed positive probability for every $N$.

For example, a policy that admits a fairly simple analysis is to wait until, for the first time, half of the remaining time passes without any new item arriving, and then to accept the next one. This policy succeeds for $N$ items with probability exactly $$\prod_{k=1}^N \left(1-\frac{1}{2^k}\right),$$ which is bounded from below by its limit of about $0.288788$.
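For concreteness, here is a quick numerical check of that product (my own sketch, not from any paper; the function name `success_probability` and the chosen values of $N$ are just illustrative):

```python
def success_probability(N):
    """Compute prod_{k=1}^{N} (1 - 1/2^k), the success probability of the
    'wait until half the remaining time passes' policy for N items."""
    p = 1.0
    for k in range(1, N + 1):
        p *= 1.0 - 0.5 ** k
    return p

for N in (1, 2, 5, 10, 20, 40):
    print(N, success_probability(N))
# The values decrease and settle near the limit 0.288788...
```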

In a 2011 paper, I showed (with a somewhat awkward calculation) that there is an optimal policy that wins for the selector with probability about $0.3529170002$.

But the last arrival problem can also be considered in other settings. We could just have a point process on the interval $[0,1]$. One type of point process considered by Bruss and Yor is what they call a process of proportional increments (p.i.). This is a process in which, once we have seen an item, the expected rate of future items is (at any time) equal to the frequency of the items we have already seen.
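To make the p.i. assumption concrete, here is a small discretized simulation (entirely my own sketch; the function `simulate_pi`, the slot count `m`, and the slot-by-slot rule are assumptions matching the discretization used further down, not anything from the paper):

```python
import random

def simulate_pi(first_slot, m=1000):
    """Discretized proportional-increments process on [0,1] cut into m slots,
    given that the first item arrives in slot `first_slot`.  In each later
    slot j, a new item appears with probability (items seen so far) / j,
    so the expected arrival rate always equals the frequency observed so far."""
    arrival_slots = [first_slot]
    for j in range(first_slot + 1, m + 1):
        if random.random() < len(arrival_slots) / j:
            arrival_slots.append(j)
    return [s / m for s in arrival_slots]   # approximate arrival times in (0,1]

print(simulate_pi(first_slot=500))          # first item at time 1/2
```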

Since the selector will not have to make any decision before seeing the first item, the p.i. assumption seems natural in this setting. When I first saw the Bruss-Yor paper, I was delighted to find that they seemed to have obtained an explicit optimal policy for the selector for a p.i. process. Their suggested policy (Theorem 5.1) is to accept the $k$-th item if it arrives after time $$1-\frac{1}{k+1}.$$
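Stated as code, the rule is simply a time threshold depending on $k$ (my own restatement of that policy; the name `bruss_yor_accepts` is just for illustration):

```python
def bruss_yor_accepts(k, arrival_time):
    """Accept the k-th observed item iff it arrives after time 1 - 1/(k+1)."""
    return arrival_time > 1.0 - 1.0 / (k + 1)

# The first item is accepted only after time 1/2, the second only after 2/3,
# the third only after 3/4, and so on.
```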
At least, it appears from their Conclusion 3 in Section 2.2 that what they have in mind, also in Section 5, are p.i. processes.

But when I took a closer look, it seemed that the Bruss-Yor policy could not be optimal under the p.i. assumption.

Suppose for example that we have seen exactly one item up to time $1/2$. Then the probability of no further items should be $1/2$: if we discretize by cutting the unit interval into $2m$ slots, then after seeing one item in the first $m$ slots, the probability that there is no new item in the next slot is about $m/(m+1)$, and conditioning on that, the probability that there is no new item in the slot after that is about $(m+1)/(m+2)$, and so on: $$\frac{m}{m+1}\cdot\frac{m+1}{m+2}\cdots\frac{2m-1}{2m}=\frac12.$$
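A quick Monte Carlo version of this telescoping computation (again my own sketch, with arbitrary choices of `m` and `trials`) gives an estimate close to $1/2$:

```python
import random

def prob_no_more_items(m=1000, trials=100_000):
    """Estimate P(no further arrivals | exactly one arrival in the first m of
    2m slots, i.e. up to time 1/2) for the discretized p.i. process, where
    with one item seen the arrival probability in slot j is 1/j."""
    empty_runs = 0
    for _ in range(trials):
        for j in range(m + 1, 2 * m + 1):
            if random.random() < 1 / j:   # a second item arrives; stop this run
                break
        else:
            empty_runs += 1               # no new item in any remaining slot
    return empty_runs / trials

print(prob_no_more_items())               # prints a value close to 0.5
```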

So it seems that if the first item arrives at time $1/2$, the selector succeeds with probability $1/2$ by accepting it. But that also means that they cannot possibly succeed with probability $1/2$ if they do not accept it. If they decline, the probability of seeing another item is $1/2$, but there may be more than one, and they would still face the nontrivial task of choosing the right one.

Therefore, it seems that an optimal policy in the p.i. setting must accept the first item even if it arrives a little before time $1/2$.

I have not been able to follow the derivation of Theorem 5.1, but what I would like to know is:

What is the model under which the authors claim that the $\left(1-\frac{1}{k+1}\right)$-policy is optimal?

If we look again at Section 2.2, it seems that in Conclusion 1 and the discussion around it there is also an assumption that the process, conditioned on a given $N$, behaves as if the arrival times were independent and uniform on $[0,1]$.

I have not been able to tell whether that is what the authors assume later, but it seems to me that this would be inconsistent with the p.i. assumption already for $N=1$: if the first item arrives at time $T_1$, then, assuming p.i., the probability that there are no more items equals $T_1$. For $T_1$ to be uniform on $[0,1]$ after conditioning on $N=1$, we would therefore need a density at $T_1=t$ proportional to $1/t$, which is impossible.
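To spell the last step out (my own bookkeeping, not a computation from the paper): writing $f$ for the unconditional density of $T_1$, the p.i. assumption gives $\Pr(N=1 \mid T_1=t)=t$, so uniformity of $T_1$ given $N=1$ would force $$f(t)\cdot t = \text{const}, \qquad\text{i.e.}\qquad f(t)\propto\frac{1}{t},$$ and since $\int_0^1 \frac{dt}{t}=\infty$, no such density can be normalized.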