Statistical inference – A basic question about a randomized test involving Type I error.

I have a fundamental question in the context of statistical hypothesis testing, specifically randomized tests. Suppose that I have two possible actions (alternatives) concerning an unknown parameter $\theta \in \Theta$: the null hypothesis ($H_0$) and the alternative hypothesis ($H_1$).

In this case, the sample space is $(0,15) \subset \mathbb{R}$. We know that the critical function is given by
$$ \phi(x) = P(\text{reject } H_0 \mid x \text{ observed}). $$

I do not know exactly whether this definition really denotes a conditional probability. Suppose I have the following critical function:

$$
\phi(x) =
\begin{cases}
0, & x \in (0,2) \\
p, & x \in (2,10) \\
1, & x \in (10,15)
\end{cases}
$$

I can understand why

$$ P(\text{reject } H_0 \mid H_0 \text{ is true}) = 0 \cdot P\big(x \in (0,2)\big) + p \cdot P\big(x \in (2,10)\big) + 1 \cdot P\big(x \in (10,15)\big) $$

The right-hand side looks a lot like an expectation, but I cannot see why that is.
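For concreteness, here is a sketch of how the right-hand side arises as the expectation of the critical function under the null, assuming for illustration that $X$ has a density $f(\cdot \mid \theta_0)$ when $H_0$ is true (the symbol $f$ is not from the question):

$$
P(\text{reject } H_0 \mid H_0 \text{ is true})
= \mathbb{E}_{\theta_0}[\phi(X)]
= \int_0^{15} \phi(x)\, f(x \mid \theta_0)\, dx
= 0 \cdot P_{\theta_0}\big(X \in (0,2)\big) + p \cdot P_{\theta_0}\big(X \in (2,10)\big) + 1 \cdot P_{\theta_0}\big(X \in (10,15)\big),
$$

i.e. the unconditional rejection probability is obtained by averaging the conditional rejection probability $\phi(x)$ over the distribution of $x$ under $H_0$.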

variational inference

source: http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf

  1. How to prove
$$ \mathbb{E}_q[\log p(z \mid \theta)] = \sum_{n=1}^{N} \sum_{i=1}^{k} \phi_{ni} \left( \Psi(\gamma_i) - \Psi\!\left( \sum_{j=1}^{k} \gamma_j \right) \right), $$
and likewise the expressions for $\mathbb{E}_q[\log p(w \mid z, \beta)]$ and $\mathbb{E}_q[\log q(\theta)]$?

Note: I find that $z \mid \theta \sim \mathrm{Multinomial}(\theta)$, so $\mathbb{E}\big[p(z^{(i)} \mid \theta)\big] = \theta^{(i)}$, which would give $\mathbb{E}_q[\log p(z \mid \theta)] = \log \theta^{(i)}$; that seems to be wrong. (Page 1019, Appendix A.3)
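For context, a sketch of the identities that Appendix A.3 appears to rely on: the expectation is taken under the variational distribution $q$, and $\mathbb{E}_q[\log \theta_i] \neq \log \mathbb{E}_q[\theta_i]$. Writing $z_n^i$ for the indicator that topic $i$ is chosen at position $n$,

$$
\log p(z \mid \theta) = \sum_{n=1}^{N} \sum_{i=1}^{k} z_n^i \log \theta_i,
\qquad
\mathbb{E}_q[z_n^i] = \phi_{ni},
\qquad
\mathbb{E}_q[\log \theta_i] = \Psi(\gamma_i) - \Psi\!\left(\sum_{j=1}^{k} \gamma_j\right)
\quad \text{for } \theta \sim \mathrm{Dirichlet}(\gamma),
$$

and since $z$ and $\theta$ are independent under the factorized $q$, the expectation of the double sum becomes $\sum_n \sum_i \phi_{ni}\big(\Psi(\gamma_i) - \Psi(\sum_j \gamma_j)\big)$.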

I've linked the paper and included a screen capture of the relevant passage. Please let me know if there is any other information I can provide. I look forward to your answers.

type inference – projection operator for constraints

I was looking at the HM(X) framework, Hindley-Milner parameterized by a constraint system X, and I had a hard time understanding what the projection operator $\exists \alpha$ does to a constraint. Also, how is this projection operator related to the existential quantifier of Boolean logic?
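For intuition, a small made-up example (the constraint syntax below is only illustrative): projection hides the local variable and keeps only whatever the constraint implies about the remaining free variables, in the same way that Boolean existential quantification eliminates a variable:

$$
\exists \alpha.\,\big(\beta \doteq \alpha \to \mathsf{int} \;\wedge\; \alpha \doteq \mathsf{bool}\big) \;\equiv\; \beta \doteq \mathsf{bool} \to \mathsf{int},
\qquad
\exists x.\,\big((x \wedge y) \vee (\neg x \wedge z)\big) \;\equiv\; y \vee z .
$$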

proof techniques – How to prove the time complexity of this simple probabilistic inference problem on a Bayesian network?

Maybe a fairly trivial question, but I am trying to refresh my knowledge of proof methods in CS …

Suppose we have a simple Bayesian network with two rows of nodes: $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$. Each node $x_k$ takes state 0 or 1 with equal probability. Each node $y_k$ takes state 1 with probability $p_k$ if $x_k$ is in state 1, and with probability $q_k$ if $x_k$ is in state 0.

Does computing the probability that all $y_k$ are 1 require exponential time, and if so, what is the appropriate CS-style proof of this?
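As a point of comparison, here is a minimal sketch (in OCaml, with illustrative names `prob_all_y_one`, `ps`, `qs` that are not from the question) of how the quantity factorizes if the pairs $(x_k, y_k)$ are mutually independent across $k$, as the description suggests; under that assumption the computation is linear in $n$:

```ocaml
(* Sketch: if the pairs (x_k, y_k) are mutually independent across k, then
   P(all y_k = 1) = prod_k P(y_k = 1) with P(y_k = 1) = 1/2 p_k + 1/2 q_k,
   which is computable in O(n) time. *)
let prob_all_y_one (ps : float list) (qs : float list) : float =
  List.fold_left2
    (fun acc p_k q_k -> acc *. (0.5 *. p_k +. 0.5 *. q_k))
    1.0 ps qs

let () =
  (* toy example with n = 3 *)
  let ps = [0.9; 0.8; 0.7] and qs = [0.1; 0.2; 0.3] in
  Printf.printf "P(all y_k = 1) = %f\n" (prob_all_y_one ps qs)
```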

Can we statically assign types to polymorphic lambdas using Hindley-Milner-style inference?

I am playing with an implicitly typed functional language and have implemented type checking using a Hindley-Milner-style system. In order to guide code generation, I want to tag each term with its type during type inference.

Of course, my language uses lambda expressions, and the implicit typing should allow pleasant polymorphic use of these lambdas. Unfortunately, this goal conflicts with static type tagging, because each lambda can be tagged only once, which specializes it for one particular set of argument and closure types. It is also impossible to statically copy the lambdas for each use site, since that would require dynamic knowledge.

I have thought about dynamically copying and tagging lambdas at runtime, but that would involve quite a bit of work and perhaps a performance cost. Is there a standard solution to this kind of problem?
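For what it's worth, the tension seems related to the usual HM distinction between let-bound variables (generalized, so usable at many types) and lambda-bound variables (given one monomorphic type). A minimal OCaml sketch of that distinction, purely as an analogy and not your language:

```ocaml
(* A let-bound identity is generalized to 'a -> 'a and can be used at two types. *)
let ok =
  let id = fun x -> x in
  (id 1, id true)

(* A lambda-bound function gets a single monomorphic type, so the analogous
   term is rejected by HM inference:

   let bad = (fun f -> (f 1, f true)) (fun x -> x)
*)
```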

formal languages – If you have a smallest grammar approximation, do you immediately have a CFG inference algorithm?

The smallest grammar problem is to find the smallest CFG that generates exactly one given string. So, given a finite list of sample strings that are all known to belong to the language of some CFG, can we use the (approximate) smallest grammars of each respective sample to compute an approximate CFG for the language?

I want to create a parser generator that automatically infers a grammar from some examples of a programming language.
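For concreteness, a toy instance of the smallest grammar problem as I understand it (the string and grammar are made up): the single sample $w = \mathtt{abcabcabc}$ admits the compact grammar

$$
S \to AAA, \qquad A \to \mathtt{abc},
$$

whose language is exactly $\{w\}$; the question is whether combining such per-sample grammars yields a useful approximation of a grammar for the whole language.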