Problem of Bayesian inference

Let $X, Y$ be independent random variables.
Let $R(\pi, X)$ denote the posterior distribution obtained from the prior distribution $\pi$ and the sample $X$. Show that $$R(\pi_0, (X, Y)) = R(R(\pi_0, X), Y).$$

My work so far:

First, I write:

$$R(\pi_0, (X, Y)) = \frac{P((X, Y) \mid \pi_0) \cdot P(\pi_0)}{P(X) P(Y)} = \frac{P(X \cap \pi_0)\, P(Y \cap \pi_0)\, P(\pi_0)}{P(X) P(Y)}.$$ I used the fact that $X, Y$ are independent, hence $P(X, Y) = P(X) P(Y)$.

Now the second expression:

$R(\pi_0, X) = \frac{P(X \mid \pi_0)\, P(\pi_0)}{P(X)}$

$$R(R(\pi_0, X), Y) = \frac{P\!\left(Y \,\middle|\, \frac{P(X \mid \pi_0) \cdot P(\pi_0)}{P(X)}\right) \cdot \frac{P(X \mid \pi_0) \cdot P(\pi_0)}{P(X)}}{P(Y)}.$$

So, if the equality is true, I have to show that $$\frac{P\!\left(Y \,\middle|\, \frac{P(X \mid \pi_0) \cdot P(\pi_0)}{P(X)}\right) \cdot \frac{P(X \mid \pi_0) \cdot P(\pi_0)}{P(X)}}{P(Y)} = \frac{P(X \cap \pi_0)\, P(Y \cap \pi_0)\, P(\pi_0)}{P(X) P(Y)} \Rightarrow$$

$$P\!\left(Y \,\middle|\, \frac{P(X \cap \pi_0)}{P(X)}\right) \cdot P(X \mid \pi_0) = P(X \cap \pi_0)\, P(Y \cap \pi_0).$$

So, the left-hand side of the equality is: $$P(Y \mid P(\pi_0 \mid X)) = P(Y)\, P(\pi_0 \mid X)\, P(X \mid \pi_0) = P(Y)\, P(\pi_0 \cap X).$$ But $P(Y) \neq P(Y \cap \pi_0)$.
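Before asking, I ran a small numeric sanity check (Python; the toy prior and likelihood numbers are my own), which suggests the identity itself does hold when $X$ and $Y$ are independent given the parameter:

```python
import numpy as np

# Toy model: a discrete prior over two parameter values; X and Y are
# independent given the parameter (numbers are made up).
prior = np.array([0.3, 0.7])   # pi_0 over theta in {0, 1}
lik_x = np.array([0.2, 0.9])   # P(X = x_obs | theta)
lik_y = np.array([0.6, 0.1])   # P(Y = y_obs | theta)

def posterior(pi, lik):
    """R(pi, data): Bayes update of the prior pi with the likelihood lik."""
    unnorm = pi * lik
    return unnorm / unnorm.sum()

joint_update = posterior(prior, lik_x * lik_y)            # R(pi_0, (X, Y))
sequential = posterior(posterior(prior, lik_x), lik_y)    # R(R(pi_0, X), Y)

print(joint_update, sequential)  # the two posteriors coincide
```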

Can you please show me the error in my justification?

discrete mathematics – Resolution using the laws of inference

So I have this question about rules of inference whose solution I do not quite understand. The question is:

(1) p∨q
(2) q → r
(3) p∧s → t
(4) ¬r
(5) ¬q → u∧s
Conclusion: t

The solution I had was:

(1) q → r (Premise 2)
(2) ¬r (premise 4)
(3) ¬q (MT of 1 & 2)
(4) p∨q (premise 1)
(5) ¬q (from 3)
(6) p (DS of 4 and 5)
(7) ¬q → u∧s (premise 5)
(8) ¬q (from 3)
(9) u∧s (MP of 7 and 8)
(10) s (conjunctive simplification)
(11) p (from 6)
(12) p∧s (conjunctive addition)
(13) p∧s → t (Premise 3)
(14) t (MP of 12 and 13)
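Just to convince myself that the conclusion really does follow, I wrote a brute-force truth-table check (Python; this is only my own sanity test, not part of the assignment):

```python
from itertools import product

# Enumerate all truth assignments and verify that whenever every premise
# holds, the conclusion t holds as well.
for p, q, r, s, t, u in product([False, True], repeat=6):
    premises = [
        p or q,                      # (1) p ∨ q
        (not q) or r,                # (2) q → r
        (not (p and s)) or t,        # (3) p ∧ s → t
        not r,                       # (4) ¬r
        q or (u and s),              # (5) ¬q → u ∧ s
    ]
    if all(premises):
        assert t  # never fails: the premises entail t
print("t follows from the premises in every model")
```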

I have two parts that muddle me:
1. Why is the value of (3) used twice, in (5) and (8)?
2. For (10), if I use the simplification rule, do I have to split u∧s into both u and s, or can I just take one of them and ignore the other?

Really sorry if this question is all over the place 🙁

First-order logic – Is it possible (and how) to do forward chaining (inference) in standard Prolog?

Forward chaining (inference) is a simple process that attempts to derive interesting consequences from a set of axioms / rules / premises. The difficult part is focusing on the deduction paths that yield results of any value; one can derive many uninteresting results along the many possible deduction paths, and that can lead to a waste of resources. The article http://www.isle.org/~langley/papers/icarus.rrl.ilp05.pdf uses a relational (symbolic) reinforcement learning method to learn the deduction paths that lead to an interesting set of consequences. So there is some theory on forward chaining.
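To show what I mean by forward chaining, here is a naive fixpoint loop over ground Horn rules (written in Python purely as an illustration; the facts and rules are made up):

```python
# Known facts and ground Horn rules (body, head): if every body atom is
# already known, the head can be derived.
facts = {"parent(tom, bob)", "parent(bob, ann)"}
rules = [
    ({"parent(tom, bob)", "parent(bob, ann)"}, "grandparent(tom, ann)"),
]

changed = True
while changed:            # repeat until no new consequence can be derived
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print(facts)  # now also contains grandparent(tom, ann)
```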

My question is: is it possible, and how, to do forward chaining in full first-order Prolog? Prolog is intended for backward chaining, but perhaps it is possible to also use it for forward chaining?

I read https://content.sciendo.com/view/journals/ausi/8/1/article-p41.xml and it may give rise to some ideas for an answer.

Statistical inference – A basic question about a randomized test involving Type I error.

I have a fundamental question in the context of statistical hypothesis testing, specifically randomized tests. Suppose that I have two actions (alternatives) concerning a certain unknown parameter $\theta \in \Theta$: the null hypothesis ($H_0$) and the alternative hypothesis ($H_1$).

In this case, the sample space is $(0,15) \subset \mathbb{R}$. We know that the critical function is given by
$$ \phi(x) = P(\text{reject } H_0 \mid x \text{ observed}) $$

I do not know exactly whether this definition really involves a conditional probability. Suppose I have the following critical function

$$
\phi(x) =
\begin{cases}
0, & x \in (0,2) \\
p, & x \in (2,10) \\
1, & x \in (10,15)
\end{cases}
$$

I can understand why

$$ P(\text{reject } H_0 \mid H_0 \text{ is true}) = 0 \times P(x \in (0,2)) + p \times P(x \in (2,10)) + 1 \times P(x \in (10,15)) $$

The right-hand side looks a lot like an expectation, but I cannot quite see why.
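Here is how far I get when I try to write it as an expectation (my own sketch, conditioning on the observed value $x$ and using the law of total probability under $H_0$):

$$ P(\text{reject } H_0 \mid H_0 \text{ is true}) = E_{H_0}\big[\, P(\text{reject } H_0 \mid X) \,\big] = E_{H_0}[\phi(X)] = \int \phi(x)\, f_{H_0}(x)\, dx, $$

which, for the piecewise-constant $\phi$ above, reduces exactly to the three-term sum. Is this the right way to read it?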

variational inference

source: http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf
[screenshot of the relevant passage from the paper]

  1. How to prove: $E_q[\log p(z \mid \theta)] = \sum_{n=1}^{N} \sum_{i=1}^{k} \phi_{ni} \left( \Psi(\gamma_i) - \Psi\big(\textstyle\sum_{j=1}^{k} \gamma_j\big) \right)$, and similarly for $E_q[\log p(w \mid z, \beta)]$ and $E_q[\log q(\theta)]$?

Note: I find $p(z \mid \theta) \sim \mathrm{Multinomial}(\theta)$, so $E(p(z^{(i)} \mid \theta)) = \theta^{(i)} \Rightarrow E_q[\log p(z \mid \theta)] = \log \theta^{(i)}$, which seems to be wrong. (Page 1019, Appendix A.3)
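My current guess at the missing step (only a sketch; I am taking the Dirichlet identity $E_q[\log \theta_i] = \Psi(\gamma_i) - \Psi(\sum_{j=1}^{k} \gamma_j)$ from the paper's appendix, and the fact that $E_q[z_n^i] = \phi_{ni}$ under the variational distribution):

$$ E_q[\log p(z \mid \theta)] = E_q\Big[\sum_{n=1}^{N}\sum_{i=1}^{k} z_n^i \log \theta_i\Big] = \sum_{n=1}^{N}\sum_{i=1}^{k} E_q[z_n^i]\, E_q[\log \theta_i] = \sum_{n=1}^{N}\sum_{i=1}^{k} \phi_{ni} \left( \Psi(\gamma_i) - \Psi\Big(\sum_{j=1}^{k} \gamma_j\Big) \right), $$

i.e. one takes the expectation of the log, not the log of the expectation, which seems to be where my attempt above goes astray. Is this correct?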

I have linked the paper and attached a screen capture of the relevant part. Please let me know if there is any other information I can provide. I look forward to your answer.

type inference – projection operator for constraints

I was looking at the HM(X) framework, Hindley-Milner parameterized by a constraint system X, and I had a hard time understanding what the projection operator $\exists \alpha$ does to a constraint. Also, how is this projection operator related to the boolean existential quantifier?
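To make the question concrete, my (possibly incorrect) reading of a small instance is that
$$ \exists \alpha.\, (\alpha = \mathsf{int} \,\wedge\, \beta = \alpha \to \gamma) \;\equiv\; \beta = \mathsf{int} \to \gamma, $$
i.e. projection hides the local variable $\alpha$ while keeping its effect on the free variables, much as the boolean $\exists x.\, \phi$ is $\phi[x \mapsto \mathrm{true}] \vee \phi[x \mapsto \mathrm{false}]$. Is that the right intuition?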

proof techniques – How to prove the time complexity of this simple problem of probabilistic inference on a Bayesian network?

Maybe a fairly trivial question, but I am trying to refresh my proof methods in CS …

Suppose we have a simple Bayesian network with two rows of nodes: $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$. Each node $x_k$ takes state 0 or 1 with equal probability. Each node $y_k$ takes state 1 with probability $p_k$ if $x_k$ is in state 1, and with probability $q_k$ if $x_k$ is in state 0.

Does it take exponential time to compute the probability that all the $y_k$ are 1, and if so, what is the appropriate CS-style proof of this?
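To make the setup concrete, here is a small sketch (Python; the numbers and names are my own) comparing a brute-force sum over all $2^n$ states of $x$ with the factorized product that the network structure seems to suggest:

```python
import itertools
from math import prod

# Toy instance of the two-row network described above (numbers are made up).
p = [0.9, 0.8, 0.7]   # P(y_k = 1 | x_k = 1)
q = [0.2, 0.1, 0.3]   # P(y_k = 1 | x_k = 0)
n = len(p)

# Brute force: marginalize over all 2^n joint states of (x_1, ..., x_n).
brute = sum(
    (0.5 ** n) * prod(p[k] if x[k] else q[k] for k in range(n))
    for x in itertools.product([0, 1], repeat=n)
)

# Factorized: each (x_k, y_k) pair is independent of the others, so
# P(all y_k = 1) = prod_k (0.5 * p_k + 0.5 * q_k).
factorized = prod(0.5 * (p[k] + q[k]) for k in range(n))

print(brute, factorized)  # the two agree on this toy instance
```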

Can we statically assign types to polymorphic lambdas using Hindley-Milner type inference?

I am playing with an implicitly typed functional language and have implemented type checking with a Hindley-Milner-style system. In order to guide code generation, I want to tag each term with its type during type inference.

Of course, my language uses lambda expressions, and the implicit typing should allow pleasant polymorphic use of these lambdas. Unfortunately, this goal conflicts with static type tagging, because each lambda can be tagged only once, which specializes it for a particular set of argument and closure types. It is also impossible to statically copy the lambdas for each use, as this requires dynamic knowledge.
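A tiny example of the conflict (written as Python just for illustration; my language's syntax differs):

```python
# A single polymorphic lambda value...
identity = lambda x: x

# ...used at two different monomorphic types in the same program.
a = identity(42)        # here the lambda would need the tag  int -> int
b = identity("hello")   # here it would need the tag  str -> str

# One static tag on `identity` cannot be both int -> int and str -> str,
# which is exactly the conflict described above.
```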

I thought about dynamically copying and tagging lambdas at runtime, but that would involve quite a bit of work and perhaps a slowdown. Is there a standard solution to this kind of problem?

formal languages – If you have a smallest-grammar approximation, do you immediately have a CFG inference algorithm?

The smallest grammar problem is to find the smallest CFG that generates a single given string. So, given a finite list of sample strings from a language, all known to be generated by some CFG, can we, using the (approximate) smallest grammars of each respective sample, compute an approximate CFG for the language?

I want to create a parser generator that automatically infers a grammar from examples of a programming language.