In general, Bayesian updating refers to the process of getting the posterior from a prior belief distribution. Alternatively, one can understand the term as using the posterior of the first step as the prior input for further calculation; that is what method b) below does.

To calculate $\Pr(F \mid HH)$, we a) continue using $P(\text{Fair}) = 0.5$:

$$\Pr(F \mid HH) = \frac{P(HH \mid F)\,P(F)}{P(HH)} \quad\quad (2)$$

$$P(HH \mid F) = \theta^{2}(1-\theta)^{0} = 0.5^{2}(1-0.5)^{0} = 0.25$$

$$P(HH) = P(HH \mid F) \cdot P(F) + P(HH \mid \text{Biased}) \cdot P(\text{Biased}) = (0.25 \cdot 0.5) + (1 \cdot 0.5) = 0.625$$

Hence, plugging into (2),

$$\Pr(F \mid HH) = \frac{0.25 \cdot 0.5}{0.625} = \frac{0.125}{0.625} = 0.2$$

Alternatively, what if we calculate $\Pr(F \mid HH)$ by b) using our updated belief $P(\text{Fair}) = 0.33$, which we got from $\Pr(F \mid H)$ in the first step? In this case,

$$P(HH \mid F) = \theta^{2}(1-\theta)^{0} = 0.33^{2}(1-0.33)^{0} = 0.1089$$

$$P(HH) = P(HH \mid F) \cdot P(F) + P(HH \mid \text{Biased}) \cdot P(\text{Biased}) = (0.1089 \cdot 0.33) + (1 \cdot 0.67) = 0.705937$$

Hence, plugging into (2),

$$\Pr(F \mid HH) = \frac{0.1089 \cdot 0.33}{0.705937} = \frac{0.035937}{0.705937} \approx 0.05091$$

The discrepancy comes from substituting the posterior $0.33$ not only for the prior but also for $\theta$ in the likelihood. $\theta$ is the fair coin's probability of heads, which is $0.5$ no matter what we currently believe about the coin. Applied correctly, method b) keeps the likelihood $P(H \mid F) = 0.5$, swaps in the new prior $P(F) = 1/3$, and updates on the second head alone:

$$\Pr(F \mid H_2) = \frac{0.5 \cdot \tfrac{1}{3}}{0.5 \cdot \tfrac{1}{3} + 1 \cdot \tfrac{2}{3}} = \frac{1/6}{5/6} = 0.2,$$

which agrees with method a), as sequential and batch updating must (a numeric check in code follows below).

Usually a "biased coin" just means a coin that is not fair, so it could have any bias; you should make it clear in your question that you are only considering the two possibilities that the coin is either perfectly fair or always comes up heads.

In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability". Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. By Bayes' theorem,

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$
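To make the batch-versus-sequential contrast concrete, here is a minimal Python sketch (not from the original post; the function `posterior_fair` and all variable names are illustrative). It computes method a) on both flips at once, method b) one flip at a time with the posterior reused as the prior, and, for comparison, the flawed variant that also substitutes the posterior for $\theta$ in the likelihood.

```python
# Minimal sketch: the coin is either fair (P(H) = 0.5) or biased so that it
# always lands heads (P(H) = 1.0). Names here are illustrative only.

def posterior_fair(prior_fair: float, flips: str) -> float:
    """Return P(Fair | flips) for a prior P(Fair) and a flip string like 'HH'."""
    # Likelihood of the data under each hypothesis; theta for the fair
    # coin stays 0.5 regardless of what we currently believe about it.
    like_fair = 1.0
    like_biased = 1.0
    for flip in flips:
        like_fair *= 0.5
        like_biased *= 1.0 if flip == "H" else 0.0  # biased coin never shows tails
    evidence = like_fair * prior_fair + like_biased * (1.0 - prior_fair)
    return like_fair * prior_fair / evidence

# Method a): original prior 0.5, both flips at once.
batch = posterior_fair(0.5, "HH")

# Method b): update on the first H, then reuse the posterior as the prior
# for the second H. Only the prior changes; the likelihood does not.
after_first = posterior_fair(0.5, "H")         # 1/3, i.e. the 0.33 above
sequential = posterior_fair(after_first, "H")

print(batch, sequential)  # both ~0.2 -- batch and sequential agree

# The flawed variant also plugs the posterior into the likelihood as theta:
theta = after_first
flawed = (theta**2 * after_first) / (theta**2 * after_first + 1.0 * (1.0 - after_first))
print(flawed)  # ~0.0526 (the post's 0.05091 comes from rounding 1/3 to 0.33)
```

The design point is that the likelihood is a fixed property of each hypothesis about the coin, so sequential updating only ever replaces the prior, and the two methods necessarily give the same answer.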
