The posterior probability is the probability that an event will occur after all evidence or background information has been taken into account. It is closely related to the prior probability, which is the probability that an event will occur before any new evidence is taken into account. In this article, we will discuss the posterior probability in detail.
Definition of Posterior Probability
A posterior probability is the updated probability of some event occurring after accounting for new data. For instance, we might be interested in finding the probability of some event “A” occurring after we account for some event “B” that has just taken place.
The formula for calculating posterior probability
We can calculate this posterior probability using the following formula:
P(A|B) = P(A) * P(B|A) / P(B)
Where,
P(A|B) = the probability of event A happening, given that event B has occurred. Note that “|” refers to “given.”
P(A) = the probability of event A occurring.
P(B) = the probability of event B occurring.
P(B|A) = the probability of event B happening, given that event A has happened.
OR
Posterior probability is proportional to the prior probability multiplied by the likelihood (which captures the new evidence).
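The formula translates directly into a few lines of code. The snippet below is a minimal sketch in Python; the function name and argument names are chosen here for illustration and do not come from any particular library.

```python
def bayes_posterior(p_a, p_b_given_a, p_b):
    """Return the posterior P(A|B) = P(A) * P(B|A) / P(B)."""
    if p_b == 0:
        raise ValueError("P(B) must be non-zero to condition on event B.")
    return p_a * p_b_given_a / p_b

# Example with made-up numbers: prior P(A) = 0.4, likelihood P(B|A) = 0.5, evidence P(B) = 0.8
print(bayes_posterior(0.4, 0.5, 0.8))  # 0.25
```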
Examples of posterior probability
Suppose a college has 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; all boys wear trousers. An observer sees a (random) student from a distance; all the observer can tell is that this student is wearing trousers. What is the probability that this student is a girl? The answer can be computed using Bayes’ theorem.
Let the event G be that the student observed is a girl, and the event T be that the student observed is wearing trousers. To compute the posterior probability P(G|T), we first need to know:
P(G), or the probability that the student is a girl irrespective of any other information. Because the observer sees a random student, all students have the same probability of being observed, and since the proportion of girls among the students is 40%, this probability equals 0.4.
P(B), or the probability that the student is not a girl (i.e. is a boy) irrespective of any other information (B is the complementary event to G). This is 60%, or 0.6.
P(T|G), or the probability of the student wearing trousers given that the student is a girl. Since girls are as likely to wear skirts as trousers, this is 0.5.
P(T|B), or the probability of the student wearing trousers given that the student is a boy. This is given as 1.
P(T), or the probability of a (randomly selected) student wearing trousers irrespective of any other information. Since P(T) = P(T|G)P(G) + P(T|B)P(B) (by the law of total probability), this is P(T) = 0.5 × 0.4 + 1 × 0.6 = 0.8.
Given all this information, the posterior probability that the observed student is a girl, given that the student is wearing trousers, can be computed by substituting these values into the formula:
P(G|T) = P(T|G) P(G) / P(T) = (0.5 × 0.4) / 0.8 = 0.25
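The same arithmetic can be checked with a short, self-contained Python snippet (the variable names here are purely illustrative):

```python
# Quantities from the trousers example
p_g = 0.4          # P(G): prior probability the student is a girl
p_b = 0.6          # P(B): prior probability the student is a boy
p_t_given_g = 0.5  # P(T|G): girls wear trousers half the time
p_t_given_b = 1.0  # P(T|B): all boys wear trousers

# Law of total probability: P(T) = P(T|G)P(G) + P(T|B)P(B)
p_t = p_t_given_g * p_g + p_t_given_b * p_b  # 0.8

# Bayes' theorem: P(G|T) = P(T|G) * P(G) / P(T)
print(p_t_given_g * p_g / p_t)  # 0.25
```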
An intuitive way to solve this is to assume the college has N students. The number of boys = 0.6N and the number of girls = 0.4N. If N is sufficiently large, the total number of trouser wearers = 0.6N + 50% of 0.4N = 0.8N, and the number of girl trouser wearers = 50% of 0.4N = 0.2N. Therefore, within the population of trouser wearers, girls make up (50% of 0.4N) / (0.6N + 50% of 0.4N) = 25%. In other words, if you separated out the group of trouser wearers, a quarter of that group would be girls. Consequently, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students of which 25% are girls, and by definition the probability of this random student being a girl is 25%. Every Bayes’ theorem problem can be solved in this manner.
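The counting argument can also be checked with a quick simulation. The sketch below simply draws random students according to the proportions in the example (the sample size of 100,000 is an arbitrary choice, large enough for the frequencies to settle) and reports the fraction of trouser wearers who turn out to be girls.

```python
import random

random.seed(0)
trouser_wearers = 0
girls_in_trousers = 0

for _ in range(100_000):
    is_girl = random.random() < 0.4                          # 40% of students are girls
    wears_trousers = (not is_girl) or random.random() < 0.5  # boys always, girls half the time
    if wears_trousers:
        trouser_wearers += 1
        if is_girl:
            girls_in_trousers += 1

print(girls_in_trousers / trouser_wearers)  # close to 0.25
```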
The posterior probability is important because it is used to compute point estimates and interval estimates for the parameters, as well as predictive inference for future data.
The computational challenge is that the joint distribution often cannot be marginalised, because the required integral is intractable. In that case, one approach is to draw Monte Carlo samples from the posterior distribution. There are multiple methods for approximating the posterior distribution; another common one is the variational inference method. The posterior distribution combines two different sources of information: the information available before observing the data (the prior) and the information provided by the data (the likelihood).
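As a concrete illustration of the sampling idea, the sketch below uses a conjugate Beta-Binomial model, where the posterior distribution is available in closed form, so Monte Carlo samples can be drawn from it directly; the prior parameters and the data (7 successes in 10 trials) are made up for this example. In harder problems, where the posterior cannot be written down exactly, the same kind of samples would instead come from Markov chain Monte Carlo or a variational approximation.

```python
import random

random.seed(0)

# Made-up example: Beta(2, 2) prior on a success probability, 7 successes in 10 trials.
prior_alpha, prior_beta = 2.0, 2.0
successes, trials = 7, 10

# Conjugacy: the posterior is Beta(prior_alpha + successes, prior_beta + failures).
post_alpha = prior_alpha + successes
post_beta = prior_beta + (trials - successes)

# Draw Monte Carlo samples from the posterior and summarise them.
samples = sorted(random.betavariate(post_alpha, post_beta) for _ in range(50_000))
point_estimate = sum(samples) / len(samples)              # posterior mean
credible_interval = (samples[int(0.025 * len(samples))],  # central 95% interval
                     samples[int(0.975 * len(samples))])

print(point_estimate, credible_interval)
```

The point estimate and interval estimate produced here correspond to the uses of the posterior distribution mentioned above.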
Conclusion
A posterior probability, in Bayesian statistics, is the revised or updated probability of an event occurring after new evidence has been taken into account. It is calculated by updating the prior probability using Bayes’ theorem. Here, we have discussed the formula for calculating the posterior probability and worked through an example for a better understanding of the topic, including the role of the law of total probability in the calculation.