Total probability in statistics is a fundamental rule connecting marginal probabilities with conditional probabilities. Compound probability represents the likelihood of two or more events occurring together: the probability of the first event multiplied by the probability of the second. This article explains what total and compound probabilities are, discusses the total probability theorem, and presents the laws and theorems of total and compound probability.
Total Probability Theorem
The Total Probability Theorem is very useful in decision tree algorithms. A decision tree is a convenient and simple visualization tool that shows the working of the total probability theorem in clear, distinct patterns. A decision tree built with the help of the Total Probability Theorem can also be used to calculate conditional probabilities.
For example, suppose there are events A, B, and C, where B and C partition the sample space and A intersects both of them. We do not know the probability of A directly, but we do know the conditions under which B and C occur, that is, P(B), P(C), P(A | B), and P(A | C). The total probability theorem then lets us calculate the probability of A.
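The A, B, C scenario above can be sketched numerically. The probabilities below are assumed purely for illustration; they are not from the article:

```python
# Hypothetical numbers: B and C partition the sample space.
p_B, p_C = 0.6, 0.4            # P(B) + P(C) = 1
p_A_given_B = 0.5              # P(A | B), assumed for illustration
p_A_given_C = 0.25             # P(A | C), assumed for illustration

# Total probability theorem: P(A) = P(A|B)P(B) + P(A|C)P(C)
p_A = p_A_given_B * p_B + p_A_given_C * p_C
print(p_A)  # 0.4
```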
Law of Total and Compound Probability
The law of total and compound probability is presented in the following sections. The law of total probability states that if {Bn : n = 1, 2, 3, …} is a finite (or countably infinite) partition of a sample space, and each Bn is measurable, then for any event A of the same probability space:

P(A) = Σn P(A | Bn) P(Bn) = Σn P(A ∩ Bn)
The sum in this equation is a weighted average, and its result is the marginal probability, written P(A). The law of total probability can also be applied to events generated by continuous random variables, with the sum replaced by an integral.
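The weighted-average form of the law lends itself to a short, general sketch. The function name and the example numbers below are assumptions for illustration:

```python
def total_probability(priors, conditionals):
    """P(A) = sum over n of P(A | B_n) * P(B_n) for a partition {B_n}.

    priors:       list of P(B_n); must sum to 1 (a partition)
    conditionals: list of P(A | B_n), in the same order
    """
    assert abs(sum(priors) - 1.0) < 1e-9, "priors must form a partition"
    return sum(c * p for p, c in zip(priors, conditionals))

# Illustrative three-block partition:
print(total_probability([0.2, 0.5, 0.3], [0.9, 0.4, 0.1]))  # 0.41
```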
The law of compound probability gives the likelihood of two or more events occurring together. For independent events it is the probability of the first event multiplied by the probability of the second; for dependent events, the second factor is a conditional probability.
Independent Events in Compound Probability | Dependent Events in Compound Probability |
The compound probability of independent events is obtained by multiplying the individual probabilities of the events | The likelihood of dependent events is calculated using conditional probability, which is done through Bayes' Theorem |
Independent events are calculated using compound probability alone | Dependent events are calculated through a combination of conditional and compound probability |
Table 1 shows the difference between independent and dependent events in compound probability
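The two columns of Table 1 can be illustrated with a short sketch. The coin, die, and card numbers are standard textbook figures, used here only as examples:

```python
# Independent events: P(A and B) = P(A) * P(B)
p_heads = 0.5                    # fair coin
p_six = 1 / 6                    # fair die
p_heads_and_six = p_heads * p_six
print(round(p_heads_and_six, 4))

# Dependent events: P(A and B) = P(A) * P(B | A)
# e.g., drawing two aces from a 52-card deck without replacement:
p_first_ace = 4 / 52
p_second_ace_given_first = 3 / 51   # one ace and one card are gone
p_two_aces = p_first_ace * p_second_ace_given_first
print(round(p_two_aces, 5))
```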
Bayes’ Theorem
According to Bayes' Theorem, the conditional probability of A given that B has occurred is P(A | B) = P(B | A) P(A) / P(B). For a discrete random variable, each value is multiplied by its corresponding probability; the set of values may be either finite or infinite.
xi | 0 | 1 | 2 |
pi | 1/4 | 2/4 i.e., 1/2 | 1/4 |
Table 2 shows the probability distribution of a discrete random variable with values xi and probabilities pi
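Bayes' Theorem and the distribution in Table 2 can both be checked in a few lines. The inputs to the `bayes` function are assumed numbers, used only to show the formula in action:

```python
# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
def bayes(p_b_given_a, p_a, p_b):
    return p_b_given_a * p_a / p_b

# Illustrative (assumed) probabilities:
print(bayes(p_b_given_a=0.8, p_a=0.3, p_b=0.5))  # 0.48

# The distribution from Table 2: P(X = xi) = pi
xs = [0, 1, 2]
ps = [1/4, 1/2, 1/4]
assert abs(sum(ps) - 1.0) < 1e-12   # a valid distribution sums to 1

# Expected value: E[X] = sum of xi * pi
expected = sum(x * p for x, p in zip(xs, ps))
print(expected)  # 1.0
```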
Theorems of Total and Compound Probability
From the law of total and compound probability, the corresponding theorems can be understood. As noted above, the total probability theorem is especially useful in decision tree algorithms, where the tree lays out the partition of the sample space and the conditional probabilities along each branch in simple but distinct patterns.
Conclusion
The law of total and compound probability, as well as the total probability theorem, is important because it helps in calculating the likelihood of an event occurring. It gives insight into patterns and sequences that can be used to feed data into algorithms such as decision trees and random forests, which are employed in various interdisciplinary fields. It also helps in condensing data and facilitating comparison.