Conditional probability is the likelihood of an event occurring given that one or more related events have occurred. Bayes’ theorem is a formula for calculating conditional probability. For example, suppose we want to find the probability of finding a parking spot at a conference. The time of day, the location and the conferences taking place at that time will all affect the chances of finding a spot. In a nutshell, Bayes’ theorem uses the result of a test to compute the actual probability of the underlying event.
“Events” are not the same as “tests.” For example, tests for liver disease exist, but a positive test is distinct from the event of actually having liver disease.
Tests have flaws: a positive result does not guarantee that the event has occurred. Many tests have a high rate of false positives, and the rarer an event is, the more likely a positive result is to be a false positive. This is not only about medical examinations; spam filtering, for example, also produces a high proportion of false positives. Bayes’ theorem calculates the actual probability of the event given the test result.
Understanding Bayes’ theorem
The theorem has many applications that are not confined to finance. For example, Bayes’ theorem can estimate the accuracy of medical test results by considering how probable any specific person is to have the condition and the test’s overall accuracy. Bayes’ theorem combines prior probability distributions with new evidence to derive posterior probabilities. In Bayesian statistical inference, the prior probability is the likelihood of an event before new data is obtained; it is the best rational assessment of the probability of the event based on current knowledge. The posterior probability is the revised probability of the event after the additional information has been taken into account. Using Bayes’ theorem, the posterior probability is calculated by updating the prior probability. In statistical terminology, the posterior probability is the likelihood of event A occurring given that event B has occurred.
Bayes’ theorem calculates the likelihood of an event based on information that is, or could be, related to it. The formula can also show how hypothetical new information would affect the probability of an event, assuming the new information turns out to be true. For example, consider a single card picked from a deck of 52. Since the deck contains four kings, the probability that the card is a king is four divided by 52, or 1/13, around 7.69 percent. Now suppose the chosen card turns out to be a face card. Because there are 12 face cards in a deck, the probability that the picked card is a king becomes four divided by 12, or roughly 33.3 percent.
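The same update can be written out with Bayes’ theorem directly. The short Python sketch below re-derives the card example; the variable names are our own illustration:

    from fractions import Fraction

    # Prior: probability that a card drawn from a 52-card deck is a king.
    p_king = Fraction(4, 52)

    # Likelihood: every king is a face card, so P(face | king) = 1.
    p_face_given_king = Fraction(1)

    # Evidence: 12 of the 52 cards are face cards.
    p_face = Fraction(12, 52)

    # Bayes' theorem: P(king | face) = P(face | king) * P(king) / P(face).
    p_king_given_face = p_face_given_king * p_king / p_face
    print(p_king_given_face)         # 1/3
    print(float(p_king_given_face))  # about 0.333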
The Bayes’ theorem formula
Bayes’ theorem (sometimes called Bayes’ Rule) is a straightforward method for calculating conditional probability. It is named after the English mathematician Thomas Bayes (1701–1761). The rule’s formal statement is as follows:
P(A|B) × P(B) = P(B|A) × P(A)

P(A|B) = P(A∩B) / P(B)

P(B|A) = P(A∩B) / P(A)

Rearranging the first identity gives the usual statement of the theorem:

P(A|B) = P(B|A) × P(A) / P(B)
In most circumstances, we need to identify the “test” and the “event” before entering values into the equation. For two events A and B, Bayes’ theorem relates p(A|B) and p(B|A).
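As a minimal sketch, the formula translates directly into code; the helper function below is our own illustration rather than a standard API:

    def bayes_posterior(p_b_given_a: float, p_a: float, p_b: float) -> float:
        """Return P(A|B) = P(B|A) * P(A) / P(B)."""
        if p_b == 0:
            raise ValueError("P(B) must be non-zero")
        return p_b_given_a * p_a / p_b

    # Card example from above: A = "king", B = "face card".
    print(bayes_posterior(p_b_given_a=1.0, p_a=4/52, p_b=12/52))  # ~0.3333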
Terms associated with the Bayes’ theorem
Let us go over the definitions of a few terms associated with Bayes’ theorem.
Conditional probability: The probability of an event A occurring given that another event B has occurred. P(A|B) denotes the likelihood of A occurring given that event B has already occurred.
Random variables: A random variable is a real-valued variable whose value is determined by chance.
Experimental probability: The probability of an event estimated from repeated trials, i.e., the ratio of the number of times the event occurs to the total number of trials.
Posterior probability: The probability of an event estimated after all relevant evidence has been taken into account. It is the conditional probability of the event given the observed data.
Prior probability: The probability of an event estimated before additional information is taken into account. Before the experiment, it is the probability assigned to a particular outcome based on current knowledge.
Bayes’ theorem examples
Consider a drug test that is 98% accurate, meaning it produces a true positive result for drug users 98% of the time and a true negative result for non-users 98% of the time. Assume that 0.5% of the population uses the drug. If a randomly chosen person tests positive, the calculation below gives the probability that the person is actually a drug user.
(0.98 × 0.005) / [(0.98 × 0.005) + ((1 − 0.98) × (1 − 0.005))] = 0.0049 / (0.0049 + 0.0199) = 19.76%
Even if a person tests positive in this circumstance, Bayes’ theorem shows that the person is considerably more likely not to be a drug user.
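The same arithmetic can be checked in a few lines of Python; the variable names here are illustrative:

    # Known quantities from the example above.
    p_user = 0.005              # P(user): prevalence of drug use
    p_pos_given_user = 0.98     # P(positive | user): true positive rate
    p_neg_given_nonuser = 0.98  # P(negative | non-user): true negative rate

    # Law of total probability: overall chance of a positive test.
    p_pos = (p_pos_given_user * p_user
             + (1 - p_neg_given_nonuser) * (1 - p_user))

    # Bayes' theorem: P(user | positive).
    p_user_given_pos = p_pos_given_user * p_user / p_pos
    print(f"{p_user_given_pos:.2%}")  # 19.76%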
Applications of the Bayes’ theorem
Bayesian inference, a specific approach to statistical inference, is one of many applications of Bayes’ theorem. Bayesian inference has been used in medicine, science, philosophy, engineering, sports and law. By evaluating how probable each specific person is to have an illness together with the test’s overall accuracy, we may apply Bayes’ theorem to determine the accuracy of medical test results. Bayes’ theorem relies on combining prior probability distributions with observed data to derive posterior probabilities; in Bayesian statistical reasoning, the prior probability is an event’s probability before additional data is collected.
Conclusion
In machine learning, Bayes’ theorem provides a method for calculating the probability of a hypothesis given observed data by exploiting the relationship between the data and the hypothesis. Working through data science classification problems with the Naive Bayes classifier is also a good first step toward understanding the notions of true positives, false positives, true negatives and false negatives.
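As a closing illustration, here is a minimal sketch of such a classification run, assuming scikit-learn is available; the toy data set is invented purely for demonstration:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import confusion_matrix

    # Toy data: two features per sample, binary labels (invented for illustration).
    X = np.array([[1.0, 2.1], [1.2, 1.9], [0.9, 2.0],
                  [3.0, 0.5], [3.2, 0.4], [2.9, 0.6]])
    y = np.array([0, 0, 0, 1, 1, 1])

    # Gaussian Naive Bayes applies Bayes' theorem with the "naive"
    # assumption that features are independent given the class.
    model = GaussianNB()
    model.fit(X, y)
    pred = model.predict(X)

    # The confusion matrix exposes the four notions named above.
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"TP={tp} FP={fp} TN={tn} FN={fn}")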