Probability Theory Introduction (in Hindi)
Probability theory: experiment, sample space and event.


Saumya Singh
Qualified NET 2018 and 2019 ☆ Pursuing PhD ☆ Follow me for courses on UGC NET ☆ 2 years teaching experience ☆ YouTuber @ Sisters Academy

Hello ma'am, is this course helpful for commerce aspirants too?
Saumya Singh
4 months ago
Yes, Chandni.
Chandni sharma
4 months ago
Thanks for the reply, ma'am :)
Ma'am, please cover all the management topics; you are a good explainer.
Saumya Singh
7 months ago
Yes, I am trying my best to cover each and every topic.
Ankush Rawal
7 months ago
Thank you so very much, ma'am.
  1. Probability for Management NTA UGC NET


  2. Introduction to Probability Theory The statistician is basically concerned with drawing conclusions (or inferences) from experiments involving uncertainties. For these conclusions and inferences to be reasonably accurate, an understanding of probability theory is essential. In this section, we shall develop the concept of probability with equally likely outcomes.


  3. Experiment, Sample Space and Event Experiment: This is any process of observation or procedure that: (1) Can be repeated (theoretically) an infinite number of times; and (2) Has a well-defined set of possible outcomes. Sample space: This is the set of all possible outcomes of an experiment. Event: This is a subset of the sample space of an experiment.


  4. Experiment 1: Tossing a coin. Sample space: S = {Head, Tail}, or we could write S = {0, 1}, where 0 represents a tail and 1 represents a head.


  5. Experiment 2: Tossing a coin twice. Sample space: S = {HH, TT, HT, TH}, where H represents head and T represents tail. Some possible events: E1 = {a head occurs}, E2 = {a tail occurs}, E3 = {all heads}.


  6. Experiment 3: Throwing a die. Sample space: S = {1, 2, 3, 4, 5, 6} or S = {even, odd}. Some events: even numbers, E1 = {2, 4, 6}; odd numbers, E2 = {1, 3, 5}; the number 1, E3 = {1}; at least 3, E4 = {3, 4, 5, 6}.


  7. Experiment 4: Defective items. Two items are picked, one at a time, at random from a manufacturing process, and each item is inspected and classified as defective or non-defective. Sample space: S = {NN, ND, DN, DD}, where N = non-defective and D = defective. Some events: E1 = {only one item is defective} = {ND, DN}; E2 = {both are non-defective} = {NN}.
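These sample spaces are small enough to enumerate directly. The sketch below is a minimal illustration in Python (the variable names are my own, not from the slides); it builds the sample spaces of Experiments 2 and 4 and represents a few of the listed events as subsets.

    from itertools import product

    # Experiment 2: tossing a coin twice
    S_coins = set(product("HT", repeat=2))        # {('H','H'), ('H','T'), ('T','H'), ('T','T')}
    E_all_heads = {("H", "H")}                    # event E3 = {all heads}

    # Experiment 4: two inspected items, each Non-defective (N) or Defective (D)
    S_items = set(product("ND", repeat=2))        # {('N','N'), ('N','D'), ('D','N'), ('D','D')}
    E_one_defective = {("N", "D"), ("D", "N")}    # E1 = {only one item is defective}
    E_both_good = {("N", "N")}                    # E2 = {both are non-defective}

    # every event is a subset of its experiment's sample space
    assert E_all_heads <= S_coins
    assert E_one_defective <= S_items and E_both_good <= S_items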


  8. Probability of an Event Definition of a Probability: Suppose an event E can happen in r ways out of a total of n possible equally likely ways. Then the probability of occurrence of the event (called its success) is denoted by P(E) = r/n.


  9. The probability of non-occurrence of the event (called its failure) is denoted by P(Ē) = (n − r)/n.


  10. Notice the bar above the E, indicating the event does not occur. Thus, P(E) + P(Ē) = 1.
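As a quick worked illustration (my own example, not from the slides): if a die is thrown and E is the event "an even number", then there are r = 3 favourable outcomes out of n = 6 possible outcomes, so P(E) = 3/6 and P(Ē) = (6 − 3)/6, which sum to 1.

    r, n = 3, 6                   # E = "an even number" on one throw of a die
    p_success = r / n             # P(E)     = r/n       = 0.5
    p_failure = (n - r) / n       # P(E-bar) = (n - r)/n = 0.5
    assert p_success + p_failure == 1.0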


  11. In words, this means that the sum of the probabilities of an event occurring and of it not occurring in any experiment is 1.


  12. Definition of Probability using Sample Spaces When an experiment is performed, we set up a sample space of all possible outcomes. In a sample space of N equally likely outcomes we assign a chance (or weight) of 1/N to each outcome. We define the probability of an event for such a sample space as follows: The probability of an event E is defined as the number of outcomes favourable to E divided by the total number of equally likely outcomes in the sample space S of the experiment.
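This counting definition translates directly into a small helper. A minimal sketch (the function name prob is my own choice for illustration), using Experiment 3's die:

    def prob(event, sample_space):
        """P(E) = (number of outcomes favourable to E) / (total equally likely outcomes in S)."""
        return len(event & sample_space) / len(sample_space)

    S = {1, 2, 3, 4, 5, 6}            # throwing a die (Experiment 3)
    print(prob({2, 4, 6}, S))         # E1 = even numbers -> 0.5
    print(prob({1}, S))               # E3 = the number 1 -> 0.1666...
    print(prob({3, 4, 5, 6}, S))      # E4 = at least 3   -> 0.6666...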


  13. Properties of Probability (a) 0 ≤ P(event) ≤ 1. In words, this means that the probability of an event must be a number between 0 and 1 (inclusive). (b) P(impossible event) = 0. In words: The probability of an impossible event is 0. (c) P(certain event) = 1. In words: The probability of an absolutely certain event is 1.


  14. Independent and Dependent Events If the occurrence or non-occurrence of E1 does not affect the probability of occurrence of E2, then E1 and E2 are said to be independent events. Otherwise they are said to be dependent events.
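A quick illustrative check (my own example, not from the slides): in the two-coin-toss experiment, "head on the first toss" and "head on the second toss" are independent, since the probability that both occur equals the product of their individual probabilities.

    from itertools import product
    from fractions import Fraction

    S = set(product("HT", repeat=2))        # tossing a coin twice

    def P(E):
        return Fraction(len(E & S), len(S))

    E1 = {s for s in S if s[0] == "H"}      # head on the first toss
    E2 = {s for s in S if s[1] == "H"}      # head on the second toss
    assert P(E1 & E2) == P(E1) * P(E2)      # independence: P(E1 and E2) = P(E1) * P(E2)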


  15. Mutually Exclusive Events Two or more events are said to be mutually exclusive if the occurrence of any one of them means the others will not occur (that is, we cannot have 2 or more such events occurring at the same time). For example, if we throw a 6-sided die, the events "4" and "5" are mutually exclusive. We cannot get both 4 and 5 at the same time when we throw one die. If E1 and E2 are mutually exclusive events, then E1 and E2 will not happen together, so the probability of the two events occurring together is zero: P(E1 and E2) = 0.
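For the die example this can be checked by counting (a small sketch of my own):

    S = {1, 2, 3, 4, 5, 6}
    E1, E2 = {4}, {5}
    assert E1 & E2 == set()           # no common outcome: mutually exclusive
    print(len(E1 & E2) / len(S))      # P(E1 and E2) = 0.0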


  16. Now, suppose "E1 or E2" denotes the event that either E1 or E2 or both occur. Then (a) if E1 and E2 are not mutually exclusive events: P(E1 or E2) = P(E1) + P(E2) − P(E1 and E2). We can also write: P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2).


  17. A diagram for this situation is as follows. We see that there is some overlap between the events E1 and E2. The probability of that overlap portion is P(E1 ∩ E2). An example of non-mutually exclusive events could be: E1 = students in the swimming team, E2 = students in the debating team. In this case, the yellow area represents students in the swimming team only, and the darker green area represents students in the debating team only. The light green overlap area represents the students in both the swimming team and the debating team.


  18. In this case, the intersection E1 ∩ E2 is empty, leading to the conclusion: P(E1 ∩ E2) = 0. This explains why, for the mutually exclusive case, P(E1 or E2) = P(E1) + P(E2).
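Both cases of the addition rule can be verified by counting outcomes. A small sketch (the particular die events are my own choice for illustration):

    from fractions import Fraction

    S = {1, 2, 3, 4, 5, 6}

    def P(E):                                   # equally likely outcomes of one die throw
        return Fraction(len(E), len(S))

    # Not mutually exclusive: E1 = even numbers, E2 = at least 3 (they overlap at 4 and 6)
    E1, E2 = {2, 4, 6}, {3, 4, 5, 6}
    assert P(E1 | E2) == P(E1) + P(E2) - P(E1 & E2)

    # Mutually exclusive: E1 = {4}, E2 = {5}, so P(E1 and E2) = 0
    E1, E2 = {4}, {5}
    assert P(E1 | E2) == P(E1) + P(E2)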


  19. Bayes' Theorem Let E1 and E2 be two mutually exclusive events forming a partition of the sample space S, and let E be any event of the sample space such that P(E) ≠ 0. Then P(E1 | E) = P(E1) P(E | E1) / [P(E1) P(E | E1) + P(E2) P(E | E2)].
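The theorem can be applied directly once the prior probabilities and the conditional probabilities of E are known. A minimal sketch (the numbers are illustrative, not from the slides):

    # Partition of S: P(E1) + P(E2) = 1
    p_E1, p_E2 = 0.4, 0.6                     # prior probabilities of E1 and E2
    p_E_given_E1, p_E_given_E2 = 0.8, 0.3     # P(E | E1) and P(E | E2)

    p_E = p_E1 * p_E_given_E1 + p_E2 * p_E_given_E2     # total probability of E
    p_E1_given_E = p_E1 * p_E_given_E1 / p_E            # Bayes' theorem: P(E1 | E)
    print(round(p_E1_given_E, 2))                       # 0.64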


  20. Definitions A random variable is a variable whose value is determined by the outcome of a random experiment. A discrete random variable is one whose set of assumed values is countable (arises from counting). A continuous random variable is one whose set of assumed values is uncountable (arises from measurement).


  21. The Binomial Probability Distribution A binomial experiment is one that possesses the following properties: 1. The experiment consists of n repeated trials; 2. Each trial results in an outcome that may be classified as a success or a failure (hence the name, binomial); 3. The probability of a success, denoted by p, remains constant from trial to trial, and repeated trials are independent. The number of successes X in n trials of a binomial experiment is called a binomial random variable.


  22. The probability distribution of the random variable X is called a binomial distribution, and is given by the formula: P(X = x) = C(n, x) p^x q^(n − x), where n = the number of trials, x = the number of successes, p = the probability of success in a single trial, q = 1 − p = the probability of failure in a single trial, and C(n, x) is a combination. P(X = x) gives the probability of exactly x successes in n binomial trials.
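The formula can be evaluated directly once a combination function is available. A minimal sketch (math.comb supplies C(n, x); the example values of n, p and x are illustrative):

    from math import comb

    def binomial_pmf(x, n, p):
        """P(X = x) = C(n, x) * p^x * q^(n - x), with q = 1 - p."""
        q = 1 - p
        return comb(n, x) * p**x * q**(n - x)

    # e.g. probability of exactly 2 successes in 5 trials with p = 0.3
    print(round(binomial_pmf(2, 5, 0.3), 4))      # 0.3087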


  23. Mean and Variance of Binomial Distribution If p is the probability of success and q is the probability of failure in a binomial trial, then the expected number of successes in n trials (i.e. the mean value of the binomial distribution) is μ = np. The variance of the binomial distribution is σ² = npq. Note: In a binomial distribution, only 2 parameters, namely n and p, are needed to determine the probability.
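These two results can be checked numerically from the distribution itself. A short sketch (the values n = 5, p = 0.3 are my own illustration):

    from math import comb

    n, p = 5, 0.3
    q = 1 - p
    pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

    mean = sum(x * pmf[x] for x in range(n + 1))
    var = sum((x - mean)**2 * pmf[x] for x in range(n + 1))
    print(round(mean, 6), round(var, 6))      # 1.5 1.05, i.e. np and npq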


  24. The Poisson Probability Distribution The Poisson Distribution was developed by the French mathematician Simeon Denis Poisson in 1837. The Poisson random variable satisfies the following conditions: The number of successes in two disjoint time intervals is independent. The probability of a success during a small time interval is proportional to the entire length of the time interval. Apart from disjoint time intervals, the Poisson random variable also applies to disjoint regions of space.


  25. The probability distribution of a Poisson random variable X representing the number of successes occurring in a given time interval or a specified region of space is given by the formula: P(X = x) = e^(−μ) μ^x / x!, where x = 0, 1, 2, 3, ...; e ≈ 2.71828 (but use your calculator's e button); and μ = mean number of successes in the given time interval or region of space.
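A minimal sketch of this formula in code (the value of μ and the chosen x are illustrative):

    from math import exp, factorial

    def poisson_pmf(x, mu):
        """P(X = x) = e^(-mu) * mu^x / x!"""
        return exp(-mu) * mu**x / factorial(x)

    # e.g. mean of 2 successes per interval: probability of exactly 3 successes
    print(round(poisson_pmf(3, 2), 4))        # 0.1804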


  26. Mean and Variance of Poisson Distribution If μ is the average number of successes occurring in a given time interval or region in the Poisson distribution, then the mean and the variance of the Poisson distribution are both equal to μ. Note: In a Poisson distribution, only one parameter, μ, is needed to determine the probability of an event.


  27. The Standard Normal Distribution It makes life a lot easier for us if we standardize our normal curve, with a mean of zero and a standard deviation of 1 unit. If we have the standardized situation of μ = 0 and σ = 1, then we have the standard normal curve: f(z) = (1/√(2π)) e^(−z²/2). We can transform all the observations of any normal random variable X with mean μ and variance σ² to a new set of observations of another normal random variable Z with mean 0 and variance 1 using the transformation: z = (x − μ)/σ.
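A minimal sketch of the standardization step, using Python's statistics.NormalDist for the standard normal curve (the mean and standard deviation below are illustrative):

    from statistics import NormalDist

    mu, sigma = 100, 15                  # an illustrative normal random variable X
    x = 130
    z = (x - mu) / sigma                 # z = (x - mu) / sigma
    print(z)                             # 2.0

    std_normal = NormalDist(mu=0, sigma=1)
    print(round(std_normal.pdf(z), 4))   # f(z) at z = 2  -> 0.054
    print(round(std_normal.cdf(z), 4))   # P(Z <= 2)      -> 0.9772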