Why Study Probability?
Mathematics is the logic of certainty; probability is the logic of uncertainty. Probability is extremely useful in a wide variety of fields, since it provides tools for understanding and explaining variation, separating signal from noise, and modeling complex phenomena. For example, probability is needed in:
- Statistics: Probability is the foundation and language for statistics, enabling many powerful methods for using data to learn about the world.
- Physics: Einstein famously said "God does not play dice with the universe," but the current understanding of quantum physics heavily involves probability at the most fundamental level of nature.
- Biology: Genetics is deeply intertwined with probability, both in the inheritance of genes and in modeling random mutations.
- Computer science: Probability also plays an essential role in studying randomized algorithms, machine learning, and artificial intelligence.
- Finance: Probability is central in quantitative finance. Modeling stock prices over time and determining "fair" prices for financial instruments are based heavily on probability.
- Political science: In recent years, political science has become more and more quantitative and statistical, e.g., in predicting and understanding election results.
- Medicine: The development of randomized clinical trials, in which patients are randomly assigned to receive treatment or placebo, has transformed medical research in recent years.
- Life: Life is uncertain, and probability is the logic of uncertainty. While it isn't practical to carry out a formal probability calculation for every decision made in life, thinking hard about probability can help us avert some common fallacies, shed light on coincidences, and make better predictions.
Sample Spaces and Pebble World
The mathematical framework for probability is built around sets. Imagine that an experiment is performed, resulting in one out of a set of possible outcomes. Before the experiment is performed, it is unknown which outcome will be the result; after, the result "crystallizes" into the actual outcome.
Definition: Sample Space and Event
The sample space $S$ of an experiment is the set of all possible outcomes of the experiment. An event $A$ is a subset of the sample space $S$, and we say that $A$ occurred if the actual outcome is in $A$.
The sample space of an experiment can be finite or infinite. When the sample space is finite, we can visualize it as Pebble World. Each pebble represents an outcome, and an event is a set of pebbles. Performing the experiment amounts to randomly selecting one pebble.
Set theory is very useful in probability, since it provides a rich language for expressing and working with events. Set operations, especially unions, intersections, and complements, make it easy to build new events in terms of already-defined events.
For example, let $S$ be the sample space of an experiment and let $A, B \subseteq S$ be events. Then the union $A \cup B$ is the event that occurs if and only if at least one of $A$ and $B$ occurs, the intersection $A \cap B$ is the event that occurs if and only if both $A$ and $B$ occur, and the complement $A^c$ is the event that occurs if and only if $A$ does not occur. We also have De Morgan's laws:
$$(A \cup B)^c = A^c \cap B^c \quad \text{and} \quad (A \cap B)^c = A^c \cup B^c,$$
since saying that it is not the case that at least one of $A$ and $B$ occur is the same as saying that $A$ does not occur and $B$ does not occur, and saying that it is not the case that both occur is the same as saying that at least one does not occur. Analogous results hold for unions and intersections of more than two events.
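De Morgan's laws are easy to verify numerically. Here is a minimal sketch using Python's built-in sets; the particular sample space and events are illustrative choices, not from the text:

```python
# Numerical check of De Morgan's laws with Python sets.
# S, A, B below are arbitrary illustrative choices.
S = set(range(1, 11))          # sample space: {1, ..., 10}
A = {1, 2, 3, 4}               # an event
B = {3, 4, 5, 6}               # another event

def complement(E):
    """Complement of event E relative to the sample space S."""
    return S - E

# (A ∪ B)^c = A^c ∩ B^c
assert complement(A | B) == complement(A) & complement(B)
# (A ∩ B)^c = A^c ∪ B^c
assert complement(A & B) == complement(A) | complement(B)
```

The same check passes for any choice of $A, B \subseteq S$, which is exactly what the laws assert.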
Example: Coin Flips
A coin is flipped 10 times. Writing Heads as $H$ and Tails as $T$, a possible outcome (pebble) is $HHHTHHTTHT$, and the sample space is the set of all possible strings of length 10 of $H$'s and $T$'s. We can (and will) encode $H$ as $1$ and $T$ as $0$, so that an outcome is a sequence $(s_1, \dots, s_{10})$ with $s_j \in \{0, 1\}$, and the sample space is the set of all such sequences. Now let's look at some events:
- Let $A_1$ be the event that the first flip is Heads. As a set,
$$A_1 = \{(1, s_2, \dots, s_{10}) : s_j \in \{0, 1\} \text{ for } 2 \le j \le 10\}.$$
This is a subset of the sample space, so it is indeed an event; saying that $A_1$ occurs is the same thing as saying that the first flip is Heads. Similarly, let $A_j$ be the event that the $j$th flip is Heads for $j = 2, 3, \dots, 10$.
- Let $B$ be the event that at least one flip was Heads. As a set,
$$B = \bigcup_{j=1}^{10} A_j.$$
- Let $C$ be the event that all the flips were Heads. As a set,
$$C = \bigcap_{j=1}^{10} A_j.$$
- Let $D$ be the event that there were at least two consecutive Heads. As a set,
$$D = \bigcup_{j=1}^{9} (A_j \cap A_{j+1}).$$
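Since the sample space here has only $2^{10} = 1024$ pebbles, these events can be checked by brute force. A small Python sketch, with Heads encoded as 1 and Tails as 0 as in the text:

```python
from itertools import product

# Enumerate the sample space of 10 coin flips (1 = Heads, 0 = Tails).
S = list(product([0, 1], repeat=10))
assert len(S) == 2 ** 10  # 1024 outcomes

# A[j]: the (j+1)th flip is Heads.
A = [{s for s in S if s[j] == 1} for j in range(10)]
B = set().union(*A)                       # at least one Heads: union of the A_j
C = set(S).intersection(*A)               # all Heads: intersection of the A_j
D = set().union(*(A[j] & A[j + 1] for j in range(9)))  # two consecutive Heads

assert len(B) == 2 ** 10 - 1   # only the all-Tails outcome is excluded
assert C == {(1,) * 10}        # only the all-Heads outcome
# 1024 minus the 144 strings with no two consecutive Heads (a Fibonacci count).
assert len(D) == 880
```

Each event is literally a set of pebbles, and the unions and intersections above mirror the set expressions in the text.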
A Set Theory Dictionary
To summarize, here is a short dictionary for translating between English descriptions of events and set operations:
- $A$ does not occur: $A^c$
- at least one of $A$ and $B$ occurs: $A \cup B$
- both $A$ and $B$ occur: $A \cap B$
- the actual outcome is in $A$: $A$ occurs
Naive Definition of Probability
Historically, the earliest definition of the probability of an event was to count the number of ways the event could happen and divide by the total number of possible outcomes for the experiment. We call this the naive definition since it is restrictive and relies on strong assumptions; nevertheless, it is important to understand, and useful when not misused.
Definition: Naive Definition of Probability
Let $A$ be an event for an experiment with a finite sample space $S$. The naive probability of $A$ is
$$P_{\text{naive}}(A) = \frac{|A|}{|S|} = \frac{\text{number of outcomes favorable to } A}{\text{total number of outcomes in } S},$$
where $|A|$ is the size (cardinality) of set $A$.
The naive definition is very restrictive in that it requires $S$ to be finite, with equal mass for each pebble. It has often been misapplied by people who assume equally likely outcomes without justification and make arguments to the effect of "either it will happen or it won't, and we don't know which, so it's 50-50". For example, if we don't know whether or not there is life on Saturn, should we conclude that it is 50-50? What about intelligent life on Saturn, which seems like it should be strictly less likely than there being any form of life on Saturn? But there are several important types of problems where the naive definition is applicable, such as when there is symmetry in the problem that makes the outcomes equally likely.
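In code, the naive definition is just a ratio of set sizes. A minimal sketch, using a fair die roll as an illustrative example (the die is an assumption, not from the text):

```python
from fractions import Fraction

# Naive definition: P_naive(A) = |A| / |S|, valid only when the sample space
# S is finite and all outcomes are equally likely (here: a fair die roll).
S = {1, 2, 3, 4, 5, 6}
A = {x for x in S if x % 2 == 0}   # event: the roll is even

p_naive = Fraction(len(A), len(S))
assert p_naive == Fraction(1, 2)
```

Using `Fraction` keeps the probability exact rather than a rounded float.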
How to Count
Multiplication Rule
In some problems, we can directly count the number of possibilities using a basic but versatile principle called the multiplication rule. We'll see that the multiplication rule leads naturally to counting rules for sampling with replacement and sampling without replacement, two scenarios that often arise in probability and statistics.
Theorem: Multiplication Rule
Consider a compound experiment consisting of two sub-experiments, Experiment A and Experiment B. Suppose that Experiment A has $a$ possible outcomes, and for each of those outcomes Experiment B has $b$ possible outcomes. Then the compound experiment has $ab$ possible outcomes.
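A quick sanity check of the multiplication rule, using a die roll and a coin flip as the two sub-experiments (these particular sub-experiments are illustrative choices):

```python
from itertools import product

# Experiment A: a die roll (a = 6 outcomes).
# Experiment B: a coin flip (b = 2 outcomes).
die = range(1, 7)
coin = ["H", "T"]

# The compound experiment: all (roll, flip) pairs.
compound = list(product(die, coin))
assert len(compound) == 6 * 2   # ab = 12 outcomes, as the rule predicts
```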
We can use the multiplication rule to arrive at formulas for sampling with and without replacement. Many experiments in probability and statistics can be interpreted in one of these two contexts, so it is appealing that both formulas follow directly from the same basic counting principle.
Theorem: Sampling with Replacement
Consider $n$ objects and making $k$ choices from them, one at a time with replacement (i.e., choosing a certain object does not preclude it from being chosen again). Then there are $n^k$ possible outcomes.
Theorem: Sampling without Replacement
Consider $n$ objects and making $k$ choices from them, one at a time without replacement (i.e., choosing a certain object precludes it from being chosen again). Then there are $n(n-1) \cdots (n-k+1)$ possible outcomes, for $k \le n$ (and $0$ possibilities for $k > n$).
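Both counting formulas can be verified by brute-force enumeration for small cases; the values of $n$ and $k$ below are arbitrary illustrative choices:

```python
from itertools import permutations, product

n, k = 5, 3
objects = range(n)

# Ordered samples with replacement, and without replacement.
with_repl = list(product(objects, repeat=k))
without_repl = list(permutations(objects, k))

assert len(with_repl) == n ** k            # n^k = 125
falling = n * (n - 1) * (n - 2)            # n(n-1)...(n-k+1) = 60
assert len(without_repl) == falling
```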
The above theorems are about counting, but when the naive definition applies, we can use them to calculate probabilities. This brings us to our next example, a famous problem in probability called the birthday problem. The solution incorporates both sampling with replacement and sampling without replacement.
Example: Birthday Problem
There are $k$ people in a room. Assume each person's birthday is equally likely to be any of the 365 days of the year (we exclude February 29), and that people's birthdays are independent (we assume there are no twins in the room). What is the probability that two or more people in the group have the same birthday?
There are $365^k$ ways to assign birthdays to the people in the room, since we can imagine the 365 days of the year being sampled $k$ times, with replacement. By assumption, all of these possibilities are equally likely, so the naive definition of probability applies.
Used directly, the naive definition says we just need to count the number of ways to assign birthdays to people such that there are two or more people who share a birthday. But this counting problem is hard, since it could be Emma and Steve who share a birthday, or Steve and Naomi, or all three of them, or the three of them could share a birthday while two others in the group share a different birthday, or various other possibilities.
Instead, let's count the complement: the number of ways to assign birthdays to $k$ people such that no two people share a birthday. This amounts to sampling the 365 days of the year without replacement, so the number of possibilities is $365 \cdot 364 \cdot 363 \cdots (365 - k + 1)$ for $k \le 365$. Therefore the probability of no birthday matches in a group of $k$ people is
$$P(\text{no birthday match}) = \frac{365 \cdot 364 \cdots (365 - k + 1)}{365^k},$$
and the probability of at least one birthday match is
$$P(\text{at least 1 birthday match}) = 1 - \frac{365 \cdot 364 \cdots (365 - k + 1)}{365^k}.$$
The figure plots the probability of at least one birthday match as a function of $k$. The first value of $k$ for which the probability of a match exceeds 0.5 is $k = 23$. Thus, in a group of 23 people, there is a better than 50% chance that two or more of them will have the same birthday. By the time we reach $k = 57$, the probability of a match exceeds 99%.
Of course, for $k \ge 366$ we are guaranteed to have a match, but it's surprising that even with a much smaller number of people it's overwhelmingly likely that there is a birthday match. For a quick intuition into why it should not be so surprising, note that with 23 people there are $\binom{23}{2} = 253$ pairs of people, any of which could be a birthday match.
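The match probabilities quoted above are easy to recompute. A short sketch of the counting argument, multiplying the per-person factors $(365 - i)/365$ of the no-match probability:

```python
from fractions import Fraction

def p_match(k):
    """P(at least one birthday match) among k people, under the assumptions
    of the example (365 equally likely, independent birthdays)."""
    p_no_match = Fraction(1)
    for i in range(k):
        p_no_match *= Fraction(365 - i, 365)   # i-th person avoids i taken days
    return 1 - p_no_match

# k = 23 is the first group size where a match is more likely than not.
assert p_match(22) < Fraction(1, 2) < p_match(23)
# With 57 people, a match is over 99% likely.
assert p_match(57) > Fraction(99, 100)
```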
Adjusting for Overcounting
In many counting problems, it is not easy to directly count each possibility once and only once. If, however, we are able to count each possibility exactly $c$ times for some $c$, then we can adjust by dividing by $c$. For example, if we have exactly double-counted each possibility, we can divide by 2 to get the correct count. We call this adjusting for overcounting.
Example: Committees and Teams
Consider a group of four people.
(a) How many ways are there to choose a two-person committee?
(b) How many ways are there to break the people into two teams of two?
(a) One way to count the possibilities is by listing them out: labeling the people as 1, 2, 3, 4, the possibilities are $\{1,2\}$, $\{1,3\}$, $\{1,4\}$, $\{2,3\}$, $\{2,4\}$, $\{3,4\}$.
Another approach is to use the multiplication rule with an adjustment for overcounting. By the multiplication rule, there are 4 ways to choose the first person on the committee and 3 ways to choose the second person on the committee, but this counts each possibility twice, since picking 1 and 2 to be on the committee is the same as picking 2 and 1 to be on the committee. Since we have overcounted by a factor of 2, the number of possibilities is $(4 \cdot 3)/2 = 6$.
(b) Here are 3 ways to see that there are 3 ways to form the teams. Labeling the people as $1, 2, 3, 4$, we can directly list out the possibilities: $\{\{1,2\},\{3,4\}\}$, $\{\{1,3\},\{2,4\}\}$, and $\{\{1,4\},\{2,3\}\}$. Listing out all possibilities would quickly become tedious or infeasible with more people though. Another approach is to note that it suffices to specify person 1's teammate (and then the other team is determined), which can be done in 3 ways. A third way is to use (a) to see that there are 6 ways to choose one team. This overcounts by a factor of 2, since picking $\{1,2\}$ to be a team is equivalent to picking $\{3,4\}$ to be a team. So again the answer is $6/2 = 3$.
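Both counts can be confirmed by brute force for this group of four:

```python
from itertools import combinations

people = [1, 2, 3, 4]

# (a) Two-person committees: ordered count 4*3, divided by the overcount of 2.
committees = list(combinations(people, 2))
assert len(committees) == (4 * 3) // 2      # 6

# (b) Splits into two teams: each split is an unordered pair
# {team, complement}, so the 6 committees pair up into 3 distinct splits.
splits = {frozenset([frozenset(c), frozenset(set(people) - set(c))])
          for c in committees}
assert len(splits) == 3
```

The `frozenset` of two `frozenset`s makes the pair of teams unordered, which is exactly the overcounting-by-2 adjustment from the text.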
A binomial coefficient counts the number of subsets of a certain size for a set, such as the number of ways to choose a committee of size $k$ from a set of $n$ people. Sets and subsets are by definition unordered, e.g., $\{3, 1, 4\} = \{4, 1, 3\}$, so we are counting the number of ways to choose $k$ objects out of $n$, without replacement and without distinguishing between the different orders in which they could be chosen.
Definition: Binomial Coefficient
For any nonnegative integers $k$ and $n$, the binomial coefficient $\binom{n}{k}$, read as "$n$ choose $k$", is the number of subsets of size $k$ for a set of size $n$.
Theorem: Binomial Coefficient Formula
For $k \le n$, we have
$$\binom{n}{k} = \frac{n(n-1) \cdots (n-k+1)}{k!} = \frac{n!}{(n-k)! \, k!}.$$
For $k > n$, we have $\binom{n}{k} = 0$.
Proof: Let $A$ be a set with $|A| = n$. Any subset of $A$ has size at most $n$, so $\binom{n}{k} = 0$ for $k > n$. Now let $k \le n$. By the Sampling without Replacement theorem, there are $n(n-1) \cdots (n-k+1)$ ways to make an ordered choice of $k$ elements without replacement. This overcounts each subset of interest by a factor of $k!$ (since we don't care how these $k$ elements are ordered), so we can get the correct count by dividing by $k!$.
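The overcounting argument in the proof translates directly into code. A sketch comparing the formula against a direct enumeration of subsets, for illustrative small values of $n$ and $k$:

```python
from itertools import combinations
from math import comb, factorial

n, k = 7, 3

# Ordered choices without replacement: n(n-1)...(n-k+1).
falling = 1
for i in range(k):
    falling *= n - i

# Divide out the k! orderings of each k-element subset.
num_subsets = falling // factorial(k)

# Agrees with the library binomial coefficient and with a direct count.
assert num_subsets == comb(n, k) == len(list(combinations(range(n), k)))
```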
Example: Full House in Poker
A 5-card hand is dealt from a standard, well-shuffled 52-card deck. The hand is called a full house in poker if it consists of three cards of some rank and two cards of another rank, e.g., three 7's and two 10's (in any order). What is the probability of a full house?
All of the $\binom{52}{5}$ possible hands are equally likely by symmetry, so the naive definition is applicable. To find the number of full house hands, use the multiplication rule. There are 13 choices for what rank we have three of; for concreteness, assume we have three 7's. There are $\binom{4}{3}$ ways to choose which 7's we have. Then there are 12 choices for what rank we have two of, say 10's for concreteness, and $\binom{4}{2}$ ways to choose two 10's. Thus,
$$P(\text{full house}) = \frac{13 \binom{4}{3} \cdot 12 \binom{4}{2}}{\binom{52}{5}} = \frac{3744}{2598960} \approx 0.00144.$$
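The multiplication-rule count for full houses is a one-liner to check:

```python
from math import comb

# Full house: 13 ranks for the triple, C(4,3) suits for it,
# then 12 remaining ranks for the pair, C(4,2) suits for it.
favorable = 13 * comb(4, 3) * 12 * comb(4, 2)
total = comb(52, 5)            # all equally likely 5-card hands

assert favorable == 3744
assert total == 2598960
p_full_house = favorable / total   # roughly 1 in 694 hands
```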