The Bernoulli distribution, named after the Swiss mathematician Jacob Bernoulli, is a key idea in data science and statistics. It is fundamental to probability theory and serves as a building block for more complex statistical models, from machine learning algorithms to customer behaviour prediction. In this article, we will discuss the Bernoulli distribution in detail.
Read on!
What is a Bernoulli distribution?
A Bernoulli distribution is a discrete probability distribution representing a random variable with only two possible outcomes. Usually, these outcomes are labelled “success” and “failure,” or equivalently, 1 and 0.
Let X be a random variable. Then, X is said to follow a Bernoulli distribution with success probability p, written X∼Bernoulli(p), where 0 ≤ p ≤ 1.
The Probability mass function of the Bernoulli distribution
Let X be a random variable following a Bernoulli distribution, X∼Bernoulli(p).
Then, the probability mass function of X is
P(X=x) = pˣ(1−p)¹⁻ˣ for x∈{0,1},
that is, P(X=1)=p and P(X=0)=1−p. This follows directly from the definition given above.
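As a quick check, the PMF can be written in a few lines of Python (a minimal sketch; the function name `bernoulli_pmf` is our own):

```python
def bernoulli_pmf(x, p):
    """P(X = x) for X ~ Bernoulli(p); zero outside the support {0, 1}."""
    if x not in (0, 1):
        return 0.0
    return p**x * (1 - p)**(1 - x)

# With p = 0.6: P(X=1) = 0.6 and P(X=0) = 0.4
print(bernoulli_pmf(1, 0.6))
print(bernoulli_pmf(0, 0.6))
```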
Mean of the Bernoulli Distribution
Let X be a random variable following a Bernoulli distribution, X∼Bernoulli(p).
Then, the mean or expected value of X is
E[X] = p
Proof: The expected value is the probability-weighted average of all possible values:
E[X] = ∑ x⋅P(X=x)
Since there are only two possible outcomes for a Bernoulli random variable, we have:
E[X] = 0⋅(1−p) + 1⋅p = p
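This derivation can be sanity-checked numerically: the two-outcome weighted average gives p exactly, and the average of many simulated Bernoulli draws should come out close to p (a small sketch using only Python's standard library):

```python
import random

random.seed(42)
p = 0.3

# Exact mean from the two-outcome weighted average: E[X] = 0*(1-p) + 1*p = p
exact_mean = 0 * (1 - p) + 1 * p

# Empirical check: the average of many simulated Bernoulli draws approaches p
draws = [1 if random.random() < p else 0 for _ in range(100_000)]
empirical_mean = sum(draws) / len(draws)

print(exact_mean)
print(empirical_mean)  # close to 0.3
```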
Sources: https://en.wikipedia.org/wiki/Bernoulli_distribution#Mean.
Also read: End to End Statistics for Data Science
Variance of the Bernoulli distribution
Let X be a random variable following a Bernoulli distribution, X∼Bernoulli(p).
Then, the variance of X is
Var(X) = p(1−p)
Proof: The variance is the probability-weighted average of the squared deviation from the expected value across all possible values,
Var(X) = ∑ (x−E[X])²⋅P(X=x),
and can also be written in terms of the expected values:
Var(X) = E[X²] − (E[X])² — Equation (1)
The mean of a Bernoulli random variable is
E[X] = p — Equation (2)
and the mean of a squared Bernoulli random variable is
E[X²] = 0²⋅(1−p) + 1²⋅p = p — Equation (3)
Combining Equations (1), (2) and (3), we have:
Var(X) = E[X²] − (E[X])² = p − p² = p(1−p)
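The combination of Equations (1), (2) and (3) can be checked numerically in a few lines of Python (variable names are our own):

```python
p = 0.9

e_x = p                             # Equation (2): E[X] = p
e_x_sq = 0**2 * (1 - p) + 1**2 * p  # Equation (3): E[X^2] = p
variance = e_x_sq - e_x**2          # Equation (1): Var(X) = E[X^2] - (E[X])^2

print(variance)  # equals p(1 - p) = 0.09, up to floating-point rounding
```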
Bernoulli Distribution vs Binomial Distribution
The Bernoulli distribution is a special case of the Binomial distribution where the number of trials n=1. Here’s a detailed comparison between the two:
| Aspect | Bernoulli Distribution | Binomial Distribution |
| --- | --- | --- |
| Purpose | Models the outcome of a single trial of an event. | Models the outcome of multiple trials of the same event. |
| Representation | X∼Bernoulli(p), where p is the probability of success. | X∼Binomial(n,p), where n is the number of trials and p is the probability of success in each trial. |
| Mean | E[X]=p | E[X]=n⋅p |
| Variance | Var(X)=p(1−p) | Var(X)=n⋅p⋅(1−p) |
| Support | Outcomes are X∈{0,1}, representing failure (0) and success (1). | Outcomes are X∈{0,1,2,…,n}, representing the number of successes in n trials. |
| Special Case Relationship | A Bernoulli distribution is a special case of the Binomial distribution when n=1. | A Binomial distribution generalizes the Bernoulli distribution for n>1. |
| Example | If the probability of winning a game is 60%, the Bernoulli distribution can model whether you win (1) or lose (0) in a single game. | If the probability of winning a game is 60%, the Binomial distribution can model the probability of winning exactly 3 out of 5 games. |
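The table's claims can be verified with `scipy.stats`, which provides both distributions (assuming SciPy is installed; `pmf`, `mean` and `var` are standard `scipy.stats` methods):

```python
from scipy.stats import bernoulli, binom

p, n = 0.6, 5

# A Bernoulli(p) variable is exactly a Binomial(1, p) variable:
# their PMFs agree on the shared support {0, 1}
print(bernoulli.pmf(1, p), binom.pmf(1, 1, p))

# Mean and variance follow the formulas in the table
print(bernoulli.mean(p), binom.mean(n, p))  # p vs n*p
print(bernoulli.var(p), binom.var(n, p))    # p(1-p) vs n*p*(1-p)
```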
The Bernoulli distribution (left) models the outcome of a single trial with two possible outcomes: 0 (failure) or 1 (success). In this example, with p=0.6, there is a 40% chance of failure (P(X=0)=0.4) and a 60% chance of success (P(X=1)=0.6). The graph shows two bars, one for each outcome, where the height corresponds to their respective probabilities.
The Binomial distribution (right) represents the number of successes across multiple trials (in this case, n=5 trials). It shows the probability of observing each possible number of successes, ranging from 0 to 5. The number of trials n and the success probability p=0.6 influence the distribution’s shape. Here, the highest probability occurs at X=3, indicating that achieving exactly 3 successes out of 5 trials is most likely. The probabilities for fewer (X=0,1,2) or more (X=4,5) successes taper off on either side of the mean E[X]=n⋅p=3.
Also read: A Guide To Complete Statistics For Data Science Beginners!
Use of Bernoulli Distributions in Real-world Applications
The Bernoulli distribution is widely used in real-world applications involving binary outcomes. It is essential in machine learning for binary classification problems, where each data point must be assigned to one of two groups. Examples include:
- Email spam detection (spam or not spam)
- Financial transaction fraud detection (legal or fraudulent)
- Diagnosis of disease based on symptoms (absent or present)
- Medical Testing: Determining if a treatment is effective (positive/negative result).
- Gaming: Modeling outcomes of a single event, such as win or lose.
- Churn Analysis: Predicting if a customer will leave a service or stay.
- Sentiment Analysis: Classifying text as positive or negative.
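To illustrate how Bernoulli distributions underpin binary classification, the log-likelihood of a set of labels under a model's predicted probabilities is just a sum of Bernoulli log-PMF terms (a minimal sketch; the function name and the spam-detection numbers are hypothetical):

```python
import math

def bernoulli_log_likelihood(labels, probs):
    """Sum of log P(y) where each label y ~ Bernoulli(q), q = predicted probability."""
    return sum(y * math.log(q) + (1 - y) * math.log(1 - q)
               for y, q in zip(labels, probs))

# Hypothetical spam-detection data: 1 = spam, 0 = not spam;
# probs are a classifier's predicted probabilities of spam
labels = [1, 0, 1, 1, 0]
probs = [0.9, 0.2, 0.8, 0.7, 0.1]

ll = bernoulli_log_likelihood(labels, probs)
print(ll)  # negative; closer to 0 means the predictions fit the labels better
```

The negative of this quantity is the familiar log loss (binary cross-entropy) minimized when training binary classifiers.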
Why Use the Bernoulli Distribution?
- Simplicity: It’s ideal for scenarios where only two possible outcomes exist.
- Building Block: The Bernoulli distribution serves as the foundation for the Binomial and other advanced distributions.
- Interpretable: Real-world outcomes like success/failure, pass/fail, or yes/no fit naturally into its framework.
Numerical Example on Bernoulli Distribution:
A factory produces light bulbs. Each light bulb has a 90% chance of passing the quality test (p=0.9) and a 10% chance of failing (1−p=0.1). Let X be the random variable that represents the outcome of the quality test:
- X=1: The bulb passes.
- X=0: The bulb fails.
Problem:
- What is the probability that the bulb passes the test?
- What is the expected value E[X]?
- What is the variance Var(X)?
Solution:
- Probability of Passing the Test: Using the Bernoulli PMF, P(X=1) = p¹(1−p)⁰ = p = 0.9.
So, the probability of passing is 0.9 (90%).
- Expected Value E[X]
E[X]=p.
Here, p=0.9.
E[X]=0.9.
This means the average success rate is 0.9 (90%).
- Variance Var(X)
Var(X)=p(1−p)
Here, p=0.9:
Var(X)=0.9(1−0.9)=0.9⋅0.1=0.09.
The variance is 0.09.
Final Answer:
- Probability of passing: 0.9 (90%).
- Expected value: 0.9.
- Variance: 0.09.
This example shows how the Bernoulli distribution models single binary events like a quality test outcome.
Now let’s see how this problem can be solved in Python.
Implementation
Step 1: Install the necessary library
You need to install matplotlib and scipy if you haven’t already:
pip install matplotlib scipy
Step 2: Import the packages
Now, import the necessary packages for the plot and Bernoulli distribution.
import matplotlib.pyplot as plt
from scipy.stats import bernoulli
Step 3: Define the probability of success
Set the given probability of success for the Bernoulli distribution.
p = 0.9
Step 4: Calculate the PMF for success and failure
Calculate the probability mass function (PMF) for both the “Fail” (X=0) and “Pass” (X=1) outcomes.
probabilities = [bernoulli.pmf(0, p), bernoulli.pmf(1, p)]
Step 5: Set labels for the outcomes
Define the labels for the outcomes (“Fail” and “Pass”).
outcomes = ['Fail (X=0)', 'Pass (X=1)']
Step 6: Calculate the expected value
The expected value (mean) for the Bernoulli distribution is simply the probability of success.
expected_value = p # Mean of Bernoulli distribution
Step 7: Calculate the variance
The variance of a Bernoulli distribution is calculated using the formula Var[X]=p(1−p)
variance = p * (1 - p) # Variance formula
Step 8: Display the results
Print the calculated probabilities, expected value, and variance.
print("Probability of Passing (X = 1):", probabilities[1])
print("Probability of Failing (X = 0):", probabilities[0])
print("Expected Value (E[X]):", expected_value)
print("Variance (Var[X]):", variance)
Output:
Step 9: Plotting the probabilities
Create a bar plot for the probabilities of failure and success using matplotlib.
bars = plt.bar(outcomes, probabilities, color=['red', 'green'])
Step 10: Add title and labels to the plot
Set the title and labels for the x-axis and y-axis of the plot.
plt.title(f'Bernoulli Distribution (p = {p})')
plt.xlabel('Outcome')
plt.ylabel('Probability')
Step 11: Add labels to the legend
Add labels for each bar to the legend, showing the probabilities for “Fail” and “Pass”.
bars[0].set_label(f'Fail (X=0): {probabilities[0]:.2f}')
bars[1].set_label(f'Pass (X=1): {probabilities[1]:.2f}')
Step 12: Display the legend
Show the legend on the plot.
plt.legend()
Step 13: Show the plot
Finally, display the plot.
plt.show()
This step-by-step breakdown allows you to create the plot and calculate the necessary values for the Bernoulli distribution.
Conclusion
The Bernoulli distribution is a key idea in statistics: it models scenarios with two possible outcomes, success or failure. It is employed in many different applications, such as quality testing, consumer behaviour prediction, and machine learning for binary classification. Key characteristics of the distribution, such as its probability mass function (PMF), expected value, and variance, aid in understanding and analysing such binary events. Once you are proficient with the Bernoulli distribution, you can build toward more intricate models, like the Binomial distribution.
Frequently Asked Questions
Q1. Can the Bernoulli distribution handle more than two outcomes?
Ans. No, it only handles two outcomes (success or failure). For more than two outcomes, other distributions, like the multinomial distribution, are used.
Q2. What are some examples of Bernoulli trials?
Ans. Some examples of Bernoulli trials are:
1. Tossing a coin (heads or tails)
2. Passing a quality test (pass or fail)
Q3. What is the Bernoulli distribution?
Ans. The Bernoulli distribution is a discrete probability distribution representing a random variable with two possible outcomes: success (1) and failure (0). It is defined by the probability of success, denoted by p.
Q4. How is the Bernoulli distribution related to the Binomial distribution?
Ans. When the number of trials (n) equals 1, the Bernoulli distribution is a particular instance of the Binomial distribution. The Binomial distribution models several trials, whereas the Bernoulli distribution models just one.