Probability fundamentals provide the framework for reasoning under uncertainty in machine learning. This section introduces key concepts such as random variables, probability distributions, conditional probability, and Bayes’ theorem, which are essential for modeling uncertainty, making predictions, and designing probabilistic algorithms.
Probability Definition: Probability measures the likelihood of an event, ranging from 0 (impossible) to 1 (certain).
import numpy as np
# Simulating coin flips
np.random.seed(42)
n_flips = 10000
flips = np.random.choice(['H', 'T'], size=n_flips)
# Probability of heads
prob_heads = np.sum(flips == 'H') / n_flips
print(f"P(Heads) from simulation: {prob_heads:.4f}")
print(f"Theoretical P(Heads): 0.5000")
Conditional probability measures the probability of event A given that event B has occurred, written P(A|B) and defined as P(A|B) = P(A and B) / P(B).
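As a quick illustration (an added sketch, not part of the original example), the definition can be checked by simulation with two dice, where A is "the sum is at least 10" and B is "the first die shows 5":
import numpy as np
# Estimate P(A|B) by simulation: A = "sum >= 10", B = "first die = 5"
np.random.seed(42)
n = 100000
die1 = np.random.randint(1, 7, size=n)
die2 = np.random.randint(1, 7, size=n)
B = (die1 == 5)
A_and_B = B & (die1 + die2 >= 10)
P_A_given_B = A_and_B.mean() / B.mean()
print(f"Simulated P(sum>=10 | die1=5): {P_A_given_B:.4f}")
print(f"Theoretical: {2/6:.4f}")  # die2 must show 5 or 6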
Bayes' Theorem:
P(A|B) = P(B|A) × P(A) / P(B)
# Medical test example
# Disease prevalence: 1%
# Test sensitivity (true positive rate): 95%
# Test specificity (true negative rate): 90%
P_disease = 0.01
P_no_disease = 0.99
P_positive_given_disease = 0.95 # Sensitivity
P_positive_given_no_disease = 0.10 # 1 - Specificity
# P(Positive) using law of total probability
P_positive = (P_positive_given_disease * P_disease +
              P_positive_given_no_disease * P_no_disease)
# P(Disease | Positive) using Bayes' theorem
P_disease_given_positive = (P_positive_given_disease * P_disease) / P_positive
print(f"P(Disease | Positive Test): {P_disease_given_positive:.2%}")
# Even with a positive test, probability is surprisingly low!
This example illustrates why understanding conditional probability is crucial in ML classification problems.
Two events are independent if the occurrence of one doesn't affect the probability of the other.
import numpy as np
# Check independence through simulation
np.random.seed(42)
n_samples = 100000
# Independent events: two separate coin flips
coin1 = np.random.choice([0, 1], size=n_samples)
coin2 = np.random.choice([0, 1], size=n_samples)
P_coin1_heads = coin1.mean()
P_coin2_heads = coin2.mean()
P_both_heads = ((coin1 == 1) & (coin2 == 1)).mean()
print("Independent events (two coins):")
print(f"P(Coin1=H): {P_coin1_heads:.4f}")
print(f"P(Coin2=H): {P_coin2_heads:.4f}")
print(f"P(Both=H): {P_both_heads:.4f}")
print(f"P(C1=H) × P(C2=H): {P_coin1_heads * P_coin2_heads:.4f}")
# For independent events: P(A and B) = P(A) × P(B)
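For contrast, here is an added sketch of dependent events (not part of the original example): drawing two cards without replacement, where the first draw changes the odds for the second.
import numpy as np
# Dependent events: drawing two cards without replacement
# A = "first card is an ace", B = "second card is an ace"
np.random.seed(42)
n_samples = 100000
deck = np.array([1] * 4 + [0] * 48)  # 1 = ace, 0 = any other card
first_ace = np.empty(n_samples, dtype=bool)
second_ace = np.empty(n_samples, dtype=bool)
for i in range(n_samples):
    draw = np.random.choice(deck, size=2, replace=False)
    first_ace[i], second_ace[i] = bool(draw[0]), bool(draw[1])
P_both = (first_ace & second_ace).mean()
print(f"P(A and B): {P_both:.4f}")  # ≈ (4/52) × (3/51) ≈ 0.0045
print(f"P(A) × P(B): {first_ace.mean() * second_ace.mean():.4f}")  # ≈ (4/52)² ≈ 0.0059
# The product rule fails here, so the two draws are not independent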
Expected Value: The long-run average value of a random variable, computed as E[X] = Σ x × P(X = x).
import numpy as np
# Dice roll expected value
outcomes = np.array([1, 2, 3, 4, 5, 6])
probabilities = np.array([1/6, 1/6, 1/6, 1/6, 1/6, 1/6])
expected_value = np.sum(outcomes * probabilities)
print(f"Expected value of fair die: {expected_value:.4f}")
# Simulation verification
np.random.seed(42)
rolls = np.random.randint(1, 7, size=100000)
simulated_mean = rolls.mean()
print(f"Simulated mean: {simulated_mean:.4f}")
Variance of a Random Variable: The expected squared deviation from the mean, Var(X) = E[(X − μ)²].
import numpy as np
# Variance: E[(X - μ)²]
outcomes = np.array([1, 2, 3, 4, 5, 6])
probabilities = np.array([1/6] * 6)
mean = np.sum(outcomes * probabilities)
variance = np.sum(((outcomes - mean)**2) * probabilities)
print(f"Variance of fair die: {variance:.4f}")
print(f"Standard deviation: {np.sqrt(variance):.4f}")
import numpy as np
import pandas as pd
# Joint probability table example
# Weather (Sunny/Rainy) vs. Play Tennis (Yes/No)
joint_prob = np.array([
    [0.4, 0.1],  # Sunny: [Play, Don't Play]
    [0.2, 0.3]   # Rainy: [Play, Don't Play]
])
df = pd.DataFrame(
    joint_prob,
    index=['Sunny', 'Rainy'],
    columns=['Play', "Don't Play"]
)
print("Joint Probability Table:")
print(df)
# Marginal probabilities
print(f"\nP(Sunny) = {joint_prob[0].sum():.1f}")
print(f"P(Rainy) = {joint_prob[1].sum():.1f}")
print(f"P(Play) = {joint_prob[:, 0].sum():.1f}")
print(f"P(Don't Play) = {joint_prob[:, 1].sum():.1f}")
# Conditional probability
P_play_given_sunny = joint_prob[0, 0] / joint_prob[0].sum()
print(f"\nP(Play | Sunny) = {P_play_given_sunny:.2f}")
Maximum Likelihood Estimation (MLE) is a fundamental method for estimating model parameters: it chooses the parameter values under which the observed data are most probable.
import numpy as np
from scipy import stats
# Generate data from unknown normal distribution
np.random.seed(42)
true_mean, true_std = 5, 2
data = np.random.normal(true_mean, true_std, size=100)
# MLE for normal distribution
# For normal: MLE of mean = sample mean, MLE of std = sample std
mle_mean = np.mean(data)
mle_std = np.std(data) # Note: MLE uses n, not n-1
print(f"True parameters: μ={true_mean}, σ={true_std}")
print(f"MLE estimates: μ={mle_mean:.3f}, σ={mle_std:.3f}")
# Verify with scipy
mle_params = stats.norm.fit(data)
print(f"Scipy MLE: μ={mle_params[0]:.3f}, σ={mle_params[1]:.3f}")
Probability distributions describe how data values are spread and are essential for modeling and inference in machine learning. This section covers common distributions—such as normal, binomial, and uniform—and explains their role in understanding data, estimating probabilities, and building probabilistic models.
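As a brief preview (an added sketch with arbitrary parameter choices), scipy.stats provides samplers for all three:
import numpy as np
from scipy import stats
np.random.seed(42)
# Normal: continuous and bell-shaped, parameterized by mean and std
normal_samples = stats.norm.rvs(loc=0, scale=1, size=1000)
print(f"Normal mean ≈ {normal_samples.mean():.3f}, std ≈ {normal_samples.std():.3f}")
# Binomial: number of successes in n trials with success probability p
binom_samples = stats.binom.rvs(n=10, p=0.5, size=1000)
print(f"Binomial mean ≈ {binom_samples.mean():.3f} (theory: n × p = 5.0)")
# Uniform: equal density over [loc, loc + scale]
unif_samples = stats.uniform.rvs(loc=0, scale=1, size=1000)
print(f"Uniform mean ≈ {unif_samples.mean():.3f} (theory: 0.5)")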
Descriptive statistics summarize and describe the key features of a dataset, providing a foundation for data analysis in machine learning. This section covers measures of central tendency, dispersion, and data distribution, helping to identify patterns, detect anomalies, and inform preprocessing and modeling decisions.
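A minimal sketch of those summary statistics, using invented sample data for illustration:
import numpy as np
# Hypothetical sample data for illustration
np.random.seed(42)
data = np.random.normal(loc=50, scale=10, size=200)
# Central tendency
print(f"Mean:   {np.mean(data):.2f}")
print(f"Median: {np.median(data):.2f}")
# Dispersion
print(f"Sample std (ddof=1): {np.std(data, ddof=1):.2f}")
print(f"IQR: {np.percentile(data, 75) - np.percentile(data, 25):.2f}")
print(f"Range: {data.max() - data.min():.2f}")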