Statistical Models Interactive

Bayesian Updating

Combine prior beliefs with new evidence. Start with what you know, update as data arrives. The foundation of rational inference under uncertainty.

📊 Bayes' Theorem

P(θ | data) ∝ P(data | θ) × P(θ)
Posterior

Updated belief after data

Likelihood

How likely the data is given θ

Prior

Belief before data
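
The proportionality above can be made concrete on a discrete grid of θ values: multiply the prior by the likelihood pointwise, then normalize. A minimal R sketch (the numbers here are illustrative, not tied to the sliders below):

```r
# Bayes' theorem on a grid: posterior ∝ likelihood × prior
theta <- seq(30, 70, by = 0.1)              # candidate values of θ
prior <- dnorm(theta, mean = 50, sd = 10)   # P(θ): belief before data
lik   <- dnorm(58, mean = theta, sd = 5)    # P(data | θ): one observation of 58
post  <- prior * lik
post  <- post / sum(post)                   # normalize so it sums to 1

# The posterior mean is pulled from the prior (50) toward the data (58)
post_mean <- sum(theta * post)
```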

Prior Belief

Prior Mean: 50 (slider range 30–70)
Prior Uncertainty (σ): 10 (slider range 2–20)

Observed Data

Sample Mean: 58 (slider range 30–70)
Sample Size: 10 (slider range 1–50)

📊 Posterior

Posterior Mean 57.3
Posterior σ 3.02
95% Credible Int. [51.4, 63.2]
Prior Weight 9%
Data Weight 91%
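
The prior/data weights shown above fall out of the precision (inverse-variance) form of the update: each source counts in proportion to its precision. A sketch reproducing the 9% / 91% split for the slider settings (prior σ = 10, per-observation σ = 10, n = 10):

```r
prior_var <- 10^2   # prior sigma = 10
obs_var   <- 10^2   # variance of a single observation
n         <- 10

prior_precision <- 1 / prior_var       # 0.01
data_precision  <- n / obs_var         # 0.10

w_prior <- prior_precision / (prior_precision + data_precision)  # ~0.09
w_data  <- data_precision  / (prior_precision + data_precision)  # ~0.91

# Posterior mean = precision-weighted average of prior mean and sample mean
post_mean <- w_prior * 50 + w_data * 58   # ~57.3
```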

Distribution Evolution

Balanced update: both prior and data contribute to posterior.

Key Properties

Sequential

Today's posterior is tomorrow's prior

Shrinkage

Small samples → closer to prior

Uncertainty

Posterior width shows remaining uncertainty
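
The "today's posterior is tomorrow's prior" property can be checked directly: updating on ten observations one at a time gives exactly the same posterior as a single batch update. A sketch under the page's normal-normal assumptions, with a hypothetical sample whose mean is 58:

```r
# One conjugate normal-normal update on a single observation x
update_once <- function(prior_mean, prior_var, x, obs_var) {
  post_var  <- 1 / (1/prior_var + 1/obs_var)
  post_mean <- post_var * (prior_mean/prior_var + x/obs_var)
  c(mean = post_mean, var = post_var)
}

obs     <- c(55, 61, 57, 59, 60, 54, 58, 62, 56, 58)  # hypothetical, mean 58
obs_var <- 100

# Sequential: each posterior becomes the next prior
state <- c(mean = 50, var = 100)
for (x in obs) state <- update_once(state[["mean"]], state[["var"]], x, obs_var)

# Batch: one update using the sample mean and n = 10
n <- length(obs)
batch_var  <- 1 / (1/100 + n/obs_var)
batch_mean <- batch_var * (50/100 + n*mean(obs)/obs_var)
```

The two paths agree to floating-point precision, which is what makes in-game or week-by-week updating exact rather than an approximation.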

๐Ÿˆ Betting Applications

Player Projections

Prior: historical/positional avg. Update with current season.

Win Probability

Prior: preseason. Update in-game as plays unfold.

Sharp Detection

Prior: new user is square. Update with betting patterns.

Odds Setting

Prior: opening line. Update with market action.
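
As a concrete sketch of the player-projection case: treat the positional average as the prior and shrink a player's early-season per-game average toward it. All numbers below are made up for illustration:

```r
# Hypothetical: project a receiver's yards per game after 4 weeks
positional_avg <- 55   # prior mean: positional average
prior_sd       <- 12   # spread of true player means around that average
game_sd        <- 25   # week-to-week noise in a single game
games          <- c(92, 48, 77, 81)   # current-season games (raw mean 74.5)

n <- length(games)
post_var  <- 1 / (1/prior_sd^2 + n/game_sd^2)
post_mean <- post_var * (positional_avg/prior_sd^2 + n*mean(games)/game_sd^2)
# Projection lands between the positional average (55) and the raw mean (74.5)
```

With only four noisy games, the projection is pulled well back toward the positional average; as the season progresses, the data weight grows and the prior fades.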

R Code Equivalent

# Bayesian update (conjugate normal-normal, known observation variance)
# obs_var is the variance of a single observation; the sample mean of
# n observations then has variance obs_var / n
bayesian_update <- function(prior_mean, prior_var, obs_mean, obs_var, n) {
  # Posterior variance
  post_var <- 1 / (1/prior_var + n/obs_var)
  
  # Posterior mean
  post_mean <- post_var * (prior_mean/prior_var + n*obs_mean/obs_var)
  
  list(
    mean = post_mean,
    sd = sqrt(post_var),
    ci_lower = post_mean - 1.96 * sqrt(post_var),
    ci_upper = post_mean + 1.96 * sqrt(post_var)
  )
}

# Example
prior <- list(mean = 50, var = 10^2)
data <- list(mean = 58, var = 100, n = 10)

posterior <- bayesian_update(prior$mean, prior$var, data$mean, data$var, data$n)
cat(sprintf("Posterior: %.1f (95%% CI: %.1f - %.1f)\n",
    posterior$mean, posterior$ci_lower, posterior$ci_upper))

✅ Key Takeaways

  • Prior × Likelihood → Posterior
  • More data → posterior approaches MLE
  • Less data → posterior stays near prior
  • Credible intervals have direct probability interpretation
  • Perfect for sequential updating
  • Regularizes naturally via prior
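
The first three takeaways can be verified numerically: hold the sample mean fixed at 58 and vary n, and the posterior mean moves from near the prior toward the sample mean (the MLE here). A quick check using the same parameters as above:

```r
post_mean_for_n <- function(n, prior_mean = 50, prior_var = 100,
                            obs_mean = 58, obs_var = 100) {
  post_var <- 1 / (1/prior_var + n/obs_var)
  post_var * (prior_mean/prior_var + n*obs_mean/obs_var)
}

small_n <- post_mean_for_n(1)      # prior and one observation split the weight
large_n <- post_mean_for_n(1000)   # essentially the sample mean, 58
```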

Pricing Models & Frameworks Tutorial

Built for mastery · Interactive learning