MAE & RMSE
Two core measures of regression model accuracy: MAE is the average absolute error, while RMSE penalizes large errors more heavily. Both are essential for evaluating player projections.
📐 The Formulas
MAE
(1/n) × Σ|y − ŷ|
Mean Absolute Error: the average of the absolute errors.
MSE
(1/n) × Σ(y − ŷ)²
Mean Squared Error: squaring penalizes large errors.
RMSE
√MSE
Root Mean Squared Error: same units as the target variable.
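To make the formulas concrete, here is a minimal R sketch with three illustrative errors (the numbers are made up for this example):

```r
# Worked example with three illustrative prediction errors
y    <- c(20, 25, 30)   # actual values
yhat <- c(21, 23, 24)   # predicted values
err  <- y - yhat        # errors: -1, 2, 6

mae  <- mean(abs(err))  # (1 + 2 + 6) / 3 = 3.00
mse  <- mean(err^2)     # (1 + 4 + 36) / 3 = 13.67
rmse <- sqrt(mse)       # sqrt(13.67) = 3.70
```

Note that RMSE (3.70) exceeds MAE (3.00) because the single 6-point miss dominates the squared term.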
[Interactive controls: Error Parameters sliders]
📊 Error Metrics
Mean Error: -0.06
MAE: 2.74 pts
RMSE: 4.14 pts
RMSE / MAE: 1.51 (outliers!)
[Chart: Error Distribution]
⚠️ A high RMSE/MAE ratio suggests outliers
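The ratio diagnostic can be sanity-checked by simulation. The sketch below (all values simulated, not taken from the interactive demo) injects a few large misses into otherwise normal errors; for purely normal errors the ratio sits near √(π/2) ≈ 1.25, which is where the rule of thumb comes from:

```r
# Simulated sketch: outliers push the RMSE/MAE ratio above ~1.25
set.seed(42)
clean_err   <- rnorm(1000, mean = 0, sd = 3)               # well-behaved errors
outlier_err <- c(clean_err, rnorm(20, mean = 0, sd = 15))  # plus a few big misses

ratio <- function(e) sqrt(mean(e^2)) / mean(abs(e))
ratio(clean_err)    # ~1.25, the baseline for normally distributed errors
ratio(outlier_err)  # noticeably higher: RMSE inflates faster than MAE
```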
Metric Comparison

| Metric | Value | Character              | Strength           |
|--------|-------|------------------------|--------------------|
| MAE    | 2.74  | Linear penalty         | Robust to outliers |
| MSE    | 17.10 | Squared penalty        | Differentiable     |
| RMSE   | 4.14  | Same units as target   | Interpretable      |
🎯 When to Use Each
Use MAE When...
- Outliers are expected and shouldn't dominate
- All errors matter equally (cost is linear)
- You want median-like behavior
- Interpretability is key ("avg 3 pts off")
Use RMSE When...
- Large errors are especially bad
- Cost grows with the square of the error
- You need a differentiable loss for training
- Data is relatively clean (see the sketch below for how the two metrics can rank models differently)
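The choice matters in practice because the two metrics can disagree about which model is better. A hypothetical sketch (made-up numbers) where one model is consistently off by 3 points and another is usually off by 1 but occasionally misses badly:

```r
# Hypothetical models: consistent vs. mostly-accurate-with-rare-blowups
actual  <- c(20, 22, 25, 18, 30, 24)
model_a <- actual + c(3, -3, 3, -3, 3, -3)   # always off by exactly 3
model_b <- actual + c(1, -1, 1, -1, 1, -8)   # off by 1, except one 8-point miss

mae_fn  <- function(a, p) mean(abs(p - a))
rmse_fn <- function(a, p) sqrt(mean((p - a)^2))

mae_fn(actual, model_a)   # 3.00
mae_fn(actual, model_b)   # 2.17  -> MAE prefers Model B
rmse_fn(actual, model_a)  # 3.00
rmse_fn(actual, model_b)  # 3.39  -> RMSE prefers Model A
```

If the occasional blowup is what costs you, RMSE's verdict is the one to trust; if every point of error costs the same, go with MAE.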
R Code Equivalent
```r
# Calculate error metrics
calculate_metrics <- function(actual, predicted) {
  error <- predicted - actual
  mae  <- mean(abs(error))
  mse  <- mean(error^2)
  rmse <- sqrt(mse)
  me   <- mean(error)  # mean error (bias)
  list(
    mae = mae,
    mse = mse,
    rmse = rmse,
    bias = me,
    rmse_mae_ratio = rmse / mae  # > 1.25 suggests outliers
  )
}

# Example
actual    <- c(22.5, 18.3, 25.1, 20.8)
predicted <- c(23.1, 17.5, 26.0, 19.2)
metrics   <- calculate_metrics(actual, predicted)
cat(sprintf("MAE: %.2f, RMSE: %.2f\n", metrics$mae, metrics$rmse))
```

✅ Key Takeaways
- MAE = average absolute error (robust to outliers)
- RMSE penalizes large errors more
- RMSE ≥ MAE always (equality only when all errors have the same magnitude)
- A high RMSE/MAE ratio → outliers are present
- Check bias (mean error) separately
- For player props, MAE is often the more practical choice
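As a cross-check against the hand-rolled function above, the Metrics package (an assumption: it must be installed separately) provides mae() and rmse() with the same (actual, predicted) argument order:

```r
# Optional cross-check with the Metrics package
# install.packages("Metrics")  # if not already installed
library(Metrics)

actual    <- c(22.5, 18.3, 25.1, 20.8)
predicted <- c(23.1, 17.5, 26.0, 19.2)

mae(actual, predicted)   # should match calculate_metrics()$mae
rmse(actual, predicted)  # should match calculate_metrics()$rmse
```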