Metrics & Evaluation Interactive

ROC-AUC

Measure classifier performance across all thresholds. AUC = probability that model ranks a random positive higher than a random negative.

📊 ROC Curve Basics

What It Measures

  • TPR (True Positive Rate) = TP / (TP + FN) (a.k.a. Sensitivity, Recall)
  • FPR (False Positive Rate) = FP / (FP + TN) (a.k.a. 1 - Specificity)
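Both rates come directly from raw confusion-matrix counts. A minimal R sketch, using the example counts from the confusion matrix shown later on this page:

```r
# TPR and FPR from raw confusion-matrix counts
# (counts match the example matrix on this page)
TP <- 78; FN <- 17; FP <- 12; TN <- 93

tpr <- TP / (TP + FN)  # sensitivity / recall
fpr <- FP / (FP + TN)  # 1 - specificity

cat(sprintf("TPR: %.1f%%  FPR: %.1f%%\n", 100 * tpr, 100 * fpr))
```

These are the same 82.1% and 11.4% figures reported in the threshold readout below.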

AUC Interpretation

  • 1.0 = Perfect classifier
  • 0.9+ = Excellent
  • 0.7-0.9 = Good to Fair
  • 0.5 = Random guessing
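The "probability that the model ranks a random positive above a random negative" definition can be checked by brute force over every positive–negative pair. A small base-R sketch with hypothetical scores (ties count as half):

```r
# AUC as a ranking probability: over every (positive, negative) pair,
# count how often the positive scores higher (ties count 0.5)
pos <- c(0.9, 0.8, 0.7, 0.6)    # hypothetical scores for positive examples
neg <- c(0.5, 0.4, 0.3, 0.65)   # hypothetical scores for negative examples

wins <- outer(pos, neg, ">") + 0.5 * outer(pos, neg, "==")
auc <- mean(wins)
cat(sprintf("AUC: %.4f\n", auc))  # 15 of 16 pairs ranked correctly
```

Here 0.6 loses only to the one negative scored 0.65, so the AUC is 15/16 = 0.9375.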

Model Quality

[Interactive: Class Separation slider (range 0-3, set to 1.5) and Decision Threshold slider (range 0.1-0.9, set to 0.5)]

Higher class separation means the model can more easily distinguish the two classes

📊 AUC Score

0.931
Excellent

At Threshold 0.50

TPR (Recall) 82.1%
FPR 11.4%
Precision 86.7%
Accuracy 85.5%

ROC Curve

Curve above diagonal = better than random. Area under curve = AUC.
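The curve is traced by sweeping the threshold from high to low and recording (FPR, TPR) at each step; the area then comes from the trapezoid rule. A base-R sketch on simulated scores (no packages; class separation of 1.5, as in the demo above):

```r
# Build an ROC curve by threshold sweep, then integrate (trapezoid rule)
set.seed(42)
actual <- rep(c(1, 0), each = 200)
scores <- c(rnorm(200, mean = 1.5), rnorm(200, mean = 0))  # separated classes

thr <- sort(unique(scores), decreasing = TRUE)
tpr <- sapply(thr, function(t) mean(scores[actual == 1] >= t))
fpr <- sapply(thr, function(t) mean(scores[actual == 0] >= t))

# anchor the curve at (0, 0), then apply the trapezoid rule over FPR
fpr <- c(0, fpr); tpr <- c(0, tpr)
auc <- sum(diff(fpr) * (head(tpr, -1) + tail(tpr, -1)) / 2)
cat(sprintf("AUC: %.3f\n", auc))
```

With a separation of 1.5 (unit-variance classes), the theoretical AUC is about 0.86, so the sampled estimate should land near that.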

Confusion Matrix

             Pred: Pos   Pred: Neg
Actual: Pos   78 (TP)     17 (FN)
Actual: Neg   12 (FP)     93 (TN)
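Precision and accuracy in the threshold readout above come straight from these four cells. A quick R check:

```r
# Derive the reported metrics from the confusion-matrix cells
TP <- 78; FN <- 17; FP <- 12; TN <- 93

precision <- TP / (TP + FP)                    # 78 / 90
accuracy  <- (TP + TN) / (TP + FN + FP + TN)   # 171 / 200

cat(sprintf("Precision: %.1f%%  Accuracy: %.1f%%\n",
            100 * precision, 100 * accuracy))
```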

🎯 When to Use ROC-AUC

✓ Good For

  • Comparing models across all thresholds
  • Balanced class problems
  • When the threshold is flexible
  • Ranking quality (who's more likely?)

✗ Limitations

  • Imbalanced classes (use PR-AUC instead)
  • When you need a specific threshold
  • Doesn't measure calibration
  • Can mislead with rare events
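The imbalanced-class caveat is easy to see with numbers. In this hypothetical screen of 1,000 mostly-negative cases, FPR looks respectable while precision collapses:

```r
# With rare positives, many false alarms barely move FPR,
# but precision tells a very different story
TP <- 9;  FN <- 1      # 10 true positives, 9 caught
FP <- 90; TN <- 910    # 1000 negatives, 90 false alarms

fpr <- FP / (FP + TN)          # 0.09 -- looks fine on an ROC plot
precision <- TP / (TP + FP)    # ~0.09 -- most flagged cases are wrong
cat(sprintf("FPR: %.2f  Precision: %.2f\n", fpr, precision))
```

An ROC curve built from these scores would still look strong, which is exactly why PR-AUC is the better lens for rare-event problems.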

R Code Equivalent

# Calculate ROC-AUC
# Assumes: 'actual' is a 0/1 vector of true labels and
# 'predicted_prob' is the model's predicted probability of class 1
library(pROC)

# From predictions and actuals
roc_obj <- roc(actual, predicted_prob)
auc_value <- auc(roc_obj)
cat(sprintf("AUC: %.3f\n", auc_value))

# Plot ROC curve
plot(roc_obj, main = "ROC Curve",
     col = "#f5c542", lwd = 2)
abline(a = 0, b = 1, col = "gray", lty = 2)

# Confusion matrix at a fixed threshold
threshold <- 0.5
predicted_class <- ifelse(predicted_prob >= threshold, 1, 0)
table(Actual = actual, Predicted = predicted_class)

# Full metric summary (sensitivity, specificity, precision, ...)
# Matching factor levels and an explicit positive class avoid
# silent misalignment between predictions and reference
library(caret)
confusionMatrix(factor(predicted_class, levels = c(0, 1)),
                factor(actual, levels = c(0, 1)),
                positive = "1")

✅ Key Takeaways

  • ROC plots TPR vs FPR at all thresholds
  • AUC = area under curve (higher = better)
  • 0.5 = random, 1.0 = perfect
  • Threshold-independent metric
  • Use PR-AUC for imbalanced classes
  • Measures ranking, not calibration
