True Positive Rate: Correctly classified positive out of all the positive cases (sensitivity)

$TPR = \frac{TP}{TP + FN}$

False Positive Rate: Incorrectly classified positive out of all the negative cases

$FPR = \frac{FP}{TN + FP}$

False Negative Rate: Incorrectly classified negative out of all the positive cases

$FNR = \frac{FN}{TP + FN}$

True Negative Rate: Correctly classified negative out of all the negative cases (specificity)

$TNR = \frac{TN}{TN + FP}$
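The four rates above can be computed directly from prediction counts. A minimal sketch, using small made-up label lists (the data here is illustrative, not from the text):

```python
# Hypothetical labels: 1 = positive, 0 = negative.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]

# Confusion-matrix counts.
TP = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
FN = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
FP = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
TN = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

TPR = TP / (TP + FN)  # sensitivity
FPR = FP / (TN + FP)
FNR = FN / (TP + FN)
TNR = TN / (TN + FP)  # specificity
```

Note that TPR + FNR = 1 (both divide by the positives) and TNR + FPR = 1 (both divide by the negatives).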

Mnemonic for specificity: if you classify everything as positive, you're not being specific.

Mnemonic for sensitivity: in sensitive cases, you want a very high TPR, i.e., you want high sensitivity.

Not all errors are equally bad

In a cancer screening scenario, we'd like the false negative rate to be as small as possible, i.e., very few cancerous images classified as non-cancerous. In the case of an email spam classifier, we'd like a very small false positive rate, i.e., very few normal emails classified as spam.

We can use a multiplier $\alpha$ to shift the decision boundary of a Bayes classifier, upweighting or downweighting the class posterior probabilities to account for the risk associated with each outcome.
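The $\alpha$-weighted decision rule can be sketched as follows. This is an illustrative implementation, not from the text: `decide` is a hypothetical helper that takes the posterior probability of the positive class and compares it against the negative class with the positive side scaled by $\alpha$.

```python
def decide(p_pos: float, alpha: float = 1.0) -> int:
    """Return 1 (positive) if the alpha-weighted positive posterior
    exceeds the negative posterior, else 0 (negative).

    alpha > 1 shifts the boundary toward predicting positive
    (lowering FNR at the cost of FPR, as in cancer screening);
    alpha < 1 does the opposite (as in spam filtering).
    """
    p_neg = 1.0 - p_pos
    return 1 if alpha * p_pos > p_neg else 0
```

With $\alpha = 1$ this reduces to the usual Bayes rule (predict positive when $P(\text{pos} \mid x) > 0.5$); with $\alpha = 2$, a borderline case like $P(\text{pos} \mid x) = 0.4$ flips to positive, since $2 \cdot 0.4 > 0.6$.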