## 4 Answers


## Introduction

As a rule of thumb, every time you want to compare **ROC AUC** vs **F1 Score**, think about it as if you are comparing your model performance based on:

```
[Sensitivity vs (1-Specificity)] VS [Precision vs Recall]
```

**Note that Sensitivity is the Recall (they are the same exact metric).**

Now we need to understand, *intuitively*, what Specificity, Precision, and Recall (Sensitivity) are!

## Background

**Specificity** is given by the following formula: `Specificity = TN / (TN + FP)`

Intuitively speaking, if we have a 100% specific model, that means it did **NOT** miss any True Negative; in other words, there were **NO** False Positives (*i.e. a negative result that is falsely labeled as positive*). Yet, there is a risk of having a lot of False Negatives!

**Precision** is given by the following formula: `Precision = TP / (TP + FP)`

Intuitively speaking, if we have a 100% precise model, that means everything it labeled as positive really is a True Positive; in other words, there were **NO** False Positives. Yet, it may still have missed some True Positives (False Negatives)!

**Recall** is given by the following formula: `Recall = TP / (TP + FN)`

Intuitively speaking, if we have a 100% recall model, that means it did **NOT** miss any True Positive; in other words, there were **NO** False Negatives (*i.e. a positive result that is falsely labeled as negative*). Yet, there is a risk of having a lot of False Positives!

As you can see, the three concepts are very close to each other!
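
To make the three definitions concrete, here is a minimal sketch (made-up labels, assuming scikit-learn is available) that computes Specificity, Precision, and Recall from a confusion matrix:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 0, 0]  # hypothetical ground-truth labels
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]  # hypothetical model predictions

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

specificity = tn / (tn + fp)  # how well the actual negatives are handled
precision = tp / (tp + fp)    # how "pure" the positive predictions are
recall = tp / (tp + fn)       # how many actual positives were caught (sensitivity)

print(specificity, precision, recall)
```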

**As a rule of thumb,** if the cost of a False Negative is high, we want to increase the model's sensitivity/recall (*which are exactly the same metric*)!

For instance, in fraud detection or sick-patient detection, we don’t want to label/predict a fraudulent transaction (an actual positive) as non-fraudulent (a False Negative). Likewise, we don’t want to label/predict a contagious sick patient (an actual positive) as not sick (a False Negative).

This is because the consequences would be worse than those of a False Positive (incorrectly labeling a harmless transaction as fraudulent, or a non-contagious patient as contagious).

On the other hand, if the cost of a False Positive is high, then we want to increase the model's specificity and precision!

For instance, in email spam detection, we don’t want to label/predict a non-spam email (an actual negative) as spam (a False Positive). On the other hand, failing to label a spam email as spam (a False Negative) is less costly.
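
In practice, this cost trade-off shows up in the decision threshold you apply to the model's predicted scores. A small illustration (hypothetical scores, plain Python) of how raising the threshold trades recall for precision:

```python
# Hypothetical predicted probabilities and true labels
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]

def precision_recall(threshold):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
    fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))
    return tp / (tp + fp), tp / (tp + fn)

# A low threshold favors recall (few False Negatives, e.g. fraud/disease screening);
# a high threshold favors precision (few False Positives, e.g. spam filtering).
for threshold in (0.3, 0.5, 0.8):
    p, r = precision_recall(threshold)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")
```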

## F1 Score

It’s given by the following formula: `F1 Score = 2 * (Precision * Recall) / (Precision + Recall)`

The F1 Score keeps a **balance** between Precision and Recall. We use it when there is an uneven class distribution, since looking at Precision or Recall alone can be misleading!

So we use the F1 Score as a single indicator that summarizes the Precision and Recall numbers!
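
As a quick sanity check of the formula (made-up labels, assuming scikit-learn is available), computing F1 by hand gives the same value as `sklearn.metrics.f1_score`:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0]  # made-up ground-truth labels
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]  # made-up predictions

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)

# F1 is the harmonic mean of Precision and Recall
f1_manual = 2 * precision * recall / (precision + recall)

print(f1_manual, f1_score(y_true, y_pred))  # both print the same value
```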

## Area Under the Receiver Operating Characteristic curve (AUROC)

It plots the Sensitivity against (1 - Specificity); in other words, it compares the True Positive Rate with the False Positive Rate across all classification thresholds.

So, the bigger the AUROC, the better the model is at distinguishing the Positives from the Negatives!
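
A minimal sketch (hypothetical scores, assuming scikit-learn is available) showing that the AUROC is computed from the model's scores swept across all thresholds, not from hard 0/1 predictions:

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]                      # hypothetical labels
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]  # hypothetical scores

# roc_curve sweeps the threshold and returns the FPR (1 - Specificity)
# and TPR (Sensitivity) at each threshold; AUROC is the area under that curve.
fpr, tpr, thresholds = roc_curve(y_true, scores)
print(list(zip(fpr, tpr)))
print(roc_auc_score(y_true, scores))
```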

## AUROC vs F1 Score (Conclusion)

In general, the ROC curve is built over many different threshold levels, so it has many F1 score values, one per threshold; an F1 score applies to one particular point on the ROC curve (one particular threshold).

You may think of F1 as a measure of precision and recall at a particular threshold value, whereas AUC is the area under the whole ROC curve. For the F1 score to be high, both precision and recall must be high.

**Consequently**, when you have a data **imbalance** between positive and negative samples, you should always use F1-score because ROC **averages** over all possible thresholds!
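
To see the threshold point concretely, the sketch below (same hypothetical scores as above, assuming scikit-learn is available) yields a single AUROC for the scores but a different F1 score at every threshold:

```python
from sklearn.metrics import f1_score, roc_auc_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]

# One AUROC for the whole ranking of scores...
print("AUROC:", roc_auc_score(y_true, scores))

# ...but a different F1 score for every threshold you could pick.
for threshold in (0.3, 0.5, 0.8):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    print(f"F1 at threshold {threshold}:", f1_score(y_true, y_pred))
```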



If you look at the definitions, you can see that both AUC and F1-score optimize “something” together with the recall, i.e. the fraction of the truly positive samples that is labeled “positive”.

This “something” is:

- For the AUC, it’s the specificity: the fraction of the truly negative samples that is correctly labeled as negative. You’re not looking at the fraction of your positively labeled samples that is correctly labeled.
- For the F1 score, it’s the precision: the fraction of the positively labeled samples that is correctly labeled. Using the F1-score you don’t consider how the truly negative samples are classified (the specificity).

The difference becomes important when you have highly unbalanced or skewed classes, for example when there are many more true negatives than true positives.

Suppose you are looking at data from the general population to find people with a rare disease. There are far more “negative” people than “positive” ones, and trying to optimize how well you are doing on the positive and the negative samples simultaneously, using AUC, is not optimal. You want the set of samples labeled positive to include all of the actual positives if possible, but you also don’t want it to be huge due to a high false-positive rate. So in this case you use the F1 score.

Conversely, if both classes make up 50% of your dataset, or both make up a sizable fraction, and you care equally about performance on each class, then you should use the AUC, which optimizes for both classes, positive and negative.
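
As a toy illustration of this point (synthetic data, assuming scikit-learn is available; the exact numbers will vary with the data and model), one can compare the two metrics on a heavily imbalanced dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# ~1% positives, mimicking a rare-disease screening setting
X, y = make_classification(n_samples=20000, weights=[0.99], flip_y=0.02,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC is computed from the ranking of the scores over all thresholds;
# F1 is computed from the hard predictions at the default 0.5 threshold.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("F1 :", f1_score(y_test, model.predict(X_test)))
```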

just adding my 2 cents here:

AUC does an implicit weighting of the samples, which F1 does not.

In my last use case, comparing the effectiveness of drugs on patients, it’s easy to learn which drugs are generally strong and which are weak. The big question is whether you can hit the outliers (the few positives for a weak drug or the few negatives for a strong drug). To answer that, you have to explicitly up-weight the outliers when using F1, which you don’t need to do with AUC.

Regarding the stated goal “to predict the positive cases correctly”: one can rewrite that goal a bit and get: when a case is really `positive`, you want to classify it as `positive` too. The probability of such an event, `p(predicted_label = positive | true_label = positive)`, is the recall by definition. If you want to maximize this property of your model, you’d choose Model 1.
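
Expressed in code (made-up labels, plain Python), that conditional probability is exactly what recall computes:

```python
# Recall as p(predicted_label = positive | true_label = positive), made-up labels
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]

# Restrict attention to the truly positive cases...
predictions_on_positives = [p for p, t in zip(y_pred, y_true) if t == 1]

# ...and compute the fraction of them predicted positive: that is the recall.
recall = sum(predictions_on_positives) / len(predictions_on_positives)
print(recall)
```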
