|
|
|
As you might have guessed, it's preferable to have a larger number of true positives and true negatives, and a lower number of false positives and false negatives.
|
|
|
|
|
|
|
|
|
|
Let's revisit the terms we saw earlier with the help of the confusion matrix's mapping of TP/TN and FP/FN:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
🎓 Precision: TP/(TP + FP) The fraction of retrieved (predicted positive) instances that are actually relevant (e.g. which labels were well-labeled)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
🎓 Recall: TP/(TP + FN) The fraction of relevant (actually positive) instances that were retrieved, whether well-labeled or not
|
|
|
|
|
|
|
|
|
|
🎓 f1-score: (2 * precision * recall)/(precision + recall) The harmonic mean of precision and recall, with best being 1 and worst being 0
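The formulas above can be sketched in plain Python. This is a minimal, self-contained example (the function name and the sample labels are illustrative, not from the lesson): it counts TP, FP, and FN from paired true/predicted labels, then applies the three formulas directly.

```python
def binary_metrics(y_true, y_pred):
    """Return (precision, recall, f1) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)  # TP/(TP + FP)
    recall = tp / (tp + fn)     # TP/(TP + FN)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Illustrative data: 5 actual positives, of which 4 were found (1 FN),
# plus 1 false alarm (FP).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
print(binary_metrics(y_true, y_pred))
```

Here TP = 4, FP = 1, and FN = 1, so precision and recall both come out to 0.8, and the f1-score is 0.8 as well. In practice, scikit-learn's `precision_score`, `recall_score`, and `f1_score` compute the same quantities.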
|
|
|
|
|
|
|
|
|
|