True positive rate (TPR)

==Introduction==
[[True positive rate (TPR)]], also referred to as '''Sensitivity''', is a [[metric]] used to evaluate the performance of [[binary classification model]]s. TPR measures the proportion of actual positive cases in the [[dataset]] that the [[model]] correctly classifies as positive. In other words, TPR indicates how many instances were correctly classified as positive out of all the positive instances present in the sample.


==Mathematical Definition==
The true positive rate (TPR) is calculated as the number of true positives (TP) divided by the sum of the number of true positives (TP) and the number of false negatives (FN). Mathematically, this can be represented as:
 
TPR = true positives / (true positives + false negatives)

True positives (TP) are cases in which the model correctly predicts a positive class, and false negatives (FN) occur when it incorrectly predicts a negative class for a positive case.
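
The short Python sketch below illustrates this calculation directly from the definition; the function name, the 0/1 label encoding, and the example data are assumptions made for illustration and are not tied to any particular library.

<syntaxhighlight lang="python">
def true_positive_rate(y_true, y_pred):
    """Return TP / (TP + FN) for binary labels encoded as 1 (positive) and 0 (negative)."""
    # True positives: actual positives that were predicted positive
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    # False negatives: actual positives that were predicted negative
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp + fn == 0:
        raise ValueError("y_true contains no positive cases, so TPR is undefined")
    return tp / (tp + fn)

# Example: three actual positives, two of them predicted correctly -> TPR = 2/3
print(true_positive_rate([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))  # 0.666...
</syntaxhighlight>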