{{see also|Machine learning terms}}
==Introduction==
[[True positive rate (TPR)]], also referred to as '''Sensitivity''', is a [[metric]] used to [[evaluate]] the performance of [[binary classification model]]s. TPR measures how many positive cases are correctly [[classified]] as such by the [[model]] out of all of the actual positives in the [[dataset]]. In other words, TPR is the percentage of actual positives that are predicted as positive.
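This definition corresponds to the standard formula TPR = TP / (TP + FN), where TP is the number of true positives and FN is the number of false negatives. The following Python snippet is a minimal sketch of that calculation from two lists of 0/1 labels; the function name <code>true_positive_rate</code> and the example labels are illustrative rather than part of any particular library.

<syntaxhighlight lang="python">
def true_positive_rate(y_true, y_pred):
    """Compute TPR (sensitivity) for binary labels encoded as 0/1."""
    # Count actual positives that the model predicted as positive (TP)
    true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    # Count actual positives that the model missed (FN)
    false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    actual_positives = true_positives + false_negatives
    if actual_positives == 0:
        raise ValueError("TPR is undefined when there are no actual positives.")
    return true_positives / actual_positives

# Example: 4 actual positives, 3 of them predicted correctly -> TPR = 3/4
y_true = [1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0]
print(true_positive_rate(y_true, y_pred))  # 0.75
</syntaxhighlight>

Note that false positives (index 4 in the example above) do not affect the TPR; they are captured by other metrics such as the [[false positive rate]].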
Similarly, a machine learning model attempts to identify all positive cases within a dataset. If it succeeds in finding all of them, it has done a good job and earns a high true positive rate; conversely, if some positive cases are missed, its sensitivity drops and it receives a low true positive rate.
[[Category:Terms]] [[Category:Machine learning terms]] |