Inter-rater agreement: Revision history

Legend: (cur) = difference with latest revision, (prev) = difference with preceding revision, m = minor edit.

20 March 2023

  • curprev 05:05, 20 March 2023 Walle (talk | contribs) 3,171 bytes +3,171 Created page with "{{see also|Machine learning terms}} ==Introduction== Inter-rater agreement, also referred to as inter-rater reliability or inter-annotator agreement, is a measure of the degree of consistency or consensus among multiple raters or annotators when evaluating a set of items, such as classifying data points in a machine learning task. This measure is essential in various machine learning and natural language processing (NLP) applications, where human-annotated data i..."