Accuracy

In machine learning, accuracy refers to a model's ability to correctly predict the output for a given input. It is a common metric used to evaluate the performance of a model, particularly in classification tasks.

==Mathematical Definition==
The accuracy of a model in a classification task is the number of correct [[prediction]]s divided by the total number of predictions. It can be expressed mathematically as:

<math>\text{Accuracy} = \frac{\text{Number of correct predictions}}{\text{Total number of predictions}}</math>


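As a minimal sketch of this definition (not from the original article; the function and variable names are illustrative), accuracy can be computed directly from paired lists of true and predicted labels:

<syntaxhighlight lang="python">
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the true labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)
</syntaxhighlight>
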
==Example==
Consider a [[binary classification]] problem where the goal is to predict whether an email is spam or not. In a set of 1000 emails, 800 are labeled "not spam" while 200 are labeled "spam". The model makes a total of 1000 predictions during the [[evaluation phase]]. It correctly predicted that 750 emails were "not spam" and that 150 emails were "spam", giving 900 correct predictions and 100 incorrect predictions. The accuracy of the model can thus be calculated as follows:


<math>\text{Accuracy} = \frac{750 + 150}{1000} = 0.90</math>


This means that the model correctly predicts the class of 90% of the emails.

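A short sketch reproducing the arithmetic above (the counts are taken from the example; the variable names are illustrative):

<syntaxhighlight lang="python">
# Counts from the spam example above.
correct_not_spam = 750    # "not spam" emails predicted correctly
correct_spam = 150        # "spam" emails predicted correctly
total_predictions = 1000  # total predictions made by the model

accuracy = (correct_not_spam + correct_spam) / total_predictions
print(accuracy)  # 0.9
</syntaxhighlight>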

==Explain Like I'm 5 (ELI5)==
Accuracy can be described as a score on a test. Imagine that you have to answer 10 questions. If you get 9 correct answers, your accuracy score is 9/10, or 90%. This is how [[machine learning models]] can be tested: they are given questions, and their accuracy score is calculated from how many answers they get correct.

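To make the test-score analogy concrete, the sketch below scores the 10-question example with scikit-learn's accuracy_score, assuming that library is available (the answer data here is made up for illustration):

<syntaxhighlight lang="python">
from sklearn.metrics import accuracy_score

# Ten "test questions": the true answers and a model's answers (made-up data).
true_answers  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
model_answers = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # one answer is wrong

print(accuracy_score(true_answers, model_answers))  # 0.9, i.e. 9/10 correct
</syntaxhighlight>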