Accuracy

In machine learning, accuracy refers to the proportion of a model's predictions that are correct. It is a common metric for evaluating the performance of a model, particularly in classification tasks.

Mathematical Definition

The accuracy of a model in a classification task is the number of correct predictions divided by the total number of predictions. It can be expressed mathematically as:

Accuracy = Number of correct predictions / Total number of predictions
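
To make the formula concrete, here is a minimal Python sketch (not part of the original article; the function name accuracy and the sample labels are illustrative assumptions):

def accuracy(y_true, y_pred):
    # Count predictions that match the true labels, then divide by the total.
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# 4 of these 5 predictions match the true labels, so the accuracy is 0.8.
print(accuracy([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # 0.8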

Example

Consider a binary classification problem where the goal is to predict whether an email is spam or not. In a dataset of 1000 emails, 800 are labeled "not spam" and 200 are labeled "spam". The model makes a total of 1000 predictions during the evaluation phase, correctly predicting 750 of the "not spam" emails and 150 of the "spam" emails. The accuracy of the model can thus be calculated as follows:

Accuracy = (750 + 150) / 1000 = 0.9

This means that the model correctly predicts the class of 90% of the emails.
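
As a quick check, the same calculation in Python, using the counts from the example above:

correct_not_spam = 750    # emails correctly predicted as "not spam"
correct_spam = 150        # emails correctly predicted as "spam"
total_predictions = 1000  # total predictions made during evaluation

accuracy = (correct_not_spam + correct_spam) / total_predictions
print(accuracy)  # 0.9, i.e. 90%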

Explain Like I'm 5 (ELI5)

Accuracy can be described as a score on a test. Imagine that you have to answer 10 questions. If you get 9 correct answers, your accuracy score is 9/10, or 90%. Machine learning models can be tested in the same way: they are given questions, and their accuracy score is calculated from how many answers they get correct.