Validation


===k-Fold Cross-Validation===
[[K-fold cross validation]] (kFCV) is a popular technique that involves splitting the data into k equal subsets. One subset serves as the testing set, while the remaining k-1 subsets train the model. This cycle repeats k times, with each subset serving as the testing set exactly once. Averaging the k results gives an estimate of the model's accuracy.
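A minimal sketch of kFCV, assuming scikit-learn is available; the iris dataset and logistic regression model here are only illustrative stand-ins:

<syntaxhighlight lang="python">
# Illustrative k-fold cross-validation sketch (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Split the data into k=5 equal subsets; each subset is the test set exactly once.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=kfold)

# Averaging the k results estimates the model's accuracy.
print(scores.mean())
</syntaxhighlight>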


===Hold-Out Validation===
Hold-out validation involves dividing the data into training and testing sets. Usually, a large portion of the data goes toward training the model, while the remainder serves for [[testing]]. While this approach is simple to understand and execute, it may not provide an accurate representation of model performance if the testing set is too small or not representative of all available information.
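A minimal hold-out sketch under the same assumptions (scikit-learn, with illustrative data and model):

<syntaxhighlight lang="python">
# Illustrative hold-out validation sketch (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 20% of the data for testing; the remaining 80% trains the model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out test set
</syntaxhighlight>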


===Leave-One-Out Validation===
Leave-one-out validation is the special case of k-fold cross-validation in which k equals the number of samples: each individual example serves as the testing set once, while the model trains on all the others. This makes thorough use of the data but can be expensive, since the model must be trained once per sample.
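A sketch of leave-one-out validation with scikit-learn's <code>LeaveOneOut</code> splitter, again with illustrative data and model:

<syntaxhighlight lang="python">
# Illustrative leave-one-out validation sketch (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# One fold per sample: train on n-1 examples, test on the one left out.
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(scores.mean())  # fraction of left-out samples predicted correctly
</syntaxhighlight>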


===Precision and Recall===
[[Precision]] measures the percentage of [[true positive]] predictions among all predicted positives, while [[recall]] evaluates the proportion of true positives among all actual positives. Precision and recall are often combined to assess a model's performance when there is an imbalance in [[class]] size.
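A small sketch computing both metrics with scikit-learn; the label vectors are made up for illustration:

<syntaxhighlight lang="python">
# Illustrative precision/recall computation (assumes scikit-learn).
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # actual labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

# precision = TP / (TP + FP); recall = TP / (TP + FN)
print(precision_score(y_true, y_pred))  # 3 TP, 1 FP -> 0.75
print(recall_score(y_true, y_pred))     # 3 TP, 1 FN -> 0.75
</syntaxhighlight>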


===F1 Score===
The [[F1 score]] is the harmonic mean of precision and recall. It can be a useful [[metric]] when both precision and recall are important factors.
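Continuing the illustrative labels above, a sketch of the F1 computation:

<syntaxhighlight lang="python">
# Illustrative F1 computation (assumes scikit-learn; same labels as above).
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# f1 = 2 * precision * recall / (precision + recall)
print(f1_score(y_true, y_pred))  # 0.75, since precision == recall == 0.75
</syntaxhighlight>

Because it is a harmonic mean rather than an arithmetic mean, the F1 score stays low unless both precision and recall are reasonably high.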


===AUC-ROC===
[[AUC]]-[[ROC]] is a measure of a model's capability to discriminate between positive and negative instances, calculated as the area under the receiver operating characteristic (ROC) curve. A model with a higher AUC-ROC value is better at discriminating between positive and negative instances.
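A sketch of computing AUC-ROC from predicted scores; the labels and scores are made up for illustration:

<syntaxhighlight lang="python">
# Illustrative AUC-ROC computation (assumes scikit-learn).
from sklearn.metrics import roc_auc_score

y_true   = [0, 0, 1, 1]           # actual labels
y_scores = [0.1, 0.4, 0.35, 0.8]  # model's predicted scores, not hard labels

# 1.0 means perfect separation of positives from negatives; 0.5 is random guessing.
print(roc_auc_score(y_true, y_scores))  # 0.75
</syntaxhighlight>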
 
==Explain Like I'm 5 (ELI5)==
Validation is how we check whether a machine learning model can make accurate predictions. We first give the model examples to learn from, and then test it on examples it has not seen before. Different validation approaches exist, but they all use some examples as teaching material and keep some aside as a quiz. The score on that quiz tells us how well the model learned to get things right.

