Bias (ethics/fairness)

{{see also|Machine learning terms|Bias}}
==Introduction==
Bias in [[machine learning]] refers to systematic errors or discrimination present in a [[model]]'s [[prediction]]s or decisions. It can arise when the data used to train the model is not representative of the population it will be applied to, or when certain groups are disproportionately represented in, or excluded from, the training data.
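
For illustration, here is a minimal Python sketch of how this plays out; the groups, features, and numbers are entirely synthetic and chosen only to make the effect visible. A model trained on data that underrepresents one group tends to learn the majority group's patterns and performs noticeably worse for the minority group.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's true decision boundary sits at x0 + x1 = 2 * shift,
    # so the two groups follow genuinely different patterns.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is badly underrepresented.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Per-group accuracy on balanced held-out samples: group B fares far
# worse, because the model mostly learned group A's decision boundary.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"group {name}: accuracy = {model.score(X_test, y_test):.2f}")
</syntaxhighlight>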

==Sources of bias in machine learning==
Biases can arise during the creation and deployment of machine learning models. One common source is data bias, in which the training data itself is skewed or unrepresentative of the population the model will serve.
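
One simple probe for data bias is to compare each group's share of the training data with its share of the target population. The Python sketch below uses hypothetical group labels and made-up population figures, and the 0.8 ratio is an arbitrary illustrative threshold, not a standard.

<syntaxhighlight lang="python">
from collections import Counter

# Hypothetical figures: assumed population shares, one group label per row.
population_share = {"group_a": 0.5, "group_b": 0.5}
training_groups = ["group_a"] * 950 + ["group_b"] * 50

counts = Counter(training_groups)
total = sum(counts.values())
for group, target in population_share.items():
    actual = counts[group] / total
    status = "underrepresented" if actual < 0.8 * target else "ok"
    print(f"{group}: {actual:.1%} of training data "
          f"vs {target:.0%} of population ({status})")
</syntaxhighlight>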
==Explain Like I'm 5 (ELI5)==
Machine learning is like teaching a robot to do things the way humans do. Sometimes, though, the robot makes mistakes because it was taught with poor examples; this is what we call "bias". To address this, we can make sure the robot learns from examples that match what it will actually be doing, with no unfairness based on things like skin color or gender. People should also check the robot's work regularly to make sure it keeps doing a good job.
[[Category:Machine learning terms]] [[Category:Terms]]