===Data Augmentation===
[[Data augmentation]] is a technique used to expand a training dataset by altering the original data in various ways. It applies transformations to the original samples, such as [[cropping]], [[flipping]], [[rotating]], [[scaling]] and adding noise, to produce new training [[examples]] that are similar to, but slightly different from, the originals.
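The transformations above can be sketched with NumPy on a toy image; the `augment` helper and its specific set of transforms are illustrative choices, not a standard API:

```python
import numpy as np

def augment(image, rng):
    """Return a list of augmented variants of a 2-D image array.

    Each variant is a transformed copy of the input: flips, a
    90-degree rotation, and additive Gaussian noise.
    """
    return [
        np.fliplr(image),                          # horizontal flip
        np.flipud(image),                          # vertical flip
        np.rot90(image),                           # rotate 90 degrees
        image + rng.normal(0.0, 0.1, image.shape), # add Gaussian noise
    ]

rng = np.random.default_rng(0)
image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
variants = augment(image, rng)                    # 4 new training examples
```

Each variant keeps the original's shape and overall content, so labels can usually be reused unchanged, which is what makes augmentation cheap compared to collecting new data.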
===Dropout===
Dropout randomly sets some [[neuron]]s' outputs to zero during the training process - in other words, dropout randomly turns off some neurons in a [[neural network]] during each [[iteration]] of training.
In every training iteration, a random subset of neurons is chosen to be dropped out with a probability defined by a [[hyperparameter]] called the [[dropout rate]]. This encourages the remaining neurons to learn robust and independent [[features]] rather than relying on the presence of other neurons; in turn, this prevents the model from becoming overly specialized and susceptible to overfitting.
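The mechanism above can be sketched in a few lines of NumPy. This is the common "inverted dropout" formulation, in which surviving activations are rescaled by 1/(1 - rate) during training so no rescaling is needed at inference time; the function name and signature are illustrative:

```python
import numpy as np

def dropout_forward(activations, rate, rng, training=True):
    """Inverted dropout: zero each unit with probability `rate` and
    rescale survivors by 1/(1-rate) so the expected activation is
    unchanged between training and inference."""
    if not training or rate == 0.0:
        return activations
    # Boolean mask: True means the neuron is kept this iteration.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(0)
layer_output = np.ones(10_000)             # toy activations, all 1.0
dropped = dropout_forward(layer_output, rate=0.5, rng=rng)
```

Because a fresh mask is drawn every iteration, no neuron can count on any particular other neuron being active, which is what drives the robust, independent features the text describes.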
==Explain Like I'm 5 (ELI5)==