==How Training Loss is Used==
The training loss is used to evaluate the performance of a machine learning model during training. To minimize this loss, [[optimization algorithm]]s such as [[stochastic gradient descent]] (SGD) or [[Adam]] are employed. These optimization processes modify the model's [[parameters]] in order to minimize its training loss.
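The loop above can be sketched in a few lines of code. The following is a minimal illustration (not taken from any particular library) of plain SGD driving down a mean-squared-error training loss for a one-parameter-pair linear model; the synthetic data, learning rate, and epoch count are all assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data generated by y = 2x + 1 plus small noise.
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0 + 0.01 * rng.normal(size=100)

w, b = 0.0, 0.0  # model parameters, initialized arbitrarily
lr = 0.1         # learning rate

for epoch in range(200):
    # One parameter update per training example = stochastic gradient descent.
    for i in rng.permutation(len(X)):
        pred = w * X[i] + b
        err = pred - y[i]        # gradient of the per-example loss 0.5*err^2
        w -= lr * err * X[i]     # step each parameter against its gradient
        b -= lr * err

# Training loss (mean squared error) after optimization.
train_loss = np.mean((w * X + b - y) ** 2)
print(w, b, train_loss)
```

After training, `w` and `b` approach the generating values 2 and 1, and the training loss is close to the noise floor; Adam would follow the same structure but rescale each step using running averages of the gradient and its square.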
==Overfitting and Underfitting==