Machine learning often employs several types of iterations, such as the following (contrasted in the code sketch after the list):
#[[Stochastic gradient descent]] (SGD): each iteration uses only one [[example]] of the [[training data]]. After processing just one example, the model updates its weights and biases. While it is fast, SGD can be [[unstable]].
#[[Mini-batch gradient descent]]: each iteration uses a randomly chosen subset of the training data, balancing speed of [[convergence]] with [[stability]] in the optimization process.
#[[Batch gradient descent]]: each iteration uses all of the training data. This form of gradient descent offers stability but may be computationally expensive for large datasets.
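The three variants differ only in how many examples feed each weight update. Below is a minimal [[NumPy]] sketch, not taken from any particular library: the function name <code>gradient_descent</code>, the mean-squared-error loss, the learning rate <code>lr</code>, and the toy linear-regression data are all illustrative assumptions. Changing <code>batch_size</code> switches among the three regimes.

<syntaxhighlight lang="python">
import numpy as np

def gradient_descent(X, y, batch_size, epochs=100, lr=0.05, seed=0):
    """Minimize mean-squared error for a linear model y ~ X @ w.

    batch_size == 1          -> stochastic gradient descent (SGD)
    1 < batch_size < len(X)  -> mini-batch gradient descent
    batch_size == len(X)     -> batch gradient descent
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)                          # weights start at zero
    for _ in range(epochs):
        order = rng.permutation(n)           # shuffle once per epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]          # the examples for this iteration
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)   # MSE gradient
            w -= lr * grad                   # one weight update per iteration
    return w

# Hypothetical toy data: y = 3*x0 - 2*x1 plus a little noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([3.0, -2.0]) + 0.1 * rng.normal(size=200)

print(gradient_descent(X, y, batch_size=1))       # SGD: many fast, noisy updates
print(gradient_descent(X, y, batch_size=32))      # mini-batch: a compromise
print(gradient_descent(X, y, batch_size=len(X)))  # batch: one stable update per epoch
</syntaxhighlight>

All three calls should recover weights near (3, −2); the <code>batch_size=1</code> run typically lands slightly off because of its noisy single-example updates, which is the instability the first item mentions.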