Gradient descent



===Batch Gradient Descent===
[[Batch gradient descent]] updates the parameters after computing the gradient of the cost function over the entire [[training dataset]]. Although it can be computationally expensive for large datasets, it can converge to the [[global minimum]] of the cost function when that function is convex.
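
For illustration, a minimal sketch of batch gradient descent on a linear-regression model with a mean-squared-error cost is shown below; the synthetic data, learning rate, and iteration count are illustrative assumptions rather than recommended settings.

<syntaxhighlight lang="python">
import numpy as np

# Synthetic linear-regression problem (illustrative assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                                # parameters to learn
learning_rate = 0.1

for _ in range(200):
    # Gradient of the mean-squared-error cost over the ENTIRE training set.
    residual = X @ w - y
    gradient = 2.0 / len(X) * X.T @ residual
    # One parameter update per full pass over the data.
    w -= learning_rate * gradient

print(w)                                       # close to true_w after convergence
</syntaxhighlight>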


===Stochastic Gradient Descent===
[[Stochastic gradient descent]] updates the parameters after computing the gradient of the cost function for a single training [[example]]. Each update is far less computationally expensive than in batch gradient descent, though the noisier updates mean it may converge to a local minimum of the cost function.
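
A comparable sketch of stochastic gradient descent on the same kind of synthetic problem follows; the learning rate and epoch count are illustrative assumptions. The key difference is that the parameters are updated after every single example.

<syntaxhighlight lang="python">
import numpy as np

# Same synthetic linear-regression problem as above (illustrative assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
learning_rate = 0.01

for epoch in range(50):
    for i in rng.permutation(len(X)):          # visit examples in random order
        x_i, y_i = X[i], y[i]
        # Gradient of the squared error for this one example only.
        gradient = 2.0 * x_i * (x_i @ w - y_i)
        w -= learning_rate * gradient          # update after every example

print(w)
</syntaxhighlight>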


===Mini-Batch Gradient Descent===
Mini-batch gradient descent updates the parameters after computing the gradient of the cost function for a small set of training [[examples]]. It offers a middle ground between batch gradient descent and stochastic gradient descent: it is less computationally intensive than batch gradient descent and can converge to either a global minimum or a local minimum of the cost function.
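
The sketch below applies the same idea with mini-batches; the batch size of 16 is an illustrative assumption. Each update averages the gradient over one small batch rather than over one example or the whole dataset.

<syntaxhighlight lang="python">
import numpy as np

# Same synthetic linear-regression problem as above (illustrative assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
learning_rate = 0.05
batch_size = 16                                # illustrative assumption

for epoch in range(100):
    order = rng.permutation(len(X))            # reshuffle once per epoch
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        X_b, y_b = X[batch], y[batch]
        # Gradient of the mean-squared-error cost averaged over this mini-batch.
        gradient = 2.0 / len(X_b) * X_b.T @ (X_b @ w - y_b)
        w -= learning_rate * gradient          # one update per mini-batch

print(w)
</syntaxhighlight>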


==Regularization==
Gradient descent can also be enhanced with regularization techniques, which reduce overfitting and enhance the generalization of the model. Regularization techniques like L1 or L2 regularization add a penalty term to the cost function that penalizes large parameter values; this encourages models to use smaller parameter values while helping prevent overfitting.
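
As a rough sketch of how this looks in practice, the batch gradient descent example above can be extended with an L2 penalty; the penalty strength lambda_ is an illustrative assumption. Adding lambda_ * ||w||^2 to the cost contributes an extra 2 * lambda_ * w term to the gradient, which pulls the parameters toward zero.

<syntaxhighlight lang="python">
import numpy as np

# Same synthetic linear-regression problem as above (illustrative assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
learning_rate = 0.1
lambda_ = 0.1                                  # regularization strength (assumed)

for _ in range(200):
    residual = X @ w - y
    # Gradient of the MSE cost plus the gradient of the L2 penalty term.
    gradient = 2.0 / len(X) * X.T @ residual + 2.0 * lambda_ * w
    w -= learning_rate * gradient

print(w)     # slightly shrunk toward zero compared with the unregularized fit
</syntaxhighlight>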


==Explain Like I'm 5 (ELI5)==
Gradient descent is like finding the fastest route down a steep mountain. Imagine yourself perched atop this immense peak, eager to reach its base as quickly as possible.


To expedite your descent down the mountain, take steps in the direction that will bring you down fastest. You can tell which way to go by looking at the slope beneath your feet; if one direction is steeper than another, that is likely where you should head.


Take a step in that direction, then look again at the slope. Repeat this process until you reach the bottom.


Machine learning uses gradient descent to determine the best values for certain parameters that influence predictions. We examine how changing these parameters affects how accurately our predictions match actual outcomes, and use gradient descent to find the values that make those predictions as accurate as possible.




[[Category:Terms]] [[Category:Machine learning terms]]