L2 regularization: Revision history

Legend: (cur) = difference with latest revision, (prev) = difference with preceding revision, m = minor edit.

18 March 2023

  • (cur | prev) 13:12, 18 March 2023 Walle (talk | contribs) 3,193 bytes +3,193 Created page with "{{see also|Machine learning terms}} ==Introduction== L2 regularization, also known as ridge regression or Tikhonov regularization, is a technique employed in machine learning to prevent overfitting and improve the generalization of a model. It is a form of regularization that adds a penalty term to the objective function, which helps in constraining the model's complexity. L2 regularization is particularly useful for linear regression models, but can also be appl..."
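
The page excerpt above describes L2 regularization as a penalty term added to the objective function to constrain model complexity. As a rough illustrative sketch of that idea (not part of the revision record), the following NumPy snippet adds an L2 penalty lam * ||w||^2 to a mean-squared-error loss; the names ridge_objective, ridge_fit, X, y, w, and lam are assumptions for illustration, not taken from the page.

    # Illustrative sketch of the L2 (ridge) penalty described in the page excerpt.
    # All names here (X, y, w, lam) are assumed for the example.
    import numpy as np

    def ridge_objective(w, X, y, lam):
        """Mean squared error plus an L2 penalty lam * ||w||^2."""
        residuals = X @ w - y
        mse = np.mean(residuals ** 2)
        l2_penalty = lam * np.sum(w ** 2)
        return mse + l2_penalty

    def ridge_fit(X, y, lam):
        """Closed-form ridge solution: w = (X^T X + lam * I)^-1 X^T y."""
        n_features = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)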