L0 regularization: Revision history

Legend: (cur) = difference with latest revision, (prev) = difference with preceding revision, m = minor edit.

17 March 2023

26 February 2023

25 February 2023

  • cur prev 14:03, 25 February 2023 Alpha5 talk contribs 3,048 bytes +3,048 Created page with "{{see also|Machine learning terms}} ==Introduction== L0 regularization, also referred to as "feature selection" regularization, is a machine learning technique that encourages a model to use only some of the available features in the data. It does this by adding a penalty term to the loss function that pushes the model toward sparse weights, that is, weights that are exactly zero. The goal of L0 regularization is to reduce the number of features used by the model, which improve..."
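The introduction quoted in this revision describes L0 regularization as a penalty added to the loss function that drives weights to zero. Below is a minimal sketch of that idea, assuming a linear model and NumPy; the names (l0_penalized_loss, X, y, w, lam) are illustrative and not taken from the page. Because the L0 penalty, the count of nonzero weights, is non-differentiable, practical implementations usually optimize a relaxation of it rather than this objective directly.

```python
# Minimal sketch of an L0-penalized objective (illustrative names, not the page's code).
import numpy as np

def l0_penalized_loss(w, X, y, lam=0.1):
    """Mean squared error plus an L0 penalty (count of nonzero weights)."""
    residual = X @ w - y
    mse = np.mean(residual ** 2)
    l0_penalty = np.count_nonzero(w)  # number of features the model actually uses
    return mse + lam * l0_penalty

# Example: a sparse weight vector pays a smaller L0 penalty than a dense one.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=100)

w_dense = np.array([2.0, 0.01, -0.02, 0.03, 0.01])  # uses all 5 features, penalty = 5
w_sparse = np.array([2.0, 0.0, 0.0, 0.0, 0.0])      # uses 1 feature, penalty = 1

print(l0_penalized_loss(w_dense, X, y))
print(l0_penalized_loss(w_sparse, X, y))
```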