#[[Mean squared error]] (MSE) Loss: This loss function is the most commonly employed for [[regression]] problems. It measures the average squared difference between predicted output and actual output.
#[[Binary cross-entropy]] (BCE): Used in [[binary classification]] problems where the objective is to accurately predict one of two possible [[class]]es, this statistic measures the distance between the predicted probabilities and the actual target values.
#[[Categorical cross-entropy]] (CCE): Used in [[multiclass classification]] problems to predict one of several classes, this statistic measures the difference between a [[predicted probability distribution]] and an actual [[one-hot encoded]] class [[label]].
#[[Softmax cross-entropy]]: This approach is used for multiclass classification problems with mutually exclusive classes. It applies the [[softmax]] function to the raw model outputs to obtain a probability distribution and then computes the [[categorical cross-entropy]] loss against the true class.
#[[KL-divergence]]: This statistic measures the difference between two probability distributions. It's commonly employed when training [[generative models]] such as [[Generative Adversarial Network]]s (GANs). A minimal code sketch of these losses follows this list.
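To make the definitions above concrete, the following is a minimal NumPy sketch of each loss. The function names and example values are illustrative only and are not taken from any particular library.

<syntaxhighlight lang="python">
# Illustrative implementations of the losses listed above (not from any specific framework).
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error: average squared difference between targets and predictions."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy for targets in {0, 1} and predicted probabilities in (0, 1)."""
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def categorical_cross_entropy(y_onehot, p_pred, eps=1e-12):
    """Categorical cross-entropy for one-hot targets and a predicted probability distribution."""
    p = np.clip(p_pred, eps, 1.0)
    return -np.mean(np.sum(y_onehot * np.log(p), axis=-1))

def softmax_cross_entropy(y_onehot, logits):
    """Softmax cross-entropy: softmax over raw scores, then categorical cross-entropy."""
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -np.mean(np.sum(y_onehot * log_probs, axis=-1))

def kl_divergence(p, q, eps=1e-12):
    """KL-divergence D_KL(p || q) between two discrete probability distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

# Illustrative usage with made-up values:
y_true = np.array([1.0, 0.0, 1.0])
p_pred = np.array([0.9, 0.2, 0.7])
print(binary_cross_entropy(y_true, p_pred))  # small value, since predictions are close to targets
</syntaxhighlight>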
==How Training Loss is Used==