Training loss

#[[Categorical cross-entropy]] (CCE): Used in multiclass classification problems to predict one of several classes, this loss measures the difference between a predicted probability distribution and the actual one-hot encoded class label.
#Softmax Cross-Entropy Loss: This approach is used for multiclass classification problems with mutually exclusive classes. It applies the softmax function to the model's raw outputs (logits) to produce a probability distribution, then computes the categorical cross-entropy against the true label, typically averaged across all examples in a batch.
#KL-Divergence: This loss measures how one probability distribution diverges from a reference distribution. It is commonly employed when training generative models such as Generative Adversarial Networks (GANs). A sketch of all three losses appears after this list.
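
The sketch below shows one way these three losses could be computed with NumPy. It is an illustration under stated assumptions, not the implementation from any particular library: the function names, the small epsilon constant used to avoid log(0), and the example arrays are all assumptions introduced here.

<syntaxhighlight lang="python">
import numpy as np

EPS = 1e-12  # illustrative small constant to avoid log(0)

def categorical_cross_entropy(y_true, y_pred):
    """CCE between one-hot labels y_true and predicted probabilities y_pred.

    Both arrays have shape (n_examples, n_classes); rows of y_pred sum to 1.
    Returns the mean loss over the batch.
    """
    return -np.mean(np.sum(y_true * np.log(y_pred + EPS), axis=1))

def softmax(logits):
    """Row-wise softmax, shifted by the row max for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_cross_entropy(y_true, logits):
    """Softmax cross-entropy: softmax the raw logits, then apply CCE."""
    return categorical_cross_entropy(y_true, softmax(logits))

def kl_divergence(p, q):
    """KL(p || q): how distribution q diverges from reference distribution p."""
    return np.sum(p * np.log((p + EPS) / (q + EPS)))

# Example with three classes and a batch of two examples.
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
print(softmax_cross_entropy(y_true, logits))  # mean loss over the batch
</syntaxhighlight>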


==How Training Loss is Used==