Perplexity

{{see also|language models}}
Perplexity is a standard measurement of how well a [[language model]] predicts a sample of text. It is defined as the inverse probability the model assigns to the [[test set]], normalized by the number of words; equivalently, it is two raised to the [[cross-entropy]], the average number of bits needed to encode each word. Perplexity can be interpreted as a [[weighted branching factor]]: roughly, the average number of words the model treats as plausible at each step. A high perplexity score indicates greater confusion in the model's next-word predictions, while a low perplexity score indicates greater confidence in the model's output.
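In formula form (a standard formulation, not taken from this revision), for a test set <math>W = w_1 w_2 \ldots w_N</math>:

:<math>\mathrm{PP}(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = 2^{H(W)}, \qquad H(W) = -\frac{1}{N}\sum_{i=1}^{N} \log_2 P(w_i \mid w_1 \ldots w_{i-1})</math>

where <math>H(W)</math> is the cross-entropy in bits per word. A minimal sketch of the computation follows; the function name and the example probabilities are illustrative, not drawn from any particular library:

<syntaxhighlight lang="python">
import math

def perplexity(token_probs):
    """Perplexity from the probability a model assigns to each test-set word.

    token_probs: one conditional probability P(w_i | w_1 .. w_{i-1}) per word.
    Computed as 2 ** cross-entropy, with cross-entropy in bits per word.
    """
    n = len(token_probs)
    cross_entropy = -sum(math.log2(p) for p in token_probs) / n
    return 2 ** cross_entropy

# A model that spreads probability evenly over 4 choices at every step
# has perplexity 4: the "weighted branching factor" reading of the score.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0
</syntaxhighlight>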
[[Category:AI text]] [[Category:AI content generation]] [[Category:AI content detection]] [[Category:Terms]] [[Category:Language model]] [[Category:NLP]]