Machine learning terms/Natural Language Processing
- See also: Machine learning terms
- attention
- bag of words
- BERT (Bidirectional Encoder Representations from Transformers)
- bidirectional
- bidirectional language model
- bigram
- BLEU (Bilingual Evaluation Understudy)
- causal language model
- crash blossom
- decoder
- denoising
- embedding layer
- embedding space
- embedding vector
- encoder
- GPT (Generative Pre-trained Transformer)
- LaMDA (Language Model for Dialogue Applications)
- language model
- large language model
- masked language model
- meta-learning
- modality
- model parallelism
- multi-head self-attention
- multimodal model
- natural language understanding
- N-gram
- NLU
- pipelining
- self-attention (also called self-attention layer)
- sentiment analysis
- sequence-to-sequence task
- sparse feature
- sparse representation
- staged training
- token
- Transformer
- trigram
- unidirectional
- unidirectional language model
- word embedding