Machine learning terms/Natural Language Processing
- attention
- bag of words (see the token/N-gram sketch after this list)
- BERT (Bidirectional Encoder Representations from Transformers)
- bidirectional
- bidirectional language model
- bigram
- BLEU (Bilingual Evaluation Understudy)
- causal language model
- crash blossom
- decoder
- denoising
- embedding layer
- embedding space
- embedding vector
- encoder
- GPT (Generative Pre-trained Transformer)
- LaMDA (Language Model for Dialogue Applications)
- language model
- large language model
- masked language model
- meta-learning
- modality
- model parallelism
- multi-head self-attention
- multimodal model
- natural language understanding
- N-gram
- NLU (abbreviation for natural language understanding)
- pipelining
- self-attention (also called self-attention layer; see the attention sketch after this list)
- sentiment analysis
- sequence-to-sequence task
- sparse feature
- sparse representation
- staged training
- token
- Transformer
- trigram
- unidirectional
- unidirectional language model
- word embedding (see the embedding sketch after this list)
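To ground a few of the terms above (token, bag of words, bigram, trigram, N-gram), here is a minimal Python sketch. The whitespace tokenizer and toy sentence are illustrative assumptions; real systems typically use more sophisticated (e.g. subword) tokenizers.

```python
from collections import Counter

def tokenize(text):
    # Naive tokenization: lowercase and split on whitespace.
    # Production tokenizers (e.g. subword tokenizers) are more involved.
    return text.lower().split()

def ngrams(tokens, n):
    # An N-gram is a contiguous run of n tokens:
    # n=2 gives bigrams, n=3 gives trigrams.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = tokenize("the cat sat on the mat")

# Bag of words: token counts, with word order discarded.
print(Counter(tokens))      # Counter({'the': 2, 'cat': 1, 'sat': 1, ...})
print(ngrams(tokens, 2))    # bigrams: [('the', 'cat'), ('cat', 'sat'), ...]
print(ngrams(tokens, 3))    # trigrams
```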
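For the embedding entries (embedding layer, embedding vector, embedding space, word embedding, sparse representation), an embedding layer is in essence a lookup table from token ids to dense vectors. A sketch follows, with the toy vocabulary, dimension, and random table as stated assumptions:

```python
import numpy as np

# Toy vocabulary and embedding dimension (assumptions for illustration).
vocab = {"the": 0, "cat": 1, "sat": 2}
d_embed = 4

# The embedding layer: one row (embedding vector) per vocabulary entry,
# all living in the same d_embed-dimensional embedding space.
rng = np.random.default_rng(1)
embedding_table = rng.normal(size=(len(vocab), d_embed))

# A token id is a sparse representation; the looked-up row is dense.
token_ids = [vocab[w] for w in "the cat sat".split()]
embeddings = embedding_table[token_ids]
print(embeddings.shape)     # (3, 4): one word embedding per token
```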
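And a sketch of single-head scaled dot-product self-attention, the mechanism behind the attention, self-attention, multi-head self-attention, and Transformer entries; multi-head self-attention runs several such heads in parallel and concatenates their outputs. Matrix sizes and random weights here are illustrative assumptions, not a definitive implementation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a sequence X of shape
    # (seq_len, d_model): every position attends to every position.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq_len, seq_len)
    # A causal (unidirectional) language model would mask scores[i, j]
    # for j > i so each token attends only to earlier positions;
    # a bidirectional model, as here, attends in both directions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row softmax
    return weights @ V                                # (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8   # illustrative sizes
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [rng.normal(size=(d_model, d_model)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8)
```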