Natural Language Processing
Terms
- See also: Natural Language Processing terms
- attention
- bag of words
- BERT (Bidirectional Encoder Representations from Transformers)
- bigram
- bidirectional
- bidirectional language model
- BLEU (Bilingual Evaluation Understudy)
- causal language model
- crash blossom
- decoder
- denoising
- embedding layer
- embedding space
- embedding vector
- encoder
- GPT (Generative Pre-trained Transformer)
- LaMDA (Language Model for Dialogue Applications)
- language model
- large language model
- masked language model
- meta-learning
- modality
- model parallelism
- multi-head self-attention
- multimodal model
- natural language understanding
- N-gram
- NLU
- pipelining
- self-attention (also called self-attention layer)
- sentiment analysis
- sequence-to-sequence task
- sparse feature
- sparse representation
- staged training
- token
- Transformer
- trigram
- unidirectional
- unidirectional language model
- word embedding
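As a quick illustration of two of the listed terms, bag of words and bigram (an n-gram with n = 2), here is a minimal Python sketch; the function names and the toy sentence are invented for this example and are not part of the glossary.

```python
from collections import Counter

def bag_of_words(text):
    """Count word occurrences, ignoring word order (a bag-of-words representation)."""
    return Counter(text.lower().split())

def ngrams(text, n=2):
    """Return the n-grams over the token sequence; n=2 gives bigrams."""
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sentence = "the cat sat on the mat"
print(bag_of_words(sentence))  # Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
print(ngrams(sentence, n=2))   # [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ('on', 'the'), ('the', 'mat')]
```

The bag-of-words representation keeps only counts, while the bigram list preserves local word order; both are simple sparse representations of text, in contrast to the dense word embeddings also listed above.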
Models
- See also: Natural Language Processing Models