Improving Language Understanding by Generative Pre-Training (GPT)

===Introduction===
In June 2018, OpenAI introduced GPT-1, a language model that combined unsupervised pre-training with the transformer architecture to achieve significant progress in natural language understanding. The team fine-tuned the pre-trained model for specific tasks and found that pre-training enabled strong performance across a variety of NLP tasks with minimal fine-tuning. GPT-1 was trained on the BooksCorpus dataset and used self-attention within the transformer's decoder, with 117 million parameters, ...
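The decoder architecture mentioned above is built on masked (causal) self-attention, in which each token may attend only to itself and earlier tokens in the sequence. The sketch below illustrates that mechanism; it is a minimal example assuming PyTorch, and the class name, dimensions, and use of <code>nn.MultiheadAttention</code> are illustrative choices rather than OpenAI's original implementation.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn


class CausalSelfAttention(nn.Module):
    """Masked (causal) self-attention, the core of a decoder-only transformer block."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq_len = x.size(1)
        # Boolean mask: True above the diagonal blocks attention to future tokens,
        # so each position can only attend to itself and earlier positions.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out


# Toy usage: batch of 2 sequences, 8 tokens each, 64-dimensional embeddings.
x = torch.randn(2, 8, 64)
block = CausalSelfAttention(d_model=64, n_heads=4)
print(block(x).shape)  # torch.Size([2, 8, 64])
</syntaxhighlight>

In the actual GPT-1 model, a stack of such masked self-attention layers, interleaved with position-wise feed-forward layers, accounts for the roughly 117 million parameters noted above.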