Search results

Results 1 – 30 of 85
Page title matches

Page text matches

  • ...t and can be fine-tuned for a specific task. The motivation behind using a pre-trained model is to leverage the knowledge gained during its initial training, thus ...mple, in natural language processing, models like [[BERT]] and [[GPT]] are pre-trained using self-supervised techniques such as masked language modeling and next-
    3 KB (430 words) - 01:10, 21 March 2023
  • ...g, is a technique used in machine learning to improve the performance of a pre-trained model on a specific task. This approach leverages the knowledge gained from ===Pre-trained Models===
    3 KB (543 words) - 01:17, 20 March 2023
  • ===Pre-Trained Models=== ...ross a wide range of tasks without requiring further training. Examples of pre-trained models include [[BERT]] for natural language processing, and [[ResNet]] for
    3 KB (450 words) - 13:29, 18 March 2023
  • ...ge-based abilities]]. It showed that language models can be efficiently [[pre-trained]], which may help them generalize. The architecture was capable of performin
    2 KB (221 words) - 18:55, 2 March 2023
  • ...ving significant improvements in several language understanding tasks. The pre-trained model also acquires useful linguistic knowledge for downstream tasks, even ...cation, speech recognition, and machine translation. Some researchers have pre-trained a neural network using a language modeling objective and fine-tuned it on a
    3 KB (469 words) - 20:23, 2 March 2023
  • The '''Generative Pre-trained Transformer''' ('''GPT''') is a series of machine learning models developed ...was introduced by OpenAI in 2018. GPT-1 had 117 million parameters and was pre-trained on the [[BooksCorpus]] dataset. It demonstrated state-of-the-art performanc
    4 KB (548 words) - 13:11, 18 March 2023
  • ...urce task. Examples of inductive transfer learning include [[fine-tuning]] pre-trained models and multi-task learning. * [[Computer vision]]: Pre-trained [[convolutional neural networks]] (CNNs) have been employed to solve tasks
    4 KB (614 words) - 22:28, 21 March 2023
  • Once pre-trained, BERT can be fine-tuned using supervised learning on a relatively small amo
    4 KB (542 words) - 13:11, 18 March 2023
  • 79 bytes (8 words) - 18:44, 2 March 2023
  • ...ach the article describes is to take the pre-trained language model, PaLM (Pathways Language Model), and add sensor data to it to create PaLM-E ...an embodied language model that incorporates real-world sensor data into a pre-trained language model. The program is designed to help robots understand language
    4 KB (594 words) - 18:43, 9 March 2023
  • *[[GPT (Generative Pre-trained Transformer)]]
    1 KB (96 words) - 17:50, 26 February 2023
  • 4 KB (538 words) - 13:16, 18 March 2023
  • ===Pre-trained Models=== Keras offers a collection of pre-trained models, such as [[VGG]], [[Inception]], and [[ResNet]], that can be fine-tu
    4 KB (562 words) - 05:02, 20 March 2023
  • ...learning is biased model initialization. When a model is initialized with pre-trained weights, these initial weights may contain biases from the original trainin
    3 KB (484 words) - 15:45, 19 March 2023
  • 3 KB (396 words) - 15:11, 1 April 2023
  • ...revalent in natural language processing, where models like GPT (Generative Pre-trained Transformer) are trained to predict the next word in a sentence. Autoregres
    4 KB (600 words) - 01:15, 21 March 2023
  • '''[[Pre-trained]] / [[Pretraining]]'''
    2 KB (300 words) - 21:33, 11 January 2024
  • [[GPT]], which stands for [[Generative Pre-trained Transformer]], is a type of [[language model]] developed by [[OpenAI]]. Bas
    4 KB (493 words) - 18:00, 15 July 2023
  • 3 KB (480 words) - 05:03, 20 March 2023
  • 3 KB (388 words) - 15:09, 6 April 2023