All public logs
Combined display of all available logs of AI Wiki. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 13:13, 18 March 2023 Walle talk contribs created page ReLU (Created page with "{{see also|Machine learning terms}} ==ReLU in Machine Learning== ReLU, or '''Rectified Linear Unit''', is a popular activation function used in artificial neural networks (ANNs) for implementing deep learning models. The primary role of an activation function is to introduce non-linearity in the model and improve its learning capability. ReLU has been widely adopted due to its simplicity, efficiency, and ability to mitigate the vanishing gradient problem....")
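As a quick illustration of the entry above, a minimal NumPy sketch of ReLU; the function name and sample inputs are invented for this example, not taken from the page:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: pass positive values through, zero out the rest."""
    return np.maximum(0.0, x)

# Negative inputs map to 0; positive inputs are unchanged.
print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # [0.  0.  0.  1.5 3. ]
```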
- 13:13, 18 March 2023 Walle talk contribs created page ROC (receiver operating characteristic) Curve (Created page with "{{see also|Machine learning terms}} ==Introduction== The '''Receiver Operating Characteristic''' ('''ROC''') curve is a graphical representation that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. It is widely used in machine learning, statistics, and data analysis for evaluating the performance of classification algorithms, particularly in the presence of imbalanced class distribution. ==Background== ===Origi...")
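A hedged sketch of how such a curve is usually computed, here with scikit-learn's `roc_curve`; the labels and scores below are invented toy data:

```python
from sklearn.metrics import auc, roc_curve

# Toy binary labels and classifier scores, purely illustrative.
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

# Sweeping the decision threshold yields false/true positive rate pairs.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(auc(fpr, tpr))  # area under the ROC curve; 1.0 is a perfect ranking
```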
- 13:12, 18 March 2023 Walle talk contribs created page NLU (Created page with "{{see also|Machine learning terms}} ==Introduction== Natural Language Understanding (NLU) is a subfield of Artificial Intelligence (AI) and Machine Learning (ML) that focuses on enabling computers to comprehend and interpret human language. This process includes the analysis of linguistic data to identify key elements such as entities, relations, and sentiments. NLU enables machines to understand the meaning and context of natural language input, allowing them to...")
- 13:12, 18 March 2023 Walle talk contribs created page N-gram (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning and natural language processing, an '''N-gram''' is a contiguous sequence of N items from a given sample of text or speech. N-grams are widely used for various tasks in computational linguistics, such as statistical language modeling, text classification, and information retrieval. The term "N-gram" is derived from the combination of the letter "N" and the word "gram," which originates...")
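A small self-contained sketch of the definition quoted above (the helper `ngrams` is invented for illustration):

```python
def ngrams(tokens, n):
    """Return every contiguous n-item sequence from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the quick brown fox".split()
print(ngrams(tokens, 2))  # [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]
```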
- 13:12, 18 March 2023 Walle talk contribs created page Log Loss (Created page with "{{see also|Machine learning terms}} ==Log Loss== Log Loss, also known as logarithmic loss or cross-entropy loss, is a common loss function used in machine learning for classification problems. It is a measure of the difference between the predicted probabilities and the true labels of a dataset. The Log Loss function quantifies the performance of a classifier by penalizing the predicted probabilities that deviate from the actual class labels. ==Usage in Machine Learning...")
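The penalty the entry describes can be written directly from the definition; a minimal sketch for binary labels, assuming probabilities are clipped to avoid log(0):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Average cross-entropy between 0/1 labels and predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip so log() stays finite
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(y_true)

print(log_loss([1, 0, 1], [0.9, 0.1, 0.8]))  # ~0.144; confident wrong answers score far worse
```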
- 13:12, 18 March 2023 Walle talk contribs created page LaMDA (Language Model for Dialogue Applications) (Created page with "{{see also|Machine learning terms}} ==Introduction== '''LaMDA''' ('''L'''anguage '''M'''odel for '''D'''ialogue '''A'''pplications) is a conversational AI model developed by Google in the field of machine learning. LaMDA aims to improve the interaction between humans and computers by enabling open-domain conversations, thereby allowing machines to understand and respond to a wide range of topics. This article discusses the design, functionality, and key aspects of La...")
- 13:12, 18 March 2023 Walle talk contribs created page L2 regularization (Created page with "{{see also|Machine learning terms}} ==Introduction== L2 regularization, also known as ridge regression or Tikhonov regularization, is a technique employed in machine learning to prevent overfitting and improve the generalization of a model. It is a form of regularization that adds a penalty term to the objective function, which helps in constraining the model's complexity. L2 regularization is particularly useful for linear regression models, but can also be appl...")
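A hedged sketch of the penalty term described above, added to a plain least-squares objective (the names and the `lam` strength parameter are illustrative, not from the page):

```python
import numpy as np

def ridge_objective(w, X, y, lam):
    """Mean squared error plus an L2 penalty lam * ||w||^2 on the weights,
    which discourages large coefficients and so constrains model complexity."""
    residuals = X @ w - y
    return np.mean(residuals ** 2) + lam * np.sum(w ** 2)
```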
- 13:12, 18 March 2023 Walle talk contribs created page L2 loss (Created page with "{{see also|Machine learning terms}} ==L2 Loss in Machine Learning== L2 Loss, also known as Euclidean Loss or Squared Error Loss, is a widely used loss function in machine learning and deep learning. It is a popular choice for regression tasks, where the goal is to predict a continuous output value. L2 Loss quantifies the difference between the predicted output and the true output, providing a measure of model accuracy. ===Definition and Properties=== The L2 Loss is def...")
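A minimal sketch of the definition, with invented sample values:

```python
import numpy as np

def l2_loss(y_pred, y_true):
    """Sum of squared differences between predictions and targets."""
    return np.sum((np.asarray(y_pred) - np.asarray(y_true)) ** 2)

print(l2_loss([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # 0.25 + 0.25 + 0.0 = 0.5
```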
- 13:11, 18 March 2023 Walle talk contribs created page L1 regularization (Created page with "{{see also|Machine learning terms}} ==L1 Regularization in Machine Learning== L1 regularization, also known as Lasso regularization or L1 norm, is a widely used regularization technique in machine learning and statistical modeling to prevent overfitting and enhance the generalization of the model. It achieves this by introducing a penalty term in the optimization objective that encourages sparsity in the model parameters. ===Overview=== Regularization techniques are emp...")
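Compare the sketch after the L2 regularization entry above; the only change for L1 is the penalty term, which is what produces the sparsity this entry mentions:

```python
import numpy as np

def lasso_objective(w, X, y, lam):
    """Mean squared error plus an L1 penalty lam * ||w||_1; unlike the L2
    penalty, this can drive individual weights exactly to zero."""
    residuals = X @ w - y
    return np.mean(residuals ** 2) + lam * np.sum(np.abs(w))
```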
- 13:11, 18 March 2023 Walle talk contribs created page L1 loss (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, various loss functions are used to measure the discrepancy between predicted values and actual values. L1 loss, also known as ''Least Absolute Deviations'' (LAD) or ''Least Absolute Errors'' (LAE), is one such loss function used in regression problems to estimate model parameters. L1 loss calculates the sum of absolute differences between predicted and actual values, making it robust to outliers an...")
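A minimal sketch, mirroring the L2 loss example above (sample values invented):

```python
import numpy as np

def l1_loss(y_pred, y_true):
    """Sum of absolute differences; outliers are penalized linearly rather
    than quadratically, which is why L1 is more robust to them."""
    return np.sum(np.abs(np.asarray(y_pred) - np.asarray(y_true)))

print(l1_loss([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # 0.5 + 0.5 + 0.0 = 1.0
```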
- 13:11, 18 March 2023 Walle talk contribs created page GPT (Generative Pre-trained Transformer) (Created page with "{{see also|Machine learning terms}} ==Introduction== The '''Generative Pre-trained Transformer''' ('''GPT''') is a series of machine learning models developed by OpenAI for natural language processing tasks. These models are based on the Transformer architecture introduced by Vaswani et al. in 2017. GPT models are designed to generate text by predicting subsequent words in a sequence, and have been applied to tasks such as text generation, translation, summarization,...")
- 13:11, 18 March 2023 Walle talk contribs created page BLEU (Bilingual Evaluation Understudy) (Created page with "{{see also|Machine learning terms}} ==Introduction== The '''Bilingual Evaluation Understudy''' ('''BLEU''') is an automatic evaluation metric used in the field of Natural Language Processing (NLP) to measure the quality of machine-generated translations. Developed by IBM Research in 2002, it compares translations generated by a machine with a set of human-generated reference translations. BLEU scores are widely used in the evaluation of machine translation system...")
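A hedged sketch of scoring one candidate against a reference with NLTK's `sentence_bleu`; the sentences are invented, and `weights=(0.5, 0.5)` restricts the score to unigrams and bigrams so this short example does not degenerate to zero:

```python
# Requires: pip install nltk
from nltk.translate.bleu_score import sentence_bleu

reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

# Score up to bigrams; closer to 1.0 means closer to the reference.
print(sentence_bleu(reference, candidate, weights=(0.5, 0.5)))  # ~0.71
```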
- 13:11, 18 March 2023 Walle talk contribs created page BERT (Bidirectional Encoder Representations from Transformers) (Created page with "{{see also|Machine learning terms}} ==Introduction== BERT, or '''Bidirectional Encoder Representations from Transformers''', is a pre-training technique for natural language understanding tasks in the field of machine learning. Developed by researchers at Google AI Language, BERT has significantly advanced the state of the art in a wide range of tasks, such as question answering, sentiment analysis, and named entity recognition. BERT's breakthrough lies in its abilit...")
- 15:34, 9 March 2023 Daikon Radish talk contribs created page PaLM-E: An Embodied Multimodal Language Model (Created page with "{{see also|PaLM-E|Papers}} ==Explore Like I'm 5 (ELI5)== ==Abstract== Large language models have been demonstrated to perform complex tasks. However, enabling general inference in the real world, e.g. for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts. Input to our embodied language mode...")
- 16:59, 6 March 2023 Daikon Radish talk contribs created page File:Coding model diagram1.png (File uploaded with MsUpload)
- 16:59, 6 March 2023 Daikon Radish talk contribs uploaded File:Coding model diagram1.png (File uploaded with MsUpload)
- 13:19, 6 March 2023 Daikon Radish talk contribs created page Zero shot, one shot and few shot learning (Created page with "{{Needs Expansion}} Zero-shot learning is when you have no examples in the prompt. One-shot learning is when you have one example in the prompt. Few-shot learning is when you have a few examples in the prompt. All of these techniques allow the machine learning model to learn with limited or no labeled data. ==Zero Shot Learning== Zero-shot learning, one-shot learning, and few-shot learning are all machine learning techniques used to train model...")
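The three prompt styles this entry contrasts can be shown concretely; these strings are invented examples, not taken from the page:

```python
# Zero-shot: no worked examples in the prompt.
zero_shot = "Classify the sentiment of: 'I loved this movie.'"

# One-shot: a single worked example before the real query.
one_shot = (
    "Review: 'Terrible plot.' Sentiment: negative\n"
    "Review: 'I loved this movie.' Sentiment:"
)

# Few-shot: several worked examples before the real query.
few_shot = (
    "Review: 'Terrible plot.' Sentiment: negative\n"
    "Review: 'Great acting!' Sentiment: positive\n"
    "Review: 'It was okay.' Sentiment: neutral\n"
    "Review: 'I loved this movie.' Sentiment:"
)
```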
- 22:58, 4 March 2023 Elegant angel talk contribs created page Fine-tune ChatGPT with Perplexity, Burstiness, Professionalism, Randomness and Sentimentality Guide (Created page with "{{see also|Guides|ChatGPT Guides|Prompt Engineering Guides}} In order to generate a text response, please adhere to the following parameters. Please note that each parameter is set on a scale from 1 to 10, where a higher value represents more of the specified attribute. Please ensure that the output includes what each parameter is set at in a bulleted list format prior to the actual response. Perplexity: This parameter measures the complexity of the text. Higher val...")
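A hypothetical prompt following the guide's parameter scheme; the values and topic are invented for illustration:

```python
prompt = """Please respond using these parameters, each on a scale from 1 to 10,
and list the settings in bullet points before your answer:
- Perplexity: 7
- Burstiness: 4
- Professionalism: 9
- Randomness: 2
- Sentimentality: 3

Topic: the benefits of unit testing."""
```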
- 10:10, 3 March 2023 Alpha5 talk contribs created page GPT4 (Redirected page to GPT-4) Tag: New redirect
- 10:09, 3 March 2023 Alpha5 talk contribs created page GPT3.5 (Redirected page to GPT-3.5) Tag: New redirect
- 10:09, 3 March 2023 Alpha5 talk contribs created page GPT3 (Redirected page to GPT-3) Tag: New redirect
- 10:09, 3 March 2023 Alpha5 talk contribs created page GPT2 (Redirected page to GPT-2) Tag: New redirect
- 10:09, 3 March 2023 Alpha5 talk contribs created page GPT1 (Redirected page to GPT-1) Tag: New redirect
- 10:03, 3 March 2023 Alpha5 talk contribs created page Generative Pre-trained Transformers (Redirected page to GPT) Tag: New redirect
- 10:03, 3 March 2023 Alpha5 talk contribs created page Generative pre-trained transformers (Redirected page to GPT) Tag: New redirect
- 10:02, 3 March 2023 Alpha5 talk contribs created page Generative pretrained transformer (Redirected page to GPT) Tag: New redirect
- 10:02, 3 March 2023 Alpha5 talk contribs created page Generative pre-trained transformer (Redirected page to GPT) Tag: New redirect
- 10:01, 3 March 2023 Alpha5 talk contribs created page Generative Pre-trained Transformer (Redirected page to GPT) Tag: New redirect
- 19:12, 2 March 2023 Alpha5 talk contribs created page Template:Needs Links (Created page with "<includeonly><div id="needsexpansion" class="notice metadata"> {| style="width:90%;max-width:600px;margin:auto;margin-bottom: 5px;background:rgba(255,255,255,0.4);border:solid #313235;border-width:1px 1px 1px 15px;padding: 2px;" ! rowspan="3" style="width:40px" | 40px|link= ! style="text-align:left" | This page needs additional information. |- |Key elements of this article are missing. You can help AI Wiki by <span class="plainlinks">[{{f...")
- 19:01, 2 March 2023 Elegant angel talk contribs created page Natural language input (Redirected page to Prompts) Tag: New redirect
- 19:00, 2 March 2023 Elegant angel talk contribs created page GPT models (Redirected page to GPT) Tag: New redirect
- 18:54, 2 March 2023 Elegant angel talk contribs created page Paper (Redirected page to Papers) Tag: New redirect
- 18:44, 2 March 2023 Elegant angel talk contribs created page Improving Language Understanding by Generative Pre-Training (Redirected page to Improving Language Understanding by Generative Pre-Training (GPT)) Tag: New redirect
- 18:21, 2 March 2023 Elegant angel talk contribs created page GPT-2 (Created page with "{{Needs Expansion}} GPT-2 is the 2nd GPT model released by OpenAI in February 2019. Although it is larger than its predecessor, GPT-1, it is very similar. The main difference is that GPT-2 can multitask. It is able to perform well on multiple tasks without being trained on any examples. GPT-2 demonstrated that a language model could better comprehend natural language and perform better on more tasks when it is trained on a larger d...")
- 17:56, 2 March 2023 Elegant angel talk contribs created page GPT (Created page with "GPT-1 GPT-2 GPT-3 ChatGPT GPT-3.5")
- 17:55, 2 March 2023 Elegant angel talk contribs created page GPT-1 (Created page with "OpenAI released GPT-1 in June 2018. The developers found that combining the transformer architecture with unsupervised pretraining produced impressive results. GPT-1, according to the developers, was tailored for specific tasks in order to "strongly understand natural language." GPT-1 was an important stepping stone toward a language model that possesses general language-based abilities. It showed that language models can be efficiently pre-tra...")
- 16:56, 2 March 2023 Nicoboomer talk contribs created page Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers (VALL-E) (Created page with "{{see also|Papers}} ==Introduction== In the last decade, there have been significant advances in speech synthesis via neural networks and end-to-end modeling. Current text-to-speech (TTS) systems require high-quality data from recording studios. They also suffer from poor generalization for unseen speakers in zero-shot situations. A new TTS framework, VALL-E, has been developed to address this issue. It uses audio codec codes as an intermediate representation as well a...")
- 16:35, 2 March 2023 Nicoboomer talk contribs created page Template:Paper infobox (Created page with "<includeonly><div class="model"> <div class="heading">[[{{PAGENAME}}]]</div> <div style="width: 500px; display: flex; flex-direction: column;"> <div class="model-infobox-row"> {{#if:{{{name|}}}| <div class="model-infobox-cell">'''Name'''</div> <div class="model-infobox-cell">[[Has paper name::{{{name}}}]]</div> }} </div> <div class="model-infobox-row"> {{#if:{{{type|}}}| <div class="model-infobox-cell">'''Type'''</div> <div class="model-infobox-cell">{{#arraymap:{{{ty...")
- 16:19, 2 March 2023 User account Nicoboomer talk contribs was created
- 18:57, 1 March 2023 Alpha5 talk contribs created page Imbalanced dataset (Redirected page to Class-imbalanced dataset) Tag: New redirect
- 18:55, 28 February 2023 Alpha5 talk contribs created page Minority class (Created page with "{{see also|Machine learning terms}} ==Minority Class in Machine Learning== The minority class is the class in a classification problem with fewer instances or samples than its majority counterpart. For instance, in a binary classification problem, if the positive class has fewer instances than the negative one, then it is considered the minority class. The concept also applies to multi-class problems, where the minority classes are those with the fewest instances. Class imbalance is a prob...")
- 18:54, 28 February 2023 Alpha5 talk contribs created page Imbalanced data (Redirected page to Class-imbalanced dataset) Tag: New redirect
- 18:41, 28 February 2023 Alpha5 talk contribs created page Majority class (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, the majority class is the more common label in an imbalanced dataset. For example, in a dataset that is 80% "yes" and 20% "no", "yes" is the majority class. The opposite of the majority class is the minority class. ==Impact on Model Performance== A majority class in a dataset can have an enormous effect on the performance of a machine le...")
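The 80/20 example in this entry can be checked in a couple of lines; the labels are invented toy data:

```python
from collections import Counter

labels = ["yes"] * 80 + ["no"] * 20
counts = Counter(labels)
print(counts.most_common(1)[0][0])  # 'yes' is the majority class
```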
- 18:29, 28 February 2023 Alpha5 talk contribs created page Mini-batch size (Redirected page to Batch size) Tag: New redirect
- 18:03, 28 February 2023 Alpha5 talk contribs created page Mini batch (Redirected page to Mini-batch) Tag: New redirect
- 18:03, 28 February 2023 Alpha5 talk contribs created page Minibatch (Redirected page to Mini-batch) Tag: New redirect
- 18:02, 28 February 2023 Alpha5 talk contribs created page Mini-batch (Created page with "{{see also|Machine learning terms}} ==Introduction== Mini-batch training is a machine learning technique used to train models efficiently on large datasets. Dividing the dataset into smaller batches allows for faster training as well as improved convergence of the model toward its optimal solution. ==Theoretical Background== Traditional machine learning relies on batch gradient descent, which trains the model on all of the data in one iteration. Unfortunately, when the dataset gro...")
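A minimal sketch of the batching this entry describes: shuffle once per epoch, then walk the data in fixed-size slices (function and variable names are invented):

```python
import numpy as np

def minibatches(X, y, batch_size):
    """Yield shuffled (X, y) mini-batches covering one epoch."""
    idx = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

# Each gradient step then uses one small batch instead of the full dataset.
```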
- 16:58, 28 February 2023 Elegant angel talk contribs created page Data-centric AI (Redirected page to Data-centric AI (DCAI)) Tag: New redirect
- 16:57, 28 February 2023 Elegant angel talk contribs created page DCAI (Redirected page to Data-centric AI (DCAI)) Tag: New redirect
- 15:48, 28 February 2023 Elegant angel talk contribs created page Manipulation (Redirected page to Manipulation problem) Tag: New redirect