All public logs
Combined display of all available logs of AI Wiki. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 01:12, 21 March 2023 Walle talk contribs created page Queue (Created page with "{{see also|Machine learning terms}} ==Queue in Machine Learning== Queue, in the context of machine learning, refers to the use of a data structure known as a queue to store and manage data during the processing of machine learning tasks. Queues are data structures that follow the First-In-First-Out (FIFO) principle, meaning that elements are removed from the queue in the order they were inserted. Queues can be utilized in various stages of the machine learning proces...")
- 01:12, 21 March 2023 Walle talk contribs created page Quantization (Created page with "{{see also|Machine learning terms}} ==Quantization in Machine Learning== Quantization is a technique utilized in machine learning and deep learning to reduce the size of models and computational resources needed for their operation. The process entails approximating the continuous values of parameters, such as weights and activations, using a smaller, discrete set of values. Quantization is particularly useful in deploying models on resource-constrained devices,...")
- 01:12, 21 March 2023 Walle talk contribs created page Quantile bucketing (Created page with "{{see also|Machine learning terms}} ==Introduction== Quantile bucketing, also known as quantile binning or quantile-based discretization, is a technique in machine learning and data preprocessing that aims to transform continuous numeric features into discrete categories by partitioning the data distribution into intervals, with each interval containing an equal proportion of data points. This process improves the efficiency and interpretability of certain algori...")
- 01:12, 21 March 2023 Walle talk contribs created page Quantile (Created page with "{{see also|Machine learning terms}} ==Quantile in Machine Learning== A '''quantile''' is a statistical concept used in machine learning, which refers to values that divide a data distribution into intervals each containing an equal proportion of the data. These intervals represent different portions of the data distribution and are used for various statistical analyses, such as summarizing data, understanding its structure, and making inferences. ===Definition=== Formally, a quantile is defined as a value that div...")
- 01:12, 21 March 2023 Walle talk contribs created page Proxy (sensitive attributes) (Created page with "{{see also|Machine learning terms}} ==Definition== In machine learning, '''proxy (sensitive attributes)''' refers to variables that indirectly capture information about a sensitive attribute, such as race, gender, or age, which are often used in a model to make predictions or decisions. The use of proxy variables can inadvertently lead to biased outcomes or algorithmic discrimination, even when the original sensitive attribute is not explicitly used in the model. It...")
- 01:12, 21 March 2023 Walle talk contribs created page Probabilistic regression model (Created page with "{{see also|Machine learning terms}} ==Probabilistic Regression Model== Probabilistic regression models are a class of machine learning techniques that predict the relationship between input features and a continuous target variable by estimating a probability distribution of the target variable. These models account for uncertainties in the predictions by providing a range of possible outcomes and their associated probabilities. Probabilistic regression models are wi...")
- 01:11, 21 March 2023 Walle talk contribs created page Prior belief (Created page with "{{see also|Machine learning terms}} ==Prior Belief in Machine Learning== Prior belief, also known as '''prior probability''' or simply '''prior''', is a fundamental concept in the field of machine learning, particularly in Bayesian statistics and Bayesian machine learning methods. The prior represents the initial belief or probability distribution of a model regarding the values of its parameters before any data is taken into account. This section will cover the...")
- 01:11, 21 March 2023 Walle talk contribs created page Preprocessing (Created page with "{{see also|Machine learning terms}} ==Introduction== Preprocessing in machine learning refers to the initial stage of preparing raw data for use in machine learning algorithms. This critical step involves transforming and cleaning the data to enhance its quality, reduce noise, and ensure its compatibility with the chosen machine learning model. By performing preprocessing, data scientists and engineers aim to improve the efficiency and accuracy of machine learning al...")
- 01:11, 21 March 2023 Walle talk contribs created page Predictive rate parity (Created page with "{{see also|Machine learning terms}} ==Introduction== Predictive rate parity is an important concept in the field of machine learning, particularly in the context of fairness and bias. It is a metric used to measure the fairness of a machine learning model, especially in cases where the model makes predictions for different groups within a dataset. The goal of achieving predictive rate parity is to ensure that the model's predictions are equitable across these groups, min...")
- 01:11, 21 March 2023 Walle talk contribs created page Predictive parity (Created page with "{{see also|Machine learning terms}} ==Predictive Parity in Machine Learning== Predictive parity, also known as test fairness, is a crucial criterion for evaluating the fairness of machine learning algorithms. It refers to the condition when the predictive accuracy of an algorithm is consistent across different demographic groups. In other words, the probability of a correct prediction should be equal among all subgroups within the population. This concept is essentia...")
- 01:11, 21 March 2023 Walle talk contribs created page Prediction bias (Created page with "{{see also|Machine learning terms}} ==Definition== Prediction bias refers to a systematic error in a machine learning model's predictions, where the model consistently over- or under-estimates the true value of the target variable. This phenomenon occurs when the model's predictions exhibit a persistent deviation from the actual values, leading to inaccurate and unreliable results. The presence of prediction bias can significantly impair a model's generalization capabili...")
- 01:11, 21 March 2023 Walle talk contribs created page Precision (Created page with "{{see also|Machine learning terms}} ==Introduction== In the context of machine learning, ''precision'' is a fundamental metric used to evaluate the performance of classification models. Precision measures the accuracy of positive predictions made by a model, specifically the proportion of true positive instances among all instances classified as positive. This metric is particularly important in cases where the cost of false positives is high, such as in medical diag...")
- 01:10, 21 March 2023 Walle talk contribs created page Precision-recall curve (Created page with "{{see also|Machine learning terms}} ==Precision-Recall Curve in Machine Learning== In machine learning, the precision-recall curve is a graphical representation that illustrates the performance of a binary classification model. The curve is used to assess the trade-off between two important evaluation metrics: precision and recall. ===Definition of Precision and Recall=== * '''Precision''' refers to the proportion of true positive predictions out of all positive...")
- 01:10, 21 March 2023 Walle talk contribs created page Pre-trained model (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, a pre-trained model refers to a model that has been previously trained on a large dataset and can be fine-tuned for a specific task. The motivation behind using a pre-trained model is to leverage the knowledge gained during its initial training, thus reducing the time, computational resources, and the amount of data required for training a new model from scratch. ==Pre-training Me...")
- 01:10, 21 March 2023 Walle talk contribs created page Pipeline (Created page with "{{see also|Machine learning terms}} ==Pipeline in Machine Learning== A '''pipeline''' in machine learning refers to a sequence of data processing and transformation steps, combined with a learning algorithm, to create a complete end-to-end workflow for training and predicting outcomes. Pipelines are essential for streamlining machine learning tasks, ensuring reproducibility and efficiency, and facilitating collaboration among data scientists and engineers. ===Preproces...")
- 01:10, 21 March 2023 Walle talk contribs created page Performance (Created page with "{{see also|Machine learning terms}} ==Introduction== Performance in machine learning refers to the effectiveness of a machine learning model in achieving its intended purpose, which is typically to make accurate predictions or classifications based on input data. Performance evaluation is a critical aspect of machine learning, as it helps determine the quality of a model and its suitability for a particular task. This article will discuss various aspects of performance i...")
- 01:10, 21 March 2023 Walle talk contribs created page Perceptron (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''perceptron''' is a type of linear classifier and an early form of artificial neural network, which was introduced by Frank Rosenblatt in 1957. Perceptrons are designed to model simple decision-making processes in machine learning, and are primarily used for binary classification tasks, where the goal is to distinguish between two possible outcomes. Although they have been largely superseded by more advanced algori...")
- 01:10, 21 March 2023 Walle talk contribs created page Partitioning strategy (Created page with "{{see also|Machine learning terms}} ==Partitioning Strategy in Machine Learning== In the field of machine learning, the partitioning strategy refers to the method of dividing a dataset into separate subsets to facilitate the training, validation, and testing of models. Partitioning plays a crucial role in ensuring the robustness, accuracy, and generalizability of the model when applied to real-world situations. This article explores the various partitioning strategie...")
- 01:09, 21 March 2023 Walle talk contribs created page Participation bias (Created page with "{{see also|Machine learning terms}} ==Introduction== Participation bias, a form of selection bias, occurs in machine learning when the training data used to develop a model is not representative of the population of interest. This can lead to a model that performs poorly on new, unseen data, as it has only learned the patterns present in the biased sample. Participation bias can be particularly problematic in applications such as medical di...")
- 01:09, 21 March 2023 Walle talk contribs created page Partial derivative (Created page with "{{see also|Machine learning terms}} ==Partial Derivative in Machine Learning== In machine learning, the concept of partial derivatives plays a crucial role in optimization techniques, primarily for the training and refinement of models. Partial derivatives are a mathematical concept derived from calculus and are utilized to understand how a function changes when one of its variables is altered, while keeping the other variables constant. ===Definition and Notation==...")
- 01:09, 21 March 2023 Walle talk contribs created page Parameter update (Created page with "{{see also|Machine learning terms}} ==Parameter Update in Machine Learning== In the field of machine learning, parameter update refers to the process of iteratively adjusting the values of a model's parameters to minimize the difference between the model's predictions and the actual outcomes. The primary objective of this process is to improve the model's performance on a given task, such as classification or regression, by reducing its error rate. ===Gradient Desce...")
- 01:09, 21 March 2023 Walle talk contribs created page Oversampling (Created page with "{{see also|Machine learning terms}} ==Oversampling in Machine Learning== Oversampling is a technique used in the field of machine learning to address the issue of imbalanced data by increasing the number of samples in the minority class. This process aims to achieve a balanced distribution of classes within the dataset, which ultimately leads to improved performance of machine learning algorithms. ===Imbalanced Data=== Imbalanced data occurs when the distribution o...")
- 01:09, 21 March 2023 Walle talk contribs created page Outliers (Created page with "{{see also|Machine learning terms}} ==Outliers in Machine Learning== In the field of machine learning, outliers are data points that deviate significantly from the majority of the other data points in a given dataset. These data points can have a substantial impact on the results and performance of machine learning algorithms, potentially leading to erroneous or misleading conclusions. This article discusses the concept of outliers, their implications in machine learning...")
- 01:09, 21 March 2023 Walle talk contribs created page Outlier detection (Created page with "{{see also|Machine learning terms}} ==Outlier Detection in Machine Learning== Outlier detection, also referred to as anomaly detection and closely related to novelty detection, is a process in machine learning and statistics that involves identifying data points, observations, or patterns that significantly deviate from the expected behavior or the majority of the data. These deviations, known as outliers, can indicate errors in data collection, unusual events, or the presence of pr...")
- 01:08, 21 March 2023 Walle talk contribs created page Out-group homogeneity bias (Created page with "{{see also|Machine learning terms}} ==Out-group Homogeneity Bias== Out-group homogeneity bias, also known as the out-group homogeneity effect, refers to the cognitive bias that leads individuals to perceive members of an out-group, or those that do not belong to their own social or cultural group, as more similar to one another than they actually are. This bias can manifest in various social, cultural, and demographic contexts, including ethnicity, nationality, gender, a...")
- 01:08, 21 March 2023 Walle talk contribs created page Optimizer (Created page with "{{see also|Machine learning terms}} ==Definition== An '''optimizer''' in machine learning is an algorithm or method used to adjust the parameters of a model with the aim of minimizing the error or loss function during the training process. Optimizers guide the model in learning patterns from the data and making predictions as accurately as possible. They are a crucial component of machine learning algorithms, as they determine the effectiveness and efficiency of the...")
- 01:08, 21 March 2023 Walle talk contribs created page Operation (op) (Created page with "{{see also|Machine learning terms}} ==Introduction== In the context of machine learning, an operation (often abbreviated as 'op') refers to a basic computational task or function that manipulates data, typically during the process of training or running a machine learning model. Operations can be arithmetic, logical, or relational, and are performed on input data to produce an output. They are the building blocks of more complex algorithms and machine learning models. =...")
- 01:08, 21 March 2023 Walle talk contribs created page Saver (Created page with "{{see also|Machine learning terms}} ==Saver in Machine Learning== In the context of machine learning, a '''Saver''' is a utility or class that enables users to save and restore the states of models, variables, or other components during the training and evaluation process. Saving the state of a model is important for various reasons, such as preserving intermediate results, facilitating transfer learning, and enabling the resumption of training after interruptions. Diffe...")
- 01:08, 21 March 2023 Walle talk contribs created page SavedModel (Created page with "{{see also|Machine learning terms}} ==SavedModel in Machine Learning== SavedModel is a standardized, language-agnostic, and platform-independent serialization format for machine learning models developed by Google as part of the TensorFlow framework. It facilitates the sharing, deployment, and management of trained models across different platforms, programming languages, and applications. ===Overview=== The primary objective of SavedModel is to streamline t...")
- 01:08, 21 March 2023 Walle talk contribs created page Parameter Server (PS) (Created page with "{{see also|Machine learning terms}} ==Parameter Server (PS) in Machine Learning== The '''Parameter Server (PS)''' is a distributed machine learning framework designed to manage the parameters of large-scale machine learning models during the training process. It is particularly useful when dealing with massive datasets and complex model architectures, which are common in Deep Learning and Distributed Machine Learning. ===Background=== Traditional machine learnin...")
- 01:07, 21 March 2023 Walle talk contribs created page PR AUC (area under the PR curve) (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, the evaluation of classification models is a critical task. One common metric used to measure the performance of such models is the PR AUC, or Area Under the Precision-Recall (PR) Curve. The PR AUC is particularly useful when dealing with imbalanced datasets, where the proportion of positive and negative samples is unequal. ==Precision-Recall Curve== ===Definition=== The Precision-...")
- 11:45, 20 March 2023 Walle talk contribs created page One-shot learning (Created page with "{{see also|Machine learning terms}} ==One-shot Learning in Machine Learning== One-shot learning is a type of machine learning approach that aims to build robust models capable of learning from a limited amount of data, typically with only one or very few examples per class. This is in contrast to traditional supervised learning techniques, which require large amounts of labeled data for training. ===Background=== Traditional machine learning and deep learning algorithms...")
- 11:45, 20 March 2023 Walle talk contribs created page Objective function (Created page with "{{see also|Machine learning terms}} ==Objective Function in Machine Learning== The objective function, also known as the loss function or cost function, is a key concept in machine learning and optimization problems. It is a mathematical function that quantifies the discrepancy between the predicted output and the true output (ground truth) for a given input. The goal of machine learning algorithms is to minimize the value of the objective function to improve the...")
- 11:45, 20 March 2023 Walle talk contribs created page Objective (Created page with "{{see also|Machine learning terms}} ==Objective in Machine Learning== The objective in machine learning refers to the goal or aim that an algorithm strives to achieve through the learning process. This typically involves minimizing a loss function or maximizing a utility function, which are mathematical representations of the algorithm's performance. The objective provides guidance for the machine learning model to optimize its parameters and improve its predictions over...")
- 11:44, 20 March 2023 Walle talk contribs created page Novelty detection (Created page with "{{see also|Machine learning terms}} ==Novelty Detection in Machine Learning== Novelty detection is a sub-field of machine learning that focuses on the identification and classification of previously unseen, novel patterns or data points in a given dataset. The primary goal of novelty detection algorithms is to differentiate between normal and abnormal patterns, enabling effective decision-making in various applications, such as anomaly detection, outlier detectio...")
- 11:44, 20 March 2023 Walle talk contribs created page Non-response bias (Created page with "{{see also|Machine learning terms}} ==Non-response Bias in Machine Learning== Non-response bias, a type of sampling bias, occurs in machine learning when the data used for training and evaluating a model fails to accurately represent the entire population due to the absence or underrepresentation of certain subgroups in the sample. This phenomenon can lead to poor generalization performance, as the model's predictions may be systematically biased and not applicable t...")
- 11:44, 20 March 2023 Walle talk contribs created page Noise (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, noise refers to the presence of unwanted or irrelevant data that can have a detrimental effect on the performance and accuracy of a model. Noise can be introduced during the data collection process, data preprocessing, or through inherent randomness in the data itself. This article will provide an overview of the various types of noise, their sources, and their impacts on machine l...")
- 11:44, 20 March 2023 Walle talk contribs created page Node (TensorFlow graph) (Created page with "{{see also|Machine learning terms}} ==Node (TensorFlow graph)== In the context of machine learning, a node is a fundamental unit within a computational graph, which is a directed, acyclic graph (DAG) used to represent the flow of data and operations in a TensorFlow model. A TensorFlow graph is composed of multiple nodes, each representing an operation or a variable, which are connected by edges representing the flow of data between these nodes. The TensorFlow graph is a...")
- 11:44, 20 March 2023 Walle talk contribs created page Multinomial regression (Created page with "{{see also|Machine learning terms}} ==Multinomial Regression== Multinomial regression, also known as multinomial logistic regression or softmax regression, is a statistical method used in machine learning and statistics for modeling the relationship between a categorical dependent variable and one or more independent variables. It is an extension of binary logistic regression, which is used for predicting binary outcomes. Multinomial regression is particularly us...")
- 11:44, 20 March 2023 Walle talk contribs created page Multinomial classification (Created page with "{{see also|Machine learning terms}} ==Multinomial Classification== Multinomial classification, also known as multi-class classification, is a type of supervised machine learning problem where the objective is to categorize an input data point into one of several discrete classes. In contrast to binary classification, where there are only two possible categories, multinomial classification deals with three or more categories. ===Problem Formulation==...")
- 11:43, 20 March 2023 Walle talk contribs created page Multi-class logistic regression (Created page with "{{see also|Machine learning terms}} ==Introduction== '''Multi-class logistic regression''', also referred to as '''softmax regression''' or '''multinomial logistic regression''', is a supervised machine learning algorithm used for predicting the categorical label of an input instance when there are more than two possible classes. It is an extension of the binary logistic regression model, which can only handle two-class classification problems. Multi-class logistic r...")
- 11:43, 20 March 2023 Walle talk contribs created page Model training (Created page with "{{see also|Machine learning terms}} ==Introduction== Model training in machine learning refers to the process of developing a mathematical model capable of making predictions or decisions based on input data. This is achieved by iteratively adjusting the model's parameters until it can accurately generalize from the training data to previously unseen data. The ultimate goal of this process is to create a model that can perform well on new, real-world data without bei...")
- 11:43, 20 March 2023 Walle talk contribs created page Model capacity (Created page with "{{see also|Machine learning terms}} ==Definition== In the context of machine learning, ''model capacity'' refers to the ability of a model to learn and represent various functions and patterns within a given dataset. High-capacity models have a larger number of parameters and can therefore represent more complex functions, while low-capacity models have fewer parameters and are limited in the complexity of functions they can represent. Model capacity plays a crucial role...")
- 11:43, 20 March 2023 Walle talk contribs created page Minimax loss (Created page with "{{see also|Machine learning terms}} ==Minimax Loss== The minimax loss, closely related to the minimax regret, is a performance measure in machine learning and game theory that quantifies the worst-case performance of an algorithm or decision rule under uncertainty. This concept is utilized in various optimization problems, where the goal is to minimize the maximum possible loss or regret under uncertain conditions. ===Definition=== Given a decision-making problem, th...")
- 11:43, 20 March 2023 Walle talk contribs created page Mini-batch stochastic gradient descent (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, '''mini-batch stochastic gradient descent''' ('''MB-SGD''') is an optimization algorithm commonly used for training neural networks and other models. The algorithm operates by iteratively updating model parameters to minimize a loss function, which measures the discrepancy between the model's predictions and actual target values. Mini-batch stochastic gradient descent is a variant of stochastic g...")
- 11:43, 20 March 2023 Walle talk contribs created page Metric (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, a '''metric''' refers to a quantitative measure that is used to evaluate the performance of an algorithm or model. Metrics help researchers and practitioners understand the effectiveness of their models in solving a particular task and allow for comparison with other models. Several types of metrics exist, each tailored to different types of tasks or problems, such as classification, regression...")
- 11:42, 20 March 2023 Walle talk contribs created page Matrix factorization (Created page with "{{see also|Machine learning terms}} ==Introduction== Matrix factorization is a technique in machine learning that aims to discover latent features underlying the interactions between two different kinds of entities. It has been widely used for tasks such as recommendation systems, dimensionality reduction, and data imputation. The primary goal of matrix factorization is to approximate a given matrix by factorizing it into two or more lower-dimensional matrices, which can...")
- 11:42, 20 March 2023 Walle talk contribs created page Matplotlib (Created page with "{{see also|Machine learning terms}} ==Introduction== '''Matplotlib''' is a widely used data visualization library in Python that enables developers to create high-quality and interactive visualizations, such as line plots, scatter plots, bar plots, histograms, 3D plots, and more. It is an essential tool in machine learning and data science for exploring and analyzing data, as well as presenting the results of models and algorithm...")
- 11:42, 20 March 2023 Walle talk contribs created page Loss surface (Created page with "{{see also|Machine learning terms}} ==Loss Surface in Machine Learning== In the field of machine learning, the '''loss surface''' (also referred to as the '''error surface''' or the '''objective function surface''') refers to the graphical representation of the relationship between the parameters of a learning model and the associated loss or error. The primary goal of machine learning algorithms is to optimize these parameters, minimizing the loss and consequently e...")
- 11:42, 20 March 2023 Walle talk contribs created page NumPy (Created page with "{{see also|Machine learning terms}} ==Introduction== NumPy (Numerical Python) is a highly popular and widely used open-source library in the field of machine learning and data science. NumPy provides a variety of tools and functions for working with numerical data in the Python programming language. It is highly regarded for its efficiency, simplicity, and performance in handling multi-dimensional arrays and matrices, as well as for its comprehensive suite of...")