All public logs
Combined display of all available logs of AI Wiki.
- 22:26, 21 March 2023 Walle talk contribs created page Sparsity (Created page with "{{see also|Machine learning terms}} ==Introduction== Sparsity, in the context of machine learning, refers to the phenomenon where only a small number of features or parameters have significant non-zero values in a model or dataset. This characteristic can be exploited to improve the efficiency and interpretability of machine learning models. The concept of sparsity has been applied in various areas, including feature selection, regularization, and sparse representati...")
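The sparsity entry above lends itself to a quick illustration. The sketch below, assuming numpy and scipy are available (neither is named in the entry), builds a feature matrix in which roughly 99% of the entries are zero and stores it in a compressed sparse format, which is the efficiency gain the entry alludes to.

```python
# Hedged illustration of sparsity: most feature values are zero, so a
# compressed sparse row (CSR) matrix stores only the non-zero entries.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense = rng.random((1000, 1000))
dense[dense < 0.99] = 0.0                      # keep roughly 1% of the entries

sparse_matrix = sparse.csr_matrix(dense)
density = sparse_matrix.nnz / dense.size
print(f"non-zero entries: {sparse_matrix.nnz}, density: {density:.3f}")
```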
- 22:26, 21 March 2023 Walle talk contribs created page Shape (Tensor) (Created page with "{{see also|Machine learning terms}} ==Definition== A '''shape''' in the context of machine learning and deep learning refers to the structure or dimensionality of a '''tensor''', which is a multi-dimensional array of numerical values. Tensors are the fundamental building blocks of many machine learning models and frameworks, such as TensorFlow and PyTorch. The shape of a tensor is characterized by the number of dimensions it has, known as its '''rank''', and the...")
- 22:26, 21 March 2023 Walle talk contribs created page Serving (Created page with "{{see also|Machine learning terms}} ==Serving in Machine Learning== Serving in machine learning refers to the process of deploying and utilizing a trained machine learning model to make predictions or decisions based on new input data. This process is an integral part of the machine learning pipeline, as it allows the machine learning models to be applied to real-world problems and provide value to users. The serving process typically follows the completion of the ...")
- 22:26, 21 March 2023 Walle talk contribs created page Sensitive attribute (Created page with "{{see also|Machine learning terms}} ==Sensitive Attribute in Machine Learning== Sensitive attributes, also known as protected attributes, are variables that can lead to unfair or biased outcomes in a machine learning algorithm. These attributes often relate to demographic information such as race, gender, age, religion, or disability, and may inadvertently contribute to discriminatory decisions or predictions when used inappropriate...")
- 22:26, 21 March 2023 Walle talk contribs created page Semi-supervised learning (Created page with "{{see also|Machine learning terms}} ==Introduction== Semi-supervised learning is a type of machine learning approach that combines elements of both supervised and unsupervised learning methods. It leverages a small amount of labeled data along with a larger volume of unlabeled data to train models. This article will provide an overview of semi-supervised learning, discuss its advantages and challenges, and present commonly used techniques. ==Motivation and Advantage...")
- 22:26, 21 March 2023 Walle talk contribs created page Self-training (Created page with "{{see also|Machine learning terms}} ==Introduction== Self-training, a form of semi-supervised learning, is an approach in machine learning that combines both labeled and unlabeled data to improve the performance of a model. In this method, an initial model is trained on a small set of labeled data, and then it iteratively refines itself by incorporating the predictions it generates for the unlabeled data. This article will discuss the key concepts, advantages, and ch...")
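The semi-supervised learning and self-training entries above describe the same labeled-plus-unlabeled setup, so one sketch can cover both. It uses scikit-learn's SelfTrainingClassifier (an assumption; the entries do not name a library), with unlabeled examples marked as -1 as that API expects.

```python
# Hedged self-training sketch: fit on a mostly-unlabeled dataset, letting the
# wrapper iteratively pseudo-label the unlabeled rows (marked with -1).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
y_partial[rng.random(len(y)) < 0.9] = -1       # hide 90% of the labels

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)
print("accuracy on the full labels:", model.score(X, y))
```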
- 22:25, 21 March 2023 Walle talk contribs created page Weighted Alternating Least Squares (WALS) (Created page with "{{see also|Machine learning terms}} ==Weighted Alternating Least Squares (WALS)== Weighted Alternating Least Squares (WALS) is a widely-used optimization algorithm employed in the field of machine learning. It is particularly popular for addressing the matrix factorization problem, which is often used in collaborative filtering and recommendation systems. WALS iteratively refines the latent factors of the input data to minimize the error, while simultaneously applyin...")
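Since the WALS entry above describes iteratively refining latent factors, a small numpy sketch may help. It is a from-scratch illustration, not any library's implementation; the names (R for ratings, W for observation weights, lam for the regularization strength) are illustrative only.

```python
# Minimal weighted alternating least squares sketch: factor R ~ U @ V.T,
# where W down-weights (here, zeroes out) the unobserved entries.
import numpy as np

def wals(R, W, k=5, lam=0.1, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(iters):
        for u in range(n_users):          # fix V, solve a ridge system per user
            Wu = np.diag(W[u])
            U[u] = np.linalg.solve(V.T @ Wu @ V + lam * np.eye(k), V.T @ Wu @ R[u])
        for i in range(n_items):          # fix U, solve a ridge system per item
            Wi = np.diag(W[:, i])
            V[i] = np.linalg.solve(U.T @ Wi @ U + lam * np.eye(k), U.T @ Wi @ R[:, i])
    return U, V

R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 1]], dtype=float)
W = (R > 0).astype(float)                 # observed entries get weight 1
U, V = wals(R, W)
print(np.round(U @ V.T, 2))               # reconstruction of the rating matrix
```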
- 22:25, 21 March 2023 Walle talk contribs created page Wasserstein loss (Created page with "{{see also|Machine learning terms}} ==Wasserstein Loss in Machine Learning== Wasserstein loss, based on the Wasserstein distance (also known as the Earth Mover's Distance, EMD), is a loss function used in the field of machine learning, particularly in the training of Generative Adversarial Networks (GANs). Introduced by Martin Arjovsky, Soumith Chintala, and Léon Bottou in their 2017 paper "Wasserstein GAN," this loss function has become a popular choice for training GANs due to its stability and th...")
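To make the Wasserstein loss entry above concrete, here is a minimal sketch of the critic and generator objectives from the WGAN formulation, written over plain arrays of critic scores; `critic_real` and `critic_fake` stand in for the outputs of a critic network and are not tied to any framework.

```python
# Hedged WGAN objective sketch over raw critic scores.
import numpy as np

def critic_loss(critic_real, critic_fake):
    # The critic maximizes the gap between real and fake scores, so the
    # quantity to *minimize* is the negative gap.
    return np.mean(critic_fake) - np.mean(critic_real)

def generator_loss(critic_fake):
    # The generator tries to raise the critic's score on generated samples.
    return -np.mean(critic_fake)

critic_real = np.array([0.9, 1.1, 0.8])
critic_fake = np.array([-0.2, 0.1, 0.0])
print(critic_loss(critic_real, critic_fake), generator_loss(critic_fake))
```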
- 22:25, 21 March 2023 Walle talk contribs created page Tensor size (Created page with "{{see also|Machine learning terms}} ==Definition== In machine learning, '''tensor size''' refers to the dimensions of a tensor, which is a multi-dimensional data structure often used to represent and manipulate data in various mathematical operations. Tensors are the generalization of scalars, vectors, and matrices, with scalars being zero-dimensional tensors, vectors being one-dimensional tensors, and matrices being two-dimensional tensors. Tensor size, also known a...")
- 22:25, 21 March 2023 Walle talk contribs created page Tensor shape (Created page with "{{see also|Machine learning terms}} ==Tensor Shape in Machine Learning== Tensor shape is a fundamental concept in the field of machine learning, particularly in deep learning architectures, where tensors are used as the primary data structure for representing and processing multidimensional data. In this article, we will explore the meaning of tensor shape, its significance in machine learning, and some common operations performed on tensors. ===Definition and Backgroun...")
- 22:25, 21 March 2023 Walle talk contribs created page Tensor rank (Created page with "{{see also|Machine learning terms}} ==Definition of Tensor Rank== In the field of machine learning, tensors are multi-dimensional arrays that provide a mathematical framework to represent and manipulate data. The rank of a tensor, also known as its ''order'', refers to the number of dimensions or indices required to describe the tensor. Formally, the tensor rank is defined as the number of axes within a tensor. In other words, the tensor rank determines the complexit...")
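The tensor size, tensor shape, and tensor rank entries above describe the same three attributes, so one numpy example (numpy being an assumed stand-in for any tensor library) can show them side by side.

```python
# Shape, rank (number of axes), and size (total element count) for tensors
# of increasing dimensionality.
import numpy as np

scalar = np.array(3.0)                    # rank 0
vector = np.array([1.0, 2.0, 3.0])        # rank 1
matrix = np.zeros((2, 3))                 # rank 2
tensor = np.zeros((4, 2, 3))              # rank 3

for t in (scalar, vector, matrix, tensor):
    print(f"shape={t.shape}, rank={t.ndim}, size={t.size}")
```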
- 22:25, 21 March 2023 Walle talk contribs created page Tensor Processing Unit (TPU) (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''Tensor Processing Unit (TPU)''' is a specialized type of hardware accelerator designed specifically for the efficient execution of machine learning tasks, particularly deep learning algorithms. TPUs were first introduced by Google in 2016 and have since become an essential component in the field of artificial intelligence (AI) and machine learning (ML) for their ability to perform high-throughput mathematical oper...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorFlow Serving (Created page with "{{see also|Machine learning terms}} ==Introduction== TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. Developed by Google, it is part of the larger TensorFlow ecosystem, an open-source machine learning library used to develop, train, and deploy ML models. TensorFlow Serving provides a standardized interface for deploying and serving machine learning models, enabling easy integrati...")
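The serving and TensorFlow Serving entries above describe sending new inputs to a deployed model, which the sketch below illustrates against TensorFlow Serving's REST `:predict` endpoint; the host, port, and model name `my_model` are placeholders that depend on how the server was launched, and the use of the `requests` library is an assumption.

```python
# Hedged sketch of querying a model exposed by TensorFlow Serving over REST.
import json
import requests

payload = {"instances": [[1.0, 2.0, 5.0, 0.3]]}   # one input example
url = "http://localhost:8501/v1/models/my_model:predict"

response = requests.post(url, data=json.dumps(payload))
print(response.json())                            # {"predictions": [...]}
```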
- 22:24, 21 March 2023 Walle talk contribs created page TensorFlow Playground (Created page with "{{see also|Machine learning terms}} ==TensorFlow Playground== TensorFlow Playground is an interactive, web-based visualization tool for exploring and understanding neural networks. Developed by the TensorFlow team at Google, this tool allows users to visualize and manipulate neural networks in real-time, providing a deeper understanding of how these models work and their underlying principles. The TensorFlow Playground is an invaluable educational resource for those inte...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorFlow (Created page with "{{see also|Machine learning terms}} ==Overview== TensorFlow is an open-source software library developed by the Google Brain team primarily for machine learning, deep learning, and numerical computation. It uses data flow graphs for computation, where each node represents a mathematical operation, and each edge represents a multi-dimensional data array (tensor) that flows between the nodes. TensorFlow provides a flexible platform for designing, training, and deployin...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorBoard (Created page with "{{see also|Machine learning terms}} ==Introduction== TensorBoard is an open-source, interactive visualization tool designed for machine learning experiments. Developed by the Google Brain team, TensorBoard is an integral component of the TensorFlow ecosystem, which facilitates the monitoring and analysis of model training processes. It provides users with graphical representations of various metrics, including model performance, variable distributions, and comput...")
- 22:24, 21 March 2023 Walle talk contribs created page Tensor (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, a '''tensor''' is a mathematical object that generalizes the concepts of scalars, vectors, and matrices. Tensors are extensively used in machine learning and deep learning algorithms, particularly in the development and implementation of neural networks. They provide a flexible and efficient way to represent and manipulate data with multiple dimensions, allowing for the efficient execution of c...")
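Following the TensorFlow and tensor entries above, here is a minimal example showing how tensors carry a shape and flow through an operation; it assumes TensorFlow 2.x with eager execution.

```python
# Tensors are created from nested lists, carry a shape and dtype, and flow
# through operations such as matrix multiplication.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])    # rank-2 tensor, shape (2, 2)
b = tf.constant([[1.0], [0.5]])              # rank-2 tensor, shape (2, 1)

c = tf.matmul(a, b)                          # shape (2, 1)
print(c.shape, tf.rank(c).numpy(), c.numpy())
```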
- 22:24, 21 March 2023 Walle talk contribs created page TPU worker (Created page with "{{see also|Machine learning terms}} ==Overview== A '''TPU worker''' refers to a host process or virtual machine that runs computations on Tensor Processing Units (TPUs), hardware accelerators utilized in the field of machine learning to accelerate the training and inference of deep neural networks. TPUs are application-specific integrated circuits (ASICs) developed by Google and optimized for their TensorFlow machine learning framework. TPU workers are designed to perform tensor computations...")
- 22:24, 21 March 2023 Walle talk contribs created page TPU type (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, a ''Tensor Processing Unit'' (TPU) is a specialized type of hardware designed to accelerate various operations in neural networks. TPUs, developed by Google, have gained significant traction in the deep learning community due to their ability to provide high-performance computation with reduced energy consumption compared to traditional GPUs or Central P...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU slice (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''TPU slice''' refers to a specific portion of a Tensor Processing Unit (TPU), which is a type of specialized hardware developed by Google to accelerate machine learning tasks. TPUs are designed to handle the computationally-intensive operations commonly associated with deep learning and neural networks, such as matrix multiplications and convolutions. TPU slices are integral components of the TPU archit...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU resource (Created page with "{{see also|Machine learning terms}} ==Introduction== The TPU, or Tensor Processing Unit, is a specialized type of hardware developed by Google for the purpose of accelerating machine learning tasks, particularly those involving deep learning and artificial intelligence. TPUs are designed to deliver high performance with low power consumption, making them an attractive option for large-scale machine learning applications. ==Architecture and Design== ===Overview=== Th...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU node (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''Tensor Processing Unit (TPU) node''' is a specialized hardware accelerator designed to significantly accelerate machine learning workloads. Developed by Google, TPUs are optimized for tensor processing, which is the foundational mathematical operation in various machine learning frameworks such as TensorFlow. By providing dedicated hardware for these calculations, TPUs enable faster training and inference of m...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU master (Created page with "{{see also|Machine learning terms}} ==Introduction== The '''TPU master''' in machine learning refers to the primary control unit of a Tensor Processing Unit (TPU), which is a specialized hardware accelerator designed to significantly speed up the execution of machine learning tasks. TPUs were developed by Google to improve the performance of deep learning algorithms and reduce their training and inference times. The TPU master coordinates the flow of data and instruc...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU device (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''Tensor Processing Unit (TPU)''' is a type of application-specific integrated circuit (ASIC) designed and developed by Google specifically for accelerating machine learning tasks. TPUs are custom-built hardware accelerators optimized to handle the computational demands of machine learning algorithms, particularly deep learning and neural networks. They provide significant performance improvements and en...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU chip (Created page with "{{see also|Machine learning terms}} ==Introduction== The '''Tensor Processing Unit''' ('''TPU''') is a type of application-specific integrated circuit (ASIC) designed by Google specifically for accelerating machine learning workloads. TPUs are optimized for the computational demands of neural networks and are particularly efficient at performing operations with tensors, which are multi-dimensional arrays of data commonly used in machine learning applications. TPU...")
- 22:22, 21 March 2023 Walle talk contribs created page TPU Pod (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, a '''TPU Pod''' is a cluster of Tensor Processing Units (TPUs) designed to accelerate high-performance computation tasks. TPUs are specialized hardware accelerators developed by Google, specifically optimized for performing tensor-based mathematical operations commonly used in machine learning and deep learning algorithms. TPU Pods allow researchers and engineers to scale up their...")
- 22:22, 21 March 2023 Walle talk contribs created page TPU (Created page with "{{see also|Machine learning terms}} ==Overview== A '''Tensor Processing Unit (TPU)''' is a type of application-specific integrated circuit (ASIC) developed by Google for accelerating machine learning workloads. TPUs are designed to perform tensor computations efficiently, which are the foundational operations in machine learning algorithms, particularly deep learning models. They are optimized for handling large-scale matrix operations with low precision, enabling fa...")
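The TPU-related entries above (TPU, TPU Pod, chip, device, node, worker, and so on) all concern the same accelerator family. The sketch below shows one common way TensorFlow code discovers and initializes a TPU before distributing work; it assumes a TPU runtime is reachable (for example on Cloud TPU or Colab) and falls back to the default strategy otherwise.

```python
# Hedged TPU discovery and initialization sketch for TensorFlow 2.x.
import tensorflow as tf

try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    print("TPU cores available:", strategy.num_replicas_in_sync)
except (ValueError, tf.errors.NotFoundError):
    strategy = tf.distribute.get_strategy()   # default (CPU/GPU) strategy
    print("No TPU found; using the default strategy.")
```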
- 01:15, 21 March 2023 Walle talk contribs created page Self-supervised learning (Created page with "{{see also|Machine learning terms}} ==Introduction== Self-supervised learning (SSL) is a subfield of machine learning that focuses on learning representations of data in an unsupervised manner by exploiting the structure and inherent properties of the data itself. This approach has gained significant traction in recent years, as it enables algorithms to learn useful features from large volumes of unlabeled data, thereby reducing the reliance on labeled datasets. The lear...")
- 01:15, 21 March 2023 Walle talk contribs created page Selection bias (Created page with "{{see also|Machine learning terms}} ==Introduction== Selection bias in machine learning refers to the phenomenon where the sample data used to train or evaluate a machine learning model does not accurately represent the underlying population or the target domain. This issue arises when the training data is collected or selected in a way that introduces systematic errors, which can lead to biased predictions or conclusions when the model is applied to real-world scena...")
- 01:15, 21 March 2023 Walle talk contribs created page Scoring (Created page with "{{see also|Machine learning terms}} ==Overview== In the field of machine learning, scoring refers to the process of evaluating a trained model's performance based on its ability to make predictions on a given dataset. The scoring process typically involves comparing the model's predictions to the actual or true values, also known as ground truth or targets. A variety of evaluation metrics are used to quantify the model's performance, with the choice of metric often d...")
- 01:15, 21 March 2023 Walle talk contribs created page Scikit-learn (Created page with "{{see also|Machine learning terms}} ==Introduction== '''Scikit-learn''' is an open-source Python library designed for use in the field of machine learning. The library provides a wide range of machine learning algorithms, including those for classification, regression, clustering, dimensionality reduction, and model selection. Developed by a team of researchers and engineers, scikit-learn is built on top of the NumPy, SciPy, and matplotlib libraries,...")
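The scoring and scikit-learn entries above pair naturally: the short sketch below fits a classifier and then scores its predictions against the ground truth with an accuracy metric; the choice of dataset and model is purely illustrative.

```python
# Fit a classifier, then score its predictions against the held-out labels.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))
```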
- 01:14, 21 March 2023 Walle talk contribs created page Scaling (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, scaling refers to the process of adjusting the range of input features or data points to a uniform scale. This normalization of data is an essential pre-processing step that enhances the performance and efficiency of machine learning algorithms by addressing issues of heterogeneity and uneven distribution of features. ==Importance of Scaling in Machine Learning== Scaling is a crit...")
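As a concrete version of the scaling entry above, the snippet below standardizes two features with very different ranges to zero mean and unit variance; StandardScaler is shown as one common choice, not the only one.

```python
# Standardization: each feature is rescaled to zero mean and unit variance.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))   # ~0 means, unit std
```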
- 01:14, 21 March 2023 Walle talk contribs created page Scalar (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, a ''scalar'' refers to a single numerical value that can represent a quantity or measurement. Scalars play a crucial role in many aspects of machine learning algorithms, from representing weights and biases in neural networks to serving as input features or output labels in various machine learning models. This article will cover the definition, importance, and usage of scalars in machine learn...")
- 01:14, 21 March 2023 Walle talk contribs created page Sampling bias (Created page with "{{see also|Machine learning terms}} ==Introduction== Sampling bias in machine learning is a type of bias that occurs when the data used for training and testing a model does not accurately represent the underlying population. This can lead to a model that performs poorly in real-world applications, as it is not able to generalize well to the broader population. In this article, we will discuss the various causes and types of sampling bias, the consequences of samplin...")
- 01:14, 21 March 2023 Walle talk contribs created page Root directory (Created page with "{{see also|Machine learning terms}} ==Root Directory in Machine Learning== In the context of machine learning, the term "root directory" does not directly refer to a specific concept or technique. Instead, it is related to file and folder organization in computer systems, which is crucial for managing datasets, code, and resources for machine learning projects. In this article, we will discuss the concept of a root directory in the context of computer systems and how it...")
- 01:14, 21 March 2023 Walle talk contribs created page Ridge regularization (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, regularization is a technique used to prevent overfitting and improve the generalization of models by adding a penalty term to the objective function. Ridge regularization, also known as L2 regularization or Tikhonov regularization, is a specific type of regularization that adds a squared L2-norm of the model parameters to the loss function. This article discusses the underlying principles of ridge...")
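For the ridge regularization entry above, the penalized objective is L(w) = ||y - Xw||^2 + lambda * ||w||^2, and the snippet below shows the resulting shrinkage of the coefficient norm relative to ordinary least squares; in scikit-learn's Ridge, `alpha` plays the role of lambda and the data is synthetic.

```python
# Ridge (L2) regression sketch: the penalty shrinks the coefficient vector.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)              # alpha plays the role of lambda
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))   # ridge norm is smaller
```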
- 01:14, 21 March 2023 Walle talk contribs created page Representation (Created page with "{{see also|Machine learning terms}} ==Introduction== Representation in machine learning refers to the method by which a model captures and encodes the underlying structure, patterns, and relationships present in the input data. A suitable representation allows the model to learn and generalize from the data effectively, enabling it to make accurate predictions or perform other tasks. Representations can be hand-crafted features, which are based on expert knowledge, o...")
- 01:13, 21 March 2023 Walle talk contribs created page Reporting bias (Created page with "{{see also|Machine learning terms}} ==Introduction== Reporting bias in machine learning refers to a systematic distortion of the information used to train and evaluate machine learning models. This distortion arises when the data being used to train a model is influenced by factors that are not representative of the true underlying phenomenon. These factors can lead to an overestimation or underestimation of certain model predictions, ultimately affecting the performance...")
- 01:13, 21 March 2023 Walle talk contribs created page Recommendation system (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''recommendation system''' in machine learning is a type of algorithm that provides personalized suggestions or recommendations to users, typically in the context of digital platforms such as e-commerce websites, streaming services, and social media platforms. These systems leverage various techniques from the fields of machine learning, data mining, and information retrieval to identify and rank items or conten...")
- 01:13, 21 March 2023 Walle talk contribs created page Recall (Created page with "{{see also|Machine learning terms}} ==Introduction== '''Recall''' is a performance metric commonly used in machine learning and information retrieval to evaluate the effectiveness of classification and retrieval models. It is particularly useful when the cost of false negatives (failing to identify positive instances) is high. This article provides an in-depth understanding of the concept of recall, its mathematical formulation, and its relation to other performa...")
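The recall entry above refers to a mathematical formulation, which is recall = TP / (TP + FN); the quick check below computes it by hand and compares the result against scikit-learn's recall_score.

```python
# Recall = TP / (TP + FN), computed manually and via scikit-learn.
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))   # 3
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))   # 1
print(tp / (tp + fn), recall_score(y_true, y_pred))           # both 0.75
```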
- 01:13, 21 March 2023 Walle talk contribs created page Re-ranking (Created page with "{{see also|Machine learning terms}} ==Introduction== Re-ranking, also known as rank refinement or re-scoring, is an essential technique in machine learning that aims to improve the quality of ranked results generated by a primary ranking model. It involves using a secondary model to adjust the initial ranking produced by the primary model, based on various features and criteria. Re-ranking is widely applied in diverse fields, such as information retrieval, natu...")
- 01:13, 21 March 2023 Walle talk contribs created page Ranking (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, ranking refers to the process of sorting a set of items in a specific order based on their relevance, importance, or some other predefined criterion. This process has become increasingly important in a wide range of applications, such as information retrieval, recommendation systems, and natural language processing. By utilizing machine learning algorithms and models, ranking system...")
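To ground the ranking and re-ranking entries above, here is a toy sketch in which items are first ordered by a primary relevance score and then re-scored with a secondary criterion (freshness) before the final sort; the items, scores, and 0.7/0.3 weights are invented for illustration.

```python
# Primary ranking by relevance, then re-ranking with a blended score.
items = [
    {"id": "a", "relevance": 0.90, "freshness": 0.1},
    {"id": "b", "relevance": 0.85, "freshness": 0.9},
    {"id": "c", "relevance": 0.60, "freshness": 0.8},
]

initial = sorted(items, key=lambda x: x["relevance"], reverse=True)
reranked = sorted(items, key=lambda x: 0.7 * x["relevance"] + 0.3 * x["freshness"],
                  reverse=True)
print([x["id"] for x in initial], [x["id"] for x in reranked])
```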
- 01:13, 21 March 2023 Walle talk contribs created page Rank (ordinality) (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, '''rank''' or '''ordinality''' refers to a specific type of data that represents a relative order or position among a set of items. Unlike continuous numerical data, which can take any value within a range, or categorical data, which consists of discrete values with no inherent order, ordinal data possesses an inherent order or ranking, but the intervals between the values are not necessarily consi...")
- 01:12, 21 March 2023 Walle talk contribs created page Rank (Tensor) (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, the term "rank" is commonly used in the context of tensor algebra. A tensor is a mathematical object that is a generalization of scalars, vectors, and matrices, and is used to represent complex data structures in various machine learning algorithms. The rank of a tensor refers to the number of dimensions or indices required to represent the tensor. ==Tensor Basics== ===Scalars, Vectors, and Matric...")
- 01:12, 21 March 2023 Walle talk contribs created page Queue (Created page with "{{see also|Machine learning terms}} ==Queue in Machine Learning== Queue, in the context of machine learning, refers to the use of a data structure known as a queue to store and manage data during the processing of machine learning tasks. Queues are data structures that follow the First-In-First-Out (FIFO) principle, meaning that elements are removed from the queue in the order they were inserted. Queues can be utilized in various stages of the machine learning proces...")
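A tiny illustration of the FIFO behaviour described in the queue entry above, using Python's collections.deque as the queue; in practice ML frameworks provide their own input queues for batching and prefetching.

```python
# First-In-First-Out: batches are processed in the order they were enqueued.
from collections import deque

batch_queue = deque()
for batch_id in range(3):
    batch_queue.append(batch_id)          # producer enqueues batches

while batch_queue:
    print("processing batch", batch_queue.popleft())   # FIFO order: 0, 1, 2
```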
- 01:12, 21 March 2023 Walle talk contribs created page Quantization (Created page with "{{see also|Machine learning terms}} ==Quantization in Machine Learning== Quantization is a technique utilized in machine learning and deep learning to reduce the size of models and computational resources needed for their operation. The process entails approximating the continuous values of parameters, such as weights and activations, using a smaller, discrete set of values. Quantization is particularly useful in deploying models on resource-constrained devices,...")
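The quantization entry above can be made concrete with a minimal sketch of symmetric 8-bit quantization: weights are mapped to small integers via a scale factor and then mapped back with some rounding error. Real toolchains add calibration and per-channel scales that this omits.

```python
# Uniform symmetric int8 quantization of a small weight vector.
import numpy as np

weights = np.array([-0.52, 0.13, 0.98, -1.20, 0.40], dtype=np.float32)

scale = np.abs(weights).max() / 127.0            # symmetric int8 range
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(quantized)                                  # small integers
print(np.abs(weights - dequantized).max())        # quantization error
```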
- 01:12, 21 March 2023 Walle talk contribs created page Quantile bucketing (Created page with "{{see also|Machine learning terms}} ==Introduction== Quantile bucketing, also known as quantile binning or quantile-based discretization, is a technique in machine learning and data preprocessing that aims to transform continuous numeric features into discrete categories by partitioning the data distribution into intervals, with each interval containing an equal proportion of data points. This process improves the efficiency and interpretability of certain algori...")
- 01:12, 21 March 2023 Walle talk contribs created page Quantile (Created page with "{{see also|Machine learning terms}} ==Quantile in Machine Learning== A '''quantile''' is a statistical concept used in machine learning, which refers to the division of a data distribution into intervals that each contain an equal proportion of the data. These intervals represent different portions of the data distribution and are used for various statistical analyses, such as summarizing data, understanding its structure, and making inferences. ===Definition=== Formally, a quantile is defined as a value that div...")
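For the quantile and quantile bucketing entries above, the snippet below computes cut points with np.quantile and assigns each value to an (approximately) equal-frequency bucket with np.digitize; the exponential data is synthetic.

```python
# Quantile bucketing: four buckets that each hold roughly a quarter of the data.
import numpy as np

rng = np.random.default_rng(0)
values = rng.exponential(size=1000)               # a skewed distribution

edges = np.quantile(values, [0.25, 0.5, 0.75])    # cut points for 4 buckets
buckets = np.digitize(values, edges)
print(edges, np.bincount(buckets))                # counts are ~250 each
```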
- 01:12, 21 March 2023 Walle talk contribs created page Proxy (sensitive attributes) (Created page with "{{see also|Machine learning terms}} ==Definition== In machine learning, '''proxy (sensitive attributes)''' refers to variables that indirectly capture information about a sensitive attribute, such as race, gender, or age, which are often used in a model to make predictions or decisions. The use of proxy variables can inadvertently lead to biased outcomes or algorithmic discrimination, even when the original sensitive attribute is not explicitly used in the model. It...")
- 01:12, 21 March 2023 Walle talk contribs created page Probabilistic regression model (Created page with "{{see also|Machine learning terms}} ==Probabilistic Regression Model== Probabilistic regression models are a class of machine learning techniques that predict the relationship between input features and a continuous target variable by estimating a probability distribution of the target variable. These models account for uncertainties in the predictions by providing a range of possible outcomes and their associated probabilities. Probabilistic regression models are wi...")
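Finally, the probabilistic regression entry above describes predicting a distribution rather than a single point. The sketch below uses a Gaussian process regressor as one example of such a model, returning a mean and a standard deviation per test point; the dataset is synthetic and the library's default kernel is used.

```python
# A probabilistic regressor: predictions come with an uncertainty estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=30)

gp = GaussianProcessRegressor(alpha=0.01).fit(X, y)
X_test = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(np.round(mean, 2), np.round(std, 2))
```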