All public logs
Combined display of all available logs of AI Wiki. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 05:07, 23 March 2023 Daikon Radish talk contribs created page File:Microsoft 365 copilot1.jpg (File uploaded with MsUpload)
- 05:07, 23 March 2023 Daikon Radish talk contribs uploaded File:Microsoft 365 copilot1.jpg (File uploaded with MsUpload)
- 05:07, 23 March 2023 Daikon Radish talk contribs created page File:Microsoft 365 copilot2.jpg (File uploaded with MsUpload)
- 05:07, 23 March 2023 Daikon Radish talk contribs uploaded File:Microsoft 365 copilot2.jpg (File uploaded with MsUpload)
- 04:55, 23 March 2023 Daikon Radish talk contribs created page Microsoft Copilot (Redirected page to Microsoft 365 Copilot) Tag: New redirect
- 01:51, 23 March 2023 Daikon Radish talk contribs created page Microsoft 365 Copilot (Created page with "Microsoft 365 Copilot, introduced on March 16, 2023, is an AI assistant designed to increase productivity, enhance creativity, and improve collaboration. Developed by Microsoft, Copilot aims to eliminate mundane tasks and improve efficiency by harnessing the power of large language models (LLMs) and integrating with Microsoft 365 applications and data. ==Overview== Microsoft 365 Copilot is an AI-driven tool that...")
- 04:35, 22 March 2023 Alpha5 talk contribs created page File:Adobe firefly1.jpeg (File uploaded with MsUpload)
- 04:35, 22 March 2023 Alpha5 talk contribs uploaded File:Adobe firefly1.jpeg (File uploaded with MsUpload)
- 04:34, 22 March 2023 Alpha5 talk contribs created page Firefly (Redirected page to Adobe Firefly) Tag: New redirect
- 02:12, 22 March 2023 Alpha5 talk contribs created page Adobe Firefly (Created page with "== Adobe Firefly == Adobe Firefly is a family of creative generative AI models being integrated into Adobe products. Firefly aims to enhance the creative process by providing new ways for creators to ideate, create, and communicate, while significantly improving creative workflows. Initially focusing on image and text effect generation, Firefly has the potential to expand its capabilities across various forms of media, including digital imagin...")
- 22:29, 21 March 2023 Walle talk contribs created page Width (Created page with "{{see also|Machine learning terms}} ==Width in Machine Learning== Width in machine learning refers to the number of neurons, or computational units, contained within a specific layer of a neural network. Neural networks are a class of machine learning algorithms loosely inspired by the structure and function of the human brain, consisting of interconnected layers of neurons. Width is an essential aspect of the architecture of a neural network, as it affects...")
- 22:29, 21 March 2023 Walle talk contribs created page Wide model (Created page with "{{see also|Machine learning terms}} ==Wide Models in Machine Learning== Wide models, also known as wide learning or ''wide & deep learning'', are a class of machine learning models that combine the strengths of both linear models and deep learning models. They were introduced by researchers at Google in a paper titled "Wide & Deep Learning for Recommender Systems" by Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, et al. in 2016. ==Motivation and Architecture== The primar...")
- 22:29, 21 March 2023 Walle talk contribs created page User matrix (Created page with "{{see also|Machine learning terms}} ==User Matrix in Machine Learning== In machine learning, a user matrix is a mathematical representation of users in a dataset, particularly in the context of collaborative filtering and recommendation systems. Collaborative filtering is a technique used to provide personalized recommendations by utilizing the preferences and behavior of multiple users. The user matrix is a vital component in model-based collaborative filtering methods,...")
- 22:29, 21 March 2023 Walle talk contribs created page Upweighting (Created page with "{{see also|Machine learning terms}} ==Upweighting in Machine Learning== Upweighting is a technique used in machine learning to assign higher importance or weights to certain data points or features during the training process. This method is particularly useful when dealing with imbalanced datasets or when attempting to emphasize specific aspects of the data. Upweighting can be applied in various machine learning algorithms, including supervised and unsupervised techniqu...")
- 22:29, 21 March 2023 Walle talk contribs created page Uplift modeling (Created page with "{{see also|Machine learning terms}} ==Uplift Modeling== Uplift modeling, also known as '''uplift prediction''' or '''treatment effect modeling''', is a technique in machine learning and statistics that focuses on estimating the impact of an intervention on a specific outcome of interest. This method is particularly useful in fields such as marketing, healthcare, and public policy, where it is crucial to identify and target the most responsive...")
- 22:29, 21 March 2023 Walle talk contribs created page Undersampling (Created page with "{{see also|Machine learning terms}} ==Overview== Undersampling is a technique used in machine learning to address the issue of imbalanced datasets. In this context, an imbalanced dataset refers to a dataset where the classes are not represented equally. This can lead to poor performance for certain machine learning algorithms, as they may be biased towards the majority class. Undersampling involves reducing the number of instances in the majority class, with the goal...")
- 22:28, 21 March 2023 Walle talk contribs created page Unawareness (to a sensitive attribute) (Created page with "{{see also|Machine learning terms}} ==Unawareness in Machine Learning== Unawareness in machine learning refers to the deliberate exclusion or ignorance of specific sensitive attributes during the process of model training and decision-making. Sensitive attributes are those that may potentially lead to unfair or discriminatory outcomes, such as race, gender, age, or sexual orientation. The primary goal of incorporating unawareness in machine learning is to ensure fairness...")
- 22:28, 21 March 2023 Walle talk contribs created page Transfer learning (Created page with "{{see also|Machine learning terms}} ==Introduction== Transfer learning is a subfield of machine learning that focuses on leveraging the knowledge gained from solving one problem and applying it to a different but related problem. The primary motivation behind transfer learning is to reduce the amount of time, computational resources, and data required to train models for new tasks by reusing the knowledge gained from previous tasks. In this article, we will discuss t...")
- 22:28, 21 March 2023 Walle talk contribs created page Tower (Created page with "{{see also|Machine learning terms}} ==Tower in Machine Learning== The term "tower" in machine learning typically refers to a specific arrangement of layers within a neural network architecture. The term is primarily used to describe architectures in which several parallel branches, each a vertical stack of layers, process their inputs side by side, giving a hierarchical structure that can help improve the model's performance and accuracy. ===Background=== Tower architectures were introduced as a way to address...")
- 22:28, 21 March 2023 Walle talk contribs created page Tf.keras (Created page with "{{see also|Machine learning terms}} ==Introduction== '''tf.keras''' is a high-level neural networks API, integrated within the TensorFlow machine learning framework. Developed by the Google Brain Team, tf.keras is designed to facilitate the creation, training, and evaluation of deep learning models. It supports quick prototyping and is user-friendly, modular, and extensible. In this article, we explore the key features and components of tf.keras, its advantage...")
- 22:28, 21 March 2023 Walle talk contribs created page Tf.Example (Created page with "{{see also|Machine learning terms}} ==Introduction== In the realm of machine learning, '''''tf.Example''''' is a standard data serialization format employed by the TensorFlow framework, which is an open-source library developed by the Google Brain Team. The primary purpose of ''tf.Example'' is to facilitate the storage and exchange of data across diverse machine learning pipelines. This data structure efficiently represents data as a collection of key-value pairs, ma...")
- 22:28, 21 March 2023 Walle talk contribs created page Test set (Created page with "{{see also|Machine learning terms}} ==Test Set in Machine Learning== ===Definition=== In the context of machine learning, the '''test set''' refers to a subset of data that is distinct from the data used for model training and validation. It is typically utilized to evaluate the performance and generalization capabilities of a machine learning model after the training and validation processes are complete. Test sets play a vital role in ensuring that a model can perf...")
- 22:27, 21 March 2023 Walle talk contribs created page Temporal data (Created page with "{{see also|Machine learning terms}} ==Temporal Data in Machine Learning== Temporal data, also known as time series data, refers to data containing time-dependent observations. These data points are collected at consistent time intervals, which can range from milliseconds to years. In the context of machine learning, temporal data is used to build models that can analyze and predict trends, patterns, and relationships over time. Time series analysis and forecasting are wi...")
- 22:27, 21 March 2023 Walle talk contribs created page Target (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, the term '''target''' refers to the variable or outcome that a learning algorithm aims to predict, estimate, or classify. The target is also commonly referred to as a '''label''' or '''ground truth'''. Machine learning models utilize target data during the training phase to learn patterns, relationships, or rules, and subsequently generalize these findings to make predictions on un...")
- 22:27, 21 March 2023 Walle talk contribs created page Summary (Created page with "{{see also|Machine learning terms}} ==Summary in Machine Learning== In machine learning, a '''summary''' refers to the process of reducing a large dataset or model into a simplified representation, which retains the most essential information. This can be done through various methods, such as dimensionality reduction, model compression, and ensemble methods. Summarization is crucial for improving computational efficiency, enhancing interpretability, and mitigating overfi...")
- 22:27, 21 March 2023 Walle talk contribs created page Structural risk minimization (SRM) (Created page with "{{see also|Machine learning terms}} ==Introduction== Structural Risk Minimization (SRM) is a fundamental concept in the field of machine learning and statistical learning theory, introduced by Vladimir Vapnik and Alexey Chervonenkis. It serves as a regularization principle that aims to minimize the risk of overfitting in a model by finding an optimal balance between the model's complexity and its ability to generalize to unseen data. In essence, SRM strives to st...")
- 22:27, 21 March 2023 Walle talk contribs created page Step size (Created page with "{{see also|Machine learning terms}} ==Definition== In machine learning, the '''step size''' (also known as learning rate or alpha) is a hyperparameter that determines the magnitude of the update applied to the weights of a model during optimization. Step size is a crucial factor in the training process, as it influences the model's convergence speed and its ability to reach the global minimum of the loss function. The step size is used in various optimization algorit...")
- 22:27, 21 March 2023 Walle talk contribs created page Step (Created page with "{{see also|Machine learning terms}} ==Definition of Step in Machine Learning== In the context of machine learning, a '''step''' typically refers to an iteration or a single pass through a specific part of the algorithm during the learning process. A step can involve various actions, such as updating model parameters, assessing the current model's performance, or executing a certain phase of the algorithm. Steps are often part of larger processes like training, validation...")
- 22:27, 21 March 2023 Walle talk contribs created page Squared hinge loss (Created page with "{{see also|Machine learning terms}} ==Squared Hinge Loss== Squared hinge loss, also known as the squared variant of the hinge loss, is a popular loss function in the field of machine learning and support vector machines (SVM). It is a modification of the standard hinge loss function that is smoother and often converges better under gradient-based optimization, while preserving the max-margin behavior of the hinge loss. The squared hinge loss function can be us...")
- 22:26, 21 March 2023 Walle talk contribs created page Sparsity (Created page with "{{see also|Machine learning terms}} ==Introduction== Sparsity, in the context of machine learning, refers to the phenomenon where only a small number of features or parameters have significant non-zero values in a model or dataset. This characteristic can be exploited to improve the efficiency and interpretability of machine learning models. The concept of sparsity has been applied in various areas, including feature selection, regularization, and sparse representati...")
- 22:26, 21 March 2023 Walle talk contribs created page Shape (Tensor) (Created page with "{{see also|Machine learning terms}} ==Definition== A '''shape''' in the context of machine learning and deep learning refers to the structure or dimensionality of a '''tensor''', which is a multi-dimensional array of numerical values. Tensors are the fundamental building blocks of many machine learning models and frameworks, such as TensorFlow and PyTorch. The shape of a tensor is characterized by the number of dimensions it has, known as its '''rank''', and the...")
- 22:26, 21 March 2023 Walle talk contribs created page Serving (Created page with "{{see also|Machine learning terms}} ==Serving in Machine Learning== Serving in machine learning refers to the process of deploying and utilizing a trained machine learning model to make predictions or decisions based on new input data. This process is an integral part of the machine learning pipeline, as it allows the machine learning models to be applied to real-world problems and provide value to users. The serving process typically follows the completion of the ...")
- 22:26, 21 March 2023 Walle talk contribs created page Sensitive attribute (Created page with "{{see also|Machine learning terms}} ==Sensitive Attribute in Machine Learning== Sensitive attributes, also known as protected attributes, are variables that carry the potential of causing unfair or biased outcomes in a machine learning algorithm. These attributes often relate to demographic information such as race, gender, age, religion, or disability, and may inadvertently contribute to discriminatory decisions or predictions when used inappropriate...")
- 22:26, 21 March 2023 Walle talk contribs created page Semi-supervised learning (Created page with "{{see also|Machine learning terms}} ==Introduction== Semi-supervised learning is a type of machine learning approach that combines elements of both supervised and unsupervised learning methods. It leverages a small amount of labeled data along with a larger volume of unlabeled data to train models. This article will provide an overview of semi-supervised learning, discuss its advantages and challenges, and present commonly used techniques. ==Motivation and Advantage...")
- 22:26, 21 March 2023 Walle talk contribs created page Self-training (Created page with "{{see also|Machine learning terms}} ==Introduction== Self-training, a form of semi-supervised learning, is an approach in machine learning that combines both labeled and unlabeled data to improve the performance of a model. In this method, an initial model is trained on a small set of labeled data, and then it iteratively refines itself by incorporating the predictions it generates for the unlabeled data. This article will discuss the key concepts, advantages, and ch...")
- 22:25, 21 March 2023 Walle talk contribs created page Weighted Alternating Least Squares (WALS) (Created page with "{{see also|Machine learning terms}} ==Weighted Alternating Least Squares (WALS)== Weighted Alternating Least Squares (WALS) is a widely-used optimization algorithm employed in the field of machine learning. It is particularly popular for addressing the matrix factorization problem, which is often used in collaborative filtering and recommendation systems. WALS iteratively refines the latent factors of the input data to minimize the error, while simultaneously applyin...")
- 22:25, 21 March 2023 Walle talk contribs created page Wasserstein loss (Created page with "{{see also|Machine learning terms}} ==Wasserstein Loss in Machine Learning== Wasserstein loss, based on the Wasserstein distance (also known as the Earth Mover's Distance, EMD), is a loss function used in the field of machine learning, particularly in the training of Generative Adversarial Networks (GANs). Introduced by Martin Arjovsky, Soumith Chintala, and Léon Bottou in their 2017 paper "Wasserstein GAN," this loss function has become a popular choice for training GANs due to its stability and th...")
- 22:25, 21 March 2023 Walle talk contribs created page Tensor size (Created page with "{{see also|Machine learning terms}} ==Definition== In machine learning, '''tensor size''' refers to the total number of elements in a tensor, a multi-dimensional data structure often used to represent and manipulate data in various mathematical operations. Tensors are the generalization of scalars, vectors, and matrices, with scalars being zero-dimensional tensors, vectors being one-dimensional tensors, and matrices being two-dimensional tensors. Tensor size, also known a...")
- 22:25, 21 March 2023 Walle talk contribs created page Tensor shape (Created page with "{{see also|Machine learning terms}} ==Tensor Shape in Machine Learning== Tensor shape is a fundamental concept in the field of machine learning, particularly in deep learning architectures, where tensors are used as the primary data structure for representing and processing multidimensional data. In this article, we will explore the meaning of tensor shape, its significance in machine learning, and some common operations performed on tensors. ===Definition and Backgroun...")
- 22:25, 21 March 2023 Walle talk contribs created page Tensor rank (Created page with "{{see also|Machine learning terms}} ==Definition of Tensor Rank== In the field of machine learning, tensors are multi-dimensional arrays that provide a mathematical framework to represent and manipulate data. The rank of a tensor, also known as its ''order'', refers to the number of dimensions or indices required to describe the tensor. Formally, the tensor rank is defined as the number of axes within a tensor. In other words, the tensor rank determines the complexit...")
- 22:25, 21 March 2023 Walle talk contribs created page Tensor Processing Unit (TPU) (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''Tensor Processing Unit (TPU)''' is a specialized type of hardware accelerator designed specifically for the efficient execution of machine learning tasks, particularly deep learning algorithms. TPUs were first introduced by Google in 2016 and have since become an essential component in the field of artificial intelligence (AI) and machine learning (ML) for their ability to perform high-throughput mathematical oper...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorFlow Serving (Created page with "{{see also|Machine learning terms}} ==Introduction== TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. Developed by Google, it is part of the larger TensorFlow ecosystem, an open-source machine learning library used to develop, train, and deploy ML models. TensorFlow Serving provides a standardized interface for deploying and serving machine learning models, enabling easy integrati...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorFlow Playground (Created page with "{{see also|Machine learning terms}} ==TensorFlow Playground== TensorFlow Playground is an interactive, web-based visualization tool for exploring and understanding neural networks. Developed by the TensorFlow team at Google, this tool allows users to visualize and manipulate neural networks in real-time, providing a deeper understanding of how these models work and their underlying principles. The TensorFlow Playground is an invaluable educational resource for those inte...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorFlow (Created page with "{{see also|Machine learning terms}} ==Overview== TensorFlow is an open-source software library developed by the Google Brain team primarily for machine learning, deep learning, and numerical computation. It uses data flow graphs for computation, where each node represents a mathematical operation, and each edge represents a multi-dimensional data array (tensor) that flows between the nodes. TensorFlow provides a flexible platform for designing, training, and deployin...")
- 22:24, 21 March 2023 Walle talk contribs created page TensorBoard (Created page with "{{see also|Machine learning terms}} ==Introduction== TensorBoard is an open-source, interactive visualization tool designed for machine learning experiments. Developed by the Google Brain team, TensorBoard is an integral component of the TensorFlow ecosystem, which facilitates the monitoring and analysis of model training processes. It provides users with graphical representations of various metrics, including model performance, variable distributions, and comput...")
- 22:24, 21 March 2023 Walle talk contribs created page Tensor (Created page with "{{see also|Machine learning terms}} ==Introduction== In machine learning, a '''tensor''' is a mathematical object that generalizes the concepts of scalars, vectors, and matrices. Tensors are extensively used in machine learning and deep learning algorithms, particularly in the development and implementation of neural networks. They provide a flexible and efficient way to represent and manipulate data with multiple dimensions, allowing for the efficient execution of c...")
- 22:24, 21 March 2023 Walle talk contribs created page TPU worker (Created page with "{{see also|Machine learning terms}} ==Overview== A '''TPU worker''' is a process that runs on a host machine and executes machine learning computations on Tensor Processing Units (TPUs), specialized hardware used in the field of machine learning to accelerate the training and inference of deep neural networks. TPUs are application-specific integrated circuits (ASICs) developed by Google and optimized for their TensorFlow machine learning framework. TPU workers are designed to perform tensor computations...")
- 22:24, 21 March 2023 Walle talk contribs created page TPU type (Created page with "{{see also|Machine learning terms}} ==Introduction== In the field of machine learning, a ''Tensor Processing Unit'' (TPU) is a specialized type of hardware designed to accelerate various operations in neural networks. TPUs, developed by Google, have gained significant traction in the deep learning community due to their ability to provide high-performance computation with reduced energy consumption compared to traditional GPUs or Central P...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU slice (Created page with "{{see also|Machine learning terms}} ==Introduction== A '''TPU slice''' refers to a subset of the TPU devices in a TPU Pod. A Tensor Processing Unit (TPU) is a type of specialized hardware developed by Google to accelerate machine learning tasks. TPUs are designed to handle the computationally-intensive operations commonly associated with deep learning and neural networks, such as matrix multiplications and convolutions. TPU slices are integral components of the TPU archit...")
- 22:23, 21 March 2023 Walle talk contribs created page TPU resource (Created page with "{{see also|Machine learning terms}} ==Introduction== The TPU, or Tensor Processing Unit, is a specialized type of hardware developed by Google for the purpose of accelerating machine learning tasks, particularly those involving deep learning and artificial intelligence. TPUs are designed to deliver high performance with low power consumption, making them an attractive option for large-scale machine learning applications. ==Architecture and Design== ===Overview=== Th...")