Acronyms
{{Needs Expansion}}
{{see also|Guides|Terms|Abbreviations}}
{|
|-
| '''[[A*]]''' || || [[A* Search Algorithm]]
|-
| '''[[A/B Testing]]''' || || [[A statistical method for comparing two or more treatments or algorithms]]
|-
| '''[[A3C]]''' || || [[Asynchronous Advantage Actor-Critic]]
|-
| '''[[ABAC]]''' || || [[Attribute-Based Access Control]]
|-
| '''[[ACE]]''' || || [[Alternating conditional expectation algorithm]]
|-
| '''[[ACO]]''' || || [[Ant Colony Optimization]]
|-
| '''[[AdA]]''' || || [[Adaptive Agent]]
|-
| '''[[Adam]]''' || || [[Adaptive Moment Estimation]]
|-
| '''[[ADASYN]]''' || || [[Adaptive Synthetic Sampling]]
|-
| '''[[ADT]]''' || || [[Automatic Drum Transcription]]
|-
| '''[[AE]]''' || || [[Autoencoder]]
|-
| '''[[AGC]]''' || || [[Adaptive Gradient Clipping]]
|-
| '''[[AGI]]''' || || [[Artificial general intelligence]]
|-
| '''[[AI]]''' || || [[Artificial intelligence]]
|-
| '''[[AIaaS]]''' || || [[Artificial Intelligence as a Service]]
|-
| '''[[AIWPSO]]''' || || [[Adaptive Inertia Weight Particle Swarm Optimization]]
|-
| '''[[AL]]''' || || [[Active Learning]]
|-
| '''[[AM]]''' || || [[Activation maximization]]
|-
| '''[[AMR]]''' || || [[Abstract Meaning Representation]]
|-
| '''[[AMT]]''' || || [[Automatic Music Transcription]]
|-
| '''[[ANI]]''' || || [[Artificial Narrow Intelligence]]
|-
| '''[[ANN]]''' || || [[Artificial neural network]]
|-
| '''[[ANOVA]]''' || || [[Analysis of variance]]
|-
| '''[[API]]''' || || [[Application Programming Interface]]
|-
| '''[[AR]]''' || || [[Augmented reality]]
|-
| '''[[ARNN]]''' || || [[Anticipation Recurrent Neural Network]]
|-
| '''[[ASI]]''' || || [[Artificial superintelligence]]
|-
| '''[[ASIC]]''' || || [[Application-Specific Integrated Circuit]]
|-
| '''[[ASR]]''' || || [[Automatic speech recognition]]
|-
| '''[[AST]]''' || || [[Automated speech translation]]
|-
| '''[[AUC]]''' || || [[Area Under the Curve]]
|-
| '''[[AutoML]]''' || || [[Automated Machine Learning]]
|-
| '''[[BB84]]''' || || [[A quantum key distribution protocol (named after its inventors, Bennett and Brassard, and the year 1984)]]
|-
| '''[[BBO]]''' || || [[Biogeography-Based Optimization]]
|-
| '''[[BCE]]''' || || [[Binary cross-entropy]]
|-
| '''[[BDT]]''' || || [[Boosted Decision Tree]]
|-
| '''[[BERT]]''' || || [[Bidirectional Encoder Representations from Transformers]]
|-
| '''[[BFS]]''' || || [[Breadth-First Search]]
|-
| '''[[BI]]''' || || [[Business Intelligence]]
|-
| '''[[BiFPN]]''' || || [[Bidirectional Feature Pyramid Network]]
|-
| '''[[BILSTM]]''' || || [[Bidirectional Long Short-Term Memory]]
|-
| '''[[BLEU]]''' || || [[Bilingual evaluation understudy]]
|-
| '''[[BN]]''' || || [[Bayesian Network]]
|-
| '''[[BNN]]''' || || [[Bayesian Neural Network]]
|-
| '''[[BO]]''' || || [[Bayesian Optimization]]
|-
| '''[[BP]]''' || || [[Backpropagation]]
|-
| '''[[BPE]]''' || || [[Byte Pair Encoding]]
|-
| '''[[BPMF]]''' || || [[Bayesian Probabilistic Matrix Factorization]]
|-
| '''[[BPN]]''' || || [[Backpropagation Neural Network]]
|-
| '''[[BPTT]]''' || || [[Backpropagation through time]]
|-
| '''[[BQML]]''' || || [[Big Query Machine Learning]]
|-
| '''[[BR]]''' || || [[Best-Response (in game theory)]]
|-
| '''[[BRDF]]''' || || [[Bidirectional reflectance distribution function]]
|-
| '''[[BRNN]]''' || || [[Bidirectional Recurrent Neural Network]]
|-
| '''[[BRR]]''' || || [[Bayesian ridge regression]]
|-
| '''[[CAD]]''' || || [[Computer-Aided Design]]
|-
| '''[[CAE]]''' || || [[Contractive Autoencoder]]
|-
| '''[[CALA]]''' || || [[Continuous Action-set Learning Automata]]
|-
| '''[[CAM]]''' || || [[Computer-Aided Manufacturing]]
|-
| '''[[CAPTCHA]]''' || || [[Completely Automated Public Turing test to tell Computers and Humans Apart]]
|-
| '''[[CART]]''' || || [[Classification And Regression Tree]]
|-
| '''[[CASE]]''' || || [[Computer-Aided Software Engineering]]
|-
| '''[[CatBoost]]''' || || [[Categorical Boosting]]
|-
| '''[[CAV]]''' || || [[Concept Activation Vectors]]
|-
| '''[[CBAC]]''' || || [[Content-Based Access Control]]
|-
| '''[[CBI]]''' || || [[Counterfactual Bias Insertion]]
|-
| '''[[CBOW]]''' || || [[Continuous Bag of Words]]
|-
| '''[[CBR]]''' || || [[Case-Based Reasoning]]
|-
| '''[[CCA]]''' || || [[Canonical Correlation Analysis]]
|-
| '''[[CCC]]''' || || [[Canonical Correlation Coefficients]]
|-
| '''[[CCE]]''' || || [[Categorical cross-entropy]]
|-
| '''[[CDBN]]''' || || [[Convolutional Deep Belief Networks]]
|-
| '''[[CE]]''' || || [[Cross-Entropy]]
|-
| '''[[CEC]]''' || || [[Constant Error Carousel]]
|-
| '''[[CEGAR]]''' || || [[Counterexample-Guided Abstraction Refinement]]
|-
| '''[[CEGIS]]''' || || [[Counterexample-Guided Inductive Synthesis]]
|-
| '''[[CF]]''' || || [[Common Features]]
|-
| '''[[cGAN]]''' || || [[Conditional Generative Adversarial Network]]
|-
| '''[[CL]]''' || || [[Confident learning]]
|-
| '''[[CLIP]]''' || || [[Contrastive Language-Image Pre-Training]]
|-
| '''[[CLNN]]''' || || [[ConditionaL Neural Networks]]
|-
| '''[[CMA]]''' || || [[Covariance Matrix Adaptation]]
|-
| '''[[CMA-ES]]''' || || [[Covariance Matrix Adaptation Evolution Strategy]]
|-
| '''[[CMAC]]''' || || [[Cerebellar Model Articulation Controller]]
|-
| '''[[CMMs]]''' || || [[Conditional Markov Model]]
|-
| '''[[CNN]]''' || || [[Convolutional neural network]]
|-
| '''[[COIN-OR]]''' || || [[Computational Infrastructure for Operations Research]]
|-
| '''[[ConvNet]]''' || || [[Convolutional Neural Network]]
|-
| '''[[COT]]''' || || [[Chain of Thought]]
|-
| '''[[COTE]]''' || || [[Collective of Transformation-Based Ensembles]]
|-
| '''[[COTP]]''' || || [[Chain of Thought Prompting]]
|-
| '''[[CP]]''' || || [[Constraint Programming]]
|-
| '''[[CPLEX]]''' || || [[An optimization solver (from "C" programming language and "simplex")]]
|-
| '''[[CPN]]''' || || [[Colored Petri Nets]]
|-
| '''[[CRBM]]''' || || [[Conditional Restricted Boltzmann Machine]]
|-
| '''[[CRF]]''' || || [[Conditional Random Field]]
|-
| '''[[CRFs]]''' || || [[Conditional Random Fields]]
|-
| '''[[CRNN]]''' || || [[Convolutional Recurrent Neural Network]]
|-
| '''[[CSLR]]''' || || [[Continuous Sign Language Recognition]]
|-
| '''[[CSP]]''' || || [[Constraint Satisfaction Problem]]
|-
| '''[[CSV]]''' || || [[Comma-separated values]]
|-
| '''[[CT-LSTM]]''' || || [[Convolutional Transformer Long Short-Term Memory]]
|-
| '''[[CTC]]''' || || [[Connectionist Temporal Classification]]
|-
| '''[[CTR]]''' || || [[Collaborative Topic Regression]]
|-
| '''[[CUDA]]''' || || [[Compute Unified Device Architecture]]
|-
| '''[[CV]]''' || || [[Computer Vision, Cross validation, Coefficient of variation]]
|-
| '''[[Cyc]]''' || || [[CycL and OpenCyc, a knowledge representation and reasoning system]]
|-
| '''[[D*]]''' || || [[Dynamic A* Search Algorithm]]
|-
| '''[[DAAF]]''' || || [[Data Augmentation and Auxiliary Feature]]
|-
| '''[[DaaS]]''' || || [[Data as a Service]]
|-
| '''[[DAE]]''' || || [[Denoising AutoEncoder or Deep AutoEncoder]]
|-
| '''[[DAML]]''' || || [[DARPA Agent Markup Language]]
|-
| '''[[DART]]''' || || [[Disturbance Aware Regression Tree]]
|-
| '''[[DBM]]''' || || [[Deep Boltzmann Machine]]
|-
| '''[[DBN]]''' || || [[Deep belief network]]
|-
| '''[[DBSCAN]]''' || || [[Density-Based Spatial Clustering of Applications with Noise]]
|-
| '''[[DCAI]]''' || || [[Data-centric AI]]
|-
| '''[[DCGAN]]''' || || [[Deep Convolutional Generative Adversarial Network]]
|-
| '''[[DCMDN]]''' || || [[Deep Convolutional Mixture Density Network]]
|-
| '''[[DDPG]]''' || || [[Deep Deterministic Policy Gradient]]
|-
| '''[[DE]]''' || || [[Differential evolution]]
|-
| '''[[DeconvNet]]''' || || [[DeConvolutional Neural Network]]
|-
| '''[[DeepLIFT]]''' || || [[Deep Learning Important FeaTures]]
|-
| '''[[DFS]]''' || || [[Depth-First Search]]
|-
| '''[[DL]]''' || || [[Deep learning]]
|-
| '''[[DM]]''' || || [[Diffusion model]]
|-
| '''[[DNN]]''' || || [[Deep neural network]]
|-
| '''[[DP]]''' || || [[Dynamic Programming]]
|-
| '''[[DQN]]''' || || [[Deep Q-Network]]
|-
| '''[[DR]]''' || || [[Detection Rate]]
|-
| '''[[DRL]]''' || || [[Deep Reinforcement Learning]]
|-
| '''[[DS]]''' || || [[Data Science]]
|-
| '''[[DSN]]''' || || [[Deep Stacking Network]]
|-
| '''[[DSR]]''' || || [[Deep Symbolic Reinforcement Learning]]
|-
| '''[[DSS]]''' || || [[Decision Support System]]
|-
| '''[[DSW]]''' || || [[Data Stream Warehousing]]
|-
| '''[[DT]]''' || || [[Decision Tree]]
|-
| '''[[DTD]]''' || || [[Deep Taylor Decomposition]]
|-
| '''[[DWT]]''' || || [[Discrete Wavelet Transform]]
|-
| '''[[EDA]]''' || || [[Exploratory data analysis]]
|-
| '''[[EKF]]''' || || [[Extended Kalman Filter]]
|-
| '''[[ELECTRA]]''' || || [[Efficiently Learning an Encoder that Classifies Token Replacements Accurately]]
|-
| '''[[ELM]]''' || || [[Extreme Learning Machine]]
|-
| '''[[ELMo]]''' || || [[Embeddings from Language Models]]
|-
| '''[[ELU]]''' || || [[Exponential Linear Unit]]
|-
| '''[[EM]]''' || || [[Expectation maximization]]
|-
| '''[[EMD]]''' || || [[Entropy Minimization Discretization]]
|-
| '''[[ERNIE]]''' || || [[Enhanced Representation through kNowledge IntEgration]]
|-
| '''[[ES]]''' || || [[Evolution Strategies]]
|-
| '''[[ESN]]''' || || [[Echo State Network]]
|-
| '''[[ETL]]''' || || [[Extract, Transform, Load]]
|-
| '''[[ETL Pipeline]]''' || || [[Extract Transform Load Pipeline]]
|-
| '''[[EXT]]''' || || [[Extremely Randomized Trees]]
|-
| '''[[F1]]''' || || [[F1 Score (harmonic mean of precision and recall)]]
|-
| '''[[F1 Score]]''' || || [[Harmonic Precision-Recall Mean]]
|-
| '''[[FALA]]''' || || [[Finite Action-set Learning Automata]]
|-
| '''[[Fast R-CNN]]''' || || [[Fast Region-based Convolutional Neural Network]]
|-
| '''[[FC]]''' || || [[Fully-Connected]]
|-
| '''[[FC-CNN]]''' || || [[Fully Convolutional Convolutional Neural Network]]
|-
| '''[[FC-LSTM]]''' || || [[Fully Connected Long Short-Term Memory]]
|-
| '''[[FCM]]''' || || [[Fuzzy C-Means]]
|-
| '''[[FCN]]''' || || [[Fully Convolutional Network]]
|-
| '''[[FER]]''' || || [[Facial Expression Recognition]]
|-
| '''[[FFT]]''' || || [[Fast Fourier transform]]
|-
| '''[[FL]]''' || || [[Federated Learning]]
|-
| '''[[FLOP]]''' || || [[Floating Point Operations]]
|-
| '''[[FLOPS]]''' || || [[Floating Point Operations Per Second]]
|-
| '''[[FM]]''' || || [[Foundation model]]
|-
| '''[[FN]]''' || || [[False negative]]
|-
| '''[[FNN]]''' || || [[Feedforward Neural Network]]
|-
| '''[[FNR]]''' || || [[False negative rate]]
|-
| '''[[FOAF]]''' || || [[Friend of a Friend (ontology)]]
|-
| '''[[FP]]''' || || [[False positive]]
|-
| '''[[FPGA]]''' || || [[Field-Programmable Gate Array]]
|-
| '''[[FPN]]''' || || [[Feature Pyramid Network]]
|-
| '''[[FPR]]''' || || [[False positive rate]]
|-
| '''[[FST]]''' || || [[Finite state transducer]]
|-
| '''[[FTL]]''' || || [[Few-Shot Learning]]
|-
| '''[[FWA]]''' || || [[Fireworks Algorithm]]
|-
| '''[[FWIoU]]''' || || [[Frequency Weighted Intersection over Union]]
|-
| '''[[GA]]''' || || [[Genetic Algorithm]]
|-
| '''[[GALE]]''' || || [[Global Aggregations of Local Explanations]]
|-
| '''[[GAM]]''' || || [[Generalized Additive Model]]
|-
| '''[[GAN]]''' || || [[Generative Adversarial Network]]
|-
| '''[[GAP]]''' || || [[Global Average Pooling]]
|-
| '''[[GBDT]]''' || || [[Gradient Boosted Decision Tree]]
|-
| '''[[GBM]]''' || || [[Gradient Boosting Machine]]
|-
| '''[[GBRCN]]''' || || [[Gradient-Boosting Random Convolutional Network]]
|-
| '''[[GD]]''' || || [[Gradient descent]]
|-
| '''[[GEBI]]''' || || [[Global Explanation for Bias Identification]]
|-
| '''[[GFNN]]''' || || [[Gradient frequency neural network]]
|-
| '''[[GLCM]]''' || || [[Gray Level Co-occurrence Matrix]]
|-
| '''[[GLM]]''' || || [[Generalized Linear Model]]
|-
| '''[[GLOM]]''' || || [[A neural network architecture by Geoffrey Hinton]]
|-
| '''[[Gloss2Text]]''' || || [[A task of transforming raw glosses into meaningful sentences.]]
|-
| '''[[GloVE]]''' || || [[Global Vectors]]
|-
| '''[[GLPK]]''' || || [[GNU Linear Programming Kit]]
|-
| '''[[GLUE]]''' || || [[General Language Understanding Evaluation]]
|-
| '''[[GMM]]''' || || [[Gaussian mixture model]]
|-
| '''[[GP]]''' || || [[Genetic Programming]]
|-
| '''[[GPR]]''' || || [[Gaussian process regression]]
|-
| '''[[GPT]]''' || || [[Generative Pre-Training]]
|-
| '''[[GPU]]''' || || [[Graphics processing unit]]
|-
| '''[[GradCAM]]''' || || [[GRADient-weighted Class Activation Mapping]]
|-
| '''[[GRU]]''' || || [[Gated recurrent unit]]
|-
| '''[[Gurobi]]''' || || [[An optimization solver (named after its founders, Zonghao Gu, Edward Rothberg, and Robert Bixby)]]
|-
| '''[[HamNoSys]]''' || || [[Hamburg Sign Language Notation System]]
|-
| '''[[HAN]]''' || || [[Hierarchical Attention Networks]]
|-
| '''[[HC]]''' || || [[Hierarchical Clustering]]
|-
| '''[[HCA]]''' || || [[Hierarchical Clustering Analysis]]
|-
| '''[[HDP]]''' || || [[Hierarchical Dirichlet process]]
|-
| '''[[HF]]''' || || [[Hugging Face]]
|-
| '''[[HHDS]]''' || || [[HipHop Dataset]]
|-
| '''[[hLDA]]''' || || [[Hierarchical Latent Dirichlet allocation]]
|-
| '''[[HMM]]''' || || [[Hidden Markov Model]]
|-
| '''[[HNN]]''' || || [[Hopfield Neural Network]]
|-
| '''[[HOG]]''' || || [[Histogram of Oriented Gradients (feature descriptor)]]
|-
| '''[[Hopfield]]''' || || [[Hopfield Network]]
|-
| '''[[HPC]]''' || || [[High Performance Computing]]
|-
| '''[[HRED]]''' || || [[Hierarchical Recurrent Encoder-Decoder]]
|-
| '''[[HRI]]''' || || [[Human-Robot Interaction]]
|-
| '''[[HSMM]]''' || || [[Hidden Semi-Markov Model]]
|-
| '''[[HTM]]''' || || [[Hierarchical Temporal Memory]]
|-
| '''[[i.i.d]]''' || || [[Independent and Identically Distributed]]
|-
| '''[[i.i.d.]]''' || || [[Independently and identically distributed]]
|-
| '''[[IaaS]]''' || || [[Infrastructure as a Service]]
|-
| '''[[ICA]]''' || || [[Independent component analysis]]
|-
| '''[[ICP]]''' || || [[Iterative Closest Point (point cloud registration)]]
|-
| '''[[ID3]]''' || || [[Iterative Dichotomiser 3]]
|-
| '''[[IDA*]]''' || || [[Iterative Deepening A* Search Algorithm]]
|-
| '''[[IDR]]''' || || [[Input dependence rate]]
|-
| '''[[IG]]''' || || [[Invariant Generation]]
|-
| '''[[IID]]''' || || [[Independently and identically distributed]]
|-
| '''[[IIR]]''' || || [[Input independence rate]]
|-
| '''[[ILASP]]''' || || [[Inductive Learning of Answer Set Programs]]
|-
| '''[[ILP]]''' || || [[Integer Linear Programming]]
|-
| '''[[INFD]]''' || || [[Explanation Infidelity]]
|-
| '''[[IoA]]''' || || [[Internet of Agents]]
|-
| '''[[IoE]]''' || || [[Internet of Everything]]
|-
| '''[[IoT]]''' || || [[Internet of Things]]
|-
| '''[[IoU]]''' || || [[Jaccard index (intersection over union)]]
|-
| '''[[IR]]''' || || [[Information Retrieval]]
|-
| '''[[IRCoT]]''' || || [[Interleaving Retrieval CoT]]
|-
| '''[[ISIC]]''' || || [[International Skin Imaging Collaboration]]
|-
| '''[[IVR]]''' || || [[Interactive Voice Response]]
|-
| '''[[K-Means]]''' || || [[K-Means Clustering]]
|-
| '''[[KB]]''' || || [[Knowledge Base]]
|-
| '''[[KDE]]''' || || [[Kernel Density Estimation]]
|-
| '''[[KF]]''' || || [[Kalman Filter]]
|-
| '''[[kFCV]]''' || || [[K-fold cross validation]]
|-
| '''[[KL]]''' || || [[Kullback Leibler (KL) divergence]]
|-
| '''[[KNN]]''' || || [[K-nearest neighbors]]
|-
| '''[[KR]]''' || || [[Knowledge Representation]]
|-
| '''[[KRR]]''' || || [[Kernel Ridge Regression]]
|-
| '''[[LAION]]''' || || [[Large-scale Artificial Intelligence Open Network]]
|-
| '''[[LAMA]]''' || || [[LAnguage Model Analysis]]
|-
| '''[[LaMDA]]''' || || [[Language Models for Dialog Applications]]
|-
| '''[[LBP]]''' || || [[Local Binary Pattern (texture descriptor)]]
|-
| '''[[LDA]]''' || || [[Latent Dirichlet Allocation]]
|-
| '''[[LDADE]]''' || || [[Latent Dirichlet Allocation Differential Evolution]]
|-
| '''[[LEPOR]]''' || || [[Language Evaluation Portal]]
|-
| '''[[LightGBM]]''' || || [[Light Gradient Boosting Machine]]
|-
| '''[[LIME]]''' || || [[Local Interpretable Model-agnostic Explanations]]
|-
| '''[[LINGO]]''' || || [[A software for linear, nonlinear, and integer optimization]]
|-
| '''[[LL]]''' || || [[Lifelong learning]]
|-
| '''[[LLM]]''' || || [[Large language model]]
|-
| '''[[LLS]]''' || || [[Linear least squares]]
|-
| '''[[LMNN]]''' || || [[Large Margin Nearest Neighbor]]
|-
| '''[[LoLM]]''' || || [[Lots of Little Models]]
|-
| '''[[LP]]''' || || [[Linear Programming]]
|-
| '''[[LPAQA]]''' || || [[Language model Prompt And Query Archive]]
|-
| '''[[LRP]]''' || || [[Layer-wise Relevance Propagation]]
|-
| '''[[LSA]]''' || || [[Latent semantic analysis]]
|-
| '''[[LSI]]''' || || [[Latent Semantic Indexing]]
|-
| '''[[LSTM]]''' || || [[Long short-term memory]]
|-
| '''[[LSTM-CRF]]''' || || [[Long Short-Term Memory with Conditional Random Field]]
|-
| '''[[LTR]]''' || || [[Learning To Rank]]
|-
| '''[[LVQ]]''' || || [[Learning Vector Quantization]]
|-
| '''[[M2M]]''' || || [[Machine to Machine]]
|-
| '''[[MADE]]''' || || [[Masked Autoencoder for Distribution Estimation]]
|-
| '''[[MAE]]''' || || [[Mean absolute error]]
|-
| '''[[MAF]]''' || || [[Masked Autoregressive Flows]]
|-
| '''[[MAIRL]]''' || || [[Multi-Agent Inverse Reinforcement Learning]]
|-
| '''[[MAP]]''' || || [[Maximum A Posteriori (MAP) Estimation]]
|-
| '''[[MAPE]]''' || || [[Mean absolute percentage error]]
|-
| '''[[MARL]]''' || || [[Multi-Agent Reinforcement Learning]]
|-
| '''[[MART]]''' || || [[Multiple Additive Regression Tree]]
|-
| '''[[MaxEnt]]''' || || [[Maximum Entropy]]
|-
| '''[[MAXSAT]]''' || || [[Maximum Satisfiability Problem]]
|-
| '''[[MCLNN]]''' || || [[Masked ConditionaL Neural Networks]]
|-
| '''[[MCMC]]''' || || [[Markov Chain Monte Carlo]]
|-
| '''[[MCS]]''' || || [[Model contrast score]]
|-
| '''[[MCTS]]''' || || [[Monte Carlo Tree Search]]
|-
| '''[[MDL]]''' || || [[Minimum description length (MDL) principle]]
|-
| '''[[MDN]]''' || || [[Mixture Density Network]]
|-
| '''[[MDP]]''' || || [[Markov Decision Process]]
|-
| '''[[MDRNN]]''' || || [[Multidimensional recurrent neural network]]
|-
| '''[[MER]]''' || || [[Music Emotion Recognition]]
|-
| '''[[METEOR]]''' || || [[Metric for Evaluation of Translation with Explicit ORdering]]
|-
| '''[[MIL]]''' || || [[Multiple Instance Learning]]
|-
| '''[[MILP]]''' || || [[Mixed-Integer Linear Programming]]
|-
| '''[[MINT]]''' || || [[Mutual Information based Transductive Feature Selection]]
|-
| '''[[MIoU]]''' || || [[Mean Intersection over Union]]
|-
| '''[[MIP]]''' || || [[Mixed-Integer Programming]]
|-
| '''[[ML]]''' || || [[Machine learning]]
|-
| '''[[MLaaS]]''' || || [[Machine Learning as a Service]]
|-
| '''[[MLE]]''' || || [[Maximum Likelihood Estimation]]
|-
| '''[[MLLM]]''' || || [[Multimodal large language model]]
|-
| '''[[MLM]]''' || || [[Music Language Models]]
|-
| '''[[MLP]]''' || || [[Multi-Layer Perceptron]]
|-
| '''[[MMI]]''' || || [[Maximum Mutual Information]]
|-
| '''[[MNIST]]''' || || [[Modified National Institute of Standards and Technology]]
|-
| '''[[MOEA]]''' || || [[Multi-Objective Evolutionary Algorithm]]
|-
| '''[[MPA]]''' || || [[Mean Pixel Accuracy]]
|-
| '''[[MR]]''' || || [[Mixed Reality]]
|-
| '''[[MRF]]''' || || [[Markov Random Field]]
|-
| '''[[MRR]]''' || || [[Mean Reciprocal Rank]]
|-
| '''[[MRS]]''' || || [[Music Recommender System]]
|-
| '''[[MSDAE]]''' || || [[Modified Sparse Denoising Autoencoder]]
|-
| '''[[MSE]]''' || || [[Mean squared error]]
|-
| '''[[MSR]]''' || || [[Music Style Recognition]]
|-
| '''[[MTL]]''' || || [[Multi-Task Learning]]
|-
| '''[[NARX]]''' || || [[Nonlinear AutoRegressive with eXogenous input (neural network model)]]
|-
| '''[[NAS]]''' || || [[Neural Architecture Search]]
|-
| '''[[NB]]''' || || [[Naïve Bayes]]
|-
| '''[[NBKE]]''' || || [[Naïve Bayes with Kernel Estimation]]
|-
| '''[[NDCG]]''' || || [[Normalized Discounted Cumulative Gain]]
|-
| '''[[NE]]''' || || [[Nash Equilibrium (in game theory)]]
|-
| '''[[NEAT]]''' || || [[NeuroEvolution of Augmenting Topologies]]
|-
| '''[[NER]]''' || || [[Named entity recognition]]
|-
| '''[[NERQ]]''' || || [[Named Entity Recognition in Query]]
|-
| '''[[NEST]]''' || || [[Neural Simulation Tool]]
|-
| '''[[NF]]''' || || [[Normalizing Flow]]
|-
| '''[[NFL]]''' || || [[No Free Lunch (NFL) theorem]]
|-
| '''[[NISQ]]''' || || [[Noisy Intermediate-Scale Quantum (quantum computing)]]
|-
| '''[[NLG]]''' || || [[Natural Language Generation]]
|-
| '''[[NLP]]''' || || [[Natural Language Processing]]
|-
| '''[[NLT]]''' || || [[Neural Machine Translation]]
|-
| '''[[NLU]]''' || || [[Natural Language Understanding]]
|-
| '''[[NMF]]''' || || [[Non-negative matrix factorization]]
|-
| '''[[NMS]]''' || || [[Non Maximum Suppression]]
|-
| '''[[NMT]]''' || || [[Neural Machine Translation]]
|-
| '''[[NN]]''' || || [[Neural network]]
|-
| '''[[NNMODFF]]''' || || [[Neural Network based Multi-Onset Detection Function Fusion]]
|-
| '''[[NPE]]''' || || [[Neural Physical Engine]]
|-
| '''[[NRMSE]]''' || || [[Normalized RMSE]]
|-
| '''[[NSGA-II]]''' || || [[Non-dominated Sorting Genetic Algorithm II]]
|-
| '''[[NST]]''' || || [[Neural style transfer]]
|-
| '''[[NTM]]''' || || [[Neural Turing Machine]]
|-
| '''[[NuSVC]]''' || || [[Nu-Support Vector Classification]]
|-
| '''[[NuSVR]]''' || || [[Nu-Support Vector Regression]]
|-
| '''[[OBM]]''' || || [[One Big Model]]
|-
| '''[[OCR]]''' || || [[Optical character recognition]]
|-
| '''[[OD]]''' || || [[Object Detection]]
|-
| '''[[ODF]]''' || || [[Onset Detection Function]]
|-
| '''[[OIL]]''' || || [[Ontology Inference Layer]]
|-
| '''[[OLR]]''' || || [[Ordinary Linear Regression]]
|-
| '''[[OLS]]''' || || [[Ordinary Least Squares]]
|-
| '''[[OMNeT++]]''' || || [[Objective Modular Network Testbed in C++]]
|-
| '''[[OMR]]''' || || [[Optical Mark Recognition]]
|-
| '''[[OOF]]''' || || [[Out-of-fold]]
|-
| '''[[ORB]]''' || || [[Oriented FAST and Rotated BRIEF (feature descriptor)]]
|-
| '''[[OWL]]''' || || [[Web Ontology Language]]
|-
| '''[[PA]]''' || || [[Pixel Accuracy]]
|-
| '''[[PaaS]]''' || || [[Platform as a Service]]
|-
| '''[[PACO]]''' || || [[Poisson Additive Co-Clustering]]
|-
| '''[[PaLM]]''' || || [[Pathways Language Model]]
|-
| '''[[PBAC]]''' || || [[Policy-Based Access Control]]
|-
| '''[[PCA]]''' || || [[Principal component analysis]]
|-
| '''[[PCL]]''' || || [[Point Cloud Library (3D perception)]]
|-
| '''[[PECS]]''' || || [[Physics Engine for Collaborative Simulation]]
|-
| '''[[PEGASUS]]''' || || [[Pre-training with Extracted Gap-Sentences for Abstractive Summarization]]
|-
| '''[[PF]]''' || || [[Particle Filter]]
|-
| '''[[PFE]]''' || || [[Probabilistic facial embedding]]
|-
| '''[[PLSI]]''' || || [[Probabilistic Latent Semantic Indexing]]
|-
| '''[[PM]]''' || || [[Project Manager]]
|-
| '''[[PMF]]''' || || [[Probabilistic Matrix Factorization]]
|-
| '''[[PMI]]''' || || [[Pointwise Mutual Information]]
|-
| '''[[PNN]]''' || || [[Probabilistic Neural Network]]
|-
| '''[[POC]]''' || || [[Proof of Concept]]
|-
| '''[[POMDP]]''' || || [[Partially Observable Markov Decision Process]]
|-
| '''[[POS]]''' || || [[Part of Speech (POS) Tagging]]
|-
| '''[[POT]]''' || || [[Partially Observable Tree (decision-making under uncertainty)]]
|-
| '''[[PPL]]''' || || [[Perplexity (a measure of language model performance)]]
|-
| '''[[PPMI]]''' || || [[Positive Pointwise Mutual Information]]
|-
| '''[[PPO]]''' || || [[Proximal Policy Optimization]]
|-
| '''[[PReLU]]''' || || [[Parametric Rectified Linear Unit]]
|-
| '''[[PRM]]''' || || [[Probabilistic Roadmap (motion planning algorithm)]]
|-
| '''[[PSO]]''' || || [[Particle Swarm Optimization]]
|-
| '''[[PU]]''' || || [[Positive Unlabeled]]
|-
| '''[[PYTM]]''' || || [[Pitman-Yor Topic Modeling]]
|-
| '''[[QA]]''' || || [[Question Answering]]
|-
| '''[[QAOA]]''' || || [[Quantum Approximate Optimization Algorithm]]
|-
| '''[[QAP]]''' || || [[Quadratic Assignment Problem]]
|-
| '''[[QEC]]''' || || [[Quantum Error Correction]]
|-
| '''[[QFT]]''' || || [[Quantum Fourier Transform]]
|-
| '''[[QIP]]''' || || [[Quantum Information Processing]]
|-
| '''[[QKD]]''' || || [[Quantum Key Distribution]]
|-
| '''[[QML]]''' || || [[Quantum Machine Learning]]
|-
| '''[[QNN]]''' || || [[Quantum Neural Network]]
|-
| '''[[QP]]''' || || [[Quadratic Programming]]
|-
| '''[[QPE]]''' || || [[Quantum Phase Estimation]]
|-
| '''[[R-CNN]]''' || || [[Region-based Convolutional Neural Network]]
|-
| '''[[R2]]''' || || [[R-squared]]
|-
| '''[[RandNN]]''' || || [[Random Neural Network]]
|-
| '''[[RANSAC]]''' || || [[RANdom SAmple Consensus]]
|-
| '''[[RBAC]]''' || || [[Rule-Based Access Control]]
|-
| '''[[RBF]]''' || || [[Radial Basis Function]]
|-
| '''[[RBFNN]]''' || || [[Radial Basis Function Neural Network]]
|-
| '''[[RBM]]''' || || [[Restricted Boltzmann Machine]]
|-
| '''[[RDF]]''' || || [[Resource Description Framework]]
|-
| '''[[ReAct]]''' || || [[Reason + Act]]
|-
| '''[[REALM]]''' || || [[Retrieval-Augmented Language Model Pre-Training]]
|-
| '''[[ReCAPTCHA]]''' || || [[Reverse CAPTCHA]]
|-
| '''[[ReLU]]''' || || [[Rectified Linear Unit]]
|-
| '''[[REPTree]]''' || || [[Reduced Error Pruning Tree]]
|-
| '''[[RETRO]]''' || || [[Retrieval Enhanced Transformer]]
|-
| '''[[RF]]''' || || [[Random forest]]
|-
| '''[[RFE]]''' || || [[Recursive Feature Elimination]]
|-
| '''[[RGB]]''' || || [[Red Green Blue color model]]
|-
| '''[[RICNN]]''' || || [[Rotation Invariant Convolutional Neural Network]]
|-
| '''[[RIM]]''' || || [[Recurrent Inference Machines]]
|-
| '''[[RIPPER]]''' || || [[Repeated Incremental Pruning to Produce Error Reduction]]
|-
| '''[[RISE]]''' || || [[Random Interval Spectral Ensemble]]
|-
| '''[[RL]]''' || || [[Reinforcement learning]]
|-
| '''[[RLFM]]''' || || [[Regression based latent factors]]
|-
| '''[[RLHF]]''' || || [[Reinforcement Learning from Human Feedback]]
|-
| '''[[RMSE]]''' || || [[Root mean squared error]]
|-
| '''[[RMSLE]]''' || || [[Root mean squared logarithmic error]]
|-
| '''[[RMSprop]]''' || || [[Root Mean Square Propagation]]
|-
| '''[[RNN]]''' || || [[Recurrent neural network]]
|-
| '''[[RNNLM]]''' || || [[Recurrent Neural Network Language Model (RNNLM)]]
|-
| '''[[RoBERTa]]''' || || [[Robustly Optimized BERT Pretraining Approach]]
|-
| '''[[ROC]]''' || || [[Receiver operating characteristic]]
|-
| '''[[ROI]]''' || || [[Region Of Interest]]
|-
| '''[[ROS]]''' || || [[Robot Operating System]]
|-
| '''[[ROUGE]]''' || || [[Recall-Oriented Understudy for Gisting Evaluation (NLP metric)]]
|-
| '''[[RPA]]''' || || [[Robotic Process Automation]]
|-
| '''[[RR]]''' || || [[Ridge Regression]]
|-
| '''[[RRT]]''' || || [[Rapidly-exploring Random Tree (motion planning algorithm)]]
|-
| '''[[RSI]]''' || || [[Recursive self-improvement]]
|-
| '''[[RTRL]]''' || || [[Real-Time Recurrent Learning]]
|-
| '''[[SA]]''' || || [[Simulated Annealing]], [[Segment Anything]]
|-
| '''[[SAM]]''' || || [[Segment Anything Model]]
|-
| '''[[SaaS]]''' || || [[Software as a Service]]
|-
| '''[[SAC]]''' || || [[Soft Actor-Critic]]
|-
| '''[[SAE]]''' || || [[Stacked AE]]
|-
| '''[[SARSA]]''' || || [[State-Action-Reward-State-Action]]
|-
| '''[[SAT]]''' || || [[Satisfiability Problem]]
|-
| '''[[SBAC]]''' || || [[Situation-Based Access Control]]
|-
| '''[[SBM]]''' || || [[Stochastic block model]]
|-
| '''[[SBO]]''' || || [[Structured Bayesian optimization]]
|-
| '''[[SBSE]]''' || || [[Search-based software engineering]]
|-
| '''[[SCH]]''' || || [[Stochastic convex hull]]
|-
| '''[[SCIP]]''' || || [[Solving Constraint Integer Programs]]
|-
| '''[[SDAE]]''' || || [[Stacked DAE]]
|-
| '''[[seq2seq]]''' || || [[Sequence to Sequence Learning]]
|-
| '''[[SER]]''' || || [[Sentence Error Rate]]
|-
| '''[[SGBoost]]''' || || [[Stochastic Gradient Boosting]]
|-
| '''[[SGD]]''' || || [[Stochastic gradient descent]]
|-
| '''[[SGVB]]''' || || [[Stochastic Gradient Variational Bayes]]
|-
| '''[[SHAP]]''' || || [[SHapley Additive exPlanation]]
|-
| '''[[SHLLE]]''' || || [[Supervised Hessian Locally Linear Embedding]]
|-
| '''[[SIFT]]''' || || [[Scale-Invariant Feature Transform (feature detection)]]
|-
| '''[[Sign2(Gloss+Text)]]''' || || [[Sign to Gloss and Text]]
|-
| '''[[Sign2Gloss]]''' || || [[A one to one translation from the single sign to the single gloss.]]
|-
| '''[[Sign2Text]]''' || || [[A task of full translation from the sign language into the spoken one]]
|-
| '''[[SL]]''' || || [[Supervised learning]]
|-
| '''[[SLAM]]''' || || [[Simultaneous Localization and Mapping]]
|-
| '''[[SLDS]]''' || || [[Switching Linear Dynamical System]]
|-
| '''[[SLP]]''' || || [[Single-Layer Perceptron]]
|-
| '''[[SLRT]]''' || || [[Sign Language Recognition Transformer]]
|-
| '''[[SLT]]''' || || [[Sign Language Translation]]
|-
| '''[[SLTT]]''' || || [[Sign Language Translation Transformer]]
|-
| '''[[SMA*]]''' || || [[Simplified Memory-bounded A* Search Algorithm]]
|-
| '''[[SMBO]]''' || || [[Sequential Model-Based Optimization]]
|-
| '''[[SMO]]''' || || [[Sequential Minimal Optimization]]
|-
| '''[[SMOTE]]''' || || [[Synthetic Minority Over-sampling Technique]]
|-
| '''[[SNN]]''' || || [[Sparse Neural Network]]
|-
| '''[[SOM]]''' || || [[Self-Organizing Map]]
|-
| '''[[SOTA]]''' || || [[State of the Art]]
|-
| '''[[SPARQL]]''' || || [[SPARQL Protocol and RDF Query Language]]
|-
| '''[[Spiking NN]]''' || || [[Spiking Neural Network]]
|-
| '''[[SPM]]''' || || [[SentencePiece Model (subword tokenization)]]
|-
| '''[[SpRay]]''' || || [[Spectral Relevance Analysis]]
|-
| '''[[SSD]]''' || || [[Single-Shot Detector]]
|-
| '''[[SSL]]''' || || [[Self-Supervised Learning]]
|-
| '''[[SSVM]]''' || || [[Smooth support vector machine]]
|-
| '''[[ST]]''' || || [[Style transfer]]
|-
| '''[[STaR]]''' || || [[Self-Taught Reasoner]]
|-
| '''[[STDA]]''' || || [[Style Transfer Data Augmentation]]
|-
| '''[[STDP]]''' || || [[Spike Timing-Dependent Plasticity]]
|-
| '''[[STL]]''' || || [[Self-Taught Learning]]
|-
| '''[[SUMO]]''' || || [[Simulation of Urban Mobility]]
|-
| '''[[SURF]]''' || || [[Speeded-Up Robust Features (feature detection)]]
|-
| '''[[SVC]]''' || || [[Support Vector Classification]]
|-
| '''[[SVD]]''' || || [[Singing Voice Detection]]
|-
| '''[[SVM]]''' || || [[Support vector machine]]
|-
| '''[[SVR]]''' || || [[Support Vector Regression]]
|-
| '''[[SVS]]''' || || [[Singing Voice Separation]]
|-
| '''[[SWI-Prolog]]''' || || [[Semantic Web Interface for Prolog]]
|-
| '''[[t-SNE]]''' || || [[t-distributed stochastic neighbor embedding]]
|-
| '''[[T5]]''' || || [[Text-To-Text Transfer Transformer]]
|-
| '''[[TD]]''' || || [[Temporal Difference]]
|-
| '''[[TDA]]''' || || [[Targeted Data Augmentation]]
|-
| '''[[TDE]]''' || || [[Time Domain Ensemble]]
|-
| '''[[tf-idf]]''' || || [[term frequency–inverse document frequency]]
|-
| '''[[TGAN]]''' || || [[Temporal Generative Adversarial Network]]
|-
| '''[[THAID]]''' || || [[THeta Automatic Interaction Detection]]
|-
| '''[[TINT]]''' || || [[Tree-Interpreter]]
|-
| '''[[TL]]''' || || [[Transfer Learning]]
|-
| '''[[TLFN]]''' || || [[Time-Lagged Feedforward Neural Network]]
|-
| '''[[TN]]''' || || [[True negative]]
|-
| '''[[TNR]]''' || || [[True negative rate]]
|-
| '''[[ToM]]''' || || [[Theory of Mind]]
|-
| '''[[TP]]''' || || [[True positive]]
|-
| '''[[TPOT]]''' || || [[Tree-based Pipeline Optimization Tool]]
|-
| '''[[TPR]]''' || || [[True positive rate]]
|-
| '''[[TPU]]''' || || [[Tensor Processing Unit]]
|-
| '''[[TRPO]]''' || || [[Trust Region Policy Optimization]]
|-
| '''[[TS]]''' || || [[Tabu Search]]
|-
| '''[[TSF]]''' || || [[Time Series Forest]]
|-
| '''[[TSP]]''' || || [[Traveling Salesman Problem]]
|-
| '''[[TTS]]''' || || [[Text-to-Speech]]
|-
| '''[[UCT]]''' || || [[Upper Confidence bounds applied to Trees (Monte Carlo Tree Search variant)]]
|-
| '''[[UDA]]''' || || [[Unsupervised Data Augmentation]]
|-
| '''[[UKF]]''' || || [[Unscented Kalman Filter]]
|-
| '''[[UL]]''' || || [[Unsupervised learning]]
|-
| '''[[ULMFiT]]''' || || [[Universal Language Model Fine-Tuning]]
|-
| '''[[UMAP]]''' || || [[Uniform Manifold Approximation and Projection]]
|-
| '''[[USM]]''' || || [[Universal Speech Model]]
|-
| '''[[V-Net]]''' || || [[Volumetric Convolutional neural network]]
|-
| '''[[VAD]]''' || || [[Voice Activity Detection]]
|-
| '''[[VAE]]''' || || [[Variational AutoEncoder]]
|-
| '''[[VGG]]''' || || [[Visual Geometry Group]]
|-
| '''[[VHRED]]''' || || [[Variational Hierarchical Recurrent Encoder-Decoder]]
|-
| '''[[VISSIM]]''' || || [[A traffic simulation software (from "Verkehr In Städten – SIMulationsmodell")]]
|-
| '''[[ViT]]''' || || [[Vision Transformer]]
|-
| '''[[VPNN]]''' || || [[Vector Product Neural Network]]
|-
| '''[[VQ-VAE]]''' || || [[Vector Quantized Variational Autoencoders]]
|-
| '''[[VQE]]''' || || [[Variational Quantum Eigensolver]]
|-
| '''[[VR]]''' || || [[Virtual reality]]
|-
| '''[[VRP]]''' || || [[Vehicle Routing Problem]]
|-
| '''[[VUI]]''' || || [[Voice User Interface]]
|-
| '''[[WCSP]]''' || || [[Weighted Constraint Satisfaction Problem]]
|-
| '''[[WER]]''' || || [[Word Error Rate]]
|-
| '''[[WFST]]''' || || [[Weighted finite-state transducer (WFST)]]
|-
| '''[[WGAN]]''' || || [[Wasserstein Generative Adversarial Network]]
|-
| '''[[WMA]]''' || || [[Weighted Majority Algorithm]]
|-
| '''[[WPE]]''' || || [[Weighted Prediction Error]]
|-
| '''[[XAI]]''' || || [[Explainable Artificial Intelligence]]
|-
| '''[[XGBoost]]''' || || [[eXtreme Gradient Boosting]]
|-
| '''[[XOR]]''' || || [[Exclusive OR (a common problem in neural networks)]]
|-
| '''[[YOLO]]''' || || [[You Only Look Once]]
|-
| '''[[ZSL]]''' || || [[Zero-Shot Learning]]
|-
|}
[[Category:Guides]]