{{see also|Guides|Terms|Abbreviations}}
{|
|-
| '''[[A*]]''' ||  || [[A* Search Algorithm]]
|-
| '''[[A/B Testing]]''' ||  || [[A statistical method for comparing two or more treatments or algorithms]]
|-
| '''[[A3C]]''' ||  || [[Asynchronous Advantage Actor-Critic]]
|-
| '''[[ABAC]]''' ||  || [[Attribute-Based Access Control]]
|-
| '''[[ACE]]''' ||  || [[Alternating conditional expectation algorithm]]
|-
| '''[[ACO]]''' ||  || [[Ant Colony Optimization]]
|-
| '''[[AdA]]''' ||  || [[Adaptive Agent]]
|-
| '''[[Adam]]''' ||  || [[Adaptive Moment Estimation]]
|-
| '''[[ADASYN]]''' ||  || [[Adaptive Synthetic Sampling]]
|-
| '''[[ADT]]''' ||  || [[Automatic Drum Transcription]]
|-
| '''[[AE]]''' ||  || [[Autoencoder]]
|-
| '''[[AGC]]''' ||  || [[Adaptive Gradient Clipping]]
|-
| '''[[AGI]]''' ||  || [[Artificial general intelligence]]
|-
| '''[[AI]]''' ||  || [[Artificial intelligence]]
|-
| '''[[AIaaS]]''' ||  || [[Artificial Intelligence as a Service]]
|-
| '''[[AIWPSO]]''' ||  || [[Adaptive Inertia Weight Particle Swarm Optimization]]
|-
| '''[[AL]]''' ||  || [[Active Learning]]
|-
| '''[[AM]]''' ||  || [[Activation maximization]]
|-
| '''[[AMR]]''' ||  || [[Abstract Meaning Representation]]
|-
| '''[[AMT]]''' ||  || [[Automatic Music Transcription]]
|-
| '''[[ANOVA]]''' ||  || [[Analysis of variance]]
|-
| '''[[API]]''' ||  || [[Application Programming Interface]]
|-
| '''[[AR]]''' ||  || [[Augmented reality]]
|-
| '''[[ARNN]]''' ||  || [[Anticipation Recurrent Neural Network]]
|-
| '''[[ASI]]''' ||  || [[Artificial superintelligence]]
|-
| '''[[ASIC]]''' ||  || [[Application-Specific Integrated Circuit]]
|-
| '''[[ASR]]''' ||  || [[Automatic speech recognition]]
|-
| '''[[AUC]]''' ||  || [[Area Under the Curve]]
|-
| '''[[AutoML]]''' ||  || [[Automated Machine Learning]]
|-
| '''[[BB84]]''' ||  || [[A quantum key distribution protocol (named after its inventors, Bennett and Brassard, and the year 1984)]]
|-
| '''[[BBO]]''' ||  || [[Biogeography-Based Optimization]]
|-
| '''[[BCE]]''' ||  || [[Binary cross-entropy]]
|-
| '''[[BDT]]''' ||  || [[Boosted Decision Tree]]
|-
| '''[[BERT]]''' ||  || [[Bidirectional Encoder Representations from Transformers]]
|-
| '''[[BFS]]''' ||  || [[Breadth-First Search]]
|-
| '''[[BI]]''' ||  || [[Business Intelligence]]
|-
| '''[[BiFPN]]''' ||  || [[Bidirectional Feature Pyramid Network]]
|-
| '''[[BILSTM]]''' ||  || [[Bidirectional Long Short-Term Memory]]
|-
| '''[[BLEU]]''' ||  || [[Bilingual evaluation understudy]]
|-
| '''[[BN]]''' ||  || [[Bayesian Network]]
|-
| '''[[BNN]]''' ||  || [[Bayesian Neural Network]]
|-
| '''[[BO]]''' ||  || [[Bayesian Optimization]]
|-
| '''[[BP]]''' ||  || [[Backpropagation]]
|-
| '''[[BPE]]''' ||  || [[Byte Pair Encoding]]
|-
| '''[[BPMF]]''' ||  || [[Bayesian Probabilistic Matrix Factorization]]
|-
| '''[[BPN]]''' ||  || [[Backpropagation Neural Network]]
|-
| '''[[BPTT]]''' ||  || [[Backpropagation through time]]
|-
| '''[[BQML]]''' ||  || [[Big Query Machine Learning]]
|-
| '''[[BR]]''' ||  || [[Best-Response (in game theory)]]
|-
| '''[[BRNN]]''' ||  || [[Bidirectional Recurrent Neural Network]]
|-
| '''[[BRR]]''' ||  || [[Bayesian ridge regression]]
|-
| '''[[CAD]]''' ||  || [[Computer-Aided Design]]
|-
| '''[[CAE]]''' ||  || [[Contractive Autoencoder]]
|-
| '''[[CALA]]''' ||  || [[Continuous Action-set Learning Automata]]
|-
| '''[[CAM]]''' ||  || [[Computer-Aided Manufacturing]]
|-
| '''[[CAPTCHA]]''' ||  || [[Completely Automated Public Turing test to tell Computers and Humans Apart]]
|-
| '''[[CART]]''' ||  || [[Classification And Regression Tree]]
|-
| '''[[CASE]]''' ||  || [[Computer-Aided Software Engineering]]
|-
| '''[[CatBoost]]''' ||  || [[Categorical Boosting]]
|-
| '''[[CAV]]''' ||  || [[Concept Activation Vectors]]
|-
| '''[[CBAC]]''' ||  || [[Content-Based Access Control]]
|-
| '''[[CBI]]''' ||  || [[Counterfactual Bias Insertion]]
|-
| '''[[CBOW]]''' ||  || [[Continuous Bag of Words]]
|-
| '''[[CBR]]''' ||  || [[Case-Based Reasoning]]
|-
| '''[[CCA]]''' ||  || [[Canonical Correlation Analysis]]
|-
| '''[[CCC]]''' ||  || [[Canonical Correlation Coefficients]]
|-
| '''[[CCE]]''' ||   || [[Categorical cross-entropy]]
|-
| '''[[CDBN]]''' ||  || [[Convolutional Deep Belief Networks]]
|-
| '''[[CE]]''' ||  || [[Cross-Entropy]]
|-
| '''[[CEC]]''' ||  || [[Constant Error Carousel]]
|-
| '''[[CEGAR]]''' ||  || [[Counterexample-Guided Abstraction Refinement]]
|-
| '''[[CEGIS]]''' ||  || [[Counterexample-Guided Inductive Synthesis]]
|-
| '''[[CF]]''' ||  || [[Common Features]]
|-
| '''[[cGAN]]''' ||  || [[Conditional Generative Adversarial Network]]
|-
| '''[[CL]]''' ||  || [[Confident learning]]
|-
| '''[[CLIP]]''' ||  || [[Contrastive Language-Image Pre-Training]]
|-
| '''[[CLNN]]''' ||  || [[ConditionaL Neural Networks]]
|-
| '''[[CMA]]''' ||  || [[Covariance Matrix Adaptation]]
|-
| '''[[CMA-ES]]''' ||  || [[Covariance Matrix Adaptation Evolution Strategy]]
|-
| '''[[CMAC]]''' ||  || [[Cerebellar Model Articulation Controller]]
|-
| '''[[CMMs]]''' ||  || [[Conditional Markov Model]]
|-
| '''[[CNN]]''' ||  || [[Convolutional neural network]]
|-
| '''[[COIN-OR]]''' ||  || [[Computational Infrastructure for Operations Research]]
|-
| '''[[ConvNet]]''' ||  || [[Convolutional Neural Network]]
|-
| '''[[COTE]]''' ||  || [[Collective of Transformation-Based Ensembles]]
|-
| '''[[CP]]''' ||  || [[Constraint Programming]]
|-
| '''[[CPLEX]]''' ||  || [[An optimization solver (from "C" programming language and "simplex")]]
|-
| '''[[CPN]]''' ||  || [[Colored Petri Nets]]
|-
| '''[[CRBM]]''' ||  || [[Conditional Restricted Boltzmann Machine]]
|-
| '''[[CRF]]''' ||  || [[Conditional Random Field]]
|-
| '''[[CRFs]]''' ||  || [[Conditional Random Fields]]
|-
| '''[[CRNN]]''' ||  || [[Convolutional Recurrent Neural Network]]
|-
| '''[[CSLR]]''' ||  || [[Continuous Sign Language Recognition]]
|-
| '''[[CSP]]''' ||  || [[Constraint Satisfaction Problem]]
|-
| '''[[CSV]]''' ||  || [[Comma-separated values]]
|-
| '''[[CT-LSTM]]''' ||  || [[Convolutional Transformer Long Short-Term Memory]]
|-
| '''[[CTC]]''' ||  || [[Connectionist Temporal Classification]]
|-
| '''[[CTR]]''' ||  || [[Collaborative Topic Regression]]
|-
| '''[[CUDA]]''' ||  || [[Compute Unified Device Architecture]]
|-
| '''[[CV]]''' ||  || [[Computer Vision]], [[Cross validation]], [[Coefficient of variation]]
|-
| '''[[Cyc]]''' ||  || [[CycL and OpenCyc, a knowledge representation and reasoning system]]
|-
| '''[[D*]]''' ||  || [[Dynamic A* Search Algorithm]]
|-
| '''[[DAAF]]''' ||  || [[Data Augmentation and Auxiliary Feature]]
|-
| '''[[DaaS]]''' ||  || [[Data as a Service]]
|-
| '''[[DAE]]''' ||  || [[Denoising AutoEncoder or Deep AutoEncoder]]
|-
| '''[[DAML]]''' ||  || [[DARPA Agent Markup Language]]
|-
| '''[[DART]]''' ||  || [[Disturbance Aware Regression Tree]]
|-
| '''[[DBM]]''' ||  || [[Deep Boltzmann Machine]]
|-
| '''[[DBN]]''' ||  || [[Deep belief network]]
|-
| '''[[DBSCAN]]''' ||  || [[Density-Based Spatial Clustering of Applications with Noise]]
|-
| '''[[DCAI]]''' ||  || [[Data-centric AI]]
|-
| '''[[DCGAN]]''' ||  || [[Deep Convolutional Generative Adversarial Network]]
|-
| '''[[DCMDN]]''' ||  || [[Deep Convolutional Mixture Density Network]]
|-
| '''[[DDPG]]''' ||  || [[Deep Deterministic Policy Gradient]]
|-
| '''[[DE]]''' ||   || [[Differential evolution]]
|-
| '''[[DeconvNet]]''' ||  || [[DeConvolutional Neural Network]]
|-
| '''[[DeepLIFT]]''' ||  || [[Deep Learning Important FeaTures]]
|-
| '''[[DFS]]''' ||  || [[Depth-First Search]]
|-
| '''[[DL]]''' ||  || [[Deep learning]]
|-
| '''[[DNN]]''' ||  || [[Deep neural network]]
|-
| '''[[DP]]''' ||  || [[Dynamic Programming]]
|-
| '''[[DQN]]''' ||  || [[Deep Q-Network]]
|-
| '''[[DR]]''' ||  || [[Detection Rate]]
|-
| '''[[DRL]]''' ||  || [[Deep Reinforcement Learning]]
|-
| '''[[DS]]''' ||  || [[Data Science]]
|-
| '''[[DSN]]''' ||  || [[Deep Stacking Network]]
|-
| '''[[DSR]]''' ||  || [[Deep Symbolic Reinforcement Learning]]
|-
| '''[[DSS]]''' ||  || [[Decision Support System]]
|-
| '''[[DSW]]''' ||  || [[Data Stream Warehousing]]
|-
| '''[[DT]]''' ||  || [[Decision Tree]]
|-
| '''[[DTD]]''' ||  || [[Deep Taylor Decomposition]]
|-
| '''[[DWT]]''' ||  || [[Discrete Wavelet Transform]]
|-
| '''[[EDA]]''' ||  || [[Exploratory data analysis]]
|-
| '''[[EKF]]''' ||  || [[Extended Kalman Filter]]
|-
| '''[[ELECTRA]]''' ||  || [[Efficiently Learning an Encoder that Classifies Token Replacements Accurately]]
|-
| '''[[ELM]]''' ||  || [[Extreme Learning Machine]]
|-
| '''[[ELMo]]''' ||  || [[Embeddings from Language Models]]
|-
| '''[[ELU]]''' ||  || [[Exponential Linear Unit]]
|-
| '''[[EM]]''' ||  || [[Expectation maximization]]
|-
| '''[[EMD]]''' ||  || [[Entropy Minimization Discretization]]
|-
| '''[[ERNIE]]''' ||  || [[Enhanced Representation through kNowledge IntEgration]]
|-
| '''[[ES]]''' ||  || [[Evolution Strategies]]
|-
| '''[[ESN]]''' ||  || [[Echo State Network]]
|-
| '''[[ETL]]''' ||  || [[Extract, Transform, Load]]
|-
| '''[[ETL Pipeline]]''' ||  || [[Extract Transform Load Pipeline]]
|-
| '''[[EXT]]''' ||  || [[Extremely Randomized Trees]]
|-
| '''[[F1]]''' ||  || [[F1 Score (harmonic mean of precision and recall)]]
|-
| '''[[F1 Score]]''' ||  || [[Harmonic Precision-Recall Mean]]
|-
| '''[[FALA]]''' ||  || [[Finite Action-set Learning Automata]]
|-
| '''[[Fast R-CNN]]''' ||  || [[Fast Region-based Convolutional Neural Network]]
|-
| '''[[FC]]''' ||  || [[Fully-Connected]]
|-
| '''[[FC-CNN]]''' ||  || [[Fully Convolutional Convolutional Neural Network]]
|-
| '''[[FC-LSTM]]''' ||  || [[Fully Connected Long Short-Term Memory]]
|-
| '''[[FCM]]''' ||  || [[Fuzzy C-Means]]
|-
| '''[[FCN]]''' ||  || [[Fully Convolutional Network]]
|-
| '''[[FER]]''' ||  || [[Facial Expression Recognition]]
|-
| '''[[FFT]]''' ||  || [[Fast Fourier transform]]
|-
| '''[[FL]]''' ||  || [[Federated Learning]]
|-
| '''[[FLOP]]''' ||  || [[Floating Point Operations]]
|-
| '''[[FLOPS]]''' ||  || [[Floating Point Operations Per Second]]
|-
| '''[[FN]]''' ||  || [[False negative]]
|-
| '''[[FNN]]''' ||  || [[Feedforward Neural Network]]
|-
| '''[[FNR]]''' ||  || [[False negative rate]]
|-
| '''[[FOAF]]''' ||  || [[Friend of a Friend (ontology)]]
|-
| '''[[FP]]''' ||  || [[False positive]]
|-
| '''[[FPGA]]''' ||  || [[Field-Programmable Gate Array]]
|-
| '''[[FPN]]''' ||  || [[Feature Pyramid Network]]
|-
| '''[[FPR]]''' ||   || [[False positive rate]]
|-
| '''[[FST]]''' ||  || [[Finite state transducer]]
|-
| '''[[FSL]]''' ||  || [[Few-Shot Learning]]
|-
| '''[[FWA]]''' ||  || [[Fireworks Algorithm]]
|-
| '''[[FWIoU]]''' ||  || [[Frequency Weighted Intersection over Union]]
|-
| '''[[GA]]''' ||  || [[Genetic Algorithm]]
|-
| '''[[GALE]]''' ||  || [[Global Aggregations of Local Explanations]]
|-
| '''[[GAM]]''' ||  || [[Generalized Additive Model]]
|-
| '''[[GAN]]''' ||  || [[Generative Adversarial Network]]
|-
| '''[[GAP]]''' ||  || [[Global Average Pooling]]
|-
| '''[[GBDT]]''' ||  || [[Gradient Boosted Decision Tree]]
|-
| '''[[GBM]]''' ||  || [[Gradient Boosting Machine]]
|-
| '''[[GBRCN]]''' ||  || [[Gradient-Boosting Random Convolutional Network]]
|-
| '''[[GD]]''' ||  || [[Gradient descent]]
|-
| '''[[GEBI]]''' ||  || [[Global Explanation for Bias Identification]]
|-
| '''[[GFNN]]''' ||   || [[Gradient frequency neural network]]
|-
| '''[[GLCM]]''' ||  || [[Gray Level Co-occurrence Matrix]]
|-
| '''[[GLM]]''' ||  || [[Generalized Linear Model]]
|-
| '''[[GLOM]]''' ||  || [[A neural network architecture by Geoffrey Hinton]]
|-
| '''[[Gloss2Text]]''' ||  || [[A task of transforming raw glosses into meaningful sentences.]]
|-
| '''[[GloVE]]''' ||   || [[Global Vectors]]
|-
| '''[[GLPK]]''' ||  || [[GNU Linear Programming Kit]]
|-
| '''[[GLUE]]''' ||  || [[General Language Understanding Evaluation]]
|-
| '''[[GMM]]''' ||  || [[Gaussian mixture model]]
|-
| '''[[GP]]''' ||  || [[Genetic Programming]]
|-
| '''[[GPR]]''' ||   || [[Gaussian process regression]]
|-
| '''[[GPT]]''' ||  || [[Generative Pre-Training]]
|-
| '''[[GPU]]''' ||  || [[Graphics processing unit]]
|-
| '''[[GradCAM]]''' ||  || [[GRADient-weighted Class Activation Mapping]]
|-
| '''[[GRU]]''' ||  || [[Gated recurrent unit]]
|-
| '''[[Gurobi]]''' ||  || [[An optimization solver (named after its founders, Zonghao Gu, Edward Rothberg, and Robert Bixby)]]
|-
| '''[[HamNoSys]]''' ||  || [[Hamburg Sign Language Notation System]]
|-
| '''[[HAN]]''' ||   || [[Hierarchical Attention Networks]]
|-
| '''[[HC]]''' ||  || [[Hierarchical Clustering]]
|-
| '''[[HCA]]''' ||  || [[Hierarchical Clustering Analysis]]
|-
| '''[[HDP]]''' ||  || [[Hierarchical Dirichlet process]]
|-
| '''[[HF]]''' ||   || [[Hugging Face]]
|-
| '''[[HHDS]]''' ||  || [[HipHop Dataset]]
|-
| '''[[hLDA]]''' ||  || [[Hierarchical Latent Dirichlet allocation]]
|-
| '''[[HMM]]''' ||  || [[Hidden Markov Model]]
|-
| '''[[HNN]]''' ||  || [[Hopfield Neural Network]]
|-
| '''[[HOG]]''' ||  || [[Histogram of Oriented Gradients (feature descriptor)]]
|-
| '''[[Hopfield]]''' ||  || [[Hopfield Network]]
|-
| '''[[HPC]]''' ||  || [[High Performance Computing]]
|-
| '''[[HRED]]''' ||  || [[Hierarchical Recurrent Encoder-Decoder]]
|-
| '''[[HRI]]''' ||  || [[Human-Robot Interaction]]
|-
| '''[[HSMM]]''' ||  || [[Hidden Semi-Markov Model]]
|-
| '''[[HTM]]''' ||  || [[Hierarchical Temporal Memory]]
|-
| '''[[i.i.d.]]''' ||  || [[Independently and identically distributed]]
|-
| '''[[IaaS]]''' ||  || [[Infrastructure as a Service]]
|-
| '''[[ICA]]''' ||  || [[Independent component analysis]]
|-
| '''[[ICP]]''' ||  || [[Iterative Closest Point (point cloud registration)]]
|-
| '''[[ID3]]''' ||  || [[Iterative Dichotomiser 3]]
|-
| '''[[IDA*]]''' ||  || [[Iterative Deepening A* Search Algorithm]]
|-
| '''[[IDR]]''' ||  || [[Input dependence rate]]
|-
| '''[[IG]]''' ||  || [[Invariant Generation]]
|-
| '''[[IID]]''' ||  || [[Independently and identically distributed]]
|-
| '''[[IIR]]''' ||  || [[Input independence rate]]
|-
| '''[[ILASP]]''' ||  || [[Inductive Learning of Answer Set Programs]]
|-
| '''[[ILP]]''' ||  || [[Integer Linear Programming]]
|-
| '''[[INFD]]''' ||  || [[Explanation Infidelity]]
|-
| '''[[IoA]]''' ||  || [[Internet of Agents]]
|-
| '''[[IoE]]''' ||  || [[Internet of Everything]]
|-
| '''[[IoT]]''' ||  || [[Internet of Things]]
|-
| '''[[IoU]]''' ||  || [[Jaccard index (intersection over union)]]
|-
| '''[[IR]]''' ||  || [[Information Retrieval]]
|-
| '''[[ISIC]]''' ||  || [[International Skin Imaging Collaboration]]
|-
| '''[[IVR]]''' ||  || [[Interactive Voice Response]]
|-
| '''[[K-Means]]''' ||  || [[K-Means Clustering]]
|-
| '''[[KB]]''' ||  || [[Knowledge Base]]
|-
| '''[[KDE]]''' ||  || [[Kernel Density Estimation]]
|-
| '''[[KF]]''' ||  || [[Kalman Filter]]
|-
| '''[[kFCV]]''' ||  || [[K-fold cross validation]]
|-
| '''[[KL]]''' ||  || [[Kullback–Leibler divergence]]
|-
| '''[[KNN]]''' ||  || [[K-nearest neighbors]]
|-
| '''[[KR]]''' ||  || [[Knowledge Representation]]
|-
| '''[[KRR]]''' ||  || [[Kernel Ridge Regression]]
|-
| '''[[LAION]]''' ||  || [[Large-scale Artificial Intelligence Open Network]]
|-
| '''[[LaMDA]]''' ||  || [[Language Models for Dialog Applications]]
|-
| '''[[LBP]]''' ||  || [[Local Binary Pattern (texture descriptor)]]
|-
| '''[[LDA]]''' ||  || [[Latent Dirichlet Allocation]]
|-
| '''[[LDADE]]''' ||  || [[Latent Dirichlet Allocation Differential Evolution]]
|-
| '''[[LEPOR]]''' ||  || [[Language Evaluation Portal]]
|-
| '''[[LightGBM]]''' ||  || [[Light Gradient Boosting Machine]]
|-
| '''[[LIME]]''' ||  || [[Local Interpretable Model-agnostic Explanations]]
|-
| '''[[LINGO]]''' ||  || [[A software for linear, nonlinear, and integer optimization]]
|-
| '''[[LLM]]''' ||  || [[Large language model]]
|-
| '''[[LLS]]''' ||  || [[Linear least squares]]
|-
| '''[[LMNN]]''' ||  || [[Large Margin Nearest Neighbor]]
|-
| '''[[LoLM]]''' ||  || [[Lots of Little Models]]
|-
| '''[[LP]]''' ||  || [[Linear Programming]]
|-
| '''[[LPAQA]]''' ||  || [[Language model Prompt And Query Archive]]
|-
| '''[[LRP]]''' ||  || [[Layer-wise Relevance Propagation]]
|-
| '''[[LSA]]''' ||  || [[Latent semantic analysis]]
|-
| '''[[LSI]]''' ||  || [[Latent Semantic Indexing]]
|-
| '''[[LSTM]]''' ||  || [[Long short-term memory]]
|-
| '''[[LSTM-CRF]]''' ||  || [[Long Short-Term Memory with Conditional Random Field]]
|-
| '''[[LTR]]''' ||  || [[Learning To Rank]]
|-
| '''[[LVQ]]''' ||  || [[Learning Vector Quantization]]
|-
| '''[[M2M]]''' ||  || [[Machine to Machine]]
|-
| '''[[MADE]]''' ||  || [[Masked Autoencoder for Distribution Estimation]]
|-
| '''[[MAE]]''' ||  || [[Mean absolute error]]
|-
| '''[[MAF]]''' ||  || [[Masked Autoregressive Flows]]
|-
| '''[[MAIRL]]''' ||  || [[Multi-Agent Inverse Reinforcement Learning]]
|-
| '''[[MAP]]''' ||  || [[Maximum A Posteriori (MAP) Estimation]]
|-
| '''[[MAPE]]''' ||  || [[Mean absolute percentage error]]
|-
| '''[[MARL]]''' ||  || [[Multi-Agent Reinforcement Learning]]
|-
| '''[[MART]]''' ||  || [[Multiple Additive Regression Tree]]
|-
| '''[[MaxEnt]]''' ||  || [[Maximum Entropy]]
|-
| '''[[MAXSAT]]''' ||  || [[Maximum Satisfiability Problem]]
|-
| '''[[MCLNN]]''' ||  || [[Masked ConditionaL Neural Networks]]
|-
| '''[[MCMC]]''' ||  || [[Markov Chain Monte Carlo]]
|-
| '''[[MCS]]''' ||  || [[Model contrast score]]
|-
| '''[[MCTS]]''' ||  || [[Monte Carlo Tree Search]]
|-
| '''[[MDL]]''' ||  || [[Minimum description length (MDL) principle]]
|-
| '''[[MDN]]''' ||  || [[Mixture Density Network]]
|-
| '''[[MDP]]''' ||  || [[Markov Decision Process]]
|-
| '''[[MDRNN]]''' ||  || [[Multidimensional recurrent neural network]]
|-
| '''[[MER]]''' ||  || [[Music Emotion Recognition]]
|-
| '''[[METEOR]]''' ||  || [[Metric for Evaluation of Translation with Explicit ORdering]]
|-
| '''[[MIL]]''' ||  || [[Multiple Instance Learning]]
|-
| '''[[MILP]]''' ||  || [[Mixed-Integer Linear Programming]]
|-
| '''[[MINT]]''' ||  || [[Mutual Information based Transductive Feature Selection]]
|-
| '''[[MIoU]]''' ||  || [[Mean Intersection over Union]]
|-
| '''[[MIP]]''' ||  || [[Mixed-Integer Programming]]
|-
| '''[[ML]]''' ||  || [[Machine learning]]
|-
| '''[[MLaaS]]''' ||  || [[Machine Learning as a Service]]
|-
| '''[[MLE]]''' ||  || [[Maximum Likelihood Estimation]]
|-
| '''[[MLLM]]''' ||  || [[Multimodal large language model]]
|-
| '''[[MLM]]''' ||  || [[Music Language Models]]
|-
| '''[[MLP]]''' ||  || [[Multi-Layer Perceptron]]
|-
| '''[[MMI]]''' ||  || [[Maximum Mutual Information]]
|-
| '''[[MNIST]]''' ||  || [[Modified National Institute of Standards and Technology]]
|-
| '''[[MOEA]]''' ||  || [[Multi-Objective Evolutionary Algorithm]]
|-
| '''[[MPA]]''' ||  || [[Mean Pixel Accuracy]]
|-
| '''[[MR]]''' ||  || [[Mixed Reality]]
|-
| '''[[MRF]]''' ||  || [[Markov Random Field]]
|-
| '''[[MRR]]''' ||  || [[Mean Reciprocal Rank]]
|-
| '''[[MRS]]''' ||  || [[Music Recommender System]]
|-
| '''[[MSDAE]]''' ||  || [[Modified Sparse Denoising Autoencoder]]
|-
| '''[[MSE]]''' ||  || [[Mean squared error]]
|-
| '''[[MSR]]''' ||  || [[Music Style Recognition]]
|-
| '''[[MTL]]''' ||  || [[Multi-Task Learning]]
|-
| '''[[NARX]]''' ||  || [[Nonlinear AutoRegressive with eXogenous input (neural network model)]]
|-
| '''[[NAS]]''' ||  || [[Neural Architecture Search]]
|-
| '''[[NB]]''' ||  || [[Naïve Bayes]]
|-
| '''[[NBKE]]''' ||  || [[Naïve Bayes with Kernel Estimation]]
|-
| '''[[NDCG]]''' ||  || [[Normalized Discounted Cumulative Gain]]
|-
| '''[[NE]]''' ||  || [[Nash Equilibrium (in game theory)]]
|-
| '''[[NEAT]]''' ||  || [[NeuroEvolution of Augmenting Topologies]]
|-
| '''[[NER]]''' ||  || [[Named entity recognition]]
|-
| '''[[NERQ]]''' ||  || [[Named Entity Recognition in Query]]
|-
| '''[[NEST]]''' ||  || [[Neural Simulation Tool]]
|-
| '''[[NF]]''' ||  || [[Normalizing Flow]]
|-
| '''[[NFL]]''' ||  || [[No Free Lunch (NFL) theorem]]
|-
| '''[[NISQ]]''' ||  || [[Noisy Intermediate-Scale Quantum (quantum computing)]]
|-
| '''[[NLG]]''' ||  || [[Natural Language Generation]]
|-
| '''[[NLP]]''' ||  || [[Natural Language Processing]]
|-
| '''[[NLT]]''' ||  || [[Neural Machine Translation]]
|-
| '''[[NLU]]''' ||  || [[Natural Language Understanding]]
|-
| '''[[NMF]]''' ||  || [[Non-negative matrix factorization]]
|-
| '''[[NMS]]''' ||  || [[Non Maximum Suppression]]
|-
| '''[[NMT]]''' ||  || [[Neural Machine Translation]]
|-
| '''[[NN]]''' ||  || [[Neural network]]
|-
| '''[[NNMODFF]]''' ||  || [[Neural Network based Multi-Onset Detection Function Fusion]]
|-
| '''[[NPE]]''' ||  || [[Neural Physical Engine]]
|-
| '''[[NRMSE]]''' ||  || [[Normalized RMSE]]
|-
| '''[[NSGA-II]]''' ||  || [[Non-dominated Sorting Genetic Algorithm II]]
|-
| '''[[NST]]''' ||  || [[Neural style transfer]]
|-
| '''[[NTM]]''' ||  || [[Neural Turing Machine]]
|-
| '''[[NuSVC]]''' ||  || [[Nu-Support Vector Classification]]
|-
| '''[[NuSVR]]''' ||  || [[Nu-Support Vector Regression]]
|-
| '''[[OBM]]''' ||  || [[One Big Model]]
|-
| '''[[OCR]]''' ||  || [[Optical character recognition]]
|-
| '''[[OD]]''' ||  || [[Object Detection]]
|-
| '''[[ODF]]''' ||  || [[Onset Detection Function]]
|-
| '''[[OIL]]''' ||  || [[Ontology Inference Layer]]
|-
| '''[[OLR]]''' ||  || [[Ordinary Linear Regression]]
|-
| '''[[OLS]]''' ||  || [[Ordinary Least Squares]]
|-
| '''[[OMNeT++]]''' ||  || [[Objective Modular Network Testbed in C++]]
|-
| '''[[OMR]]''' ||  || [[Optical Mark Recognition]]
|-
| '''[[OOF]]''' ||  || [[Out-of-fold]]
|-
| '''[[ORB]]''' ||  || [[Oriented FAST and Rotated BRIEF (feature descriptor)]]
|-
| '''[[OWL]]''' ||  || [[Web Ontology Language]]
|-
| '''[[PA]]''' ||  || [[Pixel Accuracy]]
|-
| '''[[PaaS]]''' ||  || [[Platform as a Service]]
|-
| '''[[PACO]]''' ||  || [[Poisson Additive Co-Clustering]]
|-
| '''[[PaLM]]''' ||  || [[Pathways Language Model]]
|-
| '''[[PBAC]]''' ||  || [[Policy-Based Access Control]]
|-
| '''[[PCA]]''' ||  || [[Principal component analysis]]
|-
| '''[[PCL]]''' ||  || [[Point Cloud Library (3D perception)]]
|-
| '''[[PECS]]''' ||  || [[Physics Engine for Collaborative Simulation]]
|-
| '''[[PEGASUS]]''' ||  || [[Pre-training with Extracted Gap-Sentences for Abstractive Summarization]]
|-
| '''[[PF]]''' ||  || [[Particle Filter]]
|-
| '''[[PFE]]''' ||  || [[Probabilistic facial embedding]]
|-
| '''[[PLSI]]''' ||  || [[Probabilistic Latent Semantic Indexing]]
|-
| '''[[PM]]''' ||  || [[Project Manager]]
|-
| '''[[PMF]]''' ||  || [[Probabilistic Matrix Factorization]]
|-
| '''[[PMI]]''' ||  || [[Pointwise Mutual Information]]
|-
| '''[[PNN]]''' ||  || [[Probabilistic Neural Network]]
|-
| '''[[POC]]''' ||  || [[Proof of Concept]]
|-
| '''[[POMDP]]''' ||  || [[Partially Observable Markov Decision Process]]
|-
| '''[[POS]]''' ||  || [[Part of Speech (POS) Tagging]]
|-
| '''[[POT]]''' ||  || [[Partially Observable Tree (decision-making under uncertainty)]]
|-
| '''[[PPL]]''' ||  || [[Perplexity (a measure of language model performance)]]
|-
| '''[[PPMI]]''' ||  || [[Positive Pointwise Mutual Information]]
|-
| '''[[PPO]]''' ||  || [[Proximal Policy Optimization]]
|-
| '''[[PReLU]]''' ||  || [[Parametric Rectified Linear Unit]]
|-
| '''[[PRM]]''' ||  || [[Probabilistic Roadmap (motion planning algorithm)]]
|-
| '''[[PSO]]''' ||  || [[Particle Swarm Optimization]]
|-
| '''[[PU]]''' ||  || [[Positive Unlabeled]]
|-
| '''[[PYTM]]''' ||  || [[Pitman-Yor Topic Modeling]]
|-
| '''[[QA]]''' ||  || [[Question Answering]]
|-
| '''[[QAOA]]''' ||  || [[Quantum Approximate Optimization Algorithm]]
|-
| '''[[QAP]]''' ||  || [[Quadratic Assignment Problem]]
|-
| '''[[QEC]]''' ||  || [[Quantum Error Correction]]
|-
| '''[[QFT]]''' ||  || [[Quantum Fourier Transform]]
|-
| '''[[QIP]]''' ||  || [[Quantum Information Processing]]
|-
| '''[[QKD]]''' ||  || [[Quantum Key Distribution]]
|-
| '''[[QML]]''' ||  || [[Quantum Machine Learning]]
|-
| '''[[QNN]]''' ||  || [[Quantum Neural Network]]
|-
| '''[[QP]]''' ||  || [[Quadratic Programming]]
|-
| '''[[QPE]]''' ||  || [[Quantum Phase Estimation]]
|-
| '''[[R-CNN]]''' ||  || [[Region-based Convolutional Neural Network]]
|-
| '''[[R2]]''' ||  || [[R-squared]]
|-
| '''[[RandNN]]''' ||  || [[Random Neural Network]]
|-
| '''[[RANSAC]]''' ||  || [[RANdom SAmple Consensus]]
|-
| '''[[RBAC]]''' ||  || [[Rule-Based Access Control]]
|-
| '''[[RBF]]''' ||  || [[Radial Basis Function]]
|-
| '''[[RBFNN]]''' ||  || [[Radial Basis Function Neural Network]]
|-
| '''[[RBM]]''' ||  || [[Restricted Boltzmann Machine]]
|-
| '''[[RDF]]''' ||  || [[Resource Description Framework]]
|-
| '''[[REALM]]''' ||  || [[Retrieval-Augmented Language Model Pre-Training]]
|-
| '''[[ReCAPTCHA]]''' ||  || [[Reverse CAPTCHA]]
|-
| '''[[ReLU]]''' ||  || [[Rectified Linear Unit]]
|-
| '''[[REPTree]]''' ||  || [[Reduced Error Pruning Tree]]
|-
| '''[[RETRO]]''' ||  || [[Retrieval Enhanced Transformer]]
|-
| '''[[RF]]''' ||  || [[Random forest]]
|-
| '''[[RFE]]''' ||  || [[Recursive Feature Elimination]]
|-
| '''[[RGB]]''' ||  || [[Red Green Blue color model]]
|-
| '''[[RICNN]]''' ||  || [[Rotation Invariant Convolutional Neural Network]]
|-
| '''[[RIM]]''' ||  || [[Recurrent Inference Machines]]
|-
| '''[[RIPPER]]''' ||  || [[Repeated Incremental Pruning to Produce Error Reduction]]
|-
| '''[[RISE]]''' ||  || [[Random Interval Spectral Ensemble]]
|-
| '''[[RL]]''' ||  || [[Reinforcement learning]]
|-
| '''[[RLFM]]''' ||  || [[Regression based latent factors]]
|-
| '''[[RLHF]]''' ||  || [[Reinforcement Learning from Human Feedback]]
|-
| '''[[RMSE]]''' ||  || [[Root mean squared error]]
|-
| '''[[RMSLE]]''' ||  || [[Root mean squared logarithmic error]]
|-
| '''[[RMSprop]]''' ||  || [[Root Mean Square Propagation]]
|-
| '''[[RNN]]''' ||  || [[Recurrent neural network]]
|-
| '''[[RNNLM]]''' ||  || [[Recurrent Neural Network Language Model]]
|-
| '''[[RoBERTa]]''' ||   || [[Robustly Optimized BERT Pretraining Approach]]
|-
| '''[[ROC]]''' ||  || [[Receiver operating characteristic]]
|-
| '''[[ROI]]''' ||  || [[Region Of Interest]]
|-
| '''[[ROS]]''' ||  || [[Robot Operating System]]
|-
| '''[[ROUGE]]''' ||  || [[Recall-Oriented Understudy for Gisting Evaluation (NLP metric)]]
|-
| '''[[RPA]]''' ||  || [[Robotic Process Automation]]
|-
| '''[[RR]]''' ||  || [[Ridge Regression]]
|-
| '''[[RRT]]''' ||  || [[Rapidly-exploring Random Tree (motion planning algorithm)]]
|-
| '''[[RSI]]''' ||  || [[Recursive self-improvement]]
|-
| '''[[RTRL]]''' ||  || [[Real-Time Recurrent Learning]]
|-
| '''[[SA]]''' ||  || [[Simulated Annealing]]
|-
| '''[[SaaS]]''' ||  || [[Software as a Service]]
|-
| '''[[SAC]]''' ||  || [[Soft Actor-Critic]]
|-
| '''[[SAE]]''' ||  || [[Stacked Autoencoder]]
|-
| '''[[SARSA]]''' ||  || [[State-Action-Reward-State-Action]]
|-
| '''[[SAT]]''' ||  || [[Satisfiability Problem]]
|-
| '''[[SBAC]]''' ||  || [[Situation-Based Access Control]]
|-
| '''[[SBM]]''' ||  || [[Stochastic block model]]
|-
| '''[[SBO]]''' ||  || [[Structured Bayesian optimization]]
|-
| '''[[SBSE]]''' ||  || [[Search-based software engineering]]
|-
| '''[[SCH]]''' ||  || [[Stochastic convex hull]]
|-
| '''[[SCIP]]''' ||  || [[Solving Constraint Integer Programs]]
|-
| '''[[SDAE]]''' ||  || [[Stacked Denoising Autoencoder]]
|-
| '''[[seq2seq]]''' ||  || [[Sequence to Sequence Learning]]
|-
| '''[[SER]]''' ||  || [[Sentence Error Rate]]
|-
| '''[[SGBoost]]''' ||  || [[Stochastic Gradient Boosting]]
|-
| '''[[SGD]]''' ||  || [[Stochastic gradient descent]]
|-
| '''[[SGVB]]''' ||  || [[Stochastic Gradient Variational Bayes]]
|-
| '''[[SHAP]]''' ||  || [[SHapley Additive exPlanation]]
|-
| '''[[SHLLE]]''' ||  || [[Supervised Hessian Locally Linear Embedding]]
|-
| '''[[SIFT]]''' ||  || [[Scale-Invariant Feature Transform (feature detection)]]
|-
| '''[[Sign2(Gloss+Text)]]''' ||  || [[Sign to Gloss and Text]]
|-
| '''[[Sign2Gloss]]''' ||  || [[A one-to-one translation from a single sign to a single gloss]]
|-
| '''[[Sign2Text]]''' ||  || [[The task of translating sign language directly into spoken language]]
|-
| '''[[SL]]''' ||  || [[Supervised learning]]
|-
| '''[[SLAM]]''' ||  || [[Simultaneous Localization and Mapping]]
|-
| '''[[SLDS]]''' ||  || [[Switching Linear Dynamical System]]
|-
| '''[[SLP]]''' ||  || [[Single-Layer Perceptron]]
|-
| '''[[SLRT]]''' ||  || [[Sign Language Recognition Transformer]]
|-
| '''[[SLT]]''' ||  || [[Sign Language Translation]]
|-
| '''[[SLTT]]''' ||  || [[Sign Language Translation Transformer]]
|-
| '''[[SMA*]]''' ||  || [[Simplified Memory-bounded A* Search Algorithm]]
|-
| '''[[SMBO]]''' ||  || [[Sequential Model-Based Optimization]]
|-
| '''[[SMO]]''' ||  || [[Sequential Minimal Optimization]]
|-
| '''[[SMOTE]]''' ||  || [[Synthetic Minority Over-sampling Technique]]
|-
| '''[[SNN]]''' ||  || [[Sparse Neural Network]]
|-
| '''[[SOM]]''' ||  || [[Self-Organizing Map]]
|-
| '''[[SOTA]]''' ||  || [[State of the Art]]
|-
| '''[[SPARQL]]''' ||  || [[SPARQL Protocol and RDF Query Language]]
|-
| '''[[Spiking NN]]''' ||  || [[Spiking Neural Network]]
|-
| '''[[SPM]]''' ||  || [[SentencePiece Model (subword tokenization)]]
|-
| '''[[SpRay]]''' ||  || [[Spectral Relevance Analysis]]
|-
| '''[[SSD]]''' ||  || [[Single-Shot Detector]]
|-
| '''[[SSL]]''' ||  || [[Self-Supervised Learning]]
|-
| '''[[SSVM]]''' ||  || [[Smooth support vector machine]]
|-
| '''[[ST]]''' ||  || [[Style transfer]]
|-
| '''[[STaR]]''' ||  || [[Self-Taught Reasoner]]
|-
| '''[[STDA]]''' ||  || [[Style Transfer Data Augmentation]]
|-
| '''[[STDP]]''' ||  || [[Spike Timing-Dependent Plasticity]]
|-
| '''[[STL]]''' ||  || [[Self-Taught Learning]]
|-
| '''[[SUMO]]''' ||  || [[Simulation of Urban Mobility]]
|-
| '''[[SURF]]''' ||  || [[Speeded-Up Robust Features (feature detection)]]
|-
| '''[[SVC]]''' ||  || [[Support Vector Classification]]
|-
| '''[[SVD]]''' ||  || [[Singular Value Decomposition]], [[Singing Voice Detection]]
|-
| '''[[SVM]]''' ||  || [[Support vector machine]]
|-
| '''[[SVR]]''' ||  || [[Support Vector Regression]]
|-
| '''[[SVS]]''' ||  || [[Singing Voice Separation]]
|-
| '''[[SWI-Prolog]]''' ||  || [[Semantic Web Interface for Prolog]]
|-
| '''[[t-SNE]]''' ||  || [[t-distributed stochastic neighbor embedding]]
|-
| '''[[T5]]''' ||  || [[Text-To-Text Transfer Transformer]]
|-
| '''[[TD]]''' ||  || [[Temporal Difference]]
|-
| '''[[TDA]]''' ||  || [[Targeted Data Augmentation]]
|-
| '''[[TDE]]''' ||  || [[Time Domain Ensemble]]
|-
| '''[[tf-idf]]''' ||  || [[term frequency–inverse document frequency]]
|-
| '''[[TGAN]]''' ||  || [[Temporal Generative Adversarial Network]]
|-
| '''[[THAID]]''' ||  || [[THeta Automatic Interaction Detection]]
|-
| '''[[TINT]]''' ||  || [[Tree-Interpreter]]
|-
| '''[[TL]]''' ||  || [[Transfer Learning]]
|-
| '''[[TLFN]]''' ||  || [[Time-Lagged Feedforward Neural Network]]
|-
| '''[[TN]]''' ||   || [[True negative]]
|-
| '''[[TNR]]''' ||  || [[True negative rate]]
|-
| '''[[ToM]]''' ||  || [[Theory of Mind]]
|-
| '''[[TP]]''' ||  || [[True positive]]
|-
| '''[[TPOT]]''' ||  || [[Tree-based Pipeline Optimization Tool]]
|-
| '''[[TPR]]''' ||  || [[True positive rate]]
|-
| '''[[TPU]]''' ||  || [[Tensor Processing Unit]]
|-
| '''[[TRPO]]''' ||  || [[Trust Region Policy Optimization]]
|-
| '''[[TS]]''' ||  || [[Tabu Search]]
|-
| '''[[TSF]]''' ||  || [[Time Series Forest]]
|-
| '''[[TSP]]''' ||  || [[Traveling Salesman Problem]]
|-
| '''[[TTS]]''' ||  || [[Text-to-Speech]]
|-
| '''[[UCT]]''' ||  || [[Upper Confidence bounds applied to Trees (Monte Carlo Tree Search variant)]]
|-
| '''[[UDA]]''' ||  || [[Unsupervised Data Augmentation]]
|-
| '''[[UKF]]''' ||  || [[Unscented Kalman Filter]]
|-
| '''[[UL]]''' ||  || [[Unsupervised learning]]
|-
| '''[[ULMFiT]]''' ||  || [[Universal Language Model Fine-Tuning]]
|-
| '''[[UMAP]]''' ||  || [[Uniform Manifold Approximation and Projection]]
|-
| '''[[V-Net]]''' ||  || [[Volumetric Convolutional neural network]]
|-
| '''[[VAD]]''' ||  || [[Voice Activity Detection]]
|-
| '''[[VAE]]''' ||  || [[Variational AutoEncoder]]
|-
| '''[[VGG]]''' ||  || [[Visual Geometry Group]]
|-
| '''[[VHRED]]''' ||  || [[Variational Hierarchical Recurrent Encoder-Decoder]]
|-
| '''[[VISSIM]]''' ||  || [[A traffic simulation software (from "Verkehr In Städten – SIMulationsmodell")]]
|-
| '''[[ViT]]''' ||  || [[Vision Transformer]]
|-
| '''[[VPNN]]''' ||  || [[Vector Product Neural Network]]
|-
| '''[[VQ-VAE]]''' ||  || [[Vector Quantized Variational Autoencoders]]
|-
| '''[[VQE]]''' ||  || [[Variational Quantum Eigensolver]]
|-
| '''[[VR]]''' ||  || [[Virtual reality]]
|-
| '''[[VRP]]''' ||  || [[Vehicle Routing Problem]]
|-
| '''[[VUI]]''' ||  || [[Voice User Interface]]
|-
| '''[[WCSP]]''' ||  || [[Weighted Constraint Satisfaction Problem]]
|-
| '''[[WER]]''' ||  || [[Word Error Rate]]
|-
| '''[[WFST]]''' ||  || [[Weighted finite-state transducer]]
|-
| '''[[WGAN]]''' ||  || [[Wasserstein Generative Adversarial Network]]
|-
| '''[[WMA]]''' ||  || [[Weighted Majority Algorithm]]
|-
| '''[[WPE]]''' ||  || [[Weighted Prediction Error]]
|-
| '''[[XAI]]''' ||  || [[Explainable Artificial Intelligence]]
|-
| '''[[XGBoost]]''' ||  || [[eXtreme Gradient Boosting]]
|-
| '''[[XOR]]''' ||  || [[Exclusive OR (a common problem in neural networks)]]
|-
| '''[[YOLO]]''' ||  || [[You Only Look Once]]
|-
| '''[[ZSL]]''' ||  || [[Zero-Shot Learning]]
|-
|}

[[Category:Guides]]