Machine learning terms
{{see also|Machine learning|Artificial intelligence terms}}
==Fundamentals==
*[[accuracy]]
*[[activation function]]
*[[artificial intelligence]]
*[[AUC (Area under the ROC curve)]]
*[[backpropagation]]
*[[batch]]
*[[batch size]]
*[[bias (ethics/fairness)]]
*[[bias (math) or bias term]]
*[[binary classification]]
*[[bucketing]]
*[[categorical data]]
*[[class]]
*[[classification model]]
*[[classification threshold]]
*[[class-imbalanced dataset]]
*[[clipping]]
*[[confusion matrix]]
*[[continuous feature]]
*[[convergence]]
*[[DataFrame]]
*[[data set or dataset]]
*[[deep model]]
*[[dense feature]]
*[[depth]]
*[[discrete feature]]
*[[dynamic]]
*[[dynamic model]]
*[[early stopping]]
*[[embedding layer]]
*[[epoch]]
*[[example]]
*[[false negative (FN)]]
*[[false positive (FP)]]
*[[false positive rate (FPR)]]
*[[feature]]
*[[feature cross]]
*[[feature engineering]]
*[[feature set]]
*[[feature vector]]
*[[feedback loop]]
*[[generalization]]
*[[generalization curve]]
*[[gradient descent]]
*[[ground truth]]
*[[hidden layer]]
*[[hyperparameter]]
*[[independently and identically distributed (i.i.d)]]
*[[inference]]
*[[input layer]]
*[[interpretability]]
*[[iteration]]
*[[L0 regularization]]
*[[L1 loss]]
*[[L1 regularization]]
*[[L2 loss]]
*[[L2 regularization]]
*[[label]]
*[[labeled example]]
*[[lambda]]
*[[layer]]
*[[learning rate]]
*[[linear]]
*[[linear model]]
*[[linear regression]]
*[[logistic regression]]
*[[Log Loss]]
*[[log-odds]]
*[[loss]]
*[[loss curve]]
*[[loss function]]
*[[machine learning]]
*[[majority class]]
*[[mini-batch]]
*[[minority class]]
*[[model]]
*[[multi-class classification]]
*[[negative class]]
*[[neural network]]
*[[neuron]]
*[[node (neural network)]]
*[[nonlinear]]
*[[nonstationarity]]
*[[normalization]]
*[[numerical data]]
*[[offline]]
*[[offline inference]]
*[[one-hot encoding]]
*[[one-vs.-all]]
*[[online]]
*[[online inference]]
*[[output layer]]
*[[overfitting]]
*[[pandas]]
*[[parameter]]
*[[positive class]]
*[[post-processing]]
*[[prediction]]
*[[proxy labels]]
*[[rater]]
*[[Rectified Linear Unit (ReLU)]]
*[[regression model]]
*[[regularization]]
*[[regularization rate]]
*[[ReLU]]
*[[ROC (receiver operating characteristic) Curve]]
*[[Root Mean Squared Error (RMSE)]]
*[[sigmoid function]]
*[[softmax]]
*[[sparse feature]]
*[[sparse representation]]
*[[sparse vector]]
*[[squared loss]]
*[[static]]
*[[static inference]]
*[[stationarity]]
*[[stochastic gradient descent (SGD)]]
*[[supervised machine learning]]
*[[synthetic feature]]
*[[test loss]]
*[[training]]
*[[training loss]]
*[[training-serving skew]]
*[[training set]]
*[[true negative (TN)]]
*[[true positive (TP)]]
*[[true positive rate (TPR)]]
*[[underfitting]]
*[[unlabeled example]]
*[[unsupervised machine learning]]
*[[validation]]
*[[validation loss]]
*[[validation set]]
*[[weight]]
*[[weighted sum]]
*[[Z-score normalization]]
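
Several of the classification terms above fit together arithmetically: the four cells of a [[confusion matrix]] determine [[accuracy]], the [[true positive rate (TPR)]], and the [[false positive rate (FPR)]]. A minimal Python sketch with made-up counts, illustrative only and not tied to any particular library:

<syntaxhighlight lang="python">
# Hypothetical confusion-matrix counts for a binary classifier.
tp, fp, tn, fn = 40, 10, 45, 5  # true/false positives and negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of correct predictions
tpr = tp / (tp + fn)                        # true positive rate (recall)
fpr = fp / (fp + tn)                        # false positive rate (ROC x-axis)

print(f"accuracy={accuracy:.2f}, TPR={tpr:.2f}, FPR={fpr:.2f}")
</syntaxhighlight>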
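
In the same spirit, one step of [[gradient descent]] for single-feature [[linear regression]] under [[squared loss]] ties together [[weight]], [[bias (math) or bias term]], [[learning rate]], and [[prediction]]. Again a hedged sketch; the tiny training set and starting values below are invented for illustration:

<syntaxhighlight lang="python">
# One gradient descent step for the model y ≈ w*x + b under mean squared loss.
xs = [1.0, 2.0, 3.0]   # feature values (a tiny invented training set)
ys = [2.0, 4.0, 6.0]   # labels (ground truth)
w, b = 0.5, 0.0        # weight and bias: the model's parameters
learning_rate = 0.1

# Gradients of L = mean((w*x + b - y)^2) with respect to w and b.
n = len(xs)
grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n

# Update each parameter by stepping against its gradient.
w -= learning_rate * grad_w
b -= learning_rate * grad_b
print(f"after one step: w={w:.2f}, b={b:.2f}")
</syntaxhighlight>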