{{see also|Machine learning terms}}
==Introduction==
In machine learning, anomaly detection is the process of recognizing data points that deviate from normal behavior in a dataset. These abnormal outcomes are known as anomalies, outliers, or exceptions. Anomaly detection plays an integral role in many domains such as fraud detection, network intrusion detection, and fault detection in industrial systems.
==Applications==
Anomaly detection is used in many fields to detect a...
Anomalies can be divided into three primary categories: point anomalies, contextual anomalies, and collective anomalies.
===Point Anomalies===
Point anomalies, also referred to as global anomalies, refer to individual data points that differ significantly from the majority. Examples of point anomalies include credit card fraud, sensor glitches, and network intrusions. They can be detected using statistical methods like the z-score, interquartile range, or Mahalanobis distance, or machine learning techniques like the isolation forest, one-class SVM, or autoencoder.
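The z-score method mentioned above can be sketched in a few lines: standardize each value against the sample mean and standard deviation, and flag those that fall too far out. The readings and threshold below are invented for illustration; note that the anomaly itself inflates the standard deviation, which is one reason robust alternatives such as the interquartile range are also popular.

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    """Return the values whose z-score magnitude exceeds the threshold."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    if std == 0:
        return []  # all values identical: nothing stands out
    return [x for x in values if abs(x - mean) / std > threshold]

# Hypothetical sensor readings; 42.0 is an injected glitch.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]
print(zscore_anomalies(readings))  # [42.0]
```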
===Contextual Anomalies===
Contextual anomalies, also referred to as conditional anomalies, refer to data points that are anomalous only within certain contexts or subpopulations of the data. For instance, a high heart rate may be considered normal during physical exercise but abnormal when sleeping. To detect contextual anomalies, context information must be incorporated into the detection model; this can be done through rule-based systems, Bayesian networks, or decision trees.
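The heart-rate example can be sketched by fitting a separate baseline per context and scoring each reading only against its own context. All numbers here are made up for illustration, not clinical values:

```python
import statistics

# Hypothetical historical readings per context (beats per minute).
history = {
    "sleeping": [52, 55, 50, 54, 49],
    "exercise": [140, 150, 145, 148, 142],
}
baselines = {ctx: (statistics.fmean(v), statistics.pstdev(v))
             for ctx, v in history.items()}

def is_contextual_anomaly(ctx, value, threshold=3.0):
    """A reading is anomalous only relative to its own context's baseline."""
    mean, std = baselines[ctx]
    return std > 0 and abs(value - mean) / std > threshold

# The same heart rate is normal during exercise but anomalous during sleep.
print(is_contextual_anomaly("exercise", 148))  # False
print(is_contextual_anomaly("sleeping", 148))  # True
```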
===Collective Anomalies===
Collective anomalies, also referred to as group anomalies, refer to a collection of data points that exhibit unusual behavior when taken together but not individually. Examples include sudden spikes in web traffic or power outages in an area. Detecting collective anomalies requires identifying patterns or dependencies between data points and the subpopulations that exhibit anomalous behavior. Clustering, principal component analysis, or the local outlier factor can all be utilized for detection.
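The web-traffic example can be sketched with a window statistic: no single reading stands out, but their sum over a window does. The baseline figures below are hypothetical, standing in for values learned from historical traffic:

```python
import math

# Hypothetical baseline learned from history: ~100 requests/min, std 10.
BASE_MEAN, BASE_STD = 100.0, 10.0

def point_z(x):
    """z-score of a single minute's request count."""
    return (x - BASE_MEAN) / BASE_STD

def window_z(window):
    """z-score of a window's total, treating minutes as independent draws:
    a sum of n draws has mean n*mu and standard deviation sigma*sqrt(n)."""
    n = len(window)
    return (sum(window) - n * BASE_MEAN) / (BASE_STD * math.sqrt(n))

burst = [120, 122, 118]  # each minute only mildly elevated
print(max(abs(point_z(x)) for x in burst))  # 2.2: no single point stands out
print(round(window_z(burst), 2))            # 3.46: the group is anomalous
```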
Anomaly detection presents several obstacles, making it a complex and often unsolved challenge.
===Data Imbalance===
One of the major obstacles lies in data imbalance, where anomalies make up a small fraction of all instances compared to normal data points. This makes it difficult for machine learning models to learn the characteristics of anomalies and distinguish them from regular instances.
===Labeling===
Another challenge lies in labeling, where labeled anomalies may be scarce or unavailable and the definition of what constitutes an anomaly may be uncertain or context dependent. To address this, unsupervised or semi-supervised techniques that do not require labeled data may be utilized instead, along with expert knowledge and feedback to refine the definition of anomalies.
===High Dimensionality===
Anomaly detection often faces the problem of high dimensionality, where data may contain many features or variables that make anomalies hard to detect and visualize. To address this challenge, feature selection, dimensionality reduction techniques, or visualization strategies can be employed to simplify the data and focus on the most pertinent features.
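As a minimal sketch of feature selection, one of the simplest filters drops near-constant columns, which carry no signal for separating anomalies from normal points. The data matrix and threshold below are made up for illustration:

```python
import statistics

def select_features(rows, min_std=1e-3):
    """Keep the indices of columns whose spread exceeds min_std."""
    columns = list(zip(*rows))
    return [i for i, col in enumerate(columns)
            if statistics.pstdev(col) > min_std]

data = [
    [1.0, 0.0, 5.2],
    [2.0, 0.0, 4.9],
    [3.0, 0.0, 5.1],
]
print(select_features(data))  # [0, 2]: the constant middle column is dropped
```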