Published Date: October 1, 2022
Anomaly detection is the process of locating unusual points or patterns in a set of data. Anything that deviates from an established baseline (within a certain, predefined tolerance) is considered an anomaly. Detecting these anomalies is now a critical practice, as anomalies can be indicators of a security breach, a hardware or software problem, shifting customer demands, or any number of challenges that require immediate attention.
Anomalies aren’t always bad. If sales suddenly spike because a famous social media influencer has written about a company’s product, this anomalous behavior could be beneficial. But is the business prepared for the spike? Either way, the organization needs a system in place that makes it aware of any anomalous behavior, good or bad, so that it can respond accordingly, whether that means patching a security flaw, replacing a failing component or deploying additional servers to keep up with rising sales. Anomaly detection, especially unsupervised anomaly detection, which identifies previously unseen rare events without prior knowledge, is also playing an increasingly important role in cybersecurity, particularly with the rise of zero trust methodologies that rely on constant network surveillance for bad actors.
Anomaly detection revolves around applying statistical tools and other methodologies to metrics or a dataset. Machine learning techniques are also becoming increasingly important for discovering anomalies as datasets grow large and complex.
In this article, we’ll discuss the various types of anomalies and the benefits of detecting them, while investigating the process of anomaly detection in greater detail.
What is an anomaly?
In the broadest sense, an anomaly is any event, action, observation, abnormal behavior or item that is out of the ordinary. Anomalies are also known as outliers, exceptions, spikes, deviations and other similar terms indicating an occurrence that signals a developing problem.
In computing, anomalies are inextricably related to data: An anomaly can be any type of unexpected activity in any type of dataset. If an e-commerce business’s average sales invoice typically totals $10 and it abruptly receives an order for $10,000, that’s an anomaly. If that business typically makes one sale per minute and suddenly receives thousands of orders at once, that’s also an anomaly. In both cases, events have fallen outside of expected patterns, and both should raise the interest of security and IT professionals to verify that the transactions are not fraudulent.
Anomalies can also include such behaviors as network latency spikes, changing web traffic patterns and even the rising temperature of a server’s CPU. All of these occurrences, when detected, are cause for further investigation.
How does anomaly detection work?
Anomaly detection is fundamentally a statistical process. The data is processed to determine various statistical values, such as the mean and standard deviation of the dataset. Depending on the specifics of the data, it may be fit to a curve, particularly if it shows any type of seasonal fluctuation. A typical pattern for hourly website traffic, for example, will usually show rising traffic each morning and falling traffic at night, and the data is often normalized to make outlier detection easier. Other techniques make better sense of this information by organizing the data into logical groups, or classes. For example, the data may be organized to consider only the traffic from noon to 1 p.m. each day, or to calculate traffic for each 24-hour period instead of by the hour.
From here, an anomaly detection algorithm, perhaps combined with data science techniques, can help make sense of the data. Ways to visualize this information include a histogram showing how historical data is distributed, as well as decision trees and charts representing neural networks.
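To make this statistical process concrete, here is a minimal sketch in plain Python. The hourly traffic counts and the two-standard-deviation tolerance are illustrative assumptions, not values from any particular system; each hour of day gets its own baseline, mirroring the grouping described above.

```python
from statistics import mean, stdev

# Hypothetical hourly website-traffic counts as (hour_of_day, requests) pairs
observations = [
    (9, 1020), (9, 980), (9, 1010), (9, 995), (9, 1005),
    (9, 990), (9, 1015), (9, 1000), (9, 985), (9, 2400),  # 2400 is unusual
    (23, 110), (23, 95), (23, 120), (23, 105), (23, 100),
]

TOLERANCE = 2.0  # flag anything more than 2 standard deviations from the mean

# Group the data by hour of day so each hour has its own baseline
by_hour = {}
for hour, count in observations:
    by_hour.setdefault(hour, []).append(count)

for hour, counts in sorted(by_hour.items()):
    mu, sigma = mean(counts), stdev(counts)
    for count in counts:
        if sigma and abs(count - mu) / sigma > TOLERANCE:
            print(f"Anomaly at hour {hour}: {count} (baseline {mu:.0f} ± {sigma:.0f})")
```

A real deployment would typically compute the baseline from history that excludes the point being tested, since a large outlier inflates the mean and standard deviation of its own window.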

How is anomaly detection used?
Anomaly detection is used for various functions, ranging from analyzing business conditions to solving technical problems to detecting instances of security breaches and fraud.
Here are a few use cases:
- Business analysis: Anomaly detection can be used to detect outliers in a retailer’s sales data. For example, information about a certain product selling much more this month than it did last month could help the organization better plan its product mix for the following month and optimize its processes accordingly.
- Monitoring system performance: Infrastructure anomalies can range from spiking or rapidly dropping input/output rates to a sudden rise in error rates or a jump in the temperature of a computer component. Anomalies like these are common indications of an imminent system failure.
- Manufacturing defect detection: On the production floor, anomaly detection is often used to indicate when a machine is failing or is falling out of tolerance. These anomalies can include the specifications of finished products or data about machine operations (such as the machine’s throughput rate or operating temperature).
- User experience measurement: Anomaly detection is commonly used to determine if a web server has crashed, has become overloaded or has begun generating errors. When spikes in these areas begin to emerge, the business must typically work to improve the user experience by deploying additional resources (such as provisioning additional cloud servers).
- Security management and fraud detection: Intrusion detection systems can alert administrators to spikes in the number of attempted logins, a flood of web traffic, or an unexpected packet signature, all potential signs that an attack is taking place and that the web server should be locked down (a simple spike-detection sketch follows this list).
- Database management: Anomalies can be generated when data is incorrectly deleted, added, updated or duplicated within the database, ultimately introducing corruption to the dataset. Detecting these types of anomalies is important for ensuring accurate, high-quality data.
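To illustrate the security use case, here is a minimal sketch in Python; the login counts, window size and 5x spike factor are all hypothetical values chosen for the example:

```python
from collections import deque

WINDOW = 10          # number of recent readings that form the baseline
SPIKE_FACTOR = 5.0   # alert when a reading exceeds 5x the recent average

window = deque(maxlen=WINDOW)

def check_logins(attempts_per_minute):
    """Alert when login attempts spike far above the recent average."""
    if len(window) == WINDOW:
        avg = sum(window) / WINDOW
        if attempts_per_minute > SPIKE_FACTOR * avg:
            print(f"ALERT: {attempts_per_minute} attempts/min vs. average {avg:.1f}")
    window.append(attempts_per_minute)

# Hypothetical feed: steady traffic, then a burst of attempted logins
for reading in [12, 9, 14, 11, 10, 13, 8, 12, 11, 10, 350]:
    check_logins(reading)
```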

Anomaly detection is used to detect when machines are failing on the manufacturing floor
What’s an example of an anomaly?
Anomalies can appear in any set of data. The most common examples involve potentially disastrous situations that are either about to happen or have already taken place: a server or application that stops responding, driving its response times ever higher; incoming web traffic that suddenly skyrockets, indicating a possible DDoS attack; or a credit card charge far higher than a business’s typical sale.
What are the three types of anomalies?
When looking at a time series of data (data that is collected sequentially, over a period of time), there are three main types of anomalies: global (or point) anomalies, contextual anomalies and collective anomalies.
- Global (or point) anomalies: This anomaly is a piece of data that is simply much higher or lower than the average. If your average credit card bill is $2,000 and you receive one for $10,000, that’s a global anomaly.
- Contextual anomalies: These outliers depend on context. Your credit card bill probably fluctuates over time (due to holiday gift giving, for example). These spikes may look strange if you consider your spending in the aggregate, but in the context of the season, the anomaly is expected.
- Collective anomalies: These anomalies represent a collection of data points that individually don’t seem out of the ordinary but collectively represent an anomaly, one that is only detectable when you look at the series over time. If your $2,000 credit card bill hits $3,000 one month, this may not be especially eyebrow-raising, but if it remains at the $3,000 level for three or four months in a row, an anomaly becomes visible. These types of anomalies are often easiest to see in “rolling average” data that smooths a time series graph to more clearly show trends and patterns, as the sketch below illustrates.
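Here is a minimal sketch of surfacing a collective anomaly with a rolling average, in Python; the bill amounts, three-month window and 25% drift threshold are illustrative assumptions:

```python
# Hypothetical monthly credit card bills: no single month is wildly out of
# range, but the sustained shift from ~$2,000 to ~$3,000 is a collective anomaly
bills = [2000, 1950, 2100, 2050, 2000, 3000, 3050, 2950, 3000]

WINDOW = 3
baseline = sum(bills[:WINDOW]) / WINDOW  # early months establish "normal"

for i in range(WINDOW, len(bills) - WINDOW + 1):
    rolling_avg = sum(bills[i:i + WINDOW]) / WINDOW
    if rolling_avg > 1.25 * baseline:  # rolling average drifts 25% above normal
        print(f"Collective anomaly: months {i + 1}-{i + WINDOW} average {rolling_avg:.0f}")
```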
What are different methods for anomaly detection?
A variety of methods are used to detect anomalies, and the appropriate one depends largely on the size, type and complexity of the dataset being analyzed.
Basic statistical methods, some of which we’ve outlined in previous sections, are the most common approach to anomaly detection. Mathematical analysis identifies a dataset’s mean and standard deviation, among other statistical values, and a data scientist or algorithm determines what qualifies as an anomalous data point (e.g., anything that is two or more standard deviations from the mean). Time series data can also be fit to a curve or smoothed with a rolling average.
This type of anomaly detection works for relatively simple datasets, but when datasets become very large or are subject to rapid change, more advanced techniques are often required, such as artificial intelligence and machine learning. One ML technique looks for dense groups of data and identifies outliers that fall too far from those groups. Another approach, called clustering, looks for similarities in complex datasets, flagging new data points as outliers when their characteristics don’t fit any established cluster. Yet another method, the local outlier factor algorithm, is an unsupervised anomaly detection technique that computes the local density deviation of a given data point with respect to its neighbors.
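As a sketch of the local outlier factor approach, assuming scikit-learn and NumPy are available, the following generates a dense synthetic cluster plus two distant points and lets LOF flag the latter:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Synthetic 2-D data: a dense cluster plus two far-away points
rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
outliers = np.array([[4.0, 4.0], [-5.0, 3.5]])
X = np.vstack([normal, outliers])

# LOF compares each point's local density to that of its neighbors;
# fit_predict returns -1 for outliers and 1 for inliers
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)

print("Points flagged as outliers:\n", X[labels == -1])
```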
How do you avoid anomalies?
In many cases, anomalies are outside the control of the organization: an earthquake that disrupts a server farm, or a DDoS attack launched against the company. But while these anomalies may not be avoidable, they can at least be mitigated with an appropriate disaster recovery plan. The specifics of that plan depend on the characteristics of the dataset in which anomalies are undesirable. For example, if your goal is to avoid anomalies caused by spikes in web server response times, you should create a contingency plan that spins up additional servers when traffic begins to rise. If your goal is to avoid anomalies related to credit card fraud, you need a plan that initiates additional checks on large purchases. Many of these types of contingency routines are now commonplace in various software packages.
When anomalies involve data corruption, duplication or deletion (as in a database), they are avoided through normalization: the process of structuring a database to reduce redundancy in its tables and encourage data integrity through logical constraints.
What is zero trust?
Zero trust is a security model (also called a zero trust architecture) that rethinks the way security is deployed and managed on an enterprise network. Traditional security has historically played the role of gatekeeper: Credentials are checked at the perimeter of the network, and authorized users are allowed access.
The concept of zero trust, which was developed in 2010 by Forrester analyst John Kindervag, is built on the idea that no traffic is inherently trustworthy, even if it has been successfully authenticated. This is increasingly important as the proliferation of cloud-based networks, IoT devices and mobile technologies makes the concept of the edge more elusive, leaving no single point of entry to the network that can be effectively managed. As such, zero trust requires that the enterprise constantly monitor users, devices and traffic in real time to ensure that none of it is malicious.

John Kindervag developed zero trust, the concept that no traffic is inherently trustworthy
How is zero trust related to the importance of anomaly detection?
Anomaly detection is one of the key technologies that make zero trust security possible. Rather than relying on traditional systems of usernames and passwords, zero trust assumes that breaches are bound to happen or are already underway and works instead to detect them and prevent any damage from occurring. In a zero trust environment, the system is constantly scanning for anomalies that might indicate malicious activity. This type of constant monitoring is ingrained in the system: Without the inclusion of anomaly detection, zero trust is impossible to implement.
What are the benefits of anomaly detection?
Anomaly detection offers myriad real-world benefits, depending on the environment in which it is used. Some of the most tangible include:
- Automated discovery of security breaches and attempted security breaches.
- Fraud detection in a sales environment, particularly credit card fraud, along with identification of false positives.
- Adaptive and interactive business intelligence, allowing the enterprise to determine whether certain products are suddenly becoming popular or are falling out of favor, whether profitability is unexpectedly changing, and whether the cost of materials is fluctuating dramatically.
- Insight into whether critical technology systems have failed or are about to fail.
- Analysis of machine conditions on the factory floor, and whether manufactured product quality is suffering.
- Real-time measurements of the user experience, such as whether web services have become unresponsive or if there is an unexpected level of errors.
- Improvements in database reliability, and the ability to quickly detect errors before significant data corruption occurs.
How do you get started with anomaly detection?
A variety of tools enable anomaly detection within any type of dataset, often with a specific use case in mind. Sales analysis tools include logic that looks for outliers in purchasing patterns to improve your marketing, while security analysis tools are purpose-built to enable zero trust architectures.
Regardless of your anomaly detection goal, a key first step is to ensure that your enterprise and its data are prepared to implement the necessary tools, and a number of tutorials can help you begin this process. First, you’ll need to consolidate data as much as possible; datasets scattered throughout the enterprise will make it difficult to obtain meaningful insights, particularly real-time analysis.
That said, anomaly detection is generally conducted with statistical tools and machine learning systems designed for this type of work. A variety of cloud and on-premises software packages hunt for anomalies by analyzing training data to build a model of normal behavior, then testing new data as it is generated to determine whether it is anomalous. When outliers are encountered, they are typically surfaced on a dashboard that indicates the type of anomalous behavior along with its severity.
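A minimal sketch of that train-then-score pattern, assuming scikit-learn is available and using an isolation forest as the model of normal behavior (an illustrative choice, not any particular product’s method):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Training data representing "normal" behavior, e.g., response times in ms
train = rng.normal(loc=200, scale=20, size=(500, 1))

# Build a model of normal behavior from the training data
model = IsolationForest(random_state=7).fit(train)

# Score new data as it arrives; predict() returns -1 for anomalies, 1 otherwise
new_data = np.array([[195.0], [210.0], [900.0]])  # the last value is suspicious
for value, label in zip(new_data.ravel(), model.predict(new_data)):
    status = "anomalous" if label == -1 else "normal"
    print(f"{value:.0f} ms -> {status}")
```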
Anomaly detection is generally not a standalone technology — the fundamentals are built into a wide range of software tools that rely on this type of advanced analysis to work. However, standalone anomaly detection tools do exist; any software that performs data mining, data regression or data visualization activities against business data is at least in part an anomaly detection tool. Whether your objectives revolve around sales and marketing or involve security or system reliability, anomaly detection is a key tool for improving your ability to make better business decisions.
