Anomaly detection approaches to threat detection have traditionally struggled to make good on vendors’ efficacy claims once deployed in real environments. Rarely have the vendors lied about their product’s capability; rather, the examples and statistics they provide are typically drawn from contrived and isolated attack instances, not representative of a deployment in a noisy and unsanitary environment.
Where anomaly detection approaches have fallen flat, and what has cast them in a negative light, is primarily alert overload and “false positives”. “False positives” deserves the quotation marks because, in almost every real-network deployment, the anomaly detection capability is working and alerting correctly; the anomalies being reported, however, often carry no security context and are unactionable.
Tuning is a critical component to extracting value from anomaly detection systems. While “baselining” sounds rather dated, it is an important operational component of success. Most false positives and nuisance alerts are directly attributable to missing or poor baselining procedures that would have tuned the system to the environment it was tasked with spotting anomalies in.
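To make that concrete, here is a minimal sketch of what a baselining step can look like: learn per-feature statistics from a window of presumed-normal observations, then score new samples against them. The synthetic data, four-feature layout, and three-sigma threshold are illustrative assumptions, not a prescription:

```python
import numpy as np

def fit_baseline(observations: np.ndarray):
    """Learn per-feature mean and standard deviation from a baselining window."""
    mean = observations.mean(axis=0)
    std = observations.std(axis=0) + 1e-9  # avoid division by zero
    return mean, std

def is_anomalous(sample: np.ndarray, mean, std, threshold: float = 3.0) -> bool:
    """Flag a sample if any feature deviates more than `threshold` sigmas."""
    z_scores = np.abs((sample - mean) / std)
    return bool((z_scores > threshold).any())

# Example: baseline on a window of per-host metrics, then score new traffic.
rng = np.random.default_rng(0)
baseline_window = rng.normal(loc=100.0, scale=10.0, size=(10_000, 4))
mean, std = fit_baseline(baseline_window)
print(is_anomalous(np.array([102.0, 95.0, 300.0, 101.0]), mean, std))  # True
```

The point of the exercise is the one made above: the flagged sample really is anomalous relative to the baseline, but nothing about the score tells an operator whether it matters.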
Assuming an anomaly detection system has been successfully tuned to an environment, there is still a gap in actionability that needs to be closed. An anomaly is, after all, just an anomaly.
Closing that gap is typically achieved by grouping, clustering, or associating multiple anomalies together into a labeled behavior. These behaviors can then in turn be classified in terms of risk.
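A minimal sketch of that grouping step might look like the following, here using density-based clustering (scikit-learn’s DBSCAN) over a handful of hypothetical anomaly records. The feature choice (timestamp, source host, destination port) and the candidate behavior labels are illustrative assumptions; real systems would use far richer representations:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each row is one anomaly record: [epoch_seconds, source_host_id, destination_port]
anomalies = np.array([
    [1000.0, 7, 443],   # periodic beaconing from host 7
    [1010.0, 7, 443],
    [1020.0, 7, 443],
    [5000.0, 3, 1433],  # SQL port probing from host 3
    [5005.0, 3, 1434],
])

# Scale features so time differences don't dominate the distance metric.
scaled = (anomalies - anomalies.mean(axis=0)) / (anomalies.std(axis=0) + 1e-9)
clusters = DBSCAN(eps=0.5, min_samples=2).fit_predict(scaled)

# Each cluster becomes a candidate behavior an analyst can label,
# e.g. cluster 0 -> "periodic C&C beaconing", cluster 1 -> "SQL probing".
print(clusters)  # [0 0 0 1 1]
```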
While anomaly detection systems dissect network traffic, or application hooks and memory calls, using statistical feature identification methods, the advance to behavioral anomaly detection requires a broader mix of statistical features, metadata extraction, event correlation, and even more baseline tuning.
Because behavioral threat detection systems require training and labeled detection categories (i.e. threat alert types), they suffer many of the same operational ill effects as anomaly detection systems. Tuned too tightly, they are less capable of detecting threats than an off-the-shelf intrusion detection system (network-based NIDS or host-based HIDS). Tuned too loosely, they generate unactionable alerts more consistent with a classic anomaly detection system.
The middle ground has historically been difficult to achieve. Which anomalies are the meaningful ones from a threat detection perspective?
Inclusion of machine learning tooling into the anomaly and behavioral detection space appears to be highly successful in closing the gap.
What machine learning brings to the table is the ability to observe and collect all anomalies in real time, make associations with both known (i.e. trained and labeled) and unknown or unclassified behaviors, and provide “guesses” at actions based upon how an organization’s threat response or helpdesk (or DevOps, incident response, or network operations) team has responded in the past.
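One hedged way to picture that feedback loop: train a conventional classifier on alert features paired with the dispositions human operators applied historically, and let its class probabilities serve as the “guesses”. The feature columns and disposition labels below are hypothetical:

```python
from sklearn.ensemble import RandomForestClassifier

# Features per historical alert: [bytes_out, distinct_ports, after_hours (0/1)]
alert_history = [
    [5_000_000, 1, 1],   # large after-hours transfer
    [1_200, 40, 0],      # port scan during business hours
    [900, 1, 0],         # routine traffic blip
]
# How operators actually dispositioned each of those alerts.
operator_dispositions = ["escalate", "escalate", "dismiss"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(alert_history, operator_dispositions)

# For a new alert, surface the suggested action and its confidence.
new_alert = [[4_800_000, 2, 1]]
print(model.predict(new_alert))         # e.g. ['escalate']
print(model.predict_proba(new_alert))   # class probabilities as the "guess"
```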
Such systems still require baselining, but are expected to dynamically reconstruct baselines as they learn over time how human operators respond to the “threats” they detect and alert upon.
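A sketch of what such dynamic baseline reconstruction could look like, assuming an exponentially weighted moving average that absorbs operator-dismissed activity more quickly (the weights and update rule are illustrative assumptions):

```python
def update_baseline(mean: float, variance: float, sample: float,
                    operator_dismissed: bool, alpha: float = 0.05):
    """Fold a new observation into the running baseline.

    Observations an operator dismissed as benign are absorbed faster,
    so the baseline "learns" that such activity is normal here.
    """
    weight = alpha * (3.0 if operator_dismissed else 1.0)
    delta = sample - mean
    mean += weight * delta
    variance = (1 - weight) * (variance + weight * delta * delta)
    return mean, variance

# Example: a new nightly backup job repeatedly trips alerts; each
# operator dismissal pulls the baseline toward the new traffic level.
mean, var = 100.0, 25.0
for nightly_transfer in [400.0, 410.0, 395.0, 405.0]:
    mean, var = update_baseline(mean, var, nightly_transfer,
                                operator_dismissed=True)
print(round(mean, 1))  # 244.6 -- baseline has drifted toward the new level
```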
Machine learning approaches to anomaly and behavioral threat detection (ABTD) provide a number of benefits over the older, purely statistical approaches:
- A dynamic baseline ensures that new systems, applications, or operators added to the environment are “learned” without manual intervention or superfluous alerting.
- More complex relationships between anomalies and behaviors can be observed and eventually classified, thereby extending the range of labeled threats that can be correctly classified, assigned risk scores, and prioritized for remediation by the correct human operator.
- Observations of human responses to generated alerts can be harnessed to automatically reevaluate risk and prioritization across detections and events. For example, three behavioral alerts are generated for different aspects of an observed threat (e.g. external C&C activity, lateral SQL port probing, and high-speed data exfiltration). The human operator associates and remediates them together and applies the label “malware-based database hack”. The system now learns that clusters of similar behaviors and sequencing are likely to be classified and remediated the same way; in future alerts it can assign a risk and probability to the newly labeled threat (see the sketch after this list).
- Outlier events can be understood in the context of typical network or host operations, even when no “threat” has been detected. Such capabilities prove valuable in monitoring the overall “health” of the environment. As helpdesk and operational (non-security) staff leverage the ABTD system, it also learns to classify and prioritize more complex sanitation events and issues (which may be impeding the performance of the observed systems or indicating a pending failure).
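Returning to the “malware-based database hack” example above, one deliberately simplified way a system might learn that mapping is to count how often a given set of co-remediated behaviors received a given label, then report that frequency as the probability for future clusters. The behavior and threat names come from the example; the counting scheme is an illustrative stand-in for a real probabilistic model:

```python
from collections import Counter

# History of (behavior cluster, label the operator applied on remediation).
remediation_history = [
    (frozenset({"external_c2", "sql_port_probe", "bulk_exfil"}),
     "malware-based database hack"),
    (frozenset({"external_c2", "sql_port_probe", "bulk_exfil"}),
     "malware-based database hack"),
    (frozenset({"external_c2", "bulk_exfil"}), "data theft"),
]

# Tally how each distinct behavior set has been labeled in the past.
label_counts: dict = {}
for behaviors, label in remediation_history:
    label_counts.setdefault(behaviors, Counter())[label] += 1

def suggest(behaviors: frozenset):
    """Return (label, probability) for a newly observed behavior cluster."""
    counts = label_counts.get(behaviors)
    if not counts:
        return ("unclassified", 0.0)
    label, n = counts.most_common(1)[0]
    return (label, n / sum(counts.values()))

print(suggest(frozenset({"external_c2", "sql_port_probe", "bulk_exfil"})))
# ('malware-based database hack', 1.0)
```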
It is anticipated that these newest-generation machine learning approaches to anomaly and behavioral threat detection will not only reduce the noise associated with real-time observations of complex enterprise systems and networks, but also drive security to be further embedded and operationalized as part of standard support tasks, down to the helpdesk level.
— Gunter Ollmann, Founder/Principal @ Ablative Security
(first published January 13th – “From Anomaly, to Behavior, and on to Learning Systems”)