If one of these statements sounds familiar, this blog post is for you:
- “I’m not getting enough value out of my SIEM”
- “My MSSP floods me with alerts that are not relevant, and they’re not willing or equipped to filter or make good judgements on their end for most of the cases they send”
- “Our Junior Analysts have a hard time making heads or tails out of an alert”
First, the good news: you are not alone. Better still, a single thread links all of these problems to a common cause: your model. There is an old saying that “the map is not the territory, but, if correct, it has a similar structure... which accounts for its usefulness.”
So, what on earth is a model? Per the ThetaPoint Security Reference Architecture, a Model organizes information about business processes, assets, and entities in a machine-readable format that people can use to make sense of a technical environment. In the Reference Architecture, your entire security technology stack leverages the model to make data management, analysis, and workflow more consistent and easier to reason about. This practice of developing and maintaining a reference model has proven so successful for our clients that we recommend making it part of your formal change control process.
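As a sketch of what “machine-readable” can mean in practice, here is a minimal, hypothetical model entry in Python. The field names, hostnames, and addresses are illustrative only, not the Reference Architecture’s actual schema:

```python
from dataclasses import dataclass, field

# A minimal sketch of one model entry: hard facts about an asset
# joined with the business-level attributes a SIEM can key on.
# Every field name here is an illustrative assumption.
@dataclass
class Asset:
    hostname: str
    ip: str
    organization: str   # owning business unit
    function: str       # role the asset plays, e.g. "scanning-engine"
    criticality: str    # e.g. "low", "medium", "high"
    tags: set = field(default_factory=set)

MODEL = {
    "vscan01": Asset("vscan01", "10.0.5.10", "Security", "scanning-engine", "medium"),
    "pay-db01": Asset("pay-db01", "10.0.9.21", "Finance", "database", "high"),
}

def lookup(ip: str):
    """Resolve an event's source IP back to its model entry, if any."""
    return next((a for a in MODEL.values() if a.ip == ip), None)
```

The point is not the data structure itself but that enrichment (the `lookup` step) can happen automatically at analysis time, so business context travels with every event.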
Gain the Upper Hand on Your Adversaries
Threat actors have a major disadvantage during the early and middle stages of an intrusion: they must learn about their target through trial and error as they advance into an operation. This exposes their Tactics, Techniques, and Procedures (TTPs) to detection without providing feedback about how much the defender can see. It is at this point that a properly modeled SIEM can shine: the noise made stumbling around your network should stand out against normal activity. Actively modeling your infrastructure, entities, business functions, and data will better enable your SIEM to highlight this anomalous behavior.
Business-level attributes (organization, process, function, criticality) attached to hard facts (attributes of assets, users/entities, and data) make it possible to baseline normal or expected behavior. For example, a content author would expect vulnerability scanning engines to generate stable sets of security events that automated analytics can safely ignore. However, those same events originating from another system should generate an alert. If your Model does not have the concept of a ‘scanning engine’, then you must rely on a blunter tool like a global whitelist to tune out the noise.
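The scanning-engine example above can be sketched as a model-aware triage check. Everything here is hypothetical: the inline model, the event fields, and the signature names are assumptions for illustration, not a real SIEM schema:

```python
# Illustrative model keyed by source IP; in practice this would be
# the enrichment your SIEM performs against the reference model.
MODEL = {
    "10.0.5.10": {"function": "scanning-engine", "criticality": "medium"},
    "10.0.9.21": {"function": "database", "criticality": "high"},
}

SCAN_SIGNATURES = {"port-scan", "service-sweep"}

def should_alert(event: dict) -> bool:
    """Alert on scan-like events unless the model says the source is a scanner."""
    source = MODEL.get(event["src_ip"], {})
    if event["signature"] in SCAN_SIGNATURES:
        # Same signature, different verdict, depending on who produced it.
        return source.get("function") != "scanning-engine"
    return True
```

The same “port-scan” signature is tuned out for the modeled engine but still fires for every other source, including one the model has never seen.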
This is one of the most common trade-offs that a content author must weigh:
- Does the risk of a missed detection merit the effort to be more specific?
- Will less precision generate enough invalid alerts to drown out more valuable analysis?
The SOC needs the ability to dial sensitivity up or down to manage the overall alert load, and a whitelist-style compromise can seem justified at first because a little effort at the micro level has macro-level impact. However, these small compromises accumulate as more automated analyses reuse the blunter abstraction. Add time and staff churn to the mix, and the organization starts to forget why some things should not be on the global whitelist. This is the moment the SOC loses its home-field advantage: the analysts become as unfamiliar with their own environment as the threat actors are.
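To make the cost of the blunter abstraction concrete, compare a global whitelist with a model-scoped suppression. The signature names and addresses are illustrative assumptions:

```python
# Blunt: the signature is tuned out everywhere, for every source.
GLOBAL_WHITELIST = {"port-scan"}

# Precise: the signature is suppressed only for sources the model
# says have a business reason to produce it.
MODEL_SCOPED = {("port-scan", "10.0.5.10")}  # (signature, modeled scanner IP)

def alert_with_whitelist(signature: str, src_ip: str) -> bool:
    return signature not in GLOBAL_WHITELIST

def alert_with_model(signature: str, src_ip: str) -> bool:
    return (signature, src_ip) not in MODEL_SCOPED
```

An attacker scanning from an unmodeled host is invisible under the global whitelist but still surfaces under the model-scoped rule; that gap is exactly the missed detection the content author weighed in the trade-off above.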