
IOTA in Action: Your SIEM isn’t the Problem, but Your Model is

If one of these statements sounds familiar, this blog post is for you:

  • “I’m not getting enough value out of my SIEM”
  • “My MSSP floods me with alerts that aren’t relevant, and they’re unwilling or ill-equipped to filter or make sound judgements on most of the cases they send”
  • “Our Junior Analysts have a hard time making heads or tails of an alert”

First, the good news: you are not alone. However, a single thread does link all of these problems to a common cause: your model. There is an old saying that “The map is not the territory, but if correct, it has a similar structure… which accounts for its usefulness”.

So, what on earth is a model? Per the ThetaPoint Security Reference Architecture, a Model is used to organize information about business processes, assets, and entities in a machine-readable format that people can use to make sense of a technical environment. In the Reference Architecture, your entire security technology stack leverages the Model to make data management, analysis, and workflow more consistent and easier to think about. The practice of developing and maintaining a reference model has proven so successful for our clients that we recommend making it part of your formal change control process.

Gain the Upper Hand on Your Adversaries

Threat actors have a major disadvantage during the early and middle stages of an intrusion. They must learn about their target through trial and error as they advance the operation. This exposes their Tactics, Techniques, and Procedures (TTPs) to detection without providing feedback about how much the defender can see. This is where a properly modeled SIEM can shine: the noise made stumbling around your network should be obvious compared to normal activity. Actively modeling your infrastructure, entities, business functions, and data will better enable your SIEM to highlight this anomalous behavior.

Business-level attributes (organization, process, function, criticality) attached to hard facts (attributes of assets, users/entities, and data) make it possible to baseline normal or expected behavior. For example, a content author would expect vulnerability scanning engines to generate stable sets of security events that automated analytics can safely ignore. However, those same events originating from another system should generate an alert. If your Model does not have the concept of a ‘scanning engine’, then you must rely on a blunter tool like a global whitelist to tune out the noise.
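
As a rough sketch of what that ‘scanning engine’ concept can look like in practice, a Model-driven event decorator (shown here as a logstash filter; the field names and addresses are hypothetical) labels scanner traffic instead of discarding it:

    filter {
      # Hypothetical list of authorized scanning engines exported from the Model.
      # Events are labeled rather than dropped, so downstream analytics decide
      # what to suppress instead of falling back to a global whitelist.
      if [source_ip] in [ "10.20.5.10", "10.20.5.11" ] {
        mutate { add_field => { "asset_role" => "scanning_engine" } }
      }
    }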

This is one of the most common trade-offs that a content author must weigh:

  • Does the risk of a missed detection merit the effort to be more specific?
  • Will less precision generate enough invalid alerts to drown out more valuable analysis?

The SOC needs the ability to dial sensitivity up or down to manage the overall alert load, and a whitelist-style compromise could seem justified at first because a little bit of effort at a micro-level can have macro-level impact. However, these small compromises accumulate as more automated analyses re-use the blunter abstraction. Add time and staff churn to the mix and the organization starts to forget why some things should not be added to the global whitelist. This is the moment when the SOC loses the home-field advantage: the analysts are as unfamiliar with their own environment as the threat actors.

The Model is your Tutor for your Analysts and your MSSP

The same set of facts and labels should be readily available regardless of who or what is doing analysis. When a common Model applies across the infrastructure, every investigation into an alert also presents the correct contextual data to the investigator, regardless of how familiar they are with the tooling or business itself. You no longer need to go digging around just to find out what is important about an asset. This consistency and immediacy make it easier to internalize the normal quirks and behaviors that aid adjudication. When your infrastructure is easy to learn, junior analysts and MSSPs have a clear path to making sound judgements.

Making your Model part of the culture also facilitates collaboration during Incident Response. Even when working in different organizations, personnel start out speaking the same language and understanding the same context and risk to the business.

Our Approach to Effective Modeling – Automate, Synthesize, Validate, Propagate

ThetaPoint’s SOC Optimization Service prescribes a blended approach of technical and interview-driven data gathering to establish a framework for business-level concepts. We then pair automated collection with business rules that derive the Model from the collected data, and layer in business processes and other perspectives to validate the Model’s accuracy and the scope of collection. This rules-driven approach lets you push business-level information down to assets and events and derive higher-level relations from low-level facts.

Additionally, we build on the Baseline Infrastructure by treating model data as an event stream. This lets you reuse the Utility and Workflow stacks to automate Model synthesis. That may sound like a lot, so let’s walk through an example event stream that starts with observations of live IPv4 addresses and builds up to something more useful. We can use logstash to generate our observation events:
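
A minimal sketch, assuming the sweep results arrive as nmap grepable output; the paths, field names, and index are illustrative:

    input {
      file {
        # Hypothetical location of periodic nmap sweep output in grepable (-oG) format.
        path => "/var/log/nmap/sweep-*.gnmap"
        start_position => "beginning"
      }
    }

    filter {
      # Keep only live-host lines and capture the observed IPv4 address.
      grok {
        match => { "message" => "Host: %{IPV4:observed_ip} .*Status: Up" }
        add_field => { "observation" => "host_observed" }
        tag_on_failure => [ "_not_an_observation" ]
      }
      if "_not_an_observation" in [tags] {
        drop { }
      }
    }

    output {
      # Hand the observation events to the rest of the Utility/Workflow stack.
      elasticsearch {
        hosts => [ "localhost:9200" ]
        index => "model-observations"
      }
    }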

Next, we can add some general-purpose event decorators to make sure that IPv4 segments are labeled consistently:
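
A sketch using the logstash cidr filter; the segment ranges and zone labels are placeholders for whatever your Model defines:

    filter {
      # Label customer-facing production addresses (example range only).
      cidr {
        address => [ "%{observed_ip}" ]
        network => [ "10.20.0.0/16" ]
        add_field => { "network_zone" => "production-customer-facing" }
      }
      # Label internal corporate addresses (example range only).
      cidr {
        address => [ "%{observed_ip}" ]
        network => [ "10.90.0.0/16" ]
        add_field => { "network_zone" => "corporate-internal" }
      }
    }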

Separately, we can use another tool such as osquery to generate event records describing the software packages installed on server hosts.
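
For example, an osquery scheduled query along these lines (the query name, interval, and table are illustrative; rpm_packages applies to RPM-based hosts) emits one record per installed package:

    {
      "schedule": {
        "installed_packages": {
          "query": "SELECT name, version FROM rpm_packages;",
          "interval": 3600,
          "snapshot": true
        }
      }
    }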

When we ingest these osquery events, we can apply the same CIDR-based designation to identify production software packages either through the logstash filter, or directly in the osquery config. More importantly, we can create a business rule to derive a functional abstraction from the low-level data:

Systems with Postgres server installed in a customer-facing production network are part of the ‘ecommerce’ organization
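
Expressed as a logstash conditional, that rule might look roughly like the following, assuming the package events carry the network_zone label added earlier and the package name has been parsed into a package_name field (both assumptions, not fixed conventions):

    filter {
      # Hypothetical derivation: a Postgres server package observed on a host in the
      # customer-facing production segment implies the host belongs to 'ecommerce'.
      if [network_zone] == "production-customer-facing" and [package_name] =~ /^postgresql.*-server/ {
        mutate { add_field => { "business_org" => "ecommerce" } }
      }
    }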

Building the Model from an event stream also lets us automatically link complementary events to increase accuracy and verify scope. For example, the nmap sweep data should not identify any active addresses that cannot be accounted for by an osquery task that selects for configured NICs in an ‘up’ state connected to a default route. Once your validated Model has been assembled, we can use the Utility platform to push it into the SIEM and as many repositories and monitoring surfaces as possible.
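
The osquery side of that cross-check could be a query roughly like the one below, which lists addresses on interfaces that carry a default route; the ‘up’ state test is omitted for brevity since it varies by platform:

    SELECT ia.interface, ia.address
      FROM interface_addresses AS ia
      JOIN routes AS r ON r.interface = ia.interface
     WHERE r.destination = '0.0.0.0';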

Ultimately, a properly configured, updated, and maintained Model ensures your SIEM delivers the value you expected and stays current with the organization. Your Analysts and MSSP will also deliver more value, reducing adversary dwell time and improving your overall security posture. This often-overlooked step is a surefire way to maximize the value of your SIEM and IR tool investments.

Key Takeaways:

  1. Don’t be that guy. Gain the upper hand on your adversaries: a well-modeled environment makes aggressor TTPs stand out.
  2. A refined Model makes automated analysis easier to think about and decreases buried complexity caused by over-use of simpler abstractions.
  3. An omnipresent Model builds a consistent understanding of business context across personnel and enhances collaboration by allowing more effective communication and alignment to business risk.

What’s Next

We have published the framework in high level detail on our Blog, and hope to engage you in a collaborative discussion of the challenges you are experiencing and the solutions we have developed. Please contact us to continue the dialog.
