ThetaPoint Security Reference Architecture - SRA

Security Reference Architecture – Process

Repeatable processes create consistent outcomes – for better or worse. Defects in those processes can cause flawed investigations, system outages, and missed incidents. Your SOC’s most valuable data comes from human judgment, but that value depends on uniformity and reliability. The effects of poor processes ripple outward to distort incident reviews, assessments, and forecasts. Processes that do not capture the right data at the right time risk losing validity under future scrutiny. Looking back on your process data is always difficult, but it’s even harder when you don’t have the right data to look at.

More toil within a frequently-performed task makes process flaws more likely. Manual data entry breeds tedium as analysts repeatedly describe similar events. The need to manually describe context and craft narratives also introduces decision fatigue, which slows work down further.

Finally, processes that do not structure their outcomes as easily-queried datasets lose future value and reduce your return on investment in labor and technology. Your historical analyses will hit a wall if you always need to extract data from prose. Until advances in Natural Language Processing make extracting data from text as easy as a database query, you can bet that you will get more value from applying the structure up front.
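To make the point concrete, here is a minimal sketch of what “applying the structure up front” can look like. The record fields, category names, and verdict values below are all hypothetical illustrations, not part of the Reference Architecture:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical structured incident record: capturing discrete fields at
# investigation time makes later analysis a simple query, versus trying
# to extract the same facts from free-text narratives.
@dataclass
class IncidentRecord:
    incident_id: str
    detected_at: datetime
    source_system: str   # e.g. "edr", "proxy" -- assumed category names
    verdict: str         # e.g. "true_positive" | "false_positive"
    analyst: str

records = [
    IncidentRecord("INC-001", datetime(2024, 1, 5, tzinfo=timezone.utc),
                   "edr", "true_positive", "avasquez"),
    IncidentRecord("INC-002", datetime(2024, 1, 6, tzinfo=timezone.utc),
                   "proxy", "false_positive", "avasquez"),
]

# A historical analysis becomes a one-line filter instead of NLP over prose.
true_positives = [r.incident_id for r in records if r.verdict == "true_positive"]
print(true_positives)  # ['INC-001']
```

The same question asked of a pile of ticket narratives would require reading, or parsing, every one of them.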

Things change. Analysts leave. Organizations forget. Piecing together events after the fact takes far more effort than contemporaneous investigation. Any of these flaws can create serious challenges for an organization. Have analysts correctly analyzed and adjudicated all incidents? Have all systems continued to work, even under maintenance? Will your most important analytics keep functioning when vendors release new versions or change formats? Are there any trends that point to a long-term need?

If you have found yourself revisiting these kinds of questions, or spending out-of-band effort to recreate information that your processes should have already captured, you might need to restructure the workflows that underlie your SOC’s procedural framework.

Goals in the Reference Processes

We set some high-level goals for our processes to make them more useful and flexible:

These goals point to a need for a process lifecycle that fits within your organization’s internal controls. If your organization does not have an accommodating internal controls process, then you can manage your SOC processes as though they were any software development or IT implementation project that needs regular refreshes. If you regularly reassess the value of your work through the lens of the goals above, you can prevent those “What were we thinking?” and “Why are we doing this?” moments.

Reference Processes


SOC systems support many business functions, and their users expect reliable operation 24×7. This expectation demands a disciplined approach to systems administration. Small errors can not only dent the productivity of the analysts on shift but also cause the loss of crucial events, compromise system integrity, or worse. To start our discussion, we have focused on three key processes that mostly fall within the Engineer role. These processes have important contact points with the rest of the SOC by enabling workflows and ensuring reliable functionality in the presence of constant change.


SOCs have many needs for system, data, and control surface integration. The Baseline Infrastructure emphasizes a more API-friendly environment, and implementations of the Reference Architecture tend to look more like microservice ensembles. We have abstracted the technical implementation of this baseline on the Utility platform so that scaling across more data sources and more control surfaces becomes possible with less complexity.

Once on the Baseline Infrastructure, you can evaluate new technologies by how they can consume or emit pieces of the system Model, what data they produce and consume, or what commands they can service and issue. We will have much more to say about integration in a future blog about the Use Case Development process.

DevOps has transformed how enterprises run their production systems. Continuous Integration and Continuous Deployment (CI/CD) pipelines solve many basic operational problems that would otherwise require considerable investment. A smoothly-functioning CI/CD platform can service many types of requirements, from deployment automation to disaster recovery. You can also use it to create governance artifacts (like change control records), security scans, and operational metrics. This style of administration creates economies of scale that let you manage more data more reliably and can thereby increase the effectiveness of your SOC’s business services.
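As one illustration of producing a governance artifact as a pipeline side effect, the sketch below wraps a deployment step so that every run emits a change-control record. The function name, record fields, and file names are assumptions for illustration, not a prescribed format:

```python
import hashlib
import json
from datetime import datetime, timezone

def deploy_with_change_record(artifact_path: str, target: str) -> dict:
    """Hypothetical pipeline stage: hash the artifact being deployed and
    write a change-control record alongside the deployment itself."""
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "artifact": artifact_path,
        "artifact_sha256": digest,   # ties the record to exactly what shipped
        "target": target,
    }
    # ... the actual deployment command would run here ...
    with open("change_record.json", "w") as f:
        json.dump(record, f, indent=2)
    return record
```

Because the record is generated by the same step that performs the change, it cannot drift out of sync with what actually happened – the property that makes pipeline-generated artifacts attractive for audits.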

Change and Maintenance

Even a perfectly executed CI/CD pipeline cannot service all needs for change across the SOC. Many security systems expose no APIs or have maintenance tasks that cannot be automated (like firmware updates). Furthermore, you still have a responsibility to create governance artifacts and other documentation about the maintenance needed to ensure reliable operation.

Enterprise IT departments tend to have a controlled change process, and the SOC can reap the benefits of organizational knowledge by using that process first, before developing standalone processes of its own. This includes performing tests and coordinating system changes with SOC users to ensure continuous operation. These lines of communication serve a dual purpose – to show that administrators follow enterprise procedures and to prove that the SOC has reliable data to investigate when the need arises.

System Health Monitoring

Events and their dataflows are the lifeblood of the SOC. This ever-growing dataset creates many tactical challenges: ensuring that sensors reliably send event data for analysis, that your workflows correctly process that data, and that analysts can reliably issue automated commands when intervening in a suspected incident.

Since the Reference Architecture leans so heavily on automation and deep system integration, we recommend that adopters carve out effort to focus solely on reliable functionality. This can take the form of automated functional tests, checklists, or an enterprise monitoring system that keeps an eye on things. We also recommend that every use case and analytic have its own instrumentation for its source data as well as its results, so that your engineering staff can quickly spot problems at the infrastructure layer or trends in the data that can tip analytics into an unreliable state.
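A per-analytic instrumentation check might look something like the sketch below, which flags a feed whose newest event is stale or whose hourly volume has drifted far from an expected baseline. The function name, thresholds, and parameters are illustrative assumptions, not part of the Reference Architecture:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical health check for one analytic's source feed. Thresholds
# (max_lag, drift_ratio) are placeholder values you would tune per feed.
def check_feed_health(last_event_time, events_last_hour, baseline_per_hour,
                      max_lag=timedelta(minutes=15), drift_ratio=0.5):
    now = datetime.now(timezone.utc)
    problems = []
    # Stale feed: no events within the acceptable lag window.
    if now - last_event_time > max_lag:
        problems.append("stale feed")
    # Volume drift: hourly count far outside the expected band.
    low = baseline_per_hour * (1 - drift_ratio)
    high = baseline_per_hour * (1 + drift_ratio)
    if not (low <= events_last_hour <= high):
        problems.append("volume drift")
    return problems
```

Running a check like this per use case, rather than one global check, is what lets engineers tie a reliability problem back to the specific analytic it will break.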

You should accept event floods and event loss as a fundamental fact of running a data-intensive business service. Accurately spotting data-driven reliability issues should take center stage when considering what to instrument and how to watch it.
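One simple way to spot floods and suspected loss in the event stream is to compare each interval’s count against a smoothed baseline. This is a minimal sketch under assumed parameters (the smoothing factor and thresholds would need tuning per data source):

```python
# Flag floods and suspected loss against an exponentially weighted
# baseline of per-interval event counts. alpha, flood, and drought are
# illustrative placeholder values.
def classify_counts(counts, alpha=0.3, flood=3.0, drought=0.25):
    baseline = counts[0]
    flags = []
    for c in counts[1:]:
        if c > baseline * flood:
            flags.append("flood")
        elif c < baseline * drought:
            flags.append("loss?")
        else:
            flags.append("ok")
        # Update the baseline so it tracks gradual, legitimate growth.
        baseline = alpha * c + (1 - alpha) * baseline
    return flags

print(classify_counts([100, 100, 500, 10]))  # ['ok', 'flood', 'loss?']
```

A volume-based signal like this cannot prove loss – it can only say the stream looks abnormal – which is exactly why accepting floods and loss as a fact of life, and instrumenting for them, matters.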

Key Takeaways

  1. The process layer ties personnel to actions and performs the control functions of the SOC
  2. Too much process is as bad as too little – we are focusing on the most important ones that directly support the SOC mission
  3. Once in place, you should expect to revise processes as the organization learns

What’s Next

We have published the framework in high-level detail on our Blog, and hope to engage you in a collaborative discussion of the challenges you are experiencing and the solutions we have developed. Please contact us to continue the dialogue.

About ThetaPoint, Inc.

ThetaPoint is a leading provider of strategic consulting and managed security services. We help clients plan, build, and run successful SIEM and Log Management platforms, and we work with the leading technology providers to properly align capabilities to clients’ needs. Recognized for our unique technical experience and our ability to rapidly solve complex customer challenges, ThetaPoint partners with some of the largest and most demanding clients in the commercial and public sector. For more information, visit or follow us on Twitter or LinkedIn.
