
AI in Cyber Security Detection and Response Part 1/2 - AI's Role in SOC Detection

25.03.2025

By: Dori Fisher




 

When asking Generative AI about its role in the SOC, the words "revolutionary" and "transformative" appear early in its answer. This article describes actual use cases, as the revolution, unfortunately, has not yet arrived.


The roles the industry describes for AI in the SOC include the following:

  1. Threat Detection

  2. Threat Hunting

  3. Incident Response

  4. Threat intelligence

  5. Alert management

  6. False positive reduction

  7. Forensic analysis

  8. Automation


This article will review the use cases of AI, based on actual SOC usage of the latest and greatest technologies by the leading vendors in an environment serving over 100,000 users in over 60 organizations.


1.  Threat detection

At the 2014 RSA Conference in San Francisco, while walking through the booths searching for innovative detection solutions and cool t-shirts, I found one that read "I broke the rules". Both the company and the t-shirt promised that SIEM rules, prevalent since the early 2000s, were evolving into machine-based analytics (sometimes referred to as ML or AI), changing the way we create and manage detection rules.

Threat detection is a wide topic; we will limit the review scope to (near) real-time alert and incident creation.

The best place to look for AI success in alert creation is among the large SIEM/XDR vendors, who lead the way in advancing AI use.


Approach

As we work with multiple vendors that use AI and ML, the SOC has years of experience investigating ML-based incidents.

We chose a 5,000-user organization that uses multiple vendors generating AI-based alerts and reviewed one year of incidents to learn more about the quality of AI-created alerts and incidents.


Statistics:

  • ~100 GB logs collected per day in a cloud SIEM.

o   Collection - 20 unique vendors.       

o   Collection - 75 unique products.

  • ~13,000 incidents created.

o   ~7,800 tickets opened (~5,200 incidents automatically closed or deduplicated).

  • 524 tickets escalated, of which:

o   106 were AI-based.

o   418 were rule-based.
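The figures above can be reduced to a few rates; the following sketch uses only the numbers stated in this article:

```python
# Escalation figures from the article's one-year review.
incidents_created = 13_000
tickets_opened = 7_800
escalated_total = 524
escalated_ai = 106
escalated_rule = 418

# Share of opened tickets that were escalated.
escalation_rate = escalated_total / tickets_opened   # ~6.7%

# Split of escalations between AI-based and rule-based detections.
ai_share = escalated_ai / escalated_total            # ~20.2%
rule_share = escalated_rule / escalated_total        # ~79.8%

print(f"escalation rate: {escalation_rate:.1%}")
print(f"AI share of escalations: {ai_share:.1%}")
print(f"rule share of escalations: {rule_share:.1%}")
```

Roughly one in five escalated tickets originated from an AI/ML detection, which frames the true-positive finding below.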


AI/ML produced the following alert types:

  • Unfamiliar client properties.

  • Atypical travel.

  • Rare operations.

  • New behavior.

  • Uncommon process action.

  • Large upload.

 

Reviewing the final resolutions of all escalated tickets, we found that no true positives were based on AI detection.


Hypothesis

Tickets that the SOC closed without escalation were false positives, or were missed due to human error at an equal rate across all incident types.


Conclusion

AI/ML can detect deviations from baselines that would otherwise go unnoticed. These capabilities are highly effective at improving understanding of the environment and enriching incidents. However, anomalies do not imply maliciousness; they must be further correlated and investigated to reduce false positives. Our analysis of AI/ML-driven incidents shows that these technologies still require significant improvement, particularly in detection quality and in the ability to identify real threats among billions of noisy logs. While AI/ML can assist SOC teams in threat detection, it is not yet capable of eliminating rule creation.
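The "anomaly alone is not enough" conclusion can be expressed as a simple escalation gate. This is a hypothetical sketch, not any vendor's logic; the alert names and the one-corroborating-signal threshold are illustrative assumptions:

```python
# Sketch: treat an ML anomaly as an enrichment signal, not a verdict.
from dataclasses import dataclass


@dataclass
class Incident:
    anomaly_alerts: list       # e.g. ["Atypical travel", "Rare operations"]
    correlated_signals: list   # e.g. ["IOC match", "EDR detection", "rule hit"]


def should_escalate(incident: Incident) -> bool:
    # An anomaly alone is not escalated; it must be corroborated by at least
    # one non-anomaly signal before it is worth an analyst's time.
    return bool(incident.anomaly_alerts) and bool(incident.correlated_signals)


print(should_escalate(Incident(["Atypical travel"], [])))             # False
print(should_escalate(Incident(["Atypical travel"], ["IOC match"])))  # True
```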


2.  Threat Hunting

We define “hunting” or “threat hunting” as the asynchronous detection of attacks or compromises using different techniques including:

  • Event anomalies

  • Indicators of compromise (IOCs) or attack indicators

  • Pivoting on a known bad (alert, incident, intelligence)

  • Hypothesis-driven investigation

 

Successful hunting requires detecting anomalies and connecting the dots, and at these tasks AI/ML excels. Although stitching different artifacts or minor alerts into an incident can be defined as "detection", and some vendors do exactly that, we refer here to the more manual, asynchronous process. In essence, in anomaly hunting or pivoting, analysts stitch and correlate manually, which, in our view, a properly trained ML model can do better.

AI and ML assist in every type of threat hunting mentioned, as even when pivoting on or hitting an indicator, understanding prevalence, rarity, or anomalous behavior can be key to a successful hunt. For example, if an IP address related to specific malware was accessed by several assets, understanding whether those assets differ from other assets, or whether the process making the connection is rare, is a task ML is efficient at. Combining indicators that generate false positives with behavioral prevalence and rarity analysis is uniquely achievable using ML.
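The prevalence/rarity check in the example above can be sketched with simple counting. The data, field names, and rarity threshold here are illustrative assumptions, not real telemetry:

```python
# Sketch: given connections to a known-bad IP, flag hosts where the
# connecting process is rare across the fleet.
from collections import Counter

# (host, process) pairs observed fleet-wide, e.g. from EDR telemetry.
fleet_processes = [
    ("host1", "chrome.exe"), ("host2", "chrome.exe"), ("host3", "chrome.exe"),
    ("host4", "chrome.exe"), ("host5", "svchost.exe"), ("host6", "updater.exe"),
]

# Hosts seen connecting to the bad IP, and the process that made the connection.
bad_ip_connections = [("host3", "chrome.exe"), ("host6", "updater.exe")]

# Prevalence: on how many distinct hosts does each process run?
prevalence = Counter(proc for _, proc in set(fleet_processes))

RARE_THRESHOLD = 2  # processes seen on fewer hosts than this are "rare"

rare_hits = [
    (host, proc) for host, proc in bad_ip_connections
    if prevalence[proc] < RARE_THRESHOLD
]
print(rare_hits)  # [('host6', 'updater.exe')] — a rare process, worth hunting
```

A connection made by a fleet-wide process like a browser is likely noise; the same connection made by a process seen on one host is a far stronger hunting lead.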


3. Incident Response


Many clients regard SOC Incident Response (IR) as specific actions like blocking and isolation. However, we define IR as: "The actions taken to reduce the time and scope of an incident and mitigate adversaries’ malicious actions".

Without confidence, i.e., being certain of a malicious actor or compromised asset, actions like blocking or isolation would not be performed. Machine learning, by correlating alerts, can raise certainty to a level where action is taken.

Scoping the breadth of a compromise can also be assisted by AI, by answering key questions such as which assets have vulnerabilities, software, or characteristics similar to the compromised asset.


After the log4j vulnerability was discovered, organizations focused on locating vulnerable assets and prioritizing remediation based on risk. These risks can be quantified using statistical analysis of the kind AI/ML can provide.
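Such risk-based prioritization can be sketched as a weighted score over asset attributes. The weights, attributes, and asset names below are illustrative assumptions, not a vendor formula:

```python
# Sketch: rank vulnerable assets by a simple weighted risk score.
vulnerable_assets = [
    {"name": "web-frontend", "internet_facing": True,  "crown_jewel": False, "cvss": 10.0},
    {"name": "hr-db",        "internet_facing": False, "crown_jewel": True,  "cvss": 10.0},
    {"name": "dev-sandbox",  "internet_facing": False, "crown_jewel": False, "cvss": 10.0},
]


def risk_score(asset: dict) -> float:
    score = asset["cvss"]
    if asset["internet_facing"]:
        score *= 2.0   # reachable by external attackers
    if asset["crown_jewel"]:
        score *= 1.5   # holds business-critical data
    return score


# Remediate highest-risk assets first.
ranked = sorted(vulnerable_assets, key=risk_score, reverse=True)
print([a["name"] for a in ranked])  # ['web-frontend', 'hr-db', 'dev-sandbox']
```

In practice the attributes (exposure, criticality, exploitability) would come from asset inventory and vulnerability data, and ML can refine the weights rather than leaving them hand-tuned.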

Similarly, when a phishing email succeeds, the time spent finding similar emails, based on source, subject, headers, links, and content, to delete them from users' mailboxes can be shortened using AI/ML.


In one incident, we detected an adversary who had taken control of an email server. We (the external incident response team) were not allowed to isolate the machine for two hours, until we understood the business implications and convinced the system and business owners that the actions were necessary to stop the adversary.


We believe it will take time for responders to trust AI with mass isolation and potentially devastating decisions like blocking segments and networks. Although we trust AI with lives when driving cars, the difference is that the log-based world of digital detection is still untrusted; AI will therefore keep assisting and consulting, while critical, impactful decisions remain with humans. For now, we recommend treating AI/ML as an assistant rather than a decision-maker, especially for intrusive actions. We expect trust in AI actions to grow as more and more decisions in different areas of life are assisted by AI.
