How Cybersecurity Pros Can Use AI Skills to Reduce Alert Fatigue

For cybersecurity teams, security alerts continue to multiply. In 2024, researchers published more than 40,000 CVEs warning of hardware and software vulnerabilities, an average of 113 per day over the first half of that year. By comparison, the first half of 2025 is running at a rate of 131 CVEs per day, according to Cisco Talos.

This constant activity within security operations centers (SOCs) often leads to alert fatigue for the cybersecurity and technology professionals charged with triaging alerts, assessing the risk each one poses to the organization, and patching vulnerable hardware and software.

Over time, the volume takes a toll on security teams. A Splunk survey of more than 2,000 security professionals and executives found that 59 percent of respondents reported having too many alerts, and 55 percent reported having to address too many false positives.

At this year’s RSA Conference in San Francisco, industry experts repeatedly turned to the question of alert fatigue and how best to reduce burnout among cyber pros. For many, the answer is a smarter security triage system that cuts noise and surfaces high-priority issues, and many see artificial intelligence filling that role.

This approach includes using AI to filter alerts and enrich them with real-time context rather than generating more noise. In turn, cyber professionals and security analysts must incorporate knowledge of generative AI and other platforms into their skill sets to take advantage of the technology’s capabilities while working toward this vision of a more modern SOC.
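As a rough illustration of that filter-and-enrich idea, the sketch below scores each alert by combining its base severity with environmental context (whether the asset is exposed, whether exploitation has been observed) and surfaces only the highest-priority items. All names, fields, and weights here are hypothetical, not drawn from any specific SOC product.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    cve_id: str
    cvss: float            # base severity score, 0-10
    asset_exposed: bool    # is the affected asset internet-facing?
    exploit_seen: bool     # has exploitation been observed in the wild?
    tags: list = field(default_factory=list)

def enrich_and_score(alert: Alert) -> float:
    """Combine base severity with environmental context into one priority score."""
    score = alert.cvss
    if alert.exploit_seen:
        score += 3.0       # active exploitation outweighs raw severity alone
    if alert.asset_exposed:
        score += 2.0
    return min(score, 10.0)

def triage(alerts, threshold=7.0):
    """Return only high-priority alerts, sorted most urgent first."""
    scored = [(enrich_and_score(a), a) for a in alerts]
    return [a for s, a in sorted(scored, key=lambda p: -p[0]) if s >= threshold]
```

In this toy model, a medium-severity CVE with observed exploitation on an exposed asset can outrank a critical CVE with no exploitation context, which is exactly the kind of reprioritization the experts describe.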

“SOC analysts need to understand how AI models work, their limitations, and how to interpret AI-driven insights,” Casey Ellis, founder of Bugcrowd, recently told Dice. “This isn’t about turning analysts into data scientists but equipping them to work alongside AI effectively – understanding when to trust it, when to question it, and how to leverage it to reduce noise and focus on high-priority threats.”

While AI is likely to automate more mundane, routine cybersecurity tasks, especially within the SOC, experts also note that the technology is expected to free security pros to pursue more important, complex tasks and reduce the fatigue that comes with ever-increasing vulnerability alerts.

A study released by cybersecurity firm Darktrace found that about two in three organizations (64 percent) are planning to add AI-powered features to their security stacks within the next year.

With growing CVE alerts and vulnerabilities to track, it’s becoming more critical for cyber professionals, especially those junior analysts working in SOCs, to cross-skill in AI technologies that will begin to play a bigger role in their day-to-day work, said Nicole Carignan, senior vice president for security and AI strategy, and field CISO, at Darktrace.

“Cybersecurity professionals must become fluent in AI and data, developing a deeper understanding of data classification, governance, and model behavior. This is especially important as we see a shift toward agentic AI systems, which introduce autonomy and complex decision-making capabilities that must be managed with precision and oversight,” Carignan told Dice. “Professionals with cross-skills in data science, machine learning and cybersecurity will be invaluable to organizations looking to safely and securely scale AI in their security operations.”

Training for security organizations and cyber professionals needs to focus on integrating AI into workflows, emphasizing its role in augmenting human decision-making rather than replacing that process, Bugcrowd’s Ellis added.

At the same time, the proliferation of AI-powered vulnerability discovery tools, along with the growth of AI-assisted code generation, means security pros are likely to confront an expanding attack surface. Threat actors, in turn, are becoming more efficient at exploiting it.

“Human incentives are still the primary driver here, and traditional SOC training, understanding threat landscapes, attacker behavior, and incident response, remains critical,” Ellis added. “AI can handle repetitive, low-order tasks like triaging alerts or identifying patterns, but it lacks the creativity and contextual understanding that humans bring to the table. SOC training will evolve to include AI literacy, but foundational skills will remain essential.”

With AI technologies altering the cybersecurity landscape, constant training for SOC analysts and security pros needs to be the new norm for organizations, said Kris Bondi, CEO and co-founder at Mimoto. At the same time, it’s important to understand how these new tools will impact their jobs.

“The age of AI in the SOC may see more creative types of attacks. This isn’t a question of learning a handful of new indicators to look for. AI-created threats can evolve and are automated,” Bondi told Dice. “While training will be helpful, more important is putting better tools in the SOC. Recognition of an incident must be coupled with context. This is where AI can greatly help the SOC. Automated contextual analysis, as well as intent predictions to help SOC teams prioritize what to respond to and how must be a priority.”
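Bondi's point about coupling recognition with context and intent prediction can be sketched in miniature: attach asset criticality and a rough intent label to a raw detection so an analyst sees what happened and why it matters in one view. The asset inventory and the rule-based "intent" labels below are purely illustrative assumptions, standing in for what a real AI model would infer.

```python
# Illustrative inventory: assets whose compromise warrants immediate response.
CRITICAL_ASSETS = {"payroll-db", "domain-controller"}

def classify_intent(event: dict) -> str:
    """Very rough intent guess from observed behavior (illustrative rules only)."""
    if event.get("lateral_movement"):
        return "possible ransomware staging"
    if event.get("bulk_reads"):
        return "possible data exfiltration"
    return "unknown"

def with_context(event: dict) -> dict:
    """Attach asset criticality and an intent prediction to a raw detection."""
    return {
        **event,
        "critical_asset": event.get("asset") in CRITICAL_ASSETS,
        "intent": classify_intent(event),
    }
```

A real system would replace the hand-written rules with learned models, but the output shape, a detection enriched with context and a prioritization hint, is the same.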

While AI is seen as one way to ease alert fatigue, the technology also poses a threat to entry-level cybersecurity employees and recent graduates, who are watching AI tools take over tasks once assigned to junior SOC analysts still learning cybersecurity basics.

Experts, however, see the role of SOC analysts changing rather than disappearing. The evolving role will demand AI knowledge and skills, but wholesale replacement of humans seems unlikely.

“The foundational knowledge of incident response, network defense and adversary behavior remains crucial. But future SOC roles will also demand the ability to assess AI-generated insights, monitor for underlying issues like model drift, and ensure that autonomous actions align with organizational risk tolerance. These are new responsibilities that make the role more complex and strategic, not less relevant,” Carignan noted. “Ultimately, the future of the SOC will be built on human-AI collaboration. The organizations that succeed will be those that invest in continuous education, embrace transparency and explainability in AI design, and empower their security teams with the knowledge needed to lead with – not just adopt – AI-powered defenses.”

As AI technologies mature, they will automate more mundane tasks and free analysts to focus on complex, high-value work such as threat hunting and strategic defense, Ellis added.

“The role of SOC analysts will shift toward managing AI systems, interpreting their outputs, and addressing the nuanced, creative challenges that machines can’t handle. Jobs won’t disappear, they’ll evolve,” Ellis added. “The key is ensuring that SOC professionals are prepared for this shift through ongoing education, training, and tooling.”