AI Skills Increasingly Crucial for Cyber Pros Looking for Jobs and Advancement

Cybersecurity hiring, like tech hiring overall, has slowed over the past six months as businesses reassessed their budgets amid a slowing U.S. economy and the federal government reduced agency funding and positions. Still, the CyberSeek job board currently lists more than 514,000 open cyber positions. 

A look at the open position numbers shows that the cybersecurity profession has reached a major milestone: CyberSeek found that 10 percent of these available security positions explicitly require candidates to have some type of artificial intelligence (AI) skill to be considered. 

“Over the past 12 months, approximately 10% of employers recruiting for cybersecurity positions cited AI as a requirement. For other segments of employers, it may be an implied skill requirement not explicitly mentioned in the job listing,” according to CyberSeek, a joint initiative of NICE (a program of the U.S. National Institute of Standards and Technology focused on advancing cybersecurity education and workforce development), analytics firm Lightcast, and CompTIA. 

While generative AI, agentic AI and related technologies are viewed as ways to automate many manual security processes, the CyberSeek numbers, along with other research data, show that AI is pushing cybersecurity professionals to quickly develop new skill sets to meet the evolving demands of these jobs. 

While there is concern that AI could eliminate entry- and junior-level positions, research firm Gartner found that organizations that want to deploy AI for security, especially in areas such as their security operations center (SOC), still need skilled cyber professionals. These employees, however, must also be trained in AI and related areas so they understand the intricacies of how these virtual chatbots and other platforms work. 

“Use of AI in security operations roles, like any other technology, will require new skills and training,” according to Gartner. “Senior analyst roles and staff with programming/code-first skill sets are more likely to offer value to the SOC organization when considering how to use automation and AI on a day-to-day basis.” 

Cybersecurity experts have noted that while AI is changing the market and the skill sets that security professionals need to pursue an open position or a potential career path, it’s not yet clear how the technology will ultimately affect the job market and hiring.  

Experts also noted that the threat of AI taking jobs does not appear to have materialized. In fact, the technology is creating new positions that are only now emerging. 

“As the CyberSeek numbers point out, while some jobs will be replaced, others will evolve, and new jobs will be created such as AI researcher,” Diana Kelley, CISO at Noma Security, told Dice. “I’ve also seen a number of postings for AI security prompt engineers that weren’t open headcount 12 months ago. This isn’t about AI replacing security experts, but about augmenting them.” 

The integration of AI into core business operations has significant implications for the workforce. This includes security practitioners, as well as legal, compliance, and risk teams, which will now require upskilling in areas such as AI technologies and data governance.  

With research showing that 64 percent of organizations plan to add AI-powered platforms to their security stack in the next year, professionals must start to cross-skill in these technologies, said Nicole Carignan, senior vice president of security and AI strategy and field CISO at security firm Darktrace. 

“As AI becomes further embedded across the cyber landscape, it’s reshaping the structure, skillsets and strategies of all areas of the business, in particular the SOC,” Carignan told Dice. “The tools used by attackers and defenders are evolving rapidly, and AI offers critical support in helping teams keep pace. When implemented responsibly, AI can augment the existing cyber workforce – expanding situational awareness, accelerating mean time to action, and enabling SOC teams to be more efficient, reduce fatigue, and better prioritize cyber investigation workloads.” 

Another cyber expert, Bugcrowd CEO Dave Gerry, noted that while AI might seem like a worthy security investment for an organization with a limited budget, the deployment of these platforms and virtual chatbots still requires human oversight and supervision. This means investing in training and ensuring cyber pros have these skills. 

“AI has the potential to level the playing field for these under-resourced operations, but only if it's deployed safely, securely, and with the right human oversight,” Gerry told Dice. “Without the combination of clear guardrails and experienced staff to monitor the outcomes, there is a risk of automation of failure at scale.” 

Cybersecurity workforce studies show that organizations have shifted away from high-volume hiring and now focus more on skills, especially regarding AI. While some see AI taking roles away from younger workers, others remain skeptical, since modern SOCs rely on technology that must be carefully managed even when many manual processes are automated. 

An example that Noma Security’s Kelley pointed to is that SOC analysts are now more likely to use AI-assisted hunting tools for analysis, which are faster and often more accurate than the mostly manual log review of the past. 

AI security literacy and the ability to use AI to help improve security outcomes will be a core capability for cyber professionals moving forward, Kelley added.  

“As organizations adopt agentic AI systems – software agents that leverage large language models for ‘reasoning’ and then act through connected tools – the security stakes will rise,” Kelley said. “Knowing how to secure their use will be one of the most valuable skill sets in the next decade. A security professional fluent in traditional cyber defense and AI threat modeling will be positioned not just to keep today’s SOC effective, but to safeguard the autonomous platforms of tomorrow.” 

When AI technologies are added to a SOC, they act as a force multiplier, automating tasks such as triage and performing autonomous investigation – allowing security teams to pivot from reactive alert-handling to more strategic initiatives like threat hunting, cyber resilience planning and risk mitigation, Darktrace’s Carignan said. 

“Realizing this benefit requires a workforce that understands how to effectively use, operationalize, govern, and most importantly, trust these technologies. It’s not enough to simply deploy an AI solution – security practitioners must understand how the underlying machine learning techniques function, what their strengths and limitations are, and how to evaluate their outputs,” Carignan added. “Without explainability and trust, AI risks exacerbating alert fatigue rather than solving it.” 

As more AI technologies make their way into SOCs – and cybersecurity defenses as a whole – Carignan sees a future where security remains a human-centric occupation as long as employees have the skills to match the demand. 

“GenAI has many helpful use cases in the SOC. However, if models are not rooted in transparency, explainability, privacy and control, hallucinations or inaccurate outputs may cause erroneous information to be fed into workflows, exacerbating issues of alert fatigue and burnout,” Carignan added. “Ultimately, the future of the SOC, and the cybersecurity industry as a whole, will be built on human-AI collaboration. Organizations that succeed will invest in continuous education, embrace transparency and explainability in AI design, and empower their security teams with the knowledge to lead with – not just adopt – AI-powered defenses.”