Main image of article Govern It. Secure It. Own It. The Cyber Skills Agentic AI Now Demands

During the recent 2026 RSA Conference in San Francisco, conversations among attendees made clear that artificial intelligence is proliferating across the cybersecurity space, from security operations centers to vulnerability management. These developments, in turn, are reshaping the roles and skills cybersecurity professionals need.

A specific driver of this trend is the growing interest in agentic AI over the past year.

A recent survey published by consulting firm KPMG found growing interest in agentic AI tools and technologies that can autonomously perform various functions across an organization’s IT infrastructure, including making decisions about actions or tasks without human intervention.

In the KPMG report, about half of the 2,100 survey respondents reported they are deploying these autonomous agents throughout their organization, with 30 percent saying they are still testing these technologies.

While these tests and implementations demonstrate AI's transformative potential, what remains undeniable is the need for guardrails for agentic AI in security. In the KPMG survey, 43 percent of respondents noted that they are embedding security controls into autonomous agents, “along with clear procedures for monitoring and evaluation.”

These developments show that agentic AI deployments offer both rewards and risks that must be carefully managed. The increasing interest in these tools also means a broad range of new responsibilities for cybersecurity leaders and their teams, including developing guardrails to reduce those risks.

In the current landscape, organizations must rely on humans to monitor AI advances, which means cybersecurity professionals need to continue developing their skill sets to demonstrate their knowledge of how these technologies function and how they fit into an organization’s overall security strategy.

“Human oversight remains vital when using AI in offensive cybersecurity. While AI is highly efficient in automating and scaling tasks, human expertise is necessary to interpret complex results, make critical decisions, and apply context-specific reasoning,” Amit Zimerman, co-founder and chief product officer at Oasis Security, recently told Dice. “Humans are essential for ensuring that AI-driven tools are used responsibly and for validating the results of AI processes, especially when it comes to the nuances of certain vulnerabilities or threat landscapes.”

Coming to Grips With Agentic AI

As more and more organizations experiment with AI tools and attempt to embed these tools into daily workflows, experts note that cybersecurity professionals are increasingly called upon to assess the risks associated with these technologies.

One area of increasing interest is governance, risk, and compliance (GRC), where numerous job opportunities remain open as increasing government regulations drive companies to develop rules and standards for AI. This is only expected to increase as interest in and deployment of agentic AI grow.

“As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies,” Nicole Carignan, senior vice president for security and AI strategy and field CISO at Darktrace, told Dice. “However, there is no one-size-fits-all approach. Each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements. That’s why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions.”

While AI platforms are replacing some entry-level positions, the deployment of agentic AI tools requires more skilled cyber roles, especially as many organizations still want the work of these autonomous tools checked by skilled workers, including in cybersecurity, said Ram Varadarajan, CEO of security firm Acalvio.

“Cyber roles are shifting from manual, tool-driven operations to supervising and orchestrating autonomous security agents,” Varadarajan told Dice. “Routine tasks like alert triage and rule-writing are going to be increasingly automated, compressing entry-level roles. At the same time, higher-level positions will expand around governance, exception handling, and accountability for AI-driven decisions.”

Developing Skills Around Agentic AI Technologies

For many cybersecurity professionals already working in the field, it makes sense to begin developing AI skills now, especially hands-on knowledge of how these tools work.

“The next wave of cyber talent needs to design, secure, and govern AI agents as naturally as they run cloud security or investigations today,” Diana Kelley, CISO at Noma Security, told Dice. “The most valuable skills right now are hands-on. Understand how AI actually behaves: context windows, non-determinism, prompt injection, agentic identity, and shifting trust boundaries. Learn how to threat model agentic AI systems.”

There are several examples of using current AI technologies to develop the skills needed for this market. In one scenario that Kelley details, a cybersecurity professional can test a SOC agent for prompt injection by feeding malicious inputs and identifying where guardrails fail. From there, they can validate data pipelines to prevent poisoning and use AI observability to review decision logs and traces to understand why an agent blocked a user or escalated an alert.

From there, cyber pros can implement safe deployment patterns like human-in-the-loop approvals for high-risk actions and rollback triggers when agents behave unexpectedly.

“Equally critical is operational control. Know how to set clear boundaries on what agents are allowed to do, require human approval for high-risk actions, and build rollback and containment when things go wrong. This is where most real-world failures will happen,” Kelley added.

At the same time, cyber professionals need to understand that adversarial agentic systems reason in real time, at scale. A key implication is that cyber defenses will need to do the same. Cyber pros must then gain familiarity with game theory and AI-driven tripwires and deception, Acalvio’s Varadarajan said.

“Cyber professionals will have to continuously track how agentic systems are evolving, including their capabilities, limitations, and failure modes,” Varadarajan added. “This includes understanding new attack surfaces such as prompt injection, model drift, model context protocol, and agent-to-agent exploitation. Staying current now requires hands-on familiarity with AI systems, not just traditional security tools.”

Identity Remains Critical for Agentic AI Security

One area that is getting more attention as organizations deploy agentic AI tools is identity. In recent months, the number of non-human identities – also known as NHIs – has increased. These digital identities can help organizations or be used against them by allowing attackers to access weak points in the network using fake credentials.

The increasing number of AI-based agents and the rapidly evolving threat landscape are prompting a shift toward identity as the critical element of enterprise security. Cybersecurity professionals need identity security skills that can help them govern and secure who – or what – has access to enterprise networks and the data that’s inside, said Mark McClain, CEO at security firm SailPoint.

“The modern enterprise requires a new control plane, driven by unifying identity, data, and security. The combined power of these contexts enables real-time decisions to reduce risk without impacting the business,” McClain told Dice. “These decisions can be driven by the nature of the identity, the context of the apps and data it can access, the behavior around how it is using these apps and data and the security signals and risk warnings that may surround it. To combat this new era of threats, driven by the force multiplier of AI, we need to embrace a new approach of adaptive identity.”