Over the past three years, artificial intelligence has dominated conversations across businesses as IT and security organizations come to grips with technologies that have the potential to transform enterprises. At the same time, these tools can empower attackers to launch sophisticated attacks against vulnerable targets.
The duality of AI — the technology’s ability to automate standard IT processes for better outcomes or to enable more advanced attacks — is more visible than ever as chatbots and autonomous agents proliferate.
A recent study published by security firm Kaseya showed that 2025 marked an “inflection point” for AI and cybersecurity, particularly in phishing attacks supercharged by AI. The survey found that about 83 percent of phishing emails contain some type of AI-generated content, while 40 percent of business email compromise (BEC) techniques use generative AI.
These cybersecurity issues also affect the AI companies themselves. In March, Anthropic inadvertently released internal Claude Code source material as part of an npm package that included a large source map file. Within a day, attackers created fake GitHub repositories to distribute malware disguised as the leaked code, according to security firm Trend Micro.
The need to develop and deploy AI safely, along with better insight into how attackers use these technologies, is pushing cybersecurity professionals to the forefront of this evolving landscape.
At the recent RSA Conference in San Francisco, Executive Chairman Hugh Thompson argued that cybersecurity professionals are increasingly responsible for the safe deployment of AI tools and platforms — including governance — as well as ensuring that lines of business understand the risks associated with AI.
“I would argue AI just made our jobs way bigger in cybersecurity,” Thompson noted.
Other cyber experts readily agreed.
“AI can’t operate sustainably without strong security safeguards. That reality is raising the stakes on cybersecurity work and changing the shape of the job,” Diana Kelley, CISO at Noma Security, told Dice. “Traditionally, security teams focused on protecting systems and data. Now we are helping to govern AI systems and agents that make recommendations and decisions — and in some cases take action on behalf of the business — while enabling the business to adopt AI quickly and safely.”
While AI may threaten some lower- and entry-level jobs, many cybersecurity professionals can find new career opportunities as AI deployments increase. To do so, they must keep their skills current and understand how these changes affect the entire organization, including both internal and external threats.
Human Intervention in an AI World
Discussion at the RSA Conference showed that even as AI advances — especially into autonomous, agentic tools — these technologies will require significant human oversight to ensure chatbots function properly and risks are addressed.
This opens doors for cybersecurity professionals who understand AI.
“Organizations don’t need a handful of AI security experts. They need enterprise security teams that can ask the right questions and deploy the right controls to ensure that when AI shows up, it can be adopted quickly without introducing unnecessary risk,” Kelley said.
As AI becomes more integrated into daily workflows, organizations will need cybersecurity professionals who understand the risks these tools pose, how attackers might use them, and how best to secure data, Kelley noted.
“Going forward, AI will be embedded in all aspects of our businesses, and every security professional needs a working understanding of AI and agent risk. That includes how models are trained, where data exposure can occur, how outputs can be manipulated, agentic blast radius, and how AI integrates into business workflows,” Kelley added. “In the real world, those risks show up inside existing domains like productivity tools, data loss prevention, access control, application security, cloud security, and risk management.”
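Kelley’s point about existing domains can be made concrete with a small sketch. Assuming a hypothetical agent pipeline — the action names, patterns, and policy below are invented for illustration, not any vendor’s real API — a pre-release check might combine an access-control-style allowlist with a DLP-style output scan:

```python
import re

# Illustrative only: vetting an AI agent's proposed step by combining
# an access-control-style action allowlist with DLP-style output
# scanning. All names and patterns are assumptions for this sketch.

# DLP-style patterns a simple output filter might scan for.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk-|AKIA)[A-Za-z0-9_-]{16,}"),
}

# Access-control-style allowlist that bounds the agent's blast radius.
ALLOWED_ACTIONS = {"search_docs", "summarize", "draft_email"}

def vet_agent_step(action: str, output_text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons) for a proposed agent action and its output."""
    reasons = []
    if action not in ALLOWED_ACTIONS:
        reasons.append(f"action '{action}' is outside the allowlist")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(output_text):
            reasons.append(f"possible {label} found in output")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, why = vet_agent_step("delete_records", "Customer SSN: 123-45-6789")
    print(ok, why)  # False, with both the action and the SSN flagged
```

Nothing in the sketch is novel security engineering, and that is the point: the allowlist is plain access control and the scan is plain data loss prevention, applied at the agent boundary to limit its blast radius.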
Professional cybersecurity organizations that offer training and education are responding by integrating more AI concepts into certification programs.
Recently, ISC2 announced its Exam Guidance for Artificial Intelligence, designed to address the growing need for cyber professionals to secure AI systems and manage AI-related risks as adoption increases. The guidance provides insight into how AI security concepts are incorporated into ISC2 certification exam outlines.
The ISC2 guidance is architecturally significant because it treats AI security as a cross-domain competency rather than a siloed specialty. It also demonstrates that adversarial AI now touches everything from network telemetry to the inference path, said Acalvio CEO Ram Varadarajan.
“This shift is critical as the threat landscape bifurcates. We are now facing both machine-speed, autonomous agentic attacks — like the Claude Code-based espionage campaigns — and the subtle risk of emergent misalignment, where agents pursue the wrong objectives without ever triggering a policy alert,” Varadarajan told Dice.
AI Powers New Threats That Cyber Pros Must Anticipate
While securing AI and reducing risk is a significant undertaking, cybersecurity professionals must also consider how adversaries use these tools to increase attack speed and make social engineering schemes more realistic.
In addition to the Kaseya data, Mika Aalto, co-founder and CEO at Hoxhunt, pointed to research showing a fourteen-fold increase in AI-generated phishing attacks between the end of 2025 and the start of 2026.
These figures demonstrate how attackers are capitalizing on AI tools and underscore the need for a well-trained cybersecurity workforce.
“Organizations must adopt AI safely in their technology stack and use their own AI capabilities to understand user risk patterns and deploy personalized training at scale,” Aalto told Dice. “ISC2’s new AI Exam Guidance signals that securing AI is becoming a foundational skill across the profession — not an optional specialization — and reflects the reality security teams face today.”
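What “understanding user risk patterns” could look like in practice can be approximated with a toy example. The event names and scoring weights below are assumptions made for the illustration, not Hoxhunt’s methodology:

```python
from collections import defaultdict

# Toy illustration: aggregate per-user phishing-simulation outcomes into
# a risk score used to prioritize training. Weights are invented here.
EVENTS = [
    ("alice", "reported"), ("alice", "clicked"),
    ("bob", "ignored"), ("bob", "clicked"), ("bob", "clicked"),
]

WEIGHTS = {"clicked": 3, "ignored": 1, "reported": -2}

def user_risk_scores(events):
    """Sum outcome weights per user; higher scores mean higher risk."""
    scores = defaultdict(int)
    for user, outcome in events:
        scores[user] += WEIGHTS[outcome]
    return dict(scores)

if __name__ == "__main__":
    # Users at the top of the ranking would get targeted training first.
    print(sorted(user_risk_scores(EVENTS).items(), key=lambda kv: -kv[1]))
```

A real deployment would feed far richer telemetry into a model rather than fixed weights, but even this crude ranking shows how simulation results can decide who receives personalized training first.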
The rise in AI-powered phishing attacks also underscores that attackers are still primarily targeting credentials to gain broader network access. This makes identity security an increasingly critical focus for cybersecurity professionals, said Rex Booth, CISO at SailPoint.
“The true danger of many phishing schemes lies in their ability to grant attackers access to credentials, enabling them to masquerade as trusted insiders,” Booth told Dice. “With AI in play, these campaigns are becoming increasingly sophisticated and harder to detect. This makes it imperative for users to adopt robust identity security best practices, including frequent password changes and enabling multi-factor authentication, and for organizations to prioritize identity as the new control plane.”
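Booth’s idea of identity as the control plane reduces to a simple policy shape: credentials alone are never enough. A minimal sketch, with hypothetical signal names invented for the example rather than drawn from any vendor’s product:

```python
from dataclasses import dataclass

# A minimal sketch of identity-first access decisions. Signal names and
# rules are assumptions for illustration only.

@dataclass
class LoginAttempt:
    user: str
    mfa_passed: bool
    new_device: bool
    impossible_travel: bool  # e.g., two logins from distant geographies

def evaluate_login(attempt: LoginAttempt) -> str:
    """Return 'allow', 'step-up', or 'deny' based on identity signals."""
    if not attempt.mfa_passed:
        # A phished password on its own should never grant access.
        return "deny"
    if attempt.new_device or attempt.impossible_travel:
        # Valid credentials plus anomalous context: re-verify first.
        return "step-up"
    return "allow"

if __name__ == "__main__":
    print(evaluate_login(LoginAttempt("alice", True, True, False)))  # step-up
```

The shape matters more than the specific rules: access flows from identity signals evaluated at every request, which is what blunts a phishing campaign that harvests only a password.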
While AI tools are often positioned as a way for enterprises to cut costs and improve efficiency, cybercriminals are leveraging these same capabilities, Kelley said — reinforcing the dual-use nature of the technology.
“We’ve been waiting for this offensive disruption from AI for a while now. Attacks at scale and at superhuman speed are the most obvious first step. Fortunately, many campaigns still require human intervention to execute,” Booth added. “The scarier scenario is when adversary AI starts running rampant through your enterprise without the need for action by the victim.”