Artificial intelligence is reshaping the cybersecurity landscape. While many conversations focus on whether these technologies will eliminate entry-level cyber jobs, others see chatbots and AI platforms opening up fresh approaches to security that require new skills and a willingness to learn.
Experts point out that cyber and tech professionals who study generative and agentic AI, and who understand how these technologies can be integrated into an organization’s overall infrastructure and security strategy, can find job opportunities even as budgets and spending remain uncertain heading into the new year.
The question is: Where should cyber pros begin to learn how to incorporate AI into their organization’s security workflows? The National Institute of Standards and Technology (NIST) is offering guidance through the recent draft publication of its Cybersecurity Framework Profile for Artificial Intelligence.
Released at the end of December 2025, the Cybersecurity Framework Profile for Artificial Intelligence (NISTIR 8596) offers cyber and tech professionals guidelines and tips for incorporating the NIST Cybersecurity Framework (CSF 2.0) into their plans to securely adopt AI technologies within their organizations.
The goal of this NIST profile is to help cyber professionals and their organizations think strategically about how to adopt AI while addressing emerging risks that stem from rapid technology advances.
While only a draft, the Cybersecurity Framework Profile for Artificial Intelligence breaks AI cybersecurity down into three focus areas:
- Securing AI systems: identifying cybersecurity challenges when integrating AI into organizational ecosystems and infrastructure
- Conducting AI-enabled cyber defense: identifying opportunities to use AI to enhance cybersecurity, and understanding challenges when leveraging AI to support defensive operations
- Thwarting AI-enabled cyberattacks: building resilience to protect against new AI-enabled threats
“The three focus areas reflect the fact that AI is entering organizations’ awareness in different ways,” Barbara Cuthill, one of the profile’s authors, noted in a statement. “But ultimately, every organization will have to deal with all three.”
The profile itself remains a work in progress, as NIST, part of the U.S. Department of Commerce, continues to gather additional comments and will release a further draft later this year. Still, experts note that the document offers CISOs and cybersecurity and technology professionals a roadmap to examine AI during a time of change and uncertainty, helping organizations incorporate the technology and minimize security risks.
In many ways, the document demonstrates that AI adoption must include buy-in throughout the entire organization, said Morey Haber, chief security advisor at BeyondTrust.
“From an executive perspective, the Cybersecurity Framework Profile for Artificial Intelligence clarifies that AI security is not solved by tools alone. It requires new personnel, organizational maturity, and operating models that blend cybersecurity, data science, legal oversight, and engineering accountability,” Haber told Dice.
“AI risk lives at the intersection of technology, autonomy, and trust, which means traditional siloed teams are structurally insufficient,” he added. “AI represents a new form of technology middleware, and traditional silos need to work together, more than ever before, in order for secure-by-design AI computing to actually be achieved.”
For cybersecurity professionals seeking to understand how AI is affecting their current jobs and potential career prospects, the NIST profile provides a guide for assessing risk and ensuring the secure use of these technologies throughout an organization.
How Cyber Pros Should Understand AI and Cybersecurity
Over several years, NIST has released multiple documents and papers concerning AI and cybersecurity. A 2024 paper, for example, detailed security and privacy issues organizations face when deploying AI and machine learning, including several scenarios that make these technologies vulnerable to attack.
The long-standing NIST Cybersecurity Framework also remains an industry standard.
With the release of the new AI profile, NIST is not trying to build a whole new security framework. Instead, it builds on the Cybersecurity Framework to incorporate AI-specific considerations. This matters because organizations need consistency and clarity as they adopt AI technologies and face AI-enabled threats, said Margaret Cunningham, vice president of security and AI strategy at Darktrace.
This is important for cyber professionals because AI security cuts across issues such as governance, risk, and compliance (GRC).
“Today’s cybersecurity skills conversation often focuses on AI development, but the critical need is for professionals who can integrate AI risk into governance, compliance, and operational security,” Cunningham told Dice. “That means understanding how AI changes attack surfaces, how to secure models and data pipelines, and how to validate AI-driven decisions. It also means collaboration, including expert communication and conflict resolution skills, which is critical for anyone working in domains such as GRC, AI/ML, security engineering, and data science teams.”
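What “securing models and data pipelines” looks like in practice varies widely, but one common building block is integrity checking of training or fine-tuning data before it ever reaches a model, so that tampering or poisoning can be flagged early. The sketch below is purely illustrative and is not drawn from the NIST profile; the manifest format and file names are assumptions.

```python
import hashlib
import json
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_training_data(manifest_path: Path, data_dir: Path) -> list[str]:
    """Compare each data file against hashes recorded in a previously approved manifest.

    Returns a list of human-readable problems; an empty list means the pipeline
    inputs match what was signed off. (Hypothetical manifest: {"file": "sha256", ...})
    """
    manifest = json.loads(manifest_path.read_text())
    problems = []
    for name, expected in manifest.items():
        candidate = data_dir / name
        if not candidate.exists():
            problems.append(f"missing file: {name}")
        elif file_sha256(candidate) != expected:
            problems.append(f"hash mismatch (possible tampering): {name}")
    return problems


if __name__ == "__main__":
    issues = verify_training_data(Path("manifest.json"), Path("training_data"))
    if issues:
        raise SystemExit("Data pipeline check failed:\n" + "\n".join(issues))
    print("All training data files match the approved manifest.")
```

A check like this is only one layer, of course; the point is that pipeline integrity becomes an auditable, repeatable control rather than an assumption.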
Diana Kelley, CISO at Noma Security, noted that by breaking down AI into three areas, the profile helps cyber pros better integrate these technologies across an organization. The secure part focuses on risks cyber pros face when integrating AI. The defend part addresses the responsible use of AI to strengthen cybersecurity operations. Finally, the thwart section centers on preparing for and responding to attacks that leverage AI.
“All personnel need training that goes beyond traditional security awareness to understand the basic use and limitations of AI systems,” Kelley told Dice. “This includes recognizing that AI models can produce confident but incorrect answers, reflect bias, or behave unpredictably. Employees also need to understand that AI-driven systems and agents can take actions or generate outputs that are inaccurate, unsafe, or even malicious in ways that differ from traditional software.”
As AI develops, cyber pros seeking more advanced roles will require additional skills, such as knowledge of adversarial machine learning concepts, including prompt injection, data poisoning, model drift, agentic blast radius, and AI forensics.
“These skills support both secure and defend objectives,” Kelley added. “Teams must be able to analyze and validate AI-driven security actions, such as automated blocking or prioritization, and apply human judgment before final risk decisions are made. They also need to know how to examine AI-specific artifacts, including model logs, decision traces, agentic connections, actions, and flows, as well as data provenance, to understand what happened during an incident and why.”
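As a purely illustrative sketch of the kind of human-in-the-loop validation and decision tracing Kelley describes (none of this code comes from the NIST profile, and the log format, field names, and enforcement hook are assumptions), an analyst gate in front of an AI-recommended block might look like this:

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AIRecommendation:
    """A simplified AI-driven security action, e.g. 'block this IP'."""
    action: str
    target: str
    confidence: float
    rationale: str


def record_decision_trace(rec: AIRecommendation, approved: bool, reviewer: str) -> None:
    """Append the recommendation and the human decision to an audit log (JSON lines)."""
    entry = {"timestamp": time.time(), "reviewer": reviewer, "approved": approved, **asdict(rec)}
    with open("decision_trace.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")


def review_and_execute(rec: AIRecommendation, reviewer: str) -> None:
    """Require explicit human approval before a high-impact AI action is executed."""
    print(f"AI recommends: {rec.action} {rec.target} (confidence {rec.confidence:.0%})")
    print(f"Rationale: {rec.rationale}")
    approved = input("Approve this action? [y/N] ").strip().lower() == "y"
    record_decision_trace(rec, approved, reviewer)
    if approved:
        print(f"Executing: {rec.action} {rec.target}")  # call the real enforcement API here
    else:
        print("Action rejected; no change made.")


if __name__ == "__main__":
    review_and_execute(
        AIRecommendation("block-ip", "203.0.113.42", 0.87,
                         "Beaconing pattern matched known command-and-control traffic"),
        reviewer="analyst-on-duty",
    )
```

The decision-trace file in this sketch is the kind of AI-specific artifact an investigator could later examine to reconstruct what the system recommended, who approved it, and why.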
At the same time, defensive personnel can support the thwart focus area by recognizing and responding to AI-enabled threats. This includes misuse of AI agents, AI-enhanced phishing, social engineering, and fraud, Kelley noted. Here, attacks often scale faster and appear more convincing than traditional techniques, increasing operational and business risk and requiring updated detection and response capabilities.
BeyondTrust’s Haber added that the pragmatism of the AI profile is what makes it a must-read for cyber professionals.
“Unlike guidance that either over-prescribes controls or remains purely conceptual, NIST deliberately aligns AI risk to existing cybersecurity controls through CSF 2.0. This allows leaders to integrate AI security into enterprise risk management without completely rebooting governance structures,” Haber added. “It is an evolution, not a revolution, for GRC teams, helping prepare organizations not just for today’s threats, but for the operational and regulatory scrutiny that will define AI at scale in the future.”
Other AI Publications to Consider
While the NIST AI profile offers some of the latest thinking on how to approach AI, there are other publications, frameworks, and papers to consider.
Agnidipta Sarkar, chief evangelist at security firm ColorTokens, points to ISO/IEC 42001, which he calls essential for structured governance of AI-enabled cybersecurity, cyber defense, and cyber resilience. He also highlights NIST SP 800-207 for adopting zero trust controls, and notes that cyber pros implementing and operating controls should consider the highly technical, adversarial-focused guidance in the OWASP Top 10 and MITRE ATLAS.
Operational technology practitioners will need to understand the latest AI guidance in ISA/IEC 62443, while cloud experts will need to understand the Cloud Security Alliance AI Controls Matrix.
“But the source at the top should be the NIST Cyber AI profile, with everything dovetailing into control operations, as determined for cybersecurity, defense, and resilience, if organizations are to be breach-ready in 2026 and beyond,” Sarkar told Dice.