AI Governance: What Tech and Security Pros Need to Know

Artificial intelligence (AI)—whether generative AI, agentic AI or simply referred to as automation—is having a broad effect on lines of business, including cybersecurity. A Cloud Security Alliance study published in April found that nine in 10 organizations are currently adopting or planning to adopt generative AI for security purposes.

The same CSA study, based on 2,486 responses from IT and security professionals, noted that 63 percent of cybersecurity professionals believe in AI’s potential to enhance security, particularly threat detection and response capabilities.

While enterprises and even small businesses are experimenting with AI adoption to evaluate how the technology benefits their bottom lines, there is also growing concern about how these tools are used. In some cases, organizations are already confronting “shadow AI” usage among employees, while cybercriminals and other threat actors are deploying the same tools for their own ends.

These developments are driving greater interest in AI governance, which seeks to address issues in the artificial intelligence field such as algorithmic bias, breaches and ethical concerns. At the same time, government agencies, including regulators in the U.S. and the EU, are weighing stricter regulation of the technology, according to a report published by Markets & Markets.

While the AI governance market was worth about $890 million in 2024, that figure is expected to climb to roughly $5.8 billion by 2029, an annual growth rate of approximately 45 percent, the study notes. If organizations are not grappling with these issues now, their security and privacy teams will soon find themselves tasked with answering these difficult questions, said Gal Ringel, co-founder and CEO of Mine, a global data privacy management firm.

“AI governance is moving from abstract theory to operational urgency, and the RSA Conference [in April] made that crystal clear,” Ringel recently told Dice. “For security teams, the question isn’t whether AI introduces risk. It’s how fast we can adapt our frameworks to manage it without slowing innovation.”

With an AI governance market poised for significant growth over the next several years, cybersecurity experts noted that organizations will need security pros who understand AI platforms and who are versed in privacy, regulation and cybersecurity issues related to this new field.

“While new AI models emerge, the governance of these AI models is in its infancy,” said Satish Swargam, principal security consultant at Black Duck. “Regulations are not clearly defined to address safety, security and data privacy concerns on the use of AI models, and AI anomalies or outlier detections should include detection of these violations.”

Ringel and others noted that at the recent RSA Conference in San Francisco, the chatter from the show floor showed tech and security pros attempting to come to terms with building ethical, compliant AI systems that do not compromise user trust or introduce organizational risk. In some cases, this feels like building the plane while flying it.

“The strongest teams today are treating AI governance like they would any other critical security domain: through visibility, automation and repeatable processes,” Ringel added. “That means knowing which models you’re using, where data flows, who has access and how decisions are being made. It’s not glamorous work, but it’s foundational.”
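Ringel’s visibility checklist lends itself to a concrete starting point. Below is a minimal sketch, in Python, of what a machine-readable model inventory might look like; the field names, example entries and vendor-review check are illustrative assumptions, not part of any framework or product mentioned here.

```python
from dataclasses import dataclass

@dataclass
class AIModelRecord:
    """One entry in a hypothetical AI model inventory (illustrative fields only)."""
    name: str                     # model or service identifier
    vendor: str                   # internal team or third-party provider
    data_sources: list[str]       # where input data flows from
    data_destinations: list[str]  # where outputs and logs end up
    authorized_roles: list[str]   # who has access
    decision_use: str             # how the model's output is used

def find_unreviewed(inventory: list[AIModelRecord],
                    approved_vendors: set[str]) -> list[AIModelRecord]:
    """A simple, repeatable check: flag entries whose vendor has not been reviewed."""
    return [m for m in inventory if m.vendor not in approved_vendors]

# Made-up entries for illustration only.
inventory = [
    AIModelRecord("ticket-triage-llm", "ExampleVendor", ["support tickets"],
                  ["CRM"], ["support-engineering"], "routing suggestions"),
    AIModelRecord("internal-summarizer", "in-house", ["wiki pages"],
                  ["internal search"], ["all employees"], "document summaries"),
]
print(find_unreviewed(inventory, approved_vendors={"in-house"}))
```

Even a simple registry like this turns “which models are we using, and who has access?” from an ad hoc question into a repeatable process.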

At the same time, security pros are beginning to discuss frameworks, playbooks and other ways to build up AI governance. Privacy foundations such as Records of Processing Activities (RoPAs) and Data Protection Impact Assessments (DPIAs) are increasingly useful guides.

“It’s proof that security and privacy aren’t separate silos, they’re compounding forces. Governance doesn’t need to be perfect,” Ringel said. “However, it does need to be proactive, transparent and embedded into the core of how AI is built and deployed.”
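Those privacy foundations can also be translated into lightweight tooling. The sketch below shows one way a DPIA-style screening might be adapted to an AI system; the questions, weights and risk tiers are invented for illustration and are not the formal GDPR artifacts referenced above.

```python
# A hypothetical, simplified DPIA-style checklist for an AI system.
# Questions and weights are illustrative, not drawn from any regulation.
AI_DPIA_QUESTIONS = {
    "processes_personal_data": 3,    # does the system ingest personal data?
    "automated_decision_making": 3,  # do outputs affect individuals directly?
    "third_party_model": 2,          # is the model hosted by an external vendor?
    "training_on_user_data": 2,      # is user data reused for training or fine-tuning?
    "human_review_absent": 1,        # is there no human in the loop?
}

def assess(answers: dict) -> str:
    """Return a coarse risk tier from yes/no answers to the checklist above."""
    score = sum(weight for key, weight in AI_DPIA_QUESTIONS.items() if answers.get(key))
    if score >= 7:
        return "high risk: full DPIA and legal review before deployment"
    if score >= 4:
        return "medium risk: document mitigations and data flows"
    return "low risk: record in the AI inventory and re-check on major changes"

print(assess({"processes_personal_data": True, "third_party_model": True}))
```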

Black Duck’s Swargam also wants to see organizations adopt an AI Bill of Materials (AIBOM) and other safeguards to track the code used to create these AI tools.

“To ensure uncompromised trust in software for the increasingly regulated, AI-powered world, AI governance has to mature with the inclusion of AIBOM, detection of copyright violations on the use of AI code suggestions, redaction of sensitive data when AI is used for analytics only and better techniques to test the relevance and security of AI models,” Swargam told Dice.
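An AIBOM can be pictured as structured metadata that travels with each model release. The record below is a rough sketch with made-up field names and values; a real deployment would more likely align with an established SBOM or ML-BOM format than with this ad hoc layout.

```python
import json

# A hypothetical AI Bill of Materials entry. Field names are illustrative.
aibom_entry = {
    "model": {
        "name": "example-classifier",
        "version": "1.4.0",
        "weights_sha256": "<digest of the released weights>",
    },
    "training_data": [
        {"dataset": "internal-support-tickets-2024", "license": "proprietary",
         "contains_personal_data": True},
    ],
    "code_dependencies": [
        {"package": "example-ml-lib", "version": "2.1.3", "license": "Apache-2.0"},
    ],
    "generated_code_provenance": "AI-assisted suggestions reviewed for license conflicts",
    "evaluations": ["bias audit 2025-02", "red-team prompt-injection test 2025-03"],
}

print(json.dumps(aibom_entry, indent=2))
```

Tracking weights, training data, dependencies and evaluations in one place is what makes the copyright, redaction and model-testing checks Swargam describes auditable.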

When Nicole Carignan, senior vice president, security and AI strategy, and field CISO at Darktrace, thinks about AI governance, she sees a number of parallels to the early days of cloud computing. Back then, organizations were eager to adopt the newer technology, but privacy, security, regulatory and other issues soon sprang up, complicating the picture.

In her view, organizations should adopt a shared security model for AI governance and start considering these issues now rather than waiting for them to manifest later.

“For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption,” Carignan told Dice. “When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries and applying security controls that account for external dependencies.”
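Guardrails and access boundaries of the kind Carignan describes can be enforced at the point where requests leave the organization. The sketch below, using invented model names, roles and policies, shows one way a gateway check might gate calls to third-party AI services against an approved list and per-role boundaries.

```python
# Hypothetical outbound-request check for third-party AI tools.
# Model names, roles and policies below are invented for illustration.
APPROVED_MODELS = {"vendor-a/chat-large", "vendor-b/embedding-small"}
ROLE_BOUNDARIES = {
    "support-engineering": {"vendor-a/chat-large"},
    "data-science": {"vendor-a/chat-large", "vendor-b/embedding-small"},
}

def check_request(model: str, requester_role: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed call to an external AI service."""
    if model not in APPROVED_MODELS:
        return False, f"{model} is not on the approved-model list"
    if model not in ROLE_BOUNDARIES.get(requester_role, set()):
        return False, f"role '{requester_role}' is outside the access boundary for {model}"
    return True, "allowed"

print(check_request("vendor-c/unvetted-model", "support-engineering"))
print(check_request("vendor-a/chat-large", "support-engineering"))
```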

Effective AI governance requires a skilled workforce and deep cross-functional collaboration. That collaboration can include security and privacy teams as well as legal, HR, compliance, data and product leaders.

Cyber professionals also need industry collaboration. By sharing successful governance models and insights, they can help create the standards for securing AI and ensuring compliance with a host of regulatory agencies and lawmakers. But this requires many security pros to begin upskilling now.

“Security practitioners—and teams in legal, compliance, and risk—must upskill in AI technologies and data governance. Understanding system architectures, communication pathways, and agent behaviors will be essential to managing risk,” Carignan added. “As these systems evolve, so must governance strategies. Static policies won’t be enough. AI governance must be dynamic, real-time and embedded from the start. Organizations that treat governance and security as strategic enablers will be best positioned to harness the full potential of AI safely and responsibly.”

For organizations adopting AI, security teams must understand that, without carefully constructed guardrails, AI can discover and use personal information without regard to its impact on privacy or personal rights. Similarly, AI used in advanced cybersecurity technologies can itself be exploited, said Kris Bondi, CEO and co-founder of Mimoto.

To help mitigate these issues, security teams must develop new skills and borrow lessons from areas such as privacy to understand how best to secure AI technology and ensure proper governance is followed.

“To prepare to address today’s AI concerns and what is to come, organizations should add privacy professionals to their security planning. There is a misconception that protecting privacy limits security teams’ ability to detect malicious activity. This is a fallacy,” Bondi told Dice. “Security efforts that rely on surveillance are creating their own version of organizational risk. If privacy foundations aren’t well-established and are unknown to users, an AI tool could be collecting and storing personal information. This quickly becomes a privacy and security issue with the AI tech turning into a breach target.”
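One practical control implied by Bondi’s warning is stripping obvious identifiers before data ever reaches an AI or analytics tool. The redaction pass below is a rough sketch that covers only common email and U.S. phone formats; it is illustrative, not a complete or recommended solution.

```python
import re

# Minimal, illustrative redaction pass. Real deployments need far broader
# coverage (names, addresses, IDs) and should not rely on regexes alone.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the text
    is sent to an external AI or analytics service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Customer jane.doe@example.com called from 415-555-0137 about her bill."))
```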