AI Literacy and Cybersecurity

While rushing to implement artificial intelligence, businesses are confronting significant security concerns. A study by cybersecurity vendor AvePoint found that 75 percent of surveyed organizations reported at least one data security incident stemming from the oversharing of sensitive information during an AI rollout.

The study, based on responses from 775 global business leaders, also found that two reasons why AI implementations have not become more widespread are inaccurate data output (68.7 percent) and data security concerns (68.5 percent).

One way to counter these security issues is through the process of AI literacy – the “ability to comprehend various aspects of artificial intelligence – including its capabilities, limitations and ethical considerations – and to use it for practical purposes,” according to one published definition. The AvePoint research noted that nearly all organizations surveyed (99.5 percent) are now using a range of interventions to strengthen AI literacy among employees.

Organizations, in turn, need cybersecurity leaders to help develop these AI literacy skills, both within their security teams and among the wider workforce, said Dana Simberkoff, chief risk, privacy and information security officer at AvePoint.

“Security leaders must perform due diligence to educate employees on how to use these tools safely, how AI uses their data, and which tools are safe to share company information with,” Simberkoff told Dice. “Getting ahead is now critical – employees are now adopting these tools, and providing proper use policies is the first step in preventing potential data breaches.”

For cyber professionals, developing AI literacy skills is essential. The need for these skills is reflected in the current security job market, where nearly 10 percent of all posted security positions list some type of AI skill as a requirement, according to job board CyberSeek.

At the same time, cybersecurity professionals who have these skills or are developing them are valuable teachers for the rest of the organization as employees adopt AI for a wide variety of tasks—sometimes in accordance with policy and sometimes on their own, which can create shadow AI instances that put internal data at risk.

The AvePoint research shows that AI literacy skills are essential at a time when chatbots and other AI platforms are changing how organizations and their employees work – sometimes not for the better.

The study cites three ways in which AI adoption, coupled with a lack of AI literacy, is causing concerns for organizations and their cybersecurity teams:

  • Model collapse occurs when training data for large language models (LLMs) becomes less effective over time, resulting in degraded outputs.
  • Employees who use generative AI to confirm their positions – rather than to challenge and extend them – can fall victim to extreme confirmation bias.
  • A reduction in deep work happens when employees rely on AI for quick answers and stop engaging critically with their tasks.

All of these can create cybersecurity risks within an organization, including exposing company or customer data or handing cybercriminals vulnerabilities to exploit.

“New AI tools and open-source software are being rapidly created and shared every day, and many are being used without the right guardrails and education. Developing AI literacy is essential due to the growing list of challenges for humans when AI is used,” Simberkoff added.

With organizations looking to invest more in AI platforms and technologies over the next several years, experts noted that cybersecurity professionals who begin developing AI literacy now can find themselves with valuable skill sets as adoption increases.

“The best way to stay ahead of the AI curve is to develop skills that complement AI rather than compete with it. AI will handle repetitive, time-consuming tasks; however, human expertise is still essential for interpreting results, assessing risk and making strategic decisions,” Darren Guccione, CEO and co-founder of Keeper Security, told Dice. “Security professionals should focus on AI literacy – understanding how AI models work, where they excel and where they fall short.”

Keeper Security’s own research found that organizational AI training is split between formal (37 percent), peer-based (36 percent) and informal (19 percent) approaches. This suggests that many internal teams and business units are still experimenting with the best ways to build AI literacy and cybersecurity awareness.

At the same time, Guccione added that AI should be viewed as changing the nature of security and other jobs rather than eliminating them. For instance, cyber professionals with expertise in AI-driven threat detection, automation and risk analysis will be highly sought after as AI tools become further integrated into business and security operations. 

Even with those benefits, it’s important to remember that humans still make decisions about what to do with data.

“While AI can process vast amounts of data at machine speed, it still lacks the intuition and strategic thinking that human analysts bring to the table. Security teams will increasingly need to balance technical expertise with the ability to interpret and act on AI-generated intelligence,” Guccione added.

While AI literacy is important for the entire organization, Casey Ellis, founder of Bugcrowd, noted that one area where cybersecurity professionals can most effectively apply these skills within their own teams is the security operations center (SOC).

As AI automates mundane tasks, analysts will be freed to focus on complex, high-value work such as threat hunting and strategic defense. In turn, the role of SOC analysts will shift toward managing AI systems, interpreting their outputs, and addressing the nuanced, creative challenges that machines cannot handle, Ellis noted.

“Human incentives are still the primary driver here, and traditional SOC training – understanding threat landscapes, attacker behavior, and incident response – remains critical,” Ellis told Dice. “AI can handle repetitive, low-order tasks like triaging alerts or identifying patterns, but it lacks the creativity and contextual understanding that humans bring to the table. SOC training will evolve to include AI literacy, but foundational skills will remain essential.”

While SOC analysts are likely to see their roles change sooner rather than later, AI innovations will mean that all manner of cyber pros will need additional skills. 

“Moving forward, cybersecurity professionals must develop a deep understanding of AI fundamentals. AI literacy isn’t optional anymore; it’s essential,” Ellis added. “This includes gaining proficiency in machine learning, deep learning and natural language processing. These skills are crucial for understanding how AI tools function and how they can be effectively applied in cybersecurity.”

When it comes to retraining staff for AI, Diana Kelley, CISO at Noma Security, noted that many organizations are moving their new hires into areas of the business that require more experience and knowledge of data science and machine learning.

This makes AI literacy that much more important, since employees might not fully understand the security implications of these new platforms. That’s why it is imperative to ensure cybersecurity pros are working with these business units.

“From the CISO’s perspective, AI security is not the same as traditional cloud, application or IT security. The CISO’s team is retraining staff to understand and address novel AI threats and risks that were never a consideration before,” Kelley told Dice.