AI and Cybersecurity Engineers: What to Know, How to Grow

Cybersecurity engineers play a critical role in safeguarding an organization’s information systems and data from unauthorized access, cyberattacks, and other forms of cyber threats. In simplest terms, they design, develop, and implement security architectures that align with the organization’s tech stack.

While the role of a cybersecurity engineer varies from one organization to another—some may focus on automation, others on application security—many are tasked daily with configuring firewalls, Intrusion Detection Systems (IDS)/Intrusion Prevention Systems (IPS), and other critical security tools. They also need to understand and protect cloud, Secure Access Service Edge (SASE), and zero-trust environments. 

These professionals don’t usually shy away from technically challenging issues, even during their more junior years. However, the rise of artificial intelligence (AI) has made the cybersecurity environment infinitely more complex—both from a threat and a tool perspective.

How AI Can Help

Aamir Lakhani, global security strategist with Fortinet’s FortiGuard Labs, said cybersecurity engineers need to be increasingly aware of AI tools.

Engineers must leverage these technologies to strengthen defenses and do their jobs more efficiently, and they must also understand how adversaries can exploit AI in cyberattacks. “As AI and machine learning become integral to cybersecurity, engineers must develop new skills and knowledge areas to effectively integrate and defend against AI-powered systems,” he said.

AI can help detect patterns in large datasets that would be impossible or time-consuming for humans to analyze. Sajeeb Lohani, senior director of cybersecurity at Bugcrowd, said AI can help with correlating some areas of information and searching bodies of knowledge at a fast pace (when tagged appropriately). “This implies that, given the appropriate context, the AI models will be able to perform some of the groundwork, and somewhat simple tasks,” he said.

This also means AI tools can spare a cybersecurity engineer from spending too much time hunting for information, allowing them to make informed decisions more quickly.
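As a small illustration of the kind of pattern detection described above, a simple statistical baseline can flag activity that stands out in a large body of telemetry. This is a minimal sketch, not a production detector; the host names, counts, and threshold are all hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Return hosts whose count deviates sharply from the fleet baseline.

    counts: dict mapping host name -> daily failed-login count (hypothetical).
    A host is flagged when its z-score exceeds the threshold.
    """
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all hosts identical: nothing stands out
        return []
    return [host for host, c in counts.items() if (c - mu) / sigma > threshold]

# Hypothetical telemetry: one host is clearly out of line with the rest.
daily_failed_logins = {
    "web-01": 4, "web-02": 6, "web-03": 5, "db-01": 3,
    "db-02": 5, "vpn-01": 4, "build-01": 190,  # suspicious spike
}
print(flag_anomalies(daily_failed_logins, threshold=2.0))
# -> ['build-01']
```

Real AI-driven tooling is far more sophisticated, but the principle is the same: establish a baseline over data too large for humans to scan, then surface the outliers for an engineer to investigate.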

Ian Campbell, senior security operations engineer at DomainTools, noted that, although generative AI (GenAI) powered by Large Language Models (LLMs) has significant limitations, it can improve certain workflows by analyzing available data and surfacing context that helps the defender or investigator.

“This includes the ability to analyze data sources so large or real-time volumetric that humans may not be able to keep up,” he said.

Campbell cautioned that, while LLM summaries are not always correct, they can provide great jumping-off points and first steps for cybersecurity responders, or pull context from an internal knowledge base that contains documentation on the systems and services SecOps is responsible for protecting. 
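Pulling context from an internal knowledge base, as Campbell describes, can be sketched without any LLM at all: a minimal tag-indexed lookup shows the kind of groundwork such tooling automates. Every document name, tag, and runbook entry below is invented for illustration:

```python
from collections import defaultdict

class KnowledgeBase:
    """Minimal tag-indexed store: alerts retrieve runbook entries by shared tags."""

    def __init__(self):
        self._by_tag = defaultdict(set)  # tag -> set of doc ids
        self._docs = {}                  # doc id -> text

    def add(self, doc_id, text, tags):
        self._docs[doc_id] = text
        for tag in tags:
            self._by_tag[tag.lower()].add(doc_id)

    def context_for(self, alert_tags):
        """Return (doc_id, text) pairs matching any alert tag, most overlap first."""
        hits = defaultdict(int)
        for tag in alert_tags:
            for doc_id in self._by_tag.get(tag.lower(), ()):
                hits[doc_id] += 1
        ranked = sorted(hits, key=lambda d: (-hits[d], d))
        return [(doc_id, self._docs[doc_id]) for doc_id in ranked]

# Hypothetical runbook entries for services SecOps protects.
kb = KnowledgeBase()
kb.add("runbook-vpn", "Escalate repeated VPN auth failures to the IAM team.",
       ["vpn", "auth", "escalation"])
kb.add("runbook-web", "Web tier 5xx spikes: check the WAF rule changelog first.",
       ["web", "waf"])

for doc_id, text in kb.context_for(["auth", "vpn"]):
    print(doc_id, "->", text)
```

An LLM-backed assistant would retrieve and summarize far richer context, but the value to a responder is the same: the relevant documentation arrives with the alert instead of being hunted down mid-incident.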

Essential Understanding

As Lohani noted, AI tools aren’t all made the same when it comes to cybersecurity engineering. When evaluating these tools, he recommended a “trust but verify” regimen to ensure that no real threats (false negatives) slip through and that false positives are filtered out accordingly.

“A security engineer should also be wary of the implications of using said AI tools,” he added.

AI tools require training of some kind to build their models, so it is strongly recommended that engineers ensure these tools do not learn from the organization’s own production data or alerts, as this may cause data leakage.
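One practical way to reduce that leakage risk is to scrub identifying details from alerts before they ever reach an external AI service. The sketch below redacts email addresses and IPv4 addresses; the patterns are illustrative only, and a real deployment would need a much fuller catalogue (IPv6, hostnames, account IDs, API keys, and so on):

```python
import re

# Illustrative patterns only; not an exhaustive redaction rule set.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact_alert(text):
    """Replace emails and IPv4 addresses with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)  # emails first, so domains vanish too
    return IPV4_RE.sub("[IP]", text)

alert = "Failed login for jane.doe@example.com from 203.0.113.42 (3rd attempt)"
print(redact_alert(alert))
# -> Failed login for [EMAIL] from [IP] (3rd attempt)
```

Redaction of this sort complements, rather than replaces, contractual and configuration controls that prevent a vendor's model from training on your data.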

Campbell admitted many security professionals (himself included) are uncomfortable with AI in security roles, but they must accept the discomfort and start understanding where its strengths lie. “We are at best in the ‘horseless carriage’ stage of our AI development, though, so staying current and keeping open to learning is crucial,” he said.

AI Training: Where to Go

Fortunately, cybersecurity engineers looking to enhance their knowledge and skills in AI have a variety of training options available.

These programs range from short online courses to more in-depth certifications and degree programs. 

  • Coursera: Lakhani recommends AI For Everyone and Machine Learning Specialization by Andrew Ng for foundational AI knowledge and machine learning algorithms applicable to cybersecurity. “These courses give cybersecurity engineers the context they need to understand AI’s role in modern security challenges,” he said.
     
  • Udemy: AI for Cyber Security is designed specifically for cybersecurity professionals, offering real-world use cases on how AI can bolster system defenses. It focuses on using AI tools for security, which Lakhani said is key for engineers looking to automate their threat detection.
     
  • Google Cloud Training: Google’s AI and Machine Learning certification equips engineers with cloud-based AI skills, which are becoming essential for managing security in cloud environments. “Cloud integration is increasingly important, and Google’s program helps cybersecurity experts stay current,” Lakhani noted.
     
  • OpenAI Codex and GPT-3 Sandbox: Engineers can experiment with these AI models to automate cybersecurity tasks such as script writing and phishing detection.
     
  • Industry Conferences: Lakhani said attending conferences like Black Hat or DEF CON offers hands-on experience with AI tools in cybersecurity and provides networking opportunities with industry leaders.

Securing Executive Buy-In for Upskilling

When approaching higher-ups about integrating AI tools into the cybersecurity tech stack, engineers should make the case from multiple perspectives, from productivity and cost to performance gains and accuracy.

“A security engineer should prove the value of what they are intending to learn and tie that back to the greater strategy on hand,” Lohani said.

From his perspective, they must treat the current industry-wide AI focus as a “spike” (a software engineering term for a time-boxed, documented learning and discovery effort) and document their learnings, so that others can benefit from the work performed and the skills can be taught efficiently.

“Provide a plan of implementation and how this will be rolled out, providing more confidence to the investment they make,” Lohani added.

To encourage management to support upskilling in AI, Campbell suggests approaching the conversation with both a problem and a proposed solution: “If you don’t clearly define the problem you’re addressing, it’s hard for management to see the need for resources.”

Presenting a potential solution, even if it’s not the final one, helps make the discussion more productive.

Campbell also advises doing your homework, researching how upskilling aligns with organizational risks and challenges, and consulting peers in and outside the company: “Community can be a game-changer.”