Offensive Security Skills: What Cybersecurity Pros Need to Know

In building a stronger cybersecurity defense, the offensive side is playing a greater role.

At this year’s RSA Conference in San Francisco, hallway conversations focused on offensive security, whether it’s called red teaming, ethical hacking, white-hat security or penetration testing. With cyber threats increasing and attackers tapping into artificial intelligence (AI) to assist with their malicious operations, more organizations need security professionals who think like adversaries, experts noted.

Offensive security positions are critical for detecting vulnerabilities and building resilient response strategies at many organizations. As attackers evolve their techniques, security teams must adopt new methods to counter these threats. At the same time, enterprises are increasingly hiring for specialized offensive roles, which is leading cybersecurity pros to seek out red team simulations and capture-the-flag competitions to hone their skills.

“Offensive security and red teaming are critical because they flip the script on traditional defense. Instead of waiting for attackers to strike, red teams simulate real-world threats, probing systems for weaknesses and revealing them so that they can be dealt with before adversaries can exploit them,” said Casey Ellis, founder of Bugcrowd, which creates crowdsourced security platforms and tools. “It’s about thinking like a hacker to outsmart a hacker—a proactive approach that’s essential in today’s threat landscape, especially with the rise of AI-driven attacks, and an AI-generated attack surface.”

Cybersecurity career statistics point to the need for more offensive positions, as well. A study published by CyberSN, a security and IT workforce management platform provider, looked at cybersecurity job postings across the U.S. from January 2022 to December 2024 using data sourced from more than 30 job boards and Fortune 500 companies. The researchers found that postings for red team members increased nearly 30 percent during this time.

Currently, more than 4,300 penetration testing positions are listed as open in the U.S., according to CyberSeek. For those with the right skill sets, these jobs offer an average annual salary of $132,457, the site noted.

Developing those offensive skills takes time and effort. At the same time, the growing use of AI, supply-chain attacks and ransomware are all altering what organizations need when it comes to red team members or pen testers. For cybersecurity pros looking to enter the field or move up the career ladder to a more advanced position or leadership role, developing the right skill sets to meet current challenges can help.

One of the first steps cyber pros can take toward developing the skills needed for a red team or pen testing role is earning certifications. CyberSeek lists the following certs as the most requested by organizations looking to fill these roles:

The hands-on skills that cyber pros need include vulnerability assessments, network security and exploit development, as well as knowledge of how cloud-based systems and platforms work, said Mayuresh Dani, security research manager with Qualys Threat Research Unit.

“If you already have some of these skills, learning to automate your workflows by learning rapid development programming languages and using cloud environments such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud are increasingly becoming pertinent to stay ahead of the herd,” Dani told Dice. “At the very minimum, having an adversarial mindset where we think like attackers, staying up-to-date with the latest threats and attack techniques and understanding how to achieve offensive objectives stealthily is needed.”
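The workflow automation Dani describes often starts small. As a minimal sketch (the host and ports below are illustrative assumptions, not from the article), here is how a pen tester might script a basic TCP reachability check with Python's standard library before layering on cloud tooling:

```python
# Minimal sketch of automating a basic network reachability check --
# the kind of small scripted workflow that precedes full vulnerability
# assessment tooling. Host and port values are illustrative only.
import socket
from concurrent.futures import ThreadPoolExecutor


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def scan(host: str, ports: list[int]) -> dict[int, bool]:
    """Probe several ports concurrently; map each port to open/closed."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        results = pool.map(lambda p: port_open(host, p), ports)
    return dict(zip(ports, results))


if __name__ == "__main__":
    # Only scan hosts you are explicitly authorized to test.
    print(scan("127.0.0.1", [22, 80, 443]))
```

In practice, teams replace scripts like this with purpose-built scanners, but being able to write and adapt such glue code quickly is exactly the kind of rapid-development skill Dani points to.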

While hard skills are a must, organizations hiring red teamers or pen testers are also looking for candidates who can apply that knowledge to complex tasks and find creative ways to solve problems.

“The most important skill I look for in new team members is their ability to think creatively to solve complex problems,” Thomas Richards, infrastructure security practice director at Black Duck, told Dice. “Oftentimes, we find ourselves in situations where we need to use a tool or software not for its intended purpose but for what it can do. When it comes to technical skill, there are a number of training platforms and capture-the-flag events for someone to gain an understanding of the process and tools to conduct offensive engagements.”

Bugcrowd’s Ellis also noted that, while hard technical skills count, good red teamers and pen testers have skills that go beyond those.

“Yes, you need to know your way around penetration testing tools, scripting and exploit development, but the real magic comes from creativity, adaptability, and ‘opposite thinking,’” Ellis told Dice. “It’s about understanding how attackers think, leveraging social engineering and even mastering the psychology of deception. If you’re already in security, adding skills like adversarial AI testing, cloud-native security, code-led application exploitation or infrastructure hardening can make you a standout candidate.”

While most large-scale organizations and enterprises need some type of red team or pen testing services, experts and insiders at the RSA Conference noted that several industries are now in greater need of these services. These include healthcare and manufacturing, which have been especially targeted by ransomware groups and supply-chain attacks.

“Manufacturing and healthcare are particularly ripe for red teaming, given their reliance on operational technology (OT) and the critical nature of their systems,” Ellis noted. “These sectors are waking up to the fact that a single breach can have catastrophic consequences—not just financially, but in terms of human safety.”

Richards also noted that the manufacturing sector is in greater need of offensive security talent and has fallen behind other industries, such as financial services.

“Industries like manufacturing don’t have such strict government oversight on their business processes, so cybersecurity may not be a primary component of their operations,” Richards said. “We see cyber criminals moving from one business sector to another, looking for easy targets to compromise. With that being said, I would encourage all organizations to integrate red teaming services, either with an in-house team or a third-party vendor, regularly as part of their cybersecurity program.”

As with other aspects of cybersecurity, AI is having an impact on offensive security, red teams and pen testers. 

For many organizations, AI represents a way to scale up these operations by automating tasks such as vulnerability assessment without needing an accompanying increase in human resources. AI can also simulate attacks, analyze responses and uncover vulnerabilities at speeds that far exceed human capabilities, allowing teams to operate more efficiently, said Amit Zimerman, co-founder and chief product officer at Oasis Security.

There is also a downside. AI systems can generate false positives or miss vulnerabilities that require human intuition and contextual understanding. They must also be properly trained, which can be resource-intensive, and may not always account for the nuances of every unique environment or attack vector. Finally, AI in offensive operations could be misused, potentially leading to ethical dilemmas if AI-driven tools fall into the wrong hands.

All this requires human intervention and oversight, which is why cybersecurity professionals must stay on top of the latest AI developments as the technology matures.

“Human oversight remains vital when using AI in offensive cybersecurity,” Zimerman told Dice. “While AI is highly efficient in automating and scaling tasks, human expertise is necessary to interpret complex results, make critical decisions and apply context-specific reasoning. Humans are essential for ensuring that AI-driven tools are used responsibly and for validating the results of AI processes, especially when it comes to the nuances of certain vulnerabilities or threat landscapes.”

Other experts like Jason Soroko, a senior fellow at security firm Sectigo, added that an overreliance on AI tools and platforms for offensive security needs can backfire. It’s another reason why human oversight and cybersecurity skills are needed.

“Human oversight ensures accurate interpretation of AI findings and informed decision-making. Experts validate and contextualize AI results, reducing risks of false positives or negatives,” Soroko told Dice. “Human judgment addresses ethical considerations, ensuring activities comply with legal and moral standards. Additionally, humans can adapt to unexpected situations and evolving threats beyond AI’s current capabilities. Large Language Models are susceptible to problems that require human intervention and should be seen as a tool rather than a fully autonomous solution.”

At the same time, AI can allow red teamers and pen testers to find vulnerabilities faster and work with IT and development teams to address those flaws before attackers do. “Early detection allows developers to address security issues promptly, reducing costs and improving overall security posture,” Soroko added. “AI tools provide real-time feedback and integrate with development environments, making secure coding practices more accessible and efficient.”