Generative A.I.: Friend or Foe for Cybersecurity Job Seekers?

For months, the tech and business worlds have been obsessed with generative artificial intelligence (A.I.), especially the virtual chatbot ChatGPT and its ability to produce human-like text and decent code snippets. That focus has raised questions (some serious, some not) about whether the technology will create jobs or destroy them, potentially rendering some human workers obsolete.

The growth prospects and disruptive potential of generative A.I. are still being calculated and hotly contested. A recent Goldman Sachs analysis finds that the technology could raise global gross domestic product (GDP) by 7 percent, while a Gartner report examining OpenAI’s ChatGPT predicts that virtual chatbots of this kind will have a major impact on marketing and video content by 2025.

During April’s RSA Conference in San Francisco, one of the world’s largest gatherings of cybersecurity professionals, talk of generative A.I. and its role in the cybersecurity field filled the exhibition halls. Experts quickly noted, however, that the ultimate future of the technology is still being written.

In these early stages, significant questions about generative A.I. remain: What does it mean for cybersecurity jobs and careers? Will tech and cybersecurity pros need to significantly upskill to keep pace? Will generative A.I. reduce or even eliminate a skills gap that has left more than 700,000 security positions unfilled in the U.S. alone?

There was consensus among the cybersecurity experts who attended the 2023 RSAC or watched the proceedings online: While generative A.I. has potential, the technology will not eliminate or destroy cybersecurity jobs, especially in the short term. Instead, these virtual chatbots and other applications should be viewed as useful tools for tech and security professionals that can augment cybersecurity work and help in the protection of data and networks.

“It’s unlikely that A.I. will completely take over cybersecurity functions, as human operators bring intuition and ethical decision-making to the task, skills which will be difficult if not impossible to fully replace with A.I.,” Dave Gerry, CEO at crowdsourcing security firm Bugcrowd, told Dice.

“That said, A.I. will continue to play an increasingly important role in cybersecurity as it becomes more advanced, and a human-machine combination will be necessary to effectively defend against evolving threats,” Gerry added. “While A.I. won’t replace human creativity and resiliency, it does hold the potential to fill some of the current talent gaps we see in the industry by automating tasks that will allow human defenders to focus on higher priority concerns.”

Learning to Live with Generative A.I.

Although generative A.I. in its current incarnation is not ready to eliminate positions and upend careers, tech and cybersecurity professionals must learn to live with the technology, especially as companies incorporate more of these applications into their daily workflows. 

At RSAC, companies such as Google announced that they are building generative A.I. and large language model (LLM) technology into their security and cloud computing products, meaning tech pros must learn how these tools work and how to incorporate them into their security plans.

“Artificial intelligence in one form or another has been part of the cybersecurity landscape for years, with ‘A.I.’ and ‘machine learning’ being buzzwords that got a lot of attention but often had less impact than the marketing hype would imply. Now, though, advances in machine learning and deep learning, along with the introduction of large language models, have made the technology much more functional,” Mike Parkin, senior technical engineer at security firm Vulcan Cyber, told Dice.

“We’ll see the impact of these new capabilities in multiple areas,” Parkin added. “Improved A.I. will make it easier for security teams to parse the massive volumes of data they are dealing with, letting them focus more on the highest risks with the resources they have. That is already happening now but will continue to improve.”

As with previous iterations of machine learning, experts note that the biggest current benefit of generative A.I. is automating manual processes, which frees up tech and security pros for other tasks. Tech pros who want to capture those benefits need to start building these skills now.

“We already see A.I. providing tangible benefits in the areas of automating access requests, both in terms of accelerated routing of requests as well as the informed setting of entitlements based on extenuating circumstances,” Piyush Pandey, CEO of security company Pathlock, told Dice. “The same goes for periodic access reviews—A.I. has shown considerable value in recommending role optimization to reduce risks. Acquiring skills that relate to understanding the fundamentals of machine learning and deep learning algorithms is crucial, as they form the backbone of A.I. systems.”
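To make that concrete, here is a minimal, hypothetical Python sketch of one piece of a periodic access review: flagging entitlements that have sat unused past a cutoff so a human reviewer can decide whether to revoke them. The record format and function names are invented for illustration, and a simple fixed rule stands in for the model-driven recommendations Pandey describes.

```python
from datetime import datetime, timedelta

# Hypothetical access records for illustration only; in a real
# identity platform these would come from an audit or usage log.
ACCESS_RECORDS = [
    {"user": "alice", "entitlement": "prod-db-admin",
     "last_used": datetime(2023, 1, 10)},
    {"user": "bob", "entitlement": "billing-read",
     "last_used": datetime(2023, 4, 2)},
]

def flag_dormant_entitlements(records, max_idle_days=90):
    """Return entitlements unused for longer than max_idle_days,
    as revocation candidates for a human reviewer to confirm."""
    cutoff = datetime.now() - timedelta(days=max_idle_days)
    return [r for r in records if r["last_used"] < cutoff]

for record in flag_dormant_entitlements(ACCESS_RECORDS):
    print(f"Review: {record['user']} still holds {record['entitlement']}")
```

The role A.I. plays in this kind of workflow is to replace the fixed 90-day rule with learned, context-aware recommendations; the surrounding review-and-confirm loop, with a human making the final call, stays the same.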

Putting A.I. to Work for Security Teams

While generative A.I. apps are in the developmental stage, security experts already see real-world scenarios where the technology can provide assistance. For Georgia Weidman, a security architect at Zimperium, this includes helping with blue- and red-team exercises.

“Models can run against specific targets and identify the higher probability of successful attacks and lateral movements,” Weidman told Dice. “On the blue-team side, what machine learning models do best is sift through tons and tons of information to better discern the usable, actionable data. Both will have the ability to potentially make security practitioners more productive because these models can remove quite a lot of rote work from day-to-day execution.”

Useful as the technology is, Weidman warns that folding generative A.I. into the security process requires skilled tech pros who know what it can do… as well as its limits.

“For the foreseeable future, generative A.I. requires a skilled user who can discern the difference between a sentence that reads well and a sentence that is presenting correct information,” Weidman added. “As some security firms are already using ChatGPT to assist with writing penetration testing reports, there are inevitably privacy concerns involved, especially around feeding a customer’s private data into a public chat API.”
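Weidman’s privacy point suggests a practical habit: scrub client-identifying details from any draft before it reaches an external service. The Python sketch below is a minimal illustration of that idea; the regex patterns and sample text are invented for this example, and a real deployment would lean on a vetted data-loss-prevention or redaction tool rather than hand-rolled rules.

```python
import re

# Placeholder patterns for identifiers that should never leave the
# engagement environment. These are assumptions for illustration; a
# real deployment would use a vetted DLP/redaction library instead.
REDACTION_PATTERNS = {
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "HOSTNAME": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with labeled placeholders before
    the text is sent to any external service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

draft = ("Host web01.internal.example.com (10.0.4.17) was compromised; "
         "contact admin@client.com.")
print(redact(draft))
# Host [HOSTNAME-REDACTED] ([IPV4-REDACTED]) was compromised;
# contact [EMAIL-REDACTED].
```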

Security leaders also see the promise and limitations of generative A.I. in its current form. “When we talk to our customers, CISOs representing several industries, they collectively agree that A.I. tools help their teams work more efficiently but will not replace security professionals,” Patrick Harr, CEO at SlashNext, told Dice. “Generative A.I. tools will enable security organizations to improve security, automate manual, time-consuming tasks, and give them the time to focus on the security and availability of data in their organization.”

Calculating Adversaries’ Skills

For several security observers, developing the skills to put generative A.I. to work is only one part of the process. Another concern is how malicious actors are using the technology, and how the resulting threats will force tech and security pros to rethink how they react.

Already, tools such as ChatGPT are being used to write malicious code and phishing emails. In addition, future developments may require security professionals to prevent A.I. from escaping a data center or accessing specific types of information—shifting how the industry defines an adversary, said Davis McCarthy, principal security researcher at Valtix, which is part of Cisco.

“A.I. will scale threat actor resources, reduce their time-to-compromise, shorten malware development cycles and lower the bar for affiliate cybercriminals,” McCarthy told Dice. “Because the output from an A.I. system is benign until the prompt engineer makes it actionable, the true change to the threat landscape is in how efficient the skilled hackers will become, and how effective the unskilled hackers will be.”

This is another area where some experts see generative A.I. falling short: the business side of security, where teams must assess risk. Risk to an organization can come from a malicious incident, a business or regulatory decision or an unforeseen issue, any of which demands a more creative, human response.

“As we’ve seen throughout the course of history, and despite concerns to the contrary, whenever there’s a new advancement in technology, more jobs are created than replaced,” Timothy Morris, chief security advisor at Tanium, told Dice.

“The bottom line is that while A.I. can consume a great deal of information, it cannot replace human intuition and experience, or take into account more than what is presented by the data,” Morris added. “The most successful security professionals display a great deal of collaboration and empathy in their interactions with customers and teammates—these displays of emotional intelligence cannot be effectively replicated by A.I.”