If cybersecurity professionals felt inundated with information and hype about artificial intelligence over the last 12 months, they will need to steel themselves for even more in 2026. Deloitte predicts the market for agentic AI alone will reach $8.5 billion next year and grow to $35 billion by 2030.
As enterprises and organizations signal their eagerness to invest in various AI platforms, attackers are also demonstrating their ability to leverage these technologies. The Wall Street Journal recently reported on a Stanford University study that found some AI platforms are becoming as proficient as human penetration testers at uncovering application vulnerabilities and developing methods to exploit them.
All of this is happening as AI disrupts the career and job market for cybersecurity professionals. While AI appears to be eliminating some entry-level roles, workers with AI skills are in particularly high demand among large enterprises.
For these and numerous other reasons, AI is one of the most significant trends that cybersecurity and technology professionals must follow in 2026, according to several experts and industry insiders who shared their predictions for the coming 12 months with Dice.
“While AI is powering a whole new generation of defensive tools, it also makes the types of attacks that were once the domain of only very experienced threat actors much more accessible,” said Seth Spergel, managing partner at Merlin Ventures, a venture capital firm focused on cybersecurity investments. “As a result, organizations around the world are seeing both nation-states and criminals probe their defenses at a significantly higher volume than in years past. Combine that with the geopolitical tensions we are witnessing around the globe, and there is an obvious driver for investing in the cybersecurity market.”
Trend 1: Artificial Intelligence Will Only Grow More Important
After a year of nearly nonstop artificial intelligence headlines, virtual chatbots and other AI platforms are expected to grow even more important to businesses and organizations seeking to increase productivity and improve bottom-line results.
There is also a double-edged nature to AI, as cybercriminals and nation-state groups attempt to harness these technologies for their own purposes.
Rajeev Gupta, co-founder and chief product officer at Cowbell, a provider of cyber insurance for small- to medium-sized enterprises (SMEs) and middle-market businesses, noted that over the last several months AI has been revolutionizing his industry while also empowering cybercriminals.
The same tools used to streamline underwriting and claims are being weaponized by bad actors to launch automated, scalable cyberattacks. These attacks, Gupta noted, require no human oversight and can continuously crawl, exploit, and deploy malware across systems. With funding cuts to key government agencies such as the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the threat landscape is expected to worsen, putting even more pressure on insurers to evolve.
“Generative AI’s ability to interpret complex vulnerability data, such as CVEs and exploit databases, will be essential in building more accurate and responsive risk models. In 2026, cybersecurity best practices must evolve alongside AI adoption,” Gupta said. “Companies should verify AI tools, avoid inputting sensitive data into chatbots, and remain vigilant against increasingly sophisticated phishing attacks. Building a culture of awareness and implementing robust AI use policies will be critical to mitigating these emerging risks.”
Other experts see a convergence of emerging technologies, such as AI and quantum computing, beginning to reshape cybersecurity practices and security team operations.
“AI-generated voice and video deepfakes are becoming increasingly realistic and accessible. Voice- and video-based authentication techniques will become less useful in 2026 as attackers start to exploit this technology,” said Adam Everspaugh, a cryptography expert at Keeper Security. “This will cause a rise in breaches and account takeovers, forcing firms to replace long-standing verification methods with fake-resistant alternatives.”
Trend 2: New Skills for a New Era
With the growing use of AI by legitimate and illegitimate organizations alike, cybersecurity professionals will need to expand their skill sets in 2026 to meet these challenges while improving long-term career prospects.
For some, this means focusing more on application security, or AppSec, and secure software engineering, which can help cyber and technology professionals better leverage AI and machine learning capabilities, said Dipto Chakravarty, chief product officer at Black Duck.
“The increasing sophistication of AI-enabled attacks and the growing importance of securing AI systems will require organizations to invest in talent with expertise in AI governance, AI security, and machine learning,” Chakravarty added.
Chakravarty also identified several key areas where cybersecurity professionals will need to develop new skills, including:
- Developing and implementing AI models and algorithms while securing AI systems
- Deepening expertise in cloud security as cloud adoption continues to grow
- Understanding zero trust implementation to protect against identity-based attacks
As more code is written by AI agents, the number of vulnerabilities is expected to increase, leaving many security teams without the skills needed to confront these risks, said Krishna Vishnubhotla, vice president for product strategy at Zimperium.
“The organizations that succeed in 2026 will be the ones that adopt AI-driven security tools,” Vishnubhotla said. “These tools help teams understand issues faster, triage intelligently, and fix problems before attackers exploit them. The skills gap won’t disappear, but AI-driven security can help bridge it and keep mobile apps resilient as development speed accelerates.”
Trend 3: Regulatory Shifts
AI is changing the way enterprises and organizations conduct their business, and it’s also drawing the attention of government regulatory agencies.
In 2026, three regulatory shifts will dominate the compliance and security agenda, experts noted.
The EU AI Act will require organizations to classify systems by risk, complete conformity assessments, and maintain documentation, requirements that will reshape how AI is deployed.
At the same time, state-level AI bills in Colorado, California, and New York are advancing, creating a fragmented U.S. landscape that demands careful navigation.
Beyond AI, data localization and digital sovereignty mandates are accelerating worldwide, including new rules in China and India. Supply chain and third-party risk transparency is also becoming nonnegotiable, driven by the EU’s Digital Operational Resilience Act, the U.S. Securities and Exchange Commission’s cybersecurity disclosure rules, and expanding critical infrastructure mandates globally.
“Security practices will evolve in parallel. Continuous controls monitoring is bifurcating, with leading organizations in financial services and regulated technology operationalizing real-time monitoring, while many others remain in pilot phases and struggle with foundational data gaps,” said Chris Radkowski, a GRC expert at Pathlock. “Infrastructure and identity controls, such as access monitoring, configuration drift, and patch compliance, are increasingly automated, while process- and judgment-based controls like segregation of duties reviews remain periodic.”
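To make the idea of automated configuration-drift monitoring concrete, here is a minimal, hypothetical Python sketch that compares a saved baseline of approved settings against a system's current configuration and flags any deviation. The setting names, values, and snapshot file are assumptions for illustration, not tied to any specific GRC product.

```python
import json

# Hypothetical baseline of approved security settings (illustrative values only).
BASELINE = {
    "mfa_required": True,
    "password_min_length": 14,
    "inactive_session_timeout_minutes": 15,
}

def detect_drift(current_config: dict, baseline: dict = BASELINE) -> list[str]:
    """Return human-readable findings where the current configuration
    deviates from the approved baseline."""
    findings = []
    for key, expected in baseline.items():
        actual = current_config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    # In practice the current state would come from an API or configuration
    # management tool; here we load a local JSON snapshot for illustration.
    with open("current_config.json") as f:
        current = json.load(f)
    for finding in detect_drift(current):
        print("DRIFT:", finding)
```

In a continuous controls monitoring pipeline, a check like this would run on a schedule and feed its findings into ticketing or alerting rather than printing to the console.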
For security teams, this regulatory environment will require skilled employees who know how to use technology to automate compliance systems, said Dana Simberkoff, chief risk, privacy, and information security officer at AvePoint.
“In 2026, organizations need to focus on building flexible, automated compliance systems that can quickly adapt to new regulations, while investing in technology that can help track and manage compliance across multiple jurisdictions,” Simberkoff said. “The companies that will succeed are those that view compliance not as a checkbox exercise, but as a fundamental part of their business strategy and operations.”
Trend 4: Identity in an AI Age
Over the last year, cybersecurity experts have warned about the rise of non-human identities and the threats these artificial identities can pose to organizations. A traditional reliance on passwords and multifactor authentication is likely to prove insufficient against this evolving threat.
Instead, as the sheer volume of digital identities for human users, devices, code, and AI models continues to skyrocket, digital certificates are emerging as a scalable, cryptographically sound approach to identity management, said Tim Callan, chief compliance officer at Sectigo.
“Consequently, the ability to automate the entire certificate lifecycle — from issuance to increasingly rapid renewal cycles — will shift from a tactical IT function to a critical, strategic element of enterprise identity and access management,” Callan said. “This move will ensure the necessary crypto-agility to combat advanced attacks and future-proof enterprise security against quantum threats.”
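As a rough illustration of where certificate lifecycle automation starts, the following Python sketch uses the widely available cryptography library to check how many days a certificate has left before expiry and flag it for renewal. The file path and 30-day renewal window are assumptions for the example, not Sectigo-specific guidance.

```python
from datetime import datetime, timezone
from cryptography import x509

RENEWAL_WINDOW_DAYS = 30  # assumed policy: renew anything expiring within 30 days

def days_until_expiry(pem_path: str) -> int:
    """Load a PEM-encoded certificate and return the days remaining before expiry."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # not_valid_after_utc requires cryptography >= 42; older versions expose not_valid_after.
    remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
    return remaining.days

if __name__ == "__main__":
    # Hypothetical certificate path; a real pipeline would iterate over an
    # inventory of certificates and trigger renewal via ACME or a CA's API.
    days_left = days_until_expiry("server_cert.pem")
    if days_left <= RENEWAL_WINDOW_DAYS:
        print(f"Certificate expires in {days_left} days: schedule renewal")
    else:
        print(f"Certificate healthy: {days_left} days remaining")
```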
Rhys Downing, a threat researcher at Ontinue, noted that attackers can use collaboration tools such as Microsoft Teams to impersonate employees and gain deeper access to enterprise networks.
In this scenario, attackers can purchase a Teams license, spin up a tenant, and send an invitation directly to a user’s inbox and chat window. Once the victim joins, the threat actor can impersonate IT staff or colleagues, deliver malicious files, or socially engineer the user in real time. Because external chat invitations may bypass — or quietly weaken — existing communication controls, many organizations may not realize how exposed they are until attackers are already inside the chat interface.
“Identity-based attacks will evolve beyond credential phishing into real-time impersonation inside collaboration apps, driving higher rates of malware delivery, unauthorized access, and employee compromise,” Downing said. “Collaboration platforms are on track to become the next major identity threat vector, one that businesses must urgently prepare for in 2026.”
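One way security teams might begin to reason about this exposure is a simple allow-list check on the domains behind incoming external chat invitations. The sketch below is purely illustrative Python over hypothetical invitation records, not a Microsoft Teams API integration; the field names and domains are invented for the example.

```python
# Hypothetical, illustrative data: external chat invitations observed by a
# monitoring pipeline. Field names are assumptions for this sketch.
TRUSTED_EXTERNAL_DOMAINS = {"partner-firm.com", "managed-it-vendor.com"}

invitations = [
    {"sender": "helpdesk@contoso-support.xyz", "recipient": "cfo@example.com"},
    {"sender": "alice@partner-firm.com", "recipient": "dev@example.com"},
]

def flag_untrusted(invites, trusted_domains):
    """Return invitations whose sender domain is not on the trusted allow-list."""
    flagged = []
    for invite in invites:
        domain = invite["sender"].split("@")[-1].lower()
        if domain not in trusted_domains:
            flagged.append(invite)
    return flagged

for invite in flag_untrusted(invitations, TRUSTED_EXTERNAL_DOMAINS):
    print(f"Review external invite from {invite['sender']} to {invite['recipient']}")
```

The same allow-list logic could sit behind tenant-level external access policies, so that invitations from unknown tenants are reviewed before employees ever see them.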
Trend 5: Zero Trust Still Matters
While AI dominates headlines and industry discourse, security experts caution that organizations must continue to prioritize zero trust principles as part of their broader security strategies.
When implemented correctly, zero trust can help address many of the security challenges AI has introduced, said Negin Aminian, senior manager of cybersecurity strategy at Menlo Security.
“In 2026, organizations will continue to pursue zero trust, and yes, we will get there — but only by changing how we implement it. As the browser becomes increasingly central to work, where employees, partners, and contractors access business-critical applications and use AI, security models must adapt,” Aminian said. “The key is to pivot away from agent-heavy [Zero Trust Network Access] models and focus on where the risk actually lives: the browser. This approach can make zero trust less resource-intensive and far more effective for the modern workforce.”