
Recruiting for Cybersecurity: Keep A.I. in Mind

Hiring Candidates
Sourcing Candidates
  • July 27th, 2017
  • 2 min read


Within the cybersecurity world, artificial intelligence (A.I.) and machine learning represent the next great frontier of danger and opportunity. In theory, A.I. platforms will harden network defenses, but they could also provide new ways to hack systems. For employers, it’s easy to picture a nightmare scenario in which an ultra-sophisticated A.I. right out of a science-fiction movie plows through your network defenses, destroying or corrupting systems. But the humans tasked with network defense may soon have a powerful ally: A.I. designed to prevent A.I.-enhanced hacks.

Google Brain, a research project within Google dedicated to the advancement of deep learning and A.I., is hosting a contest on Kaggle, a platform for data-science competitions. The goal: have A.I. researchers design attacks and defenses on machine-learning systems that recognize and react to images. Specifically, Google Brain wants tech pros to design two kinds of attacks: a “Non-Targeted Adversarial Attack,” which slightly modifies a source image to the point where a machine-learning platform classifies it incorrectly, and a “Targeted Adversarial Attack,” in which the modified image is classified in a specific, attacker-chosen way. It also wants pros to come up with a defense (“Defense Against Adversarial Attack”) against those attacks.

What’s so key about modifying source images? If an A.I. platform regards a modified image as “true,” it may draw incorrect conclusions. For example, if a hacker tricks an autonomous-driving platform into mischaracterizing red lights as green, it could lead to unmitigated chaos. (Last year, Popular Science published an extensive article detailing how modified images can corrupt even a sophisticated deep-learning network.) A research note posted to OpenAI frames these “adversarial images” as the equivalent of optical illusions, or hallucinations.

Whatever the analogy, it’s clear that such attacks are a major issue, considering the number of tech firms interested in leveraging A.I. for mission-critical apps and functions. Developing A.I. systems capable of thwarting these (and other) attacks could prove vital. For employers (and the tech recruiters who provide them with fresh talent), the need to recruit cybersecurity experts with at least some A.I. background will likely become critical in the coming years. Keep that in mind when sourcing and interviewing candidates.
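To make the idea concrete, here is a minimal sketch of what a non-targeted adversarial perturbation looks like, in the spirit of the contest: nudge an input just enough, within a small per-pixel budget, that a classifier changes its answer. The toy linear softmax model, the random weights, and the epsilon value below are illustrative assumptions for this sketch (they are not part of the Kaggle competition); the perturbation rule follows the well-known fast gradient sign method.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def xent_loss(W, b, x, label):
    # Cross-entropy loss of a linear softmax classifier for a given label.
    logits = W @ x + b
    m = logits.max()
    return float(m + np.log(np.exp(logits - m).sum()) - logits[label])

def predict(W, b, x):
    return int(np.argmax(W @ x + b))

def fgsm_nontargeted(W, b, x, epsilon):
    """Non-targeted attack: step each "pixel" of x by +/- epsilon in the
    direction that increases the loss on the model's current prediction."""
    p = softmax(W @ x + b)
    label = int(np.argmax(p))
    one_hot = np.zeros_like(p)
    one_hot[label] = 1.0
    # Exact gradient of the cross-entropy loss w.r.t. the input x.
    grad_x = W.T @ (p - one_hot)
    return x + epsilon * np.sign(grad_x)

# Toy setup: 3 classes, 8 "pixel" features, random weights (assumed for demo).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x = rng.normal(size=8)

label = predict(W, b, x)
x_adv = fgsm_nontargeted(W, b, x, epsilon=0.5)
print("original class:", label, "adversarial class:", predict(W, b, x_adv))
```

A targeted variant would instead step *downhill* on the loss for an attacker-chosen class, pushing the classifier toward a specific wrong answer rather than just any wrong answer. The key property either way is that the perturbation is bounded (here, at most 0.5 per feature), so the modified input still looks essentially unchanged.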

