Within the cybersecurity world, artificial intelligence (A.I.) and machine learning represent the next great frontier of both danger and opportunity. In theory, A.I. platforms will harden network defenses, but they could also give attackers new ways to hack systems. For employers, it's easy to picture a nightmare scenario in which an ultra-sophisticated A.I. straight out of a science-fiction movie plows through your network defenses, destroying or corrupting systems. But the humans tasked with network defense may soon have a powerful ally: A.I. designed to prevent A.I.-enhanced hacks.

Google Brain, a research project within Google dedicated to the advancement of deep learning and A.I., is hosting a contest on Kaggle, a platform for data-science competitions. The goal: have A.I. researchers design attacks on, and defenses for, machine-learning systems that recognize and react to images.

Specifically, Google Brain wants tech pros to design two kinds of attacks: a "Non-Targeted Adversarial Attack," which slightly modifies a source image until a machine-learning platform classifies it incorrectly, and a "Targeted Adversarial Attack," in which the modified image is classified as a specific (wrong) class chosen by the attacker. It also wants pros to come up with a defense ("Defense Against Adversarial Attack") that resists both.

Why do modified source images matter so much? If an A.I. platform accepts a modified image as genuine, it may draw dangerously incorrect conclusions. For example, if a hacker feeds an autonomous-driving platform subtly altered images that cause it to classify red lights as green, the result could be chaos on the road. (Last year, Popular Science ran an extensive article detailing how modified images can corrupt even a sophisticated deep-learning network.) A research note from OpenAI frames these "adversarial images" as the machine equivalent of optical illusions, or hallucinations. Whatever the analogy, such attacks are a major issue, considering how many tech firms want to leverage A.I. for mission-critical apps and functions. Developing A.I. systems capable of thwarting these (and other) attacks could prove vital.

For employers (and the tech recruiters who provide them with fresh talent), the need to recruit cybersecurity experts with at least some A.I. background will likely become critical in the coming years. Keep that in mind when sourcing and interviewing candidates.
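For readers curious what "slightly modifying a source image" looks like in practice, one standard technique is the Fast Gradient Sign Method (FGSM): nudge every pixel a tiny amount in the direction that increases the classifier's loss. The sketch below is a minimal, illustrative example assuming PyTorch and a pretrained torchvision classifier; the image path, model choice, and epsilon value are placeholder assumptions, not anything specified by the contest.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained image classifier (ResNet-18, purely as an example).
model = models.resnet18(pretrained=True)
model.eval()

# Hypothetical input image; "stop_sign.jpg" is a placeholder path.
# (Real use would also normalize with ImageNet mean/std; omitted for brevity.)
preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("stop_sign.jpg")).unsqueeze(0)
image.requires_grad_(True)

# Get the model's current prediction for the clean image.
logits = model(image)
label = logits.argmax(dim=1)

# FGSM: take one gradient step that *increases* the loss for that label,
# then clamp back to a valid pixel range.
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.03  # perturbation budget; small enough to be barely visible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image often receives a different label, even though it
# looks essentially identical to a human observer.
new_label = model(adversarial).argmax(dim=1)
print("original:", label.item(), "adversarial:", new_label.item())
```

A targeted variant works the same way, except the gradient step is taken to *decrease* the loss with respect to a class the attacker chooses, steering the classifier toward a specific wrong answer rather than any wrong answer.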