AI Penetration Tester

Overview

Remote
Depends on Experience
Contract - Independent
Contract - W2
Contract - 6 Month(s)
No Travel Required
Unable to Provide Sponsorship

Skills

Artificial Intelligence
Mobile Applications
Penetration Testing
Vulnerability Assessment
Threat Modeling
Web Applications
Burp Suite
Machine Learning (ML)
Collaboration
Communication

Job Details

       Execute AI-focused penetration testing engagements that include manual penetration testing of systems incorporating AI/ML, objective-based testing of AI-driven features, and coverage of both traditional and AI-centric attack surfaces.

       Perform threat modeling for AI-powered software systems, evaluate AI-related business logic, and conduct architecture reviews. Focus on adversarial ML vectors, prompt-based vulnerabilities, and other AI-specific security risks.

       Develop and improve AI-driven tools and methodologies for offensive security tasks such as discovery, exploitation, fuzzing, and adversarial ML testing, emphasizing web apps, APIs, and mobile clients.

       Present AI penetration testing findings to technical and non-technical audiences, including live demonstrations of exploits.

       Collaborate with engineering, development, and security teams to communicate findings, lead remediation discussions, and advise on secure AI model development and deployment best practices.

       Research emerging AI attack techniques and evaluate their potential impact, identify vulnerabilities, and provide actionable recommendations to strengthen AI defenses.

       Collaborate with internal Red Teams, SOC analysts, and AI security researchers, sharing insights and data to enhance AI-driven offensive security methodologies. Refine existing AI red teaming approaches by integrating new adversarial ML techniques and proven exploitation tactics.

       Act independently on AI penetration testing with minimal oversight, guiding engagements from planning through execution and reporting.

Qualifications

       Minimum three (3) years of recent penetration testing experience focused on APIs, web applications, and mobile applications. Experience with AI model testing or AI security is highly desirable.

       Proven background in AI red teaming and adversarial attack development, including prompt engineering attacks, LLM-based vulnerability analysis, and model evasion techniques.

       Proficiency with penetration testing tools (e.g., Burp Suite Pro, Netsparker, Checkmarx) and AI security frameworks (e.g., TensorFlow, PyTorch, LLM APIs, LangChain).

       Strong communication and presentation skills to explain AI-related vulnerabilities to technical and non-technical stakeholders and drive remediation.

       One or more major ethical hacking certifications (e.g., GWAPT, CREST, OSWE, OSWA) and certifications or training in AI security techniques.

       Bachelor’s degree from an accredited college/university or equivalent industry experience.

       Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.