Overview
Skills
Job Details
Duration: 6+ Months;
Location: Hybrid 2-3 days in Boston, MA or Dallas, TX office. Within 25 miles distance.
Adversarial Testing: Threat Monitoring & Intelligence: Cross-Functional Collaboration: Continuous Improvement & Innovation: Must-Have Requirements: Nice-to-Haves:
* Design and execute controlled adversarial attacks (prompt injection, input/output evaluation, data exfiltration, misinformation generation)
* Evaluate GenAI models against known and emerging AI-specific attack vectors.
* Develop reusable test repositories, scripts, and automation to continuously challenge models.
* Partner with developers to recommend remediation strategies for discovered vulnerabilities.
* Continuously monitor the external threat landscape for new GenAI-related attack methods (e.g., malicious prompt engineering, fine-tuned model abuse).
* Correlate findings with internal AI deployments to identify potential exposure points.
* Complete assessment of existing technical controls and identify enhancements.
* Build relationships with threat intelligence providers, industry groups, and government regulators to stay ahead of adversarial AI trends.
* Partner with Cybersecurity, SOC, and DevSecOps teams to integrate adversarial testing into the broader enterprise security framework.
* Collaborate with AI/ML engineering teams to embed adversarial resilience into the development lifecycle ( shift-left AI security).
* Provide training and awareness sessions for business units leveraging GenAI.
* Develop custom adversarial testing frameworks tailored to the organization s specific use cases.
* Evaluate and recommend security tools and platforms for AI model monitoring, testing, and threat detection.
* Contribute to enterprise AI security strategy by bringing forward new practices, frameworks, and technologies.
* 5+ years of experience
* Hands-on adversarial testing of GenAI systems (prompt injection/jailbreaks, input-output evals, data-exfil testing) with actionable remediation
* Cybersecurity red team / penetration testing background and strong Python/scripting for automation and test harnesses
* ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection).
* Familiarity with AI security frameworks: NIST AI RMF or MITRE ATLAS or OWASP Top 10 for LLMs
* Experience with AI/MLOps platforms & integration frameworks (Azure AI or AWS SageMaker; OpenAI API/Hugging Face; LangChain or equivalent) in an enterprise setting
* Exposure to governance/risk for AI (model risk, policy alignment)
* SIEM/SOAR & threat-intel integration and monitoring
* Experience with building reusable adversarial test repos, scripts, and automation