AI Security Analyst

  • Allentown, Pennsylvania

Overview

On Site
DOE
Contract - W2

Skills

Generative Artificial Intelligence (AI)
Security Controls
Risk Management
Software Development Methodology
ISO 9000
Regulatory Compliance
Privacy
Data Security
Computer Science
Information Security
Threat Modeling
Training
Cyber Security
Authentication
Encryption
Network Security
Communication
Collaboration
Agile
SAFe
Analytical Skill
Microsoft
Cloud Computing
SaaS
Microsoft Azure
Microsoft Power BI
Data Governance
DLP
Artificial Intelligence
Machine Learning (ML)
Scripting
Security QA
Cloud Security
Certified Ethical Hacker

Job Details

Job Summary

The client is seeking a passionate and technically skilled Junior to Mid-Level AI Security Analyst to join the Product Security team. This role is ideal for candidates with a strong foundation in cybersecurity and growing expertise in AI/ML systems. The Analyst will implement and maintain security guardrails for AI solutions, including traditional ML, Generative AI, and Agentic AI, within the established AI Security Controls framework. This framework emphasizes observability, traceability, risk management, and specialized safeguards for Generative and Agentic AI. The role collaborates with Data & AI and Product teams to ensure AI-driven applications adhere to enterprise security standards and policies, supporting responsible and resilient adoption of AI technologies.

Key Responsibilities

  • Collaborate with product teams to embed security into AI/ML models, pipelines, and applications throughout the SDLC
  • Conduct security reviews for AI systems, including LLMs, generative models, and data pipelines
  • Support the development of AI security policies, standards, and controls aligned with NIST, ISO, and emerging AI regulations
  • Define and implement AI-specific risk controls, including model validation, bias mitigation, and explainability
  • Collaborate with legal, compliance, and data privacy teams to ensure adherence to evolving AI regulations
  • Assist in evaluating and implementing AI security tools for observability, model scanning, and data protection
  • Help build awareness and training materials for secure AI development practices across agile teams
  • Perform other duties and projects as assigned

Required Qualifications

  • Bachelor's degree in Computer Science, Information Security, or a related field
  • 2+ years of experience in cybersecurity, with exposure to AI/ML technologies
  • Familiarity with secure coding practices, threat modeling, and cloud-native environments
  • Understanding of AI/ML concepts such as model training, inference, data labeling, and adversarial attacks
  • Knowledge of common AI risks (prompt injection, data poisoning, model misuse, etc.) and cybersecurity concepts (authentication, encryption, network security)
  • Strong communication and collaboration skills in agile environments (SAFe experience a plus)
  • Strong analytical skills to assess risks and vulnerabilities in complex systems

Preferred Qualifications

  • Professional certifications such as CCSK, CEH, or AI-specific credentials
  • Experience with Microsoft AI security tools (MS Defender for Cloud, MS Defender for Cloud Apps, Azure AI Content Safety, MS Purview)
  • Experience with AI security tools (e.g., Zenity, HiddenLayer)
  • Exposure to Power Platform, Power BI, or other low-code tools, especially with data governance or DLP implementation
  • Experience specifically in AI security or ML model governance
  • Proficiency in scripting and automation for security testing

Education: Bachelor's Degree
Certification: Certificate of Cloud Security Knowledge, Certified Ethical Hacker