Overview
On Site
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
AI Security and Controls
AI assurance strategy
risk and control matrix
Internal Audit
Job Details
Role: Senior AI Security and Controls Engineer/SME
Location: New York, NY 10019
An AI Security and Controls Subject Matter Expert is needed to define and execute an AI assurance strategy, a risk and control matrix, and supporting guidance.
We are seeking someone to join the Technology Audit team within Internal Audit to manage and execute risk-based assurance activities covering the Firm's use of generative AI and artificial intelligence more broadly.
Responsibilities:
- Conduct Model Audits: Execute a wide range of assurance activities focused on the controls, governance, and risk management of generative AI models used within the organisation.
- Model Security & Privacy Reviews: Review and assess privacy controls, data protection measures, and security protocols applied to AI models, including data handling, access management, and compliance with regulatory standards.
- Familiarity with GenAI Models: Maintain a good understanding of current and emerging GenAI models.
- Adopt New Audit Tools: Stay current with and implement new audit tools and techniques relevant to AI/ML systems, including model interpretability, fairness, and robustness assessment tools.
- Risk Communication: Develop clear and concise messages regarding risks and business impact related to AI models, including model bias, drift, and security vulnerabilities.
- Data-Driven Analysis: Identify, collect, and analyse data relevant to model performance, privacy, and security, leveraging both structured and unstructured sources.
- Control Testing: Test controls over AI model development, deployment, monitoring, and lifecycle management, including data lineage, model versioning, and access controls.
- Issue Identification: Identify control gaps and open risks, raise insightful questions to identify root causes and business impact, and draw appropriate conclusions.
Required Skills:
- Experience: At least 3-4 years of relevant experience in technology audit, AI/ML, data privacy, or information security.
- Audit Knowledge: Understanding of audit principles, tools, and processes (risk assessments, planning, testing, reporting, and continuous monitoring), with a focus on AI/ML systems.
- Communication: Ability to communicate clearly and concisely, adapting messages for technical and non-technical audiences.
- Analytical Skills: Ability to identify patterns, anomalies, and risks in model behaviour and data.
- Education: Master's or bachelor's degree (Computer Science, Data Science, Information Security, or related field preferred).
- Certifications: CISA, CISSP, or relevant AI/ML certifications (preferred, not required).
- Technical Knowledge: Strong understanding of:
- AI/ML model development and deployment processes
- Model interpretability, fairness, and robustness concepts
- Privacy frameworks (e.g., GDPR, CCPA)
- Security standards (e.g., NIST, ISO 27001/02)
- Data governance and protection practices