Senior AI Security Engineer

Overview

On Site
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - long term

Skills

Software Development
Leadership
Software Security
Incident Management
Test Plans
Test Scenarios
Test Methods
Security Controls
Security Architecture
Data Science
Security Analysis
Collaboration
Research
Computer Science
Electrical Engineering
Computer Engineering
Statistics
Econometrics
Cyber Security
Information Security
Python
Torch
NumPy
Pandas
Cloud Computing
Microsoft Azure
Amazon Web Services
Google Cloud
Google Cloud Platform
Deep Learning
Natural Language Processing
Computer Vision
Generative Artificial Intelligence (AI)
Large Language Models (LLMs)
TensorFlow
PyTorch
Keras
Test Scripts
Test Cases
Programming Languages
Communication
Articulate
Software Testing
Conflict Resolution
Problem Solving
Critical Thinking
Attention To Detail
Publications
Operations Support Systems
Machine Learning Operations (ML Ops)
Functional Testing
Regression Testing
Performance Testing
Usability Testing
Algorithms
Training
Testing
IT Management
Artificial Intelligence
Machine Learning (ML)
EXT
IMG

Job Details

Senior AI Security Engineer

Brooklyn, New York

Hybrid with 3 days in Office and 2 days remote: 2 MetroTech Center, Brooklyn, NY 11201

On camera interview required

Contract

The needed resource skill set is specialized in working with software development teams to ensure the security and responsible use of AI applications by providing guidance at various stages of planning and implementing security design, processes, and solutions, and, testing and validation. The resource will have significant interaction with NYC Cyber Command leadership, its engineering, architecture, and application security teams, incident response and other cyber security practitioners.

TASKS: Design, implement, and execute test approaches to GenAI systems (MyCity Chatbot) to identify security flaws, particularly those impacting confidentiality, integrity, or availability of information.

Perform various types of tests such as functional testing, regression testing, performance testing, and usability testing to evaluate the behavior and performance of the AI algorithms and models.

Create, implement, and execute test plans and strategies for evaluating AI systems, including defining test objectives, selecting suitable testing methods, and identifying test scenarios.

Document test methods, results, and suggestions in clear and brief reports to stakeholders.

Perform security assessments including creating updating and maintaining threat models and security integration of Gen AI platforms. Ensure that security design and controls are consistent with OTI's security architecture principals.

Design security reference architectures and implement/configure security controls with an emphasis on GenAI technologies. Provide AI security architecture and design guidance as well as conduct full-stack architecture reviews of software for GenAI systems and platforms.

Serve as a subject matter expert on information security for GenAI systems and applications in cloud/vendor and on-prem environments.

Discuss AI/ML concepts proficiently with data science and ML teams to identify and develop solutions for security issues.

Collaborate with engineering teams to perform advanced security analysis on complex GenAI systems, identifying gaps and contributing to design solutions and security requirements.

Identify and document defects, irregularities or inconsistencies in AI systems and working closely with developers to rectify and resolve them.

Ensure the quality, consistency and relevance of data used for training and testing AI models (includes collecting, preprocessing and validating data)

Assess AI systems for ethical considerations and potential biases to make sure they follow ethical standards and encourage inclusivity and diversity.

Collaborate with diverse teams including developers, data scientists, and domain experts to understand requirements validate assumptions and align testing efforts with project goals.

Conducting research to identify vulnerabilities and potential failures in AI systems.

Design and implement mitigations, detections, and protections to enhance the security and reliability of AI systems.

Perform model input and output security including prompt injection and security assurance.

MANDATORY SKILLS/EXPERIENCE Note: Candidates who do not have the mandatory skills will not be considered.

Bachelor's degree in computer science, electrical or computer engineering, statistics, econometrics, or related field, or equivalent work experience

12+ years of hands-on experience in cybersecurity or information security.

4+ years of experience programming with demonstrated advanced skills with Python and the standard ML stack (TensorFlow/Torch, NumPy, Pandas, etc.)

4+ years of experience with Natural Language Processing (NLP) and Large Language Models (LLM) desired

4+ years of experience working in cloud environment (Azure, AWS, Google Cloud Platform)

Demonstrated proficiency with AI/ML fundamental concepts and technologies including ML, deep learning, NLP, and computer vision.

Demonstrated ability (expertise preferred) in attacking GenAI products and platforms.

Demonstrated recent experience with large language models.

Demonstrated experience with using AI testing frameworks and tools such as TensorFlow or PyTorch, or Keras

Demonstrated ability to write test scripts, automate test cases, and analyze test results using programming languages and testing frameworks listed above.

Demonstrated ability to Identify and document defects, irregularities or inconsistencies in AI systems and working closely with developers to rectify and resolve them.

Ability to work independently to learn new technologies, methods, processes, frameworks/platforms, and systems.

Excellent written and verbal communication skills to articulate challenging technical concepts to both lay and expert audiences.

Ability to stay updated on the latest developments, trends, and best practices in both software testing and artificial intelligence.

DESIRABLE SKILLS/EXPERIENCE:

Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

Background in designing and implementing security mitigations and protections and/or publications in the space

Ability to work collaboratively in an interdisciplinary team environment

Participated or currently participating in CTF/GRT/AI Red Teaming events and/or bug bounties developing or contributing to OSS projects.

Understanding of ML lifecycle and MLOps.

Perform various types of tests such as functional testing, regression testing, performance testing, and usability testing to evaluate the behavior and performance of the AI algorithms and models Ability to ensure the quality, consistency and relevance of data used for training and testing AI models (includes collecting, preprocessing and validating data)

Ability to assess AI systems for ethical considerations and potential biases to make sure they follow ethical standards and encourage inclusivity and diversity Ability work in and provide technical leadership to cross-functional teams to develop and implement AI/ML solutions, including capabilities that leverage LLM technology

Highly flexible/willing to learn new technologies

Ayush Sharma Sr. US Technical Recruiter

| Ext:149

| G-talk:

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.