AI Security Analyst (Remote)
The AI Security Analyst serves as the organization's dedicated subject matter expert at the intersection of artificial intelligence and cybersecurity within a regulated healthcare environment. This role is responsible for evaluating AI vendors and technologies, establishing and enforcing secure AI implementation standards, and providing hands-on guidance to development and engineering teams adopting AI platforms such as Microsoft Copilot Studio, Azure AI Foundry, Snowflake Cortex, Claude Code, and other large language model (LLM)-powered tooling.
Operating within the HIPAA-regulated landscape, this analyst will ensure that AI integrations, including Model Context Protocol (MCP) servers, agentic workflows, command-line interfaces (CLIs), APIs, and third-party AI extensions, are architected and deployed in a manner consistent with NIST AI RMF, HITRUST, and organizational security policies. The role acts as a trusted advisor, security gatekeeper, and enabler for responsible AI adoption across the enterprise.
As part of our process after applying, you may receive an invitation from our AI Recruiter Avery for a short conversation that lets you share more about your background beyond your resume. For questions, contact .
- Job Type: Contract to hire
- Location: Remote (Pacific hours)
- Compensation: This job is expected to pay $65–$95 per hour, plus benefits
- No visa sponsorship available for this role
What You'll Do:
AI Vendor & Technology Evaluation
- Lead security assessments of AI vendors and platforms prior to adoption or renewal
- Evaluate data handling, model transparency, and platform security controls
- Produce vendor risk reports with ratings, controls, and recommendations
- Maintain AI technology inventory with risk classifications and review cycles
Secure AI Implementation Guidance
- Advise engineering and data teams on secure AI adoption and architecture
- Define and enforce secure configurations and least-privilege access
- Review AI integrations for authentication, encryption, and prompt injection risks
- Establish security standards for AI development tools and conduct code reviews
- Develop reference architectures, templates, and best practices
AI Risk Management & Compliance
- Maintain AI risk register aligned to NIST AI RMF
- Ensure compliance with HIPAA and applicable privacy regulations
- Conduct threat modeling and AI-focused security testing (e.g., prompt injection, data leakage)
- Monitor emerging AI threats and contribute to governance policies
Security Integration Reviews
- Assess AI architectures for data flow, segmentation, and trust boundaries
- Ensure proper handling of sensitive data (e.g., PHI) in AI systems
- Evaluate RAG and agentic workflows for access and escalation risks
- Provide security approval through change management processes
Training, Awareness & Policy
- Deliver AI security training across technical and clinical teams
- Develop and maintain AI security policies and usage standards
- Publish internal guidance and threat intelligence updates
What Gets You the Job:
- Bachelor's degree in Cybersecurity, Computer Science, Information Systems, or a closely related field
- Master's degree preferred; equivalent professional experience considered
- 5+ years of progressive experience in information security, with a minimum of 2 years focused on AI/ML security or applied AI technology evaluation
- Must have demonstrated hands-on experience with Copilot Studio and Azure AI Foundry, including a deep understanding of backend functionality such as plugin manifest security review, connector authentication, sensitivity label enforcement, identity configuration, private endpoints, content filtering policy management, and model deployment governance
- Demonstrated hands-on experience with one or more of the following is a plus: Claude / Anthropic APIs, OpenAI API, GitHub Copilot, or LLM agentic frameworks (LangChain, AutoGen, Semantic Kernel)
- Experience working in a regulated environment; healthcare industry background strongly preferred
- Proven track record conducting vendor risk assessments and producing executive-level risk documentation
- Strong grounding in security fundamentals, including IAM (OAuth 2.0, OIDC, SAML, managed identities, workload identity federation), API security, and network security; SIEM/SOAR integration for AI audit log ingestion, anomaly detection, and automated response; and threat modeling methodologies such as STRIDE and PASTA, or application of the MITRE ATT&CK and ATLAS frameworks
- Certifications (CISSP, CSSLP, OSCP/OSWE, CEH, AWS/Azure AI Security, Microsoft SC-100, Google PCSAE, CCSP, HCISPP, HITRUST CCSFP, CIPP/US, CRISC) are a plus
Irvine Technology Corporation (ITC) connects top talent with exceptional opportunities in IT, Security, Engineering, and Design. From startups to Fortune 500s, we partner with leading companies nationwide. Our AI recruiter, Avery, helps streamline the first step of your journey so we can focus on what matters most: helping you grow. Join us. Let us ELEVATE your career!
Irvine Technology Corporation provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Irvine Technology Corporation complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.