REQUIRED SKILLS

Role overview
You will build and integrate the security guardrails that make AI usable at scale: policy as code, proxy layers for model access, prompt/content filtering, evaluation harnesses, secrets & key management, telemetry, and automation in CI/CD. You'll prototype quickly (PoCs), harden what works, and partner with platform, data, and product teams to get controls into production on Google Cloud with modern DevOps practices.

What you'll do
- Build secure AI access layers (Python) for internal use and service-to-service scenarios: request/response inspectors, output redaction, rate limiting, and audit logging. Integrate with sensitivity labels/DLP and identity controls where applicable.
- Develop agent safety patterns (for orchestration frameworks) including tool-use allow-lists, function sandboxing, constrained retrieval, and memory hygiene; create reusable modules for product teams.
- Implement and operate evaluation pipelines (red-team prompts, jailbreak detection, toxicity/PII checks, hallucination/grounding scores) as part of CI/CD, gating releases on eval thresholds; capture artifacts for 5Rs evidence.
- Engineer Google Cloud Platform security controls for AI workloads: VPC SC, Private Service Connect, service account hygiene, Workload Identity Federation, CMEK, Secret Manager, Cloud Build/Artifact Registry policies, Cloud Logging/Monitoring/SCC alerting.
- Harden data pipelines feeding models (poisoning/tamper detection, provenance/lineage, RBAC/ABAC, DLP), working with data engineering teams.
- Automate controls (policy as code) to enforce least privilege, environment isolation, egress controls, and artifact signing; integrate with existing SAST/DAST/SCA and threat modeling workflows.
- Contribute to Copilot security enablement: configure Purview sensitivity labels, Copilot DLP, Restricted Access sites, and Conditional Access for AI apps; validate via test plans.
- Ingest architecture diagrams, data-flow specs, and service metadata to produce LLM-assisted security use cases (leveraging AI for security).
- Engineer autonomous/assisted SOC agents to ingest alerts from Defender XDR/Sentinel and approved third-party sources and perform enrichment.
What you'll bring
- Strong software engineering in Python (frameworks, testing, packaging), with experience building secure services/middle tiers and AI agent integrations.
- Hands-on Google Cloud expertise (IAM, GKE/Cloud Run, Cloud Build, Artifact Registry, Secret Manager, VPC SC, SCC) and DevOps (IaC, CI/CD, policy as code).
- Practical knowledge of AI threats and mitigations (prompt-injection filters, content moderation, output redaction, token-level guardrails, secrets hygiene, model endpoint hardening).
- Familiarity with enterprise collaboration controls (Purview labels, DLP for Copilot, restricted access sites) and how to test their efficacy.
Nice to have
- Experience wiring evaluations/red-team harnesses into CI (e.g., blocking merges on eval regressions); exposure to EU AI Act/GDPR implications for logging/telemetry and DPIAs.
- Knowledge of SAST/DAST/SCA and dependency governance aligned to our SDLC standards.
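Gating releases on eval thresholds, as described above, can be sketched as a small CI step. This is a hedged illustration: the metric names, thresholds, and results-file format are hypothetical stand-ins for whatever the eval harness actually emits.

```python
import json
import sys

# Hypothetical release thresholds; real values come from the eval harness config.
THRESHOLDS = {"grounding": 0.85, "jailbreak_block_rate": 0.99}

def gate(results: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return the metrics that fall below their release threshold.

    Metrics missing from the results are treated as failing.
    """
    return [m for m, floor in thresholds.items() if results.get(m, 0.0) < floor]

def main(path: str) -> int:
    """Load eval results from JSON and exit nonzero if any metric fails."""
    with open(path) as f:
        results = json.load(f)
    failures = gate(results, THRESHOLDS)
    for metric in failures:
        print(f"FAIL {metric}: {results.get(metric)} < {THRESHOLDS[metric]}")
    return 1 if failures else 0  # nonzero exit blocks the CI stage

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv[1]))
```

Run as a pipeline step after the eval harness writes its results file; a nonzero exit code is what actually blocks the merge or release.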