Job Summary
As a Senior Lead AI Security Engineer in our Cybersecurity team, you will design and deliver secure artificial intelligence solutions that support critical cyber use cases. You will play a key role in shaping platform standards and governance, collaborating with cross-functional teams, and driving innovation in secure AI. Together, we will build foundational capabilities and create lasting impact for our organization and the wider community.
Job responsibilities
- Lead end-to-end design and delivery of AI solutions for cyber use cases, from problem framing and data integration to model development, evaluation, deployment, and monitoring.
- Build secure LLM/RAG services and ML pipelines that integrate with SIEM/XDR, EDR, SOAR, IAM, ITSM, CMDB, code repos, and cloud telemetry.
- Establish engineering standards for secure AI: prompt security, tool/function calling patterns, input/output validation, PII masking, secrets handling, and deterministic fallbacks.
- Create evaluation harnesses with offline/online metrics, golden datasets, adversarial prompt sets, jailbreak tests, and safety/quality KPIs.
- Partner with platform teams to stand up reusable AI components: LLM gateways, vector stores, feature stores, evaluation/observability, and governance workflows.
- Implement drift and quality monitoring; define SLAs/SLOs; build incident response runbooks for AI-enabled services.
- Collaborate with risk and MRGR-style governance partners to meet documentation, validation, and attestations; maintain model/AT inventories, monitoring plans, and change logs.
- Deliver measurable impact: reduce MTTR, improve detection precision, automate control evidence collection, and accelerate secure engineering.
- Mentor engineers and analysts; publish playbooks, templates, and safe prompt libraries; lead brown-bags and office hours for adoption.
- Drive a roadmap of 2–3 flagship capabilities per year (e.g., SOC triage assistant, controls automation agent, DevSecOps code copilot).
Required qualifications, capabilities, and skills
- Minimum 7 years of software/security engineering, including hands-on experience in one or more of: detection engineering, SecOps, AppSec/DevSecOps, or cloud security.
- Minimum 3 years building and operating applied ML/LLM systems in production (RAG pipelines, embeddings, fine-tuning/specialization, vector databases, model serving).
- Proficiency in Python and at least one of: Java, Scala, or TypeScript; experience with microservices, APIs, containers, and Kubernetes.
- Familiarity with SIEM, EDR, SOAR, IAM, and ITSM integrations; streaming/data engineering with Kafka or similar.
- Experience with LLM orchestration and guardrails (prompt engineering, injection defense, tool calling, safety filters).
- Hands-on with ML/LLM ecosystems: PyTorch or TensorFlow; scikit-learn; LangChain/LlamaIndex; ONNX/Triton/Ray
- Strong understanding of secure SDLC, privacy, and data protection; ability to partner with governance to meet documentation and monitoring requirements.
- Demonstrated ability to ship secure, reliable AI features with clear metrics and post-deployment monitoring.
Preferred qualifications, capabilities and skills
- Experience building developer copilots for AppSec/DevSecOps (IaC scanning, secrets detection, SAST/DAST triage).
- Cloud security engineering across one or more major providers; IaC and policy-as-code.
- Experience or exposure to Cyber operations, Adversarial ML and LLM red teaming experience (prompt injection, data exfiltration, model abuse, poisoning defenses).
- Graph ML for identity/threat detection; anomaly detection over telemetry.
- GPU optimization, model quantization/distillation, and on-prem/private model deployment.
- Familiarity with governance for AI/ML systems in regulated environments.