Our client is seeking an AI Security Engineer to own the security of how we adopt and integrate third-party artificial intelligence and large language model (LLM) services across the enterprise. This is a practitioner role for someone with a strong security engineering foundation who has developed meaningful expertise in AI/ML security risks — or who is actively building that expertise and ready to own it as their primary charter.
As an enterprise consumer of AI services, our risk surface centers on how we connect to and use external AI providers — securing API integrations, controlling data exposure, governing adoption of AI tools across the organization, and ensuring AI usage aligns with our regulatory obligations. This role is not focused on building or training AI models.
Direct Hire
Hybrid - Sandy Springs, GA
Targets:
• Strong background in security engineering with exposure to application security and cloud security
• Ability to influence architecture and design decisions across engineering teams
• Hands-on experience is still important — this is not a purely advisory or governance role
• Direct experience with AI security, such as:
o AI or ML security risk assessments
o Threat modeling for AI systems
o Model lifecycle, data pipeline, or access-control risks
o AI security frameworks (e.g., NIST AI RMF, OWASP guidance)
This role is best suited for candidates who:
• Have operated at a principal or architect level
• Are comfortable translating security risk into practical, implementable controls
• Can work in a product/SaaS environment rather than only large, highly structured enterprises
• Are comfortable speaking at a conference in front of 2000+ people
This role is not:
• A network-only role
• A pure governance or compliance role
• A people-manager position