Job Summary
This role leads the organization’s enterprise-wide Responsible AI strategy, ensuring AI technologies are designed, deployed, and monitored in a manner that is safe, fair, transparent, and privacy-conscious. The position establishes and operationalizes governance frameworks to assess and manage bias and model risk, while proactively identifying and mitigating AI-related risks. Acting as a key connector across product, engineering, legal, and risk teams, this role enables innovation at scale while maintaining strong oversight and accountability.
The leader will define and implement Responsible AI‑by‑Design principles, conduct impact assessments and model validations, and establish go/no-go criteria for enterprise AI product launches. Responsibilities include developing risk scoring and model safety methodologies, overseeing adversarial testing and control validation, monitoring evolving AI regulations, and leading training, awareness, and external reporting efforts to embed responsible AI practices across the enterprise.
Job Requirements / Qualifications
- 8+ years of experience in any combination of AI ethics, data science, machine learning, and AI compliance.
- Prior experience developing model risk management frameworks and risk taxonomies.
- Experience developing governance processes for risk assessments, preferably for AI.
- Deep understanding of AI models and ethical frameworks.
- Action-oriented individual with prior experience implementing governance and privacy frameworks through technology and automation.
- Deep understanding of AI safety controls and industry standards such as NIST SP 800-53 and the NIST AI Risk Management Framework (AI RMF).
- Bachelor’s degree in Data Science, Computer Science, or a related technical field required; advanced degree preferred.
Job Duties / Responsibilities
- Develop and implement responsible AI frameworks for risk calibration, remediation, and AI event management.
- Lead internal testing initiatives for AI models and solutions.
- Establish Responsible AI-by-Design principles.
- Conduct ethical impact assessments and model validations.
- Collaborate with Legal, Risk, and Technology teams.
- Monitor regulatory developments in AI.
- Lead training and awareness initiatives.
- Support external reporting and stakeholder engagement.
- Define go/no-go launch criteria for enterprise AI products.
- Build risk scoring methodologies for model safety.
- Oversee adversarial testing (e.g., red teaming) and control validation.
- Develop governance and privacy approaches for AI use cases and sensitive data.
Marriott International is an equal opportunity employer. We believe in hiring a diverse workforce and sustaining an inclusive, people-first culture. We are committed to non-discrimination on any protected basis, such as disability and veteran status, or any other basis covered under applicable law.