Join our dynamic team at JPMorgan Chase as the Vice President and Lead for Applied AI/ML. You'll spearhead the development of cutting-edge GenAI applications, including Search, Chatbots, and Agents, to tackle exciting challenges in commercial banking and financial services. Be part of a vibrant technology team dedicated to pushing the boundaries of AI and enhancing business success.
As a Vice President and AI/ML Ops Engineer, you will drive the end-to-end development and deployment of GenAI-powered applications, leveraging your expertise in AI, NLP, LLMs, and deep learning to deliver enterprise-level solutions. You will collaborate across teams to optimize system architecture, ensure operational stability, and foster a culture of innovation and inclusion.
Job responsibilities
- Lead the end-to-end development and deployment of GenAI-powered applications (search, chatbots, agents).
- Leverage expertise in AI, NLP, LLM, and deep learning to improve business outcomes and processes.
- Design enterprise-level solutions for LLM and GenAI use cases on cloud platforms (AWS/Azure).
- Drive development, testing, deployment, monitoring, and continuous operations for high-performance, high-volume cloud applications.
- Implement innovative software solutions and troubleshoot complex problems beyond routine methodologies.
- Utilize tools like Terraform IaC, Splunk, Dynatrace, Grafana, Prometheus, and Datadog to build scalable and maintainable systems.
- Optimize system architecture, operational stability, coding hygiene, and troubleshooting workflows.
- Advance production engineering and automation practices.
- Contribute to engineering communities, fostering a culture of innovation, diversity, and inclusion.
- Participate in SRE and production support activities.
- Communicate complex technical concepts and solutions to both technical and non-technical stakeholders.
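To give candidates a concrete sense of the operational-stability and monitoring work described above, here is a minimal, illustrative Python sketch of latency/outcome instrumentation emitting structured log lines a collector (e.g., Splunk or Datadog) could ingest. All names (`timed`, `search`, `rag_search`) are hypothetical, not JPMorgan code.

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ops-metrics")

def timed(operation: str):
    """Record latency and outcome for an operation as one structured JSON log line."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                # One line per call: easy to index, chart, and alert on.
                logger.info(json.dumps({
                    "operation": operation,
                    "status": status,
                    "latency_ms": round((time.perf_counter() - start) * 1000, 2),
                }))
        return wrapper
    return decorator

@timed("rag_search")
def search(query: str) -> list[str]:
    # Placeholder standing in for a GenAI search call.
    return [f"result for {query}"]
```

In production this pattern is typically replaced by a metrics client (Prometheus, Datadog), but the decorator shape is the same.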
Required qualifications, capabilities, and skills
- Formal training in software engineering with 12+ years of applied technical experience.
- Strong hands-on expertise in system design, application development, debugging, fine-tuning, and operational support.
- Deep knowledge of Python (including frameworks like FastAPI) and of enterprise-level microservices architectures and APIs, with hands-on experience deploying scalable AI workflows to AWS.
- Proficiency in AWS services (EC2, ECS, EKS, Lambda, DynamoDB, RDS, Redshift, EMR, OpenSearch, Step Functions, Kinesis).
- Experience developing and maintaining production-grade multi-agent systems and GenAI/LLM-specific applications.
- Good understanding of one or more AI agentic frameworks (e.g., Google ADK, LangGraph, LangChain, CrewAI).
- Solid grasp of CI/CD (Spinnaker), agile development, application resiliency, monitoring, and security best practices.
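The multi-agent experience listed above centers on the router/worker pattern that frameworks like LangGraph and CrewAI formalize. As a toy illustration only (plain-Python sketch, not the API of any of those frameworks; all names are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A named worker that handles one kind of task."""
    name: str
    handle: Callable[[str], str]

def route(agents: dict[str, Agent], task: str) -> str:
    """Dispatch a task to the first agent whose key prefixes the task string,
    mimicking the router/worker split in agentic frameworks."""
    for key, agent in agents.items():
        if task.startswith(key):
            return agent.handle(task)
    return "no agent matched"

agents = {
    "search:": Agent("searcher", lambda t: f"searched '{t.removeprefix('search:')}'"),
    "summarize:": Agent("summarizer", lambda t: f"summary of '{t.removeprefix('summarize:')}'"),
}
```

Real frameworks add state graphs, tool calling, and retries on top of this dispatch loop, but the core routing idea is the same.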
Preferred qualifications, capabilities, and skills
- Deep understanding of LLM and GenAI concepts and implementations.
- Good understanding of Google ADK and A2A protocol.
- Strong skills in SQL, relational and NoSQL databases, and data modeling (e.g., Amazon Aurora, PostgreSQL, DynamoDB).
- Familiarity with Jupyter-based model training environments.
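The SQL and data-modeling skills above can be illustrated with a small relational sketch. This uses Python's stdlib `sqlite3` purely for illustration; the customer/account schema is hypothetical, not a JPMorgan data model.

```python
import sqlite3

# In-memory relational sketch: customers own accounts (one-to-many),
# with a foreign key enforcing referential integrity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE account (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        balance     REAL NOT NULL DEFAULT 0
    );
""")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO account (customer_id, balance) VALUES (1, 250.0)")

# Join and aggregate: total balance per customer.
row = conn.execute("""
    SELECT c.name, SUM(a.balance)
    FROM customer c
    JOIN account a ON a.customer_id = c.id
    GROUP BY c.id
""").fetchone()
```

The same modeling reasoning carries over to Aurora/PostgreSQL; DynamoDB would instead use single-table, access-pattern-driven design.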