What You Will Own:
What You Will Not Own:
Shape of the Work:
This is a role that lives at three altitudes at once:
With program teams (hands-on advisory). Partner with program owners early, before evaluations are designed, to shape study approach, sample size, stratification, gold-standard definition, and decision thresholds. Translate ambiguous failure modes into concrete, defensible evaluation designs. Coach teams through the technical work so that what arrives at governance review is rigorous, not performative.
With the evaluation toolkit (hands-on build). Design and operate the reusable assets that let evaluation scale: LLM-as-Judge rubrics and calibration methods, golden sets, simulation harnesses, A/B and shadow-mode study templates, subgroup fairness analyses, and drift monitors. Keep a pragmatic eye on what actually works in a clinical environment versus what works in a paper.
With the analyst team (technical leadership). Set technical direction, assign work across active evaluations, review analysis code and study designs, and raise the technical bar. Mentor analysts on methodology, statistical rigor, and the domain knowledge that makes evaluation credible. Grow them from execution into independent evaluation design.
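As an illustration of the kind of reusable toolkit asset described above (not part of the role requirements), calibrating an LLM-as-Judge against a golden set often starts with chance-corrected agreement. This minimal sketch computes raw agreement and Cohen's kappa between judge labels and gold labels; the function name and label values are hypothetical.

```python
from collections import Counter

def judge_calibration(gold: list[str], judge: list[str]) -> dict:
    """Compare an LLM judge's labels against a golden set.

    Returns percent agreement and Cohen's kappa, a chance-corrected
    agreement statistic commonly used when calibrating judge rubrics.
    """
    assert len(gold) == len(judge), "label lists must align"
    n = len(gold)

    # Raw (observed) agreement: fraction of cases where judge matches gold.
    p_observed = sum(g == j for g, j in zip(gold, judge)) / n

    # Expected agreement by chance, from each rater's label frequencies.
    gold_freq = Counter(gold)
    judge_freq = Counter(judge)
    p_expected = sum(
        (gold_freq[label] / n) * (judge_freq[label] / n)
        for label in gold_freq
    )

    kappa = (p_observed - p_expected) / (1 - p_expected) if p_expected < 1 else 1.0
    return {"agreement": p_observed, "kappa": kappa}

# Illustrative only: judge agrees on 4 of 5 gold-labeled cases.
gold_labels = ["pass", "pass", "fail", "pass", "fail"]
judge_labels = ["pass", "pass", "fail", "fail", "fail"]
print(judge_calibration(gold_labels, judge_labels))
```

In practice a calibration asset like this would sit alongside per-rubric thresholds (e.g. a minimum kappa before a judge is trusted in production) and subgroup breakdowns, per the fairness analyses mentioned above.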
Methods You'll Use:
Work is typically performed in an office or remote environment. Accountable for satisfying all job-specific obligations and complying with all organization policies and procedures. The specific statements in this profile are not intended to be all-inclusive. They represent typical elements considered necessary to successfully perform the job.
*Relevant experience may be a combination of related work experience and degree obtained (Master's Degree = 2 years; PhD = 4 years).
Required Skills & Qualifications:
6+ years in data science, statistics, ML engineering, or applied quantitative research, with demonstrated experience as the senior technical voice on cross-functional projects
Strong foundation in experimental design and causal inference — and judgment about which method fits which situation
Hands-on experience designing and running model evaluation studies in real production settings
Experience evaluating LLM or generative AI systems, or comparable experience evaluating complex ML systems where ground truth is messy
Proven ability to translate ambiguous failure modes into concrete, defensible evaluation designs and monitoring metrics
Strong fluency in Python and SQL; working comfort with modern ML tooling and cloud-native data environments
Experience with fairness and equity evaluation for ML systems
Track record of providing technical leadership and mentorship without formal people-management authority
Clear written communication — the role produces evaluation memos and specifications that non-technical decision-makers rely on
Healthcare, clinical, or regulated-industry experience strongly preferred
MS or PhD in a quantitative field preferred; equivalent experience accepted