Applied AI SDET (Dallas, TX)
The InRhythm Opportunity
InRhythm is establishing a modern software engineering practice built around an AI-driven Development Lifecycle (AI-DLC) methodology — spec-driven, agentic, and powered by Claude. As an Applied AI SDET, you will be the quality authority of an elite pod delivering enterprise software at agentic velocity. You are not validating handcrafted code written line by line. You are owning the quality bar for a pipeline where Claude generates significant portions of production code — and where the consequences of undetected errors are magnified by the speed of delivery.
Your Impact
AI-generated code introduces quality risks that traditional QA processes were not designed to catch. Subtle logic errors, incomplete edge case handling, and systematically misapplied patterns can pass human review when reviewers are moving fast. Your role goes beyond building a safety net — you will design and implement AI-driven quality engineering processes, tools, and agents that make quality intrinsic to the delivery pipeline rather than a gate at the end of it. You will feed intelligence back into the practice so the pipeline continuously improves, and you will build the automation infrastructure that allows the pod to ship with confidence at a pace that manual QA cannot sustain.
What You Will Do
- Own the Test Strategy: Define and maintain the end-to-end test strategy for AI-generated code across unit, integration, contract, and E2E layers, calibrated to the risks introduced by agentic delivery.
- Build AI-Driven Quality Agents: Design and implement AI-powered quality agents and tooling — using Claude or equivalent — that automate test generation, coverage gap analysis, regression triage, and defect pattern detection within the CI/CD pipeline.
- Validate AI-Generated Tests: Critically assess Claude-generated unit and integration tests for completeness, correctness, and meaningful coverage. Identify gaps, redundancies, and tests that pass without actually validating behavior.
- Build and Maintain Automation Suites: Design, implement, and own automated test suites that run as quality gates in the CI/CD pipeline, including regression safety nets that protect the codebase from agentic regressions.
- TestContainers & Isolated Test Environments: Design and manage containerized, isolated test environments using TestContainers and Docker to ensure backend service tests run against production-equivalent dependencies — databases, message queues, and third-party service stubs — without shared state or environment bleed.
- Synthetic Data Engineering: Design and maintain synthetic data strategies that produce realistic, consistent, and constraint-safe test data for backend service testing, ensuring coverage of edge cases, boundary conditions, and stateful workflows without reliance on production data.
- Establish AI Quality Engineering Processes: Define and document repeatable quality engineering processes tailored to the AI-DLC model — covering how tests are generated, reviewed, validated, and evolved alongside AI-generated features.
- Identify AI Failure Patterns: Detect and document systematic patterns in Claude’s output quality — recurring anti-patterns, common security misses, or edge cases Claude consistently overlooks — and feed these back to the AI Solution Owner to improve specs and prompt context.
- Partner on Spec Quality: Review feature specifications before agentic generation begins, flagging ambiguities or missing acceptance criteria that will produce untestable or unverifiable output.
- Own Your Delivery: Take full responsibility for test coverage and quality sign-off on every feature delivered by the pod, from spec handoff through production deployment and post-deploy verification.
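To make the "tests that pass without actually validating behavior" point concrete, here is a minimal, stdlib-only Python sketch of the kind of static check this role might build. The helper name `find_assertless_tests` and the sample tests are illustrative, not existing InRhythm tooling, and the check is a heuristic (it misses unittest-style `self.assert*` calls, for example):

```python
import ast

def find_assertless_tests(source: str) -> list[str]:
    """Flag test functions that contain no assert statements.

    A common failure mode in AI-generated suites is a test that
    exercises code but never verifies behavior, so it passes trivially.
    Heuristic only: unittest-style self.assert* calls are not detected.
    """
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) \
                and node.name.startswith("test_"):
            # Any `assert` anywhere inside the function body counts.
            has_assert = any(isinstance(n, ast.Assert) for n in ast.walk(node))
            if not has_assert:
                flagged.append(node.name)
    return flagged

sample = '''
def test_checkout_total():
    total = 2 + 2          # computes, but never checks anything
    print(total)

def test_discount_applied():
    assert apply_discount(100, 0.1) == 90
'''

print(find_assertless_tests(sample))  # → ['test_checkout_total']
```

In practice a check like this would run as a CI gate on Claude-generated test files, turning "review every generated test by hand" into "review only the flagged ones."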
Must Have Experience:
- SDET / QA Engineering Proficiency: 5+ years in a software development engineer in test (SDET) or senior QA role, with a track record of building and owning test automation in an enterprise software delivery context.
- Python & FastAPI Testing: Proven experience testing Python/FastAPI backend services including async endpoint testing, dependency injection overrides, and integration test patterns with pytest and httpx.
- TypeScript & React Testing: Experience writing and maintaining tests for TypeScript/React frontends using React Testing Library, Jest, and component-level testing patterns.
- Node.js Testing: Proven experience testing Node.js backend services using Jest, Supertest, or Mocha — including middleware testing, async handler validation, and integration testing of REST and GraphQL APIs built with Express, Fastify, or NestJS.
- TestContainers & Isolated Environments: Hands-on experience with TestContainers (Python or Java) to spin up isolated, containerized dependencies — PostgreSQL, Redis, Kafka, or equivalent — for reliable, repeatable integration tests with zero shared state.
- Synthetic Data Engineering: Demonstrated ability to design synthetic data pipelines that generate realistic, constraint-safe test data for stateful backend services, covering edge cases and boundary conditions without using production data.
- API & Contract Testing: Proven experience with REST and GraphQL API testing, contract testing (Pact or equivalent), and validating service boundaries in microservices or distributed systems.
- AI-Augmented Development Experience: Demonstrable experience using AI coding agents (Claude Code, GitHub Copilot, Cursor, or equivalent) as a delivery tool, including the ability to critically evaluate and extend AI-generated test output.
- CI/CD Pipeline Integration: Experience embedding automated test suites as quality gates in CI/CD pipelines with GitHub Actions, ArgoCD, or equivalent, including test parallelization, flakiness management, and coverage reporting.
- Kubernetes & AWS Fundamentals: Working knowledge of Kubernetes (EKS) and AWS services relevant to test environment management including ECR, S3, RDS, and CloudWatch for test observability.
- SAST & Security Testing Awareness: Familiarity with SAST tooling (SonarQube, Checkmarx, or equivalent) and the ability to interpret and act on security scan findings in AI-generated code.
- BDD & Spec-Driven Testing: Experience with Gherkin/Cucumber or equivalent BDD frameworks and their integration with agentic spec pipelines.
- E2E Testing: Experience with Cypress or Playwright for end-to-end testing of React applications.
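As a concrete reference point for the synthetic data requirement above, the following is a small, stdlib-only Python sketch of constraint-safe, deterministic test data generation. The `Order` shape and field names are hypothetical; the point is that invariants hold by construction, boundary cases are generated deliberately, and seeding makes failures reproducible:

```python
import random
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    line_totals: list[float]
    grand_total: float  # invariant: equals round(sum(line_totals), 2)

def make_orders(n: int, seed: int = 42) -> list[Order]:
    """Generate deterministic, constraint-safe synthetic orders.

    grand_total is derived from line_totals, so the invariant cannot
    drift; the empty-order boundary case is produced on purpose rather
    than left to chance.
    """
    rng = random.Random(seed)  # seeded: identical data on every run
    orders = []
    for i in range(n):
        # Force an edge case: every fifth order has zero line items.
        count = 0 if i % 5 == 0 else rng.randint(1, 4)
        lines = [round(rng.uniform(0.01, 500.0), 2) for _ in range(count)]
        orders.append(Order(order_id=i,
                            line_totals=lines,
                            grand_total=round(sum(lines), 2)))
    return orders

orders = make_orders(10)
assert all(o.grand_total == round(sum(o.line_totals), 2) for o in orders)
assert any(not o.line_totals for o in orders)  # empty-order case present
```

The same pattern scales up with factories or property-based tools, but the essentials shown here (seeded randomness, invariants enforced at generation time, explicit boundary cases) are what interviews for this role would probe.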
Nice to Have Experience:
- Performance & Load Testing: Experience with k6, Gatling, or equivalent tools for validating service performance under load.
- Rust Familiarity: Exposure to testing Rust services or understanding of Rust’s testing model in a polyglot microservices context.
- Prompt Engineering Awareness: Familiarity with prompt engineering concepts and how spec quality influences AI-generated test completeness.
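To illustrate how spec quality connects to testability, here is a toy, stdlib-only Python sketch of a spec lint that flags the two problems the role calls out: missing acceptance criteria and ambiguous wording that tends to produce unverifiable AI-generated output. The spec shape, function name, and ambiguity word list are all illustrative assumptions:

```python
# Illustrative word list: terms with no measurable threshold.
AMBIGUOUS_TERMS = {"fast", "robust", "appropriate", "seamless"}

def lint_spec(spec: dict) -> list[str]:
    """Return findings for a feature spec before agentic generation.

    Flags specs with no acceptance criteria (untestable by definition)
    and vague terms that should be replaced with measurable thresholds.
    """
    findings = []
    criteria = spec.get("acceptance_criteria", [])
    if not criteria:
        findings.append("no acceptance criteria: output will be unverifiable")
    words = " ".join([spec.get("description", ""), *criteria]).lower().split()
    for term in sorted(AMBIGUOUS_TERMS):
        if term in words:
            findings.append(f"ambiguous term '{term}': define a measurable threshold")
    return findings

spec = {"description": "Search should be fast and robust",
        "acceptance_criteria": []}
for finding in lint_spec(spec):
    print(finding)
```

A real implementation would be richer (and could itself use Claude to judge ambiguity), but even this level of check gives the spec review in "Partner on Spec Quality" something mechanical to stand on.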
Compensation
This role aligns with InRhythm’s standard salary bands for Senior Engineers. Specific compensation will be calibrated based on technical depth and experience level.
The InRhythm Consultant
At InRhythm, we are more than technical executors. As a member of the AI-DLC practice, every consultant is expected to demonstrate:
- Strategic Thinking: The ability to see quality not as a gate at the end of delivery, but as a discipline embedded into every stage of the agentic pipeline.
- Clarity and Ownership: Taking full responsibility for quality outcomes and communicating test coverage trade-offs with transparency.
- Stakeholder Alignment: Building trust with client counterparts by demonstrating that AI-generated code can be shipped confidently when the right quality infrastructure is in place.
- Bias to Measurable Outcomes: A relentless focus on quality metrics (defect escape rate, coverage thresholds, regression frequency) that prove the safety and reliability of agentic delivery.
- Consultant Mindset: The ability to operate as a trusted advisor, not just a technical executor. You bring a point of view, challenge assumptions constructively, and actively contribute to the client's success beyond the scope of your immediate role.