Job Title: Site Reliability Engineer (SRE) – DataHub & GraphQL
Location: Austin, TX & Sunnyvale, CA
Work Authorization: Independent visa holders only
Role Overview
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in DataHub ingestion pipelines and GraphQL APIs. The ideal candidate will design, build, and maintain scalable data ingestion frameworks, ensure the reliability and performance of enterprise data platforms, and enable seamless integration with downstream applications. This role requires a balance of software engineering, systems reliability, and data platform knowledge.
Key Responsibilities
- Design, implement, and optimize DataHub ingestion pipelines for large-scale enterprise data systems.
- Develop and maintain GraphQL APIs to support data discovery, metadata management, and integration.
- Ensure high availability, scalability, and performance of data services across cloud and on-prem environments.
- Collaborate with data engineering, product, and infrastructure teams to deliver reliable data solutions.
- Automate monitoring, alerting, and incident response processes to improve system resilience.
- Drive best practices in observability, logging, and distributed system reliability.
- Troubleshoot complex production issues and implement long-term fixes.
Must-Have Skills
- 5+ years of experience as an SRE, DevOps Engineer, or Software Engineer with a focus on reliability and scalability.
- Strong hands-on experience with DataHub ingestion frameworks and metadata pipelines.
- Proficiency in GraphQL API design and implementation.
- Solid understanding of cloud platforms (AWS, GCP, or Azure) and container orchestration (Kubernetes, Docker).
- Expertise in monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Strong programming skills in Python, Java, or Go.
- Experience with CI/CD pipelines and infrastructure-as-code (Terraform, Ansible).
Good-to-Have Skills
- Familiarity with data governance and metadata management tools.
- Experience integrating with data platforms like Kafka, Spark, or Snowflake.
- Knowledge of REST APIs and microservices architecture.
- Exposure to security and compliance practices in data systems.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven track record of delivering reliable, scalable data infrastructure solutions.