Job Summary:
As an individual contributor, you will take ownership of the most challenging technical initiatives: solving problems at scale, driving improvements in performance and reliability, and setting the standards that others follow. You will partner closely with Data Engineering, Platform Engineering, Data Science, Architecture, and other technical teams to evaluate solution designs, influence engineering decisions, and mentor developers in modern data engineering practices.
Responsibilities and Accountabilities:
- Develop & Maintain Scalable Data Pipelines: Architect, build, and optimize ETL/ELT pipelines using PySpark, Spark SQL, Auto Loader, and Delta Live Tables to support end-to-end ingestion and transformation.
- Implement Robust Lakehouse Architecture: Design and enhance Medallion-layer (Bronze/Silver/Gold) data models, applying Delta Lake features such as schema evolution, Change Data Feed (CDF), OPTIMIZE, and Z-Ordering to deliver performant, reliable, and cost-efficient data layers.
- Integrate Data Across Cloud Platforms: Ingest and harmonize structured, semi-structured, and unstructured data from multiple cloud environments, including Azure and enterprise object storage.
- Develop Reusable Engineering Frameworks: Create and maintain reusable Python, PySpark, and YAML-based libraries and patterns to standardize ingestion, transformation, automation, and engineering workflows across teams.
- Enforce Data Quality & Governance: Implement and operationalize automated data validation frameworks (DLT expectations, data contracts) while applying Unity Catalog governance covering permissions, lineage, external locations, and PII/PHI controls.
- Optimize Performance & Cost Efficiency: Tune Spark workloads by applying partitioning, caching, and join optimization strategies; leverage Photon, serverless SQL, and cluster right-sizing to improve runtime performance and reduce compute costs.
- Collaborate with Data & Platform Teams: Partner closely with Data Scientists, Analysts, SMEs, and Platform Engineering teams to translate requirements into scalable data solutions and align on architectural, governance, and operational standards.
- Operationalize Data Science Workflows: Convert prototype notebooks into production-ready pipelines, support feature engineering and batch/real-time scoring, and manage MLflow tracking and model registry operations.
- Lead Cross-Team Alignment: Drive alignment on Databricks standards (CI/CD, governance, data quality, and operational readiness), ensuring consistent adoption across domains and delivery teams.
This individual contributor is primarily responsible for translating business requirements and functional specifications into software solutions, for developing, configuring or modifying integrated business and/or enterprise application solutions, and for facilitating the implementation and maintenance of software solutions.
Essential Responsibilities:
- Completes work assignments by applying up-to-date knowledge in the subject area to meet deadlines; following procedures and policies, and applying data and resources to support projects or initiatives; collaborating with others, often cross-functionally, to solve business problems; supporting the completion of priorities, deadlines, and expectations; communicating progress and information; identifying and recommending ways to address improvement opportunities when possible; and escalating issues or risks as appropriate.
- Pursues self-development and effective relationships with others by sharing resources, information, and knowledge with coworkers and customers; listening, responding to, and seeking performance feedback; acknowledging strengths and weaknesses; assessing and responding to the needs of others; and adapting to and learning from change, difficulties, and feedback.
- As part of the IT Engineering job family, this position is responsible for leveraging DevOps and both Waterfall and Agile practices to design, develop, and deliver resilient, secure, multi-channel, high-volume, high-transaction, on/off-premise, cloud-based solutions.
- Provides insight into recommendations for technical solutions that meet design and functional needs.
- Provides systems incident support and troubleshooting for basic to moderately complex issues.
- Assists in identification of specific interfaces, methods, parameters, procedures, and functions, as required, to support technical solutions.
- Supports collaboration between team members, architects, and/or software consultants to ensure functional specifications are converted into flexible, scalable, and maintainable solution designs.
- Assists in translating business requirements and functional specifications into code modules and software solutions, with guidance from senior colleagues, by providing insight into recommendations for technical solutions that meet design and functional needs.
- Assists in the implementation and post-implementation triage and support of business software solutions, with guidance from senior colleagues, by programming and/or configuring enhancements to new or packaged-based systems and applications.
- Develops and executes unit testing to identify application errors and ensure software solutions meet functional specifications.
- Supports component integration testing (CIT) and user acceptance testing (UAT) for application initiatives by providing triage, attending test team meetings, keeping the QC up-to-date, performing fixes and unit testing, providing insight to testing teams in order to ensure the appropriate depth of test coverage, and supporting the development of proper documentation.
- Assists in the development, configuration, or modification of integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using programming languages.
- Writes technical specifications and documentation.
- Assists with efforts to ensure new and existing software solutions are developed with insight into industry best practices, strategies, and architectures.
- Assists in building partnerships with IT teams and vendors to ensure written code adheres to company architectural standards, design patterns, and technical specifications.
- Participates in some aspects of software development lifecycle phases by applying an understanding of company methodology, policies, standards, and internal and external controls.
- Works with vendors (e.g., offshore, application, service).
Knowledge, Skills and Abilities (Core):
- Ambiguity/Uncertainty Management
- Attention to Detail
- Business Knowledge
- Communication
- Critical Thinking
- Cross-Group Collaboration
- Decision Making
- Dependability
- Diversity, Equity, and Inclusion Support
- Drives Results
- Facilitation Skills
- Health Care Industry
- Influencing Others
- Integrity
- Learning Agility
- Organizational Savvy
- Problem Solving
- Short- and Long-term Learning & Recall
- Teamwork
- Topic-Specific Communication
Knowledge, Skills and Abilities (Functional):
- Acts with Compassion
- Analytical Skills
- Application Delivery Process
- Application Maintenance
- Application Testing
- Client Focus
- Crisis Incident Management
- Debugging and Troubleshooting
- Demonstrating Personal Flexibility
- Innovative Mindset
- Managing Diverse Relationships
- Microsoft Office
- Organizational Skills
- Prioritization
- Relationship Building
- SQL/SAS
Minimum Qualifications:
- Bachelor's degree in Computer Science, CIS, or a related field and a minimum of three (3) years of experience in software development or a related field. Additional equivalent work experience may be substituted for the degree requirement.
Preferred Qualifications:
- 6+ years of Data Engineering experience, including 4+ years working on Databricks.
- Proven experience designing enterprise-scale data architectures and distributed systems.
- Deep expertise in Delta Lake internals (file pruning, compaction, metadata management, and CDF tuning).
- Experience leading complex migrations (legacy ETL, cloud migrations, warehouse consolidation).
- Experience developing reusable engineering frameworks, libraries, and standards.
- Strong proficiency in Python, SQL, and PySpark for building scalable data pipelines.
- Experience with cloud platforms such as Azure, AWS, or GCP, including working with object storage.
- Hands-on experience with data warehouse/lakehouse technologies, including Synapse, Snowflake, or Redshift.
- Knowledge of traditional ETL tools, such as Informatica, Talend, or equivalent.
- Proficiency with Git-based version control and DevOps tooling (Azure DevOps, GitHub, Bitbucket).
- Experience with Databricks Workflows and orchestration tools for automated data processing.