Design and maintain scalable data pipelines leveraging event-driven and streaming technologies (e.g., Kafka, NATS) to ingest, transform, and process structured and unstructured data.
Develop and manage production-quality datasets within Databricks and related data platforms to support analytics, reporting, and downstream applications.
Build and automate standardized metric calculations and product performance reports using Python, Power BI, Power Apps, and Sigma.
Architect reporting systems that deliver detailed insight into drilling performance across individual wells and offset wells, enabling comparative and trend-based analysis.
Develop and maintain enterprise data environments including SQL Server, Databricks, Dataverse, and related systems to ensure data integrity, accessibility, and governance.
Translate large-scale operational and market data into actionable insights that inform product strategy, commercial positioning, and customer value realization.
Collaborate with product, engineering, marketing, and operations teams to ensure alignment between technical data assets and business objectives.
Requirements:
Bachelor’s degree in Engineering, Computer Science, Data Science, Information Systems, or a related technical field.
Strong experience building and maintaining scalable data pipelines within modern data lake or lakehouse architectures (e.g., Databricks, Delta Lake).
3-5 years of hands-on experience with event-driven and streaming technologies such as Kafka, NATS, or comparable messaging systems.
Advanced proficiency in Python and SQL for data transformation, automation, and metric standardization.
Experience developing production-grade dashboards and reporting solutions using Power BI, Sigma, Power Apps, or similar BI platforms.