Role: AWS Databricks Engineer
Location: Irvine, CA / Los Angeles, CA
Type: Contract
We are seeking an experienced AWS Databricks Engineer to design, develop, and optimize scalable data solutions on AWS. The ideal candidate will have strong expertise in Databricks, big data processing, and cloud-based data engineering.
Job Summary:
Key Responsibilities:
- Design and implement data pipelines using AWS and Databricks
- Develop and optimize ETL/ELT workflows for large-scale data processing
- Work with Spark (PySpark/Scala) for data transformation and analytics
- Integrate data from multiple sources including APIs, databases, and streaming platforms
- Ensure data quality, governance, and performance optimization
- Collaborate with cross-functional teams including data scientists and analysts
Required Skills:
- Strong experience with AWS services (S3, EC2, Lambda, Glue, Redshift, etc.)
- Hands-on experience with Databricks and Apache Spark
- Proficiency in Python and/or Scala
- Experience with SQL and data warehousing concepts
- Familiarity with CI/CD pipelines and DevOps practices
Preferred Qualifications:
- Experience with real-time data streaming (Kafka, Kinesis)
- Knowledge of Delta Lake architecture
- Strong problem-solving and communication skills