Lead Software Engineer: Python, PySpark, System Design, AWS Cloud, Databricks

Bengaluru, IN

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.

As a Lead Software Engineer at JPMorganChase within Corporate Technology, you are an integral part of an agile team that works to enhance, build, and deliver trusted, market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

 

  • Solid understanding of cloud platforms (AWS, GCP, Azure) and hands-on experience with cloud-based solutions (preferably AWS)
  • Working exposure to data integration and experience handling projects that involve processing large volumes of data.
  • Hands-on experience preparing and integrating datasets to meet reporting requirements.
  • Working knowledge of Data Management/Data Quality rules development is a plus.
  • Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
  • High level of expertise in SQL, Data Warehousing, and Business Intelligence concepts.
  • Good working knowledge of database systems (both SQL and NoSQL), with experience creating and maintaining scalable database load processes, writing complex SQL queries, and ensuring optimal data storage and retrieval.

     

  • Expertise working on agile projects and exposure to automated testing and DevOps environments.

Required qualifications, capabilities, and skills

 

  • Formal training or certification on software engineering concepts and 7+ years applied experience
  • Requires 10-14 years of experience in Data Management, Data Integration, Data Quality, Data Monitoring, and Analytics
  • At least 2-4 years of experience leading teams
  • Strong hands-on experience developing and maintaining robust ETL processes for data integration, with a strong understanding of data transformation and cleansing techniques.
  • Knowledge of big data technologies such as Apache Spark (preferably PySpark)
  • Strong hands-on experience in Python
  • Proficiency in SQL and Database development.
  • Passion for building scalable, global, complex systems to solve problems with proven ability to deliver high quality software.
  • Experience building cloud-native applications on platforms such as AWS, Azure, or GCP (preferably AWS), and experience leveraging cloud services for data storage, processing, and analytics.
  • Strong hands-on experience with containerization technologies like Docker and Kubernetes (EKS).
  • Solid understanding of Object-Oriented design and concepts.
  • Strong fundamentals in data structures, caching, multithreading, messaging and asynchronous communication.
  • Strong collaboration skills to work effectively with cross-functional teams.

     

     

     

Preferred qualifications, capabilities, and skills 
  • Experience in using Databricks for big data analytics and processing.
  • Experience with data orchestration tools such as Apache Airflow.
  • Experience with data integration tools such as Apache NiFi.
  • Familiarity with data governance and metadata management.
  • Experience with messaging technologies like Kafka, Kinesis.
  • Exposure to NoSQL databases like MongoDB.