Software Engineer III - Python, Spark, AWS

Bengaluru, IN

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

 

  • Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Develops and optimizes our data pipeline architecture
  • Optimizes data flow and collection across cross-functional teams
  • Builds data pipelines and enjoys optimizing data systems and building them from scratch
  • Supports the data needs of multiple teams, systems, and products
  • Optimizes and redesigns the data architecture to support our next generation of products and data initiatives

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification on software engineering concepts and 3+ years applied experience
  • Experience as a Data Engineer
  • Experience with Python, Spark, and AWS
  • Strong hands-on experience with AWS cloud services: EMR, Terraform, CloudWatch, and Redshift
  • Experience with relational SQL and NoSQL databases
  • Familiarity with Hadoop or a suitable equivalent

  • Create and maintain optimal data workflow architecture
  • Assemble large, complex data sets that meet functional and non-functional business requirements

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Python, Spark, and AWS
  • Build data pipelines for both batch and real-time client data
  • Keep our data separated and secure across AWS regions
  • Superb interpersonal, communication, and collaboration skills
  • Exceptional analytical and problem-solving aptitude

  • Outstanding organizational and time management skills

 

 

Preferred qualifications, capabilities, and skills

 

  • Good to have: knowledge of data visualization tools such as Microsoft BI or Qlik Sense