Python Engineer

Overview

On Site
Full Time

Skills

Data Architecture
SaaS
Healthcare Information Technology
Data Integration
Extract, Transform, Load (ETL)
Real-time
JSON
XML
Computerized System Validation
Apache Parquet
Amazon S3
FOCUS
Sprint
Amazon Web Services
Step Functions
Apache Spark
PySpark
Cloud Computing
Scala
Python
Object-Oriented Programming
Design Patterns
Git
Unit Testing
Continuous Integration
Continuous Delivery
Workflow
Effective Communication
Collaboration
Agile
Financial Services
Life Insurance
Health Care
DevOps
NoSQL
Relational Databases
Amazon DynamoDB
Amazon Redshift
Amazon RDS
Documentation
Mentorship

Job Details

Our client is seeking a Python Engineer to join their high-impact Data Architecture team. This role is critical to designing and implementing real-time and batch data integrations within their cloud-native data ecosystem. You will work hands-on with AWS services, build scalable data pipelines, and code with Spark to enable seamless connectivity between enterprise systems, data providers, and SaaS partners.

The ideal candidate is a strong developer who can hit the ground running with AWS, is highly proficient in Spark (PySpark or Scala), and has deep experience in constructing data pipelines that process varied data formats. This is a highly collaborative, agile environment where problem-solvers thrive.

Key Responsibilities:
  • Design and implement robust data integration pipelines using AWS Glue ETL and Apache Spark (PySpark or Scala).
  • Build both real-time and batch pipelines to move and transform data across systems.
  • Work with diverse data formats including JSON, XML, CSV, Parquet, and fixed-width files.
  • Collaborate with architects, data engineers, and business stakeholders to ensure seamless ingestion and processing.
  • Utilize AWS services such as Lambda, Step Functions, S3, CloudWatch, IAM, and more to deliver secure, scalable solutions.
  • Troubleshoot, optimize, and maintain data workflows with a focus on performance and resilience.
  • Participate in Agile ceremonies and contribute to sprint planning, prioritization, and delivery.


Requirements:
  • Expert-level experience with AWS, including hands-on work with Glue, Lambda, Step Functions, and associated services.
  • Strong coding proficiency in Spark (using PySpark or Scala).
  • Proven experience building end-to-end data pipelines in cloud environments.
  • Ability to handle and transform a variety of structured and semi-structured data formats.
  • Proficiency in Java, Scala, and/or Python, ideally with experience across multiple languages.
  • Strong understanding of object-oriented programming, design patterns, and best practices.
  • Familiarity with Git, unit testing, and CI/CD workflows.
  • Effective communication and collaboration skills in cross-functional agile teams.


Preferred Experience:
  • Prior experience in highly regulated industries (e.g., financial services, life insurance, or healthcare).
  • Exposure to DevOps pipelines, monitoring, and infrastructure-as-code (e.g., CloudFormation).
  • Experience working with both NoSQL and relational databases (DynamoDB, Redshift, RDS).
  • Strong documentation skills and mentoring mindset.


#SoniTech1
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

About Soni Resources Group