Data Engineer with Spark and Kafka

  • Charlotte, NC
  • Posted 17 hours ago | Updated 12 hours ago

Overview

Hybrid
$63/hour
Full Time
No Travel Required
Unable to Provide Sponsorship

Skills

data engineer
sql
python
spark
kafka
etl
elt

Job Details

Please send your resume ONLY if you can interview IN-PERSON in Charlotte, NC.
Only direct candidates will be accepted; the rate is $63/hour on W2.

Job Title: Data Engineer with Python, Spark and Kafka
Location: Charlotte, NC
Work Schedule: Hybrid; 3 days in office, 2 days remote
Summary
The Software Engineer, Data will be responsible for designing, developing, and maintaining robust data solutions that enable efficient storage, processing, and analysis of large-scale datasets. This role focuses on building scalable data pipelines, optimizing data workflows, and ensuring data integrity across systems. The engineer collaborates closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate requirements into technical solutions that support strategic decision-making. A successful candidate will have strong programming skills, deep knowledge of data architecture, and experience with modern cloud and big data technologies, and will adhere to best practices in security, governance, and performance optimization.

Principal Duties And Responsibilities:
Design and implement scalable, reliable, and secure data solutions.
Apply SQL and data modeling concepts; develop data models and schemas optimized for performance and maintainability.
Build and maintain ETL/ELT pipelines to ingest, transform, and load data from multiple sources.
Optimize data workflows for efficiency and cost-effectiveness.

Collaborate closely with data analysts and business teams to understand their requirements.
Build frameworks for data ingestion pipelines that handle various data sources, including batch and real-time data (see the sketch after this list).
Participate in technical decision-making.
Design, develop, test, deploy, and maintain data processing pipelines.
Design and build scalable, reliable infrastructure with a strong focus on quality, security, and privacy techniques.

Communicate complex technical concepts effectively to diverse audiences.
Understand business KPIs and translate them into technical solutions.
Create detailed technical documentation covering processes, frameworks, best practices, and operational support.
Provide constructive feedback during code reviews.
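For illustration only, a minimal sketch of the kind of real-time ingestion pipeline described above, assuming Spark Structured Streaming reading JSON events from a Kafka topic; the topic name, broker address, schema, and output paths below are hypothetical, and the spark-sql-kafka connector package is assumed to be available:

# Minimal sketch: ingest events from Kafka with Spark Structured Streaming,
# parse them, and write to a Parquet sink. Topic, broker, schema, and paths
# are hypothetical placeholders, not details from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = (
    SparkSession.builder
    .appName("ingest-events")  # hypothetical app name
    .getOrCreate()
)

# Hypothetical JSON payload schema for the Kafka messages.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers raw bytes; decode the value column and parse the JSON payload,
# then add a watermark so late data is bounded for downstream aggregations.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .withWatermark("event_time", "10 minutes")
)

# Write the parsed stream to Parquet; the checkpoint location lets Spark
# recover the stream's progress after a restart.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events")              # hypothetical output path
    .option("checkpointLocation", "/chk/events") # hypothetical checkpoint path
    .outputMode("append")
    .start()
)
query.awaitTermination()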

Position Specifications:
Bachelor's degree in Computer Science, Computer Engineering, or Information Systems Technology
5+ years of overall experience in software development using Python.
Strong experience with SQL and database technologies (relational and NoSQL).
Hands-on experience with data pipeline frameworks (e.g., Apache Spark, Airflow, Kafka).
Familiarity with cloud platforms (Google Cloud Platform) and data services.
Knowledge of data modeling, ETL/ELT processes, and performance optimization.
Strong analytical and communication skills, both verbal and written.
