Lead Data Engineer

  • Dallas, TX
  • Posted 6 hours ago | Updated 6 hours ago

Overview

Hybrid
Depends on Experience
Contract - W2

Skills

AWS
Apache
SQL

Job Details

Lead Data Engineer

Location: Miramar, Dallas, or Remote

Duration: 6 months, Temp Only

Job Description:


________________________________________
Summary
Are you passionate about building and supporting modern data platforms in the cloud? We're looking for a Sr. Data Platform Engineer who thrives in a hybrid role (60% administration, 40% development/support) to help us scale our data and DataOps infrastructure. You'll work with cutting-edge technologies like Databricks, Apache Spark, Delta Lake, AWS CloudOps, and Cloud Security while supporting mission-critical data pipelines and integrations. If you're a hands-on engineer with strong Python skills, deep AWS experience, and a knack for solving complex data challenges, we want to hear from you.
________________________________________
Key Responsibilities
  • Design, develop, and maintain scalable ETL pipelines and integration frameworks.
  • Administer and optimize Databricks and Apache Spark environments for data engineering workloads.
  • Build and manage data workflows using AWS services such as Lambda, Glue, Redshift, SageMaker, and S3.
  • Support and troubleshoot DataOps pipelines, ensuring reliability and performance across environments.
  • Automate platform operations using Python, PySpark, and infrastructure-as-code tools.
  • Collaborate with cross-functional teams to support data ingestion, transformation, and deployment.
  • Provide technical leadership and mentorship to junior developers and third-party teams.
  • Create and maintain technical documentation and training materials.
  • Troubleshoot recurring issues and implement long-term resolutions.
________________________________________
Minimum Qualifications
  • Bachelor's or Master's degree in Computer Science or a related field.
  • 5+ years of experience in data engineering or platform administration.
  • 3+ years of experience in integration framework development, with a strong emphasis on Databricks, AWS, and ETL.
________________________________________
Required Technical Skills
  • Strong programming skills in Python and PySpark.
  • Expertise in Databricks, Apache Spark, and Delta Lake.
  • Proficiency in AWS CloudOps and Cloud Security, including configuration, deployment, and monitoring.
  • Strong SQL skills and hands-on experience with Amazon Redshift.
  • Experience with ETL development, data transformation, and orchestration tools.
________________________________________
Nice to Have / Working Knowledge
  • Kafka for real-time data streaming and integration.
  • Fivetran and DBT for data ingestion and transformation.
  • Familiarity with DataOps practices and open-source data tooling.
  • Experience with integration tools such as Apache Camel and MuleSoft.
  • Understanding of RESTful APIs, message queuing, and event-driven architectures.

