Job Details
Title: Lead Data Platform Engineer
Location: Remote
Duration: 9+ months contract
Compensation: $75.00 - $80.00/hr
Work Requirements: Must be authorized to work in the U.S.
Lead Data Platform Engineer
Summary:
Are you passionate about building and supporting modern, cloud-based data platforms? We're seeking a Lead Data Platform Engineer to scale our DataOps infrastructure in a blended role (60% administration, 40% development/support). You'll work with cutting-edge technologies such as Databricks, Apache Spark, Delta Lake, and AWS while supporting mission-critical data pipelines and integrations. If you are a hands-on engineer with strong Python skills, deep AWS expertise, and a talent for solving complex data challenges, we want to hear from you.
Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines and integration frameworks.
- Administer and optimize Databricks and Apache Spark environments for data workloads.
- Build and manage data workflows using AWS services: Lambda, Glue, Redshift, SageMaker, and S3.
- Support and troubleshoot DataOps pipelines, ensuring reliability and performance.
- Automate platform operations with Python, PySpark, and infrastructure-as-code tools.
- Collaborate with cross-functional teams on data ingestion, transformation, and deployment.
- Provide technical leadership and mentor junior developers and third-party teams.
- Create and maintain technical documentation and training materials.
- Troubleshoot recurring issues and implement long-term solutions.
Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field.
- 5+ years of experience in data engineering or platform administration.
- 3+ years of experience in integration framework development, with a strong emphasis on Databricks, AWS, and ETL.
Required Technical Skills:
- Strong programming skills in Python and PySpark.
- Expertise in Databricks, Apache Spark, and Delta Lake.
- Proficiency in AWS CloudOps and cloud security, including configuration, deployment, and monitoring.
- Experience with Kafka, Pandas, Airflow, Neo4j, GraphDB, MongoDB, PostgreSQL, OWL, Python functions, New Relic, Grafana, OpenLineage, Apache Atlas, Databricks Unity Catalog, DLT, Great Expectations, and Databricks Delta Sharing.
- Familiarity with distributed data processing, real-time streaming pipelines, unstructured/semi-structured/structured data, flexible data modeling, semantic engineering, knowledge graphs, data-as-a-service, data observability, quality frameworks, data syndication, data fabric and marketplace development, cognitive search engine development, master data management, data governance, and data migration.
Information collected and processed through your application with INSPYR Solutions (including any job applications you choose to submit) is subject to the INSPYR Solutions Privacy Policy and the INSPYR Solutions AI and Automated Employment Decision Tool Policy. By submitting an application, you consent to being contacted by INSPYR Solutions by phone, email, or text.