Overview
On Site
$65.82 - $77.92 hourly
Contract - W2
Contract - Temp
Skills
Roadmaps
Coaching
RabbitMQ
Orchestration
Step-Functions
Dashboard
Collaboration
DevOps
Management
Testing
Continuous Integration and Delivery
Leadership
Team Leadership
Apache Spark
PySpark
Python
SQL
Amazon Web Services
Electronic Health Record (EHR)
Amazon EC2
Amazon S3
Apache Kafka
GitLab
Terraform
Continuous Integration
Continuous Delivery
JSON
Data Engineering
Workflow
Machine Learning (ML)
Advanced Analytics
Extract, Transform, Load (ETL)
Lifecycle Management
Artificial Intelligence
Messaging
Job Details
RESPONSIBILITIES:
Kforce has a client in Greenwood Village, CO that is seeking a Lead Data Engineer to drive the technical planning and delivery roadmap for a data engineering initiative centered around AWS infrastructure, Spark-based transformations, and orchestration tools. This role requires a hands-on leader who can manage workflow across a team of engineers while contributing technically to build out pipelines, endpoints, and ETL workflows. The ideal candidate has a background in corporate data engineering, strong Spark/PySpark expertise, and experience working with JSON-based messaging systems and AWS-native services.
Responsibilities:
* Lead and plan the data engineering delivery roadmap (approx. 70% leadership, 30% execution)
* Manage a team of engineers, providing technical guidance and coaching
* Build and optimize data pipelines to process JSON messages using PySpark and SparkSQL
* Create and manage new endpoints with specific schemas, orchestrated through RabbitMQ and Kafka
* Work with JSON objects, parse messages, and send MSK messages via Kafka to S3
* Execute transformations from JSON to RDDs using Spark on AWS EMR/EC2
* Support orchestration through AWS Step Functions, with future transition to Airflow
* Query data into dashboards using SQL; participate in AI engineering workflows
* Collaborate with DevOps to maintain GitLab CI/CD pipelines; manage code branches, testing, deployment, and destruction workflows via Terraform
* Host ETL code in GitLab and coordinate delivery through their existing CI/CD structure
* Work closely with other CI/CD experts to align on best practices and delivery timelines
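As an illustration only (not part of the posting), the JSON parsing and transformation work described above often amounts to flattening nested message payloads into tabular records before loading them into Spark. A minimal stdlib sketch, using a hypothetical EHR-style payload; the field names are invented for the example:

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten a nested JSON object into dot-separated keys."""
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # recurse into nested objects
        else:
            flat[path] = value  # leaf value: keep as-is under the full path
    return flat

# Hypothetical message resembling the JSON payloads described above
message = '{"patient": {"id": "p-123", "visit": {"code": "A1"}}, "source": "ehr"}'
record = flatten(json.loads(message))
# record == {"patient.id": "p-123", "patient.visit.code": "A1", "source": "ehr"}
```

In a PySpark pipeline the same shape of transformation would typically be expressed with `from_json` and column selection rather than hand-rolled recursion; the sketch only shows the structural idea.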
REQUIREMENTS:
* 7+ years of experience in data engineering, with recent experience in a leadership or team lead capacity
* Demonstrated experience with Spark and PySpark, including working with SparkSQL
* Strong Python and SQL skills
* Experience working with AWS services, particularly EMR, EC2, and S3
* Proven ability to work with JSON-based messaging systems
* Familiarity with Kafka, MSK, or similar messaging technologies
* Hands-on experience interacting with GitLab and Terraform in a CI/CD environment
* Ability to parse complex JSON objects and transform data as needed
* Comfortable working onsite 2-3 days per week (flexible based on team needs)
* Must be local and open to conversion to full-time
Preferred Skills:
* Prior experience working in corporate/enterprise data engineering teams
* Experience orchestrating workflows with Airflow
* Background in building AI/ML or advanced analytics pipelines
* Understanding of end-to-end ETL code lifecycle management, including staging, deployment, and destruction phases
The pay range is the lowest to highest compensation we reasonably and in good faith believe we would pay for this role at the time of posting. We may ultimately pay more or less than this range. Employee pay is based on factors like relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract, and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages, or emails from Kforce and its affiliates and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using keywords such as STOP.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.