Overview
On Site
$55 - $73 hourly
Contract - W2
Contract - Temp
Skills
Analytics
Business Intelligence
Data Science
Backend Development
Data Quality
Dashboard
Technical Drafting
Documentation
Reporting
Extract
Transform
Load
Writing
Cloud Computing
Computer Hardware
Python
Data Structure
SQL
Performance Tuning
Stored Procedures
Apache Spark
Data Processing
Amazon Web Services
Amazon S3
Amazon Redshift
Electronic Health Record (EHR)
Problem Solving
Conflict Resolution
Data Integrity
Communication
Collaboration
Data Visualization
Tableau
Continuous Integration
Continuous Delivery
Version Control
Git
Agile
JIRA
Confluence
Artificial Intelligence
Messaging
Job Details
RESPONSIBILITIES:
Kforce has a client in Greenwood Village, CO that is seeking an ETL Data Engineer.
Job Summary:
We are seeking a highly skilled and motivated ETL Developer/Data Engineer to design, build, and optimize robust data pipelines that support analytics and reporting across the organization. The ideal candidate will have a strong background in SQL, Spark, and Python, hands-on experience with AWS cloud services, and a working knowledge of data visualization using Tableau. This role is critical in transforming raw data into reliable, analytics-ready datasets that support business intelligence and data science efforts.
Responsibilities:
* Design, develop, and maintain scalable ETL pipelines using Spark and Python to ingest and transform data from various sources
* Focus on back-end development, coding all data jobs and ensuring efficient execution and automation
* Write optimized SQL queries to extract, clean, and manipulate large datasets
* Deploy and monitor data pipelines in AWS (e.g., S3, Glue, Lambda, EMR, Redshift)
* Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and ensure timely data delivery
* Implement and enforce data quality checks, lineage tracking, and performance tuning
* Create or support Tableau dashboards that visualize trends and key business metrics using clean, structured datasets
* Participate in code reviews, technical design discussions, and documentation of processes and architecture
* Create and maintain product metrics, for example tracking how often users interact with specific features such as the rewind button
* Write code to capture user-interaction data and build automated pipelines to process and report this information
* Participate in stakeholder interactions to gather requirements, share progress, and align on metrics and data insights
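The metric-tracking work described above, capturing user-interaction events and rolling them up into reportable numbers, might look like the following in miniature. This is a hedged sketch only: the event schema, field names, and the `rewind_button` feature label are illustrative assumptions, not details taken from the posting.

```python
from collections import Counter
from datetime import date

# Hypothetical interaction events, as a pipeline might receive them after
# ingesting raw clickstream data (schema is illustrative, not from the posting).
events = [
    {"user_id": "u1", "feature": "rewind_button", "day": date(2024, 5, 1)},
    {"user_id": "u2", "feature": "rewind_button", "day": date(2024, 5, 1)},
    {"user_id": "u1", "feature": "pause_button",  "day": date(2024, 5, 1)},
    {"user_id": "u3", "feature": "rewind_button", "day": date(2024, 5, 2)},
]

def daily_feature_counts(events, feature):
    """Roll raw interaction events up into a per-day usage metric."""
    counts = Counter(e["day"] for e in events if e["feature"] == feature)
    return dict(sorted(counts.items()))

print(daily_feature_counts(events, "rewind_button"))
# {datetime.date(2024, 5, 1): 2, datetime.date(2024, 5, 2): 1}
```

In a production pipeline the same aggregation would typically run in Spark over event data landed in S3, with the results feeding a Tableau dashboard.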
REQUIREMENTS:
* 7+ years of experience in ETL development and data pipeline engineering
* Advanced proficiency in SQL (senior level or above)
* Functional knowledge of Spark and Python
* Experience with AWS cloud services and cloud computing in general
* Strong communication and writing skills
* Proven ability to work independently and troubleshoot technical issues
* Familiarity with integrating AI into cloud-based services
* Knowledge of hardware performance tuning
* Strong programming skills in Python with experience handling data structures and libraries
* Expertise in SQL for complex queries, performance optimization, and stored procedures
* Experience using Apache Spark for large-scale data processing
* Proficient in AWS data services such as S3, Glue, Lambda, Athena, Redshift, or EMR
* Strong problem-solving skills and attention to data integrity and accuracy
* Known for exceptional communication skills, both written and verbal, to clearly convey technical concepts and collaborate effectively across teams
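As a rough illustration of the SQL depth the requirements call for (aggregation, filtering on aggregates, result ordering), here is a minimal self-contained sketch using Python's built-in sqlite3 module. Table and column names are hypothetical.

```python
import sqlite3

# In-memory database standing in for a real warehouse such as Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE page_loads (
        user_id TEXT,
        page    TEXT,
        ms      INTEGER  -- load time in milliseconds
    )
""")
conn.executemany(
    "INSERT INTO page_loads VALUES (?, ?, ?)",
    [
        ("u1", "home",     120),
        ("u2", "home",     180),
        ("u1", "checkout", 950),
        ("u3", "checkout", 1050),
        ("u2", "search",    90),
    ],
)

# Average load time per page, keeping only pages slower than 200 ms.
slow_pages = conn.execute("""
    SELECT page, AVG(ms) AS avg_ms
    FROM page_loads
    GROUP BY page
    HAVING AVG(ms) > 200
    ORDER BY avg_ms DESC
""").fetchall()

print(slow_pages)  # [('checkout', 1000.0)]
```

The same query pattern scales directly to warehouse engines; the senior-level expectation is writing and tuning such queries over much larger datasets.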
Preferred Qualifications:
* Experience working with data visualization tools, preferably Tableau
* Exposure to CI/CD pipelines and version control systems like Git
* Familiarity with Agile methodologies and tools like Jira or Confluence
The pay range is the lowest to highest compensation we reasonably in good faith believe we would pay at posting for this role. We may ultimately pay more or less than this range. Employee pay is based on factors like relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using key words such as STOP.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.