Overview
On Site
$35 - $40 hourly
Contract - W2
Contract - Temp
Skills
Data Architecture
Analytics
Decision-making
Data Flow
API
Management
Extract
Transform
Load
Microsoft SQL Server
Cloud Computing
SQL Tuning
Data Modeling
Workflow
Data Quality
Snowflake Schema
Performance Tuning
Modeling
PySpark
Apache Spark
Streaming
Apache Kafka
Real-time
Big Data
Apache Sqoop
Apache Hive
MapReduce
Apache HBase
Python
Scala
Scripting
Orchestration
Artificial Intelligence
Messaging
Job Details
RESPONSIBILITIES:
Kforce has a client in Plano, TX that is in search of a talented Data Engineer to help design and implement next-gen data architecture and infrastructure. In this role, the Data Engineer will work on end-to-end ETL and streaming data solutions, ingesting, transforming, and delivering high-quality data that powers analytics and decision-making across a forward-thinking enterprise.
Responsibilities:
* Design and implement end-to-end ETL pipelines integrating diverse data sources into Snowflake using SnowSQL and Python
* Build real-time data streaming applications using Apache Spark Streaming and Kafka, ensuring low-latency data flow and processing
* Leverage PySpark and Structured Streaming API to orchestrate complex workflows across data lakes, warehouses, and live-streamed platforms
* Utilize Scala and its collection framework to process complex datasets effectively
* Use tools like Sqoop, Hive, MapReduce, and HBase for ingestion and transformation of enterprise data within big data ecosystems
* Automate and manage cloud ETL workflows using StreamSets and migrate data from on-premises systems (e.g., SQL Server) to cloud-based warehouses
* Optimize Snowflake performance through thoughtful SQL tuning, partitioning, and materialized views
* Apply best practices for data modeling and schema design to support efficient and scalable query performance
* Deploy and maintain data transformation pipelines using DBT (Data Build Tool) to ensure reliable and automated data workflows
* Work cross-functionally to ensure data quality, governance, and seamless access to actionable insights
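For illustration only, the extract-transform-load flow described in the bullets above can be sketched as a minimal pass in plain Python. Every name below (the record fields, `extract`, `transform`, `load`) is hypothetical; a production pipeline for this role would use Snowflake, Spark, and the other tools named in the posting rather than in-memory lists.

```python
# Minimal ETL sketch: extract raw rows, apply a basic data-quality
# gate during transform, then "load" (here, just count the rows).
# All names and fields are illustrative, not part of the actual stack.
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize fields and drop rows that fail a quality check."""
    out = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality gate: skip malformed or incomplete rows
        out.append({"customer": row["customer"].strip().lower(),
                    "amount": amount})
    return out

def load(rows: list[dict]) -> int:
    """Load: a real pipeline would write to a warehouse (e.g., Snowflake
    via a connector); this sketch just reports how many rows were loaded."""
    return len(rows)

raw = "customer,amount\nAlice ,10.5\nBob,not-a-number\n"
print(load(transform(extract(raw))))  # only the well-formed row survives
```

The same extract/transform/load separation carries over to the streaming duties: in Spark Structured Streaming, the "extract" is a Kafka source, the "transform" is a DataFrame expression, and the "load" is a sink writer.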
REQUIREMENTS:
* Strong experience with Snowflake, including performance tuning, transformation, and advanced modeling
* Hands-on expertise in PySpark, Spark Streaming, and Kafka for both batch and real-time processing
* Familiarity with big data tools like Sqoop, Hive, MapReduce, and HBase
* Solid background in Python and/or Scala for data scripting and processing
* Experience with StreamSets, DBT, and other modern data orchestration platforms
* A data-driven mindset and a passion for clean, maintainable code and scalable architecture
The pay range represents the lowest to highest compensation we reasonably and in good faith believe we would pay for this role at the time of posting. We may ultimately pay more or less than this range. Employee pay is based on factors such as relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract, and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using key words such as STOP.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.