Overview
Hybrid
Depends on Experience
Contract - W2
Contract - Independent
Contract - 12 Month(s)
Skills
Data Engineering
Kafka
Cloud
Kubernetes
NiFi
ETL
Job Details
Sr. Data Engineer
Location: Dallas, TX 75219 (3 days onsite, 2 days remote)
Duration: 12-18 Months Contract
Summary:
- Supporting a new high-scale data initiative
- Kafka is the top priority for real-time streaming
- Additional tools: Ansible, Spark, OpenStack, Flink, Airflow, NiFi, Databricks, Hadoop, S3
- Operating in a hybrid/multi-cloud environment with Kubernetes-based infrastructure.
- We are seeking an experienced Data Engineer with deep expertise in modern data engineering frameworks and cloud-based ecosystems.
- The ideal candidate will design, build, and optimize scalable data pipelines and architectures using tools such as Kafka, Apache Airflow, Apache NiFi, Databricks, Hadoop, Apache Flink, and Amazon S3.
- You'll collaborate with cross-functional teams including data science, analytics, and infrastructure to ensure the delivery of reliable, secure, and high-quality data solutions that power data-driven decision-making across the organization.
Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines using Apache Airflow, NiFi, and Databricks to automate data ingestion, transformation, and delivery.
- Implement real-time data streaming and event-driven architectures leveraging Apache Kafka and Flink.
- Develop and optimize data storage solutions using Hadoop and Amazon S3, ensuring high performance, cost efficiency, and scalability.
- Ensure data quality, governance, and security through best practices and robust monitoring frameworks.
- Manage ETL/ELT processes and orchestrate complex workflows across hybrid and multi-cloud environments.
- Work with various data formats (e.g., Parquet, Avro, JSON) to optimize data accessibility and interoperability.
- Troubleshoot and optimize performance across distributed data processing systems.
- Mentor junior engineers and foster a culture of technical excellence and continuous learning.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering, with strong hands-on experience in data pipeline design, development, and orchestration.
- Expertise in Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and S3.
- Proficiency in Python, Scala, or Java for data transformation and automation tasks.
- Strong SQL skills and familiarity with relational and NoSQL data stores.
- Experience with on-premises, Kubernetes-based data engineering environments.
- Familiarity with data modeling, data governance, and data quality frameworks.
- Excellent analytical, problem-solving, and communication skills.
- Experience operating in a multi-cloud or hybrid data environment preferred.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.