Overview
On Site
Hybrid
$60,000 - $80,000
Contract - W2
Contract - Independent
Contract - 12 Month(s)
100% Travel
Skills
Analytical Skill
Business Intelligence Dashboard
Data Analysis
Microsoft Excel
Microsoft Power BI
SQL
Tableau
Data Visualization
Data Cleaning
Data Warehousing
Data Management
Cloud Platform
Data Ethics
Basic Machine Learning
A/B Testing
Soft Skills
Advanced Analytics
Amazon Kinesis
Amazon Web Services
Apache Kafka
Apache Spark
Big Data
Cloud Computing
Continuous Delivery
Continuous Integration
Customer Facing
Data Engineering
Data Governance
Data Modeling
Data Processing
Data Quality
Data Warehouse
Extract Transform Load
Microsoft Azure
Python
Scalability
Job Details
Data Engineer
Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.
Position Overview
We are seeking a Data Engineer with 3-4 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.
Key Responsibilities
- Design and develop scalable Data Pipelines to support batch and real-time processing
- Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
- Develop and optimize queries using SQL for data analysis and warehousing
- Build and maintain Data Warehousing solutions using platforms like Snowflake or BigQuery
- Collaborate with business and technical teams to gather requirements and create accurate Data Models
- Write reusable and maintainable code in Python for data ingestion, processing, and automation
- Ensure end-to-end Data Processing integrity, scalability, and performance
- Follow best practices for data governance, security, and compliance
Required Skills & Experience
- 3-4 years of experience in Data Engineering or a similar role
- Strong proficiency in SQL and Python
- Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
- Solid understanding of Data Warehousing concepts and architecture
- Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
- Proven experience in Data Modeling and data schema design
- Exposure to Data Processing frameworks and performance optimization techniques
- Familiarity with cloud platforms like AWS, Google Cloud Platform, or Azure
Nice to Have
- Experience with streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to CI/CD practices in data development
- Prior work in consulting or multi-client environments
- Understanding of data quality frameworks and monitoring strategies
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.