Overview
Hybrid | Depends on Experience | Full Time | No Travel Required
Skills
Data Engineer, AWS, Redshift, ETL, Python, Athena, RDS, Data Lake, SQL
Job Details
Position Title: Data Engineer
Type of Employment: Full Time
Role Overview:
1. Architect & Design
- Define and lead the design of enterprise-scale data platforms, including ingestion pipelines, transformation frameworks, and data models for structured and unstructured data.
- Drive modernization initiatives by migrating legacy systems to cloud-native architectures (AWS).
- Architect scalable, secure, and high-performance solutions leveraging distributed systems, data lakes, and data warehouses.
- Establish standards for data modeling, metadata management, and semantic layers to enable advanced analytics and reporting.
2. Provide Advisory and Consulting Services
- Act as a trusted advisor to business and technology leaders on data strategy, governance, and architecture.
- Lead consulting engagements for data platform modernization, cloud adoption, and advanced analytics.
- Support pre-sales activities: solution scoping, technical workshops, proposal development, and client presentations.
- Collaborate with partners and vendors to align solutions with industry best practices and emerging technologies.
3. Team Collaboration and Leadership
- Mentor and guide engineering teams on best practices for data engineering, ETL design, and performance optimization.
- Oversee onshore-offshore delivery models ensuring quality, timelines, and adherence to architectural principles.
- Conduct technical reviews, workshops, and stakeholder meetings to align deliverables with business objectives.
- Foster a culture of innovation and continuous improvement within the data engineering team.
4. Develop and Maintain Offerings
- Define and implement reusable frameworks, accelerators, and templates for data ingestion, transformation, and reporting.
- Lead pilot implementations of new offerings and ensure scalability for enterprise adoption.
- Continuously enhance methodologies and processes to improve delivery efficiency and solution robustness.
Work and Technical Experience
Must-Have Skill Set
- Expertise in designing and implementing large-scale data architectures (data lakes, warehouses, streaming platforms).
- Deep understanding of cloud data services (AWS) and migration strategies.
- Strong proficiency in ETL/ELT pipelines and framework development using Python.
- Skilled in building and optimizing data pipelines for batch and real-time processing.
- Strong knowledge of data governance, security (row-level, column-level), and compliance frameworks.
- Advanced SQL and experience with multiple relational and NoSQL databases (must have: Redshift, RDS, and Athena).
- Experience with data virtualization platforms such as Denodo.
- Experience building scheduled jobs (data load, cache refresh, and individual query) in Denodo.
- Ability to interpret complex stored procedures and optimize queries for performance.
- Familiarity with Agile methodologies and DevOps practices for data engineering.
- Exceptional communication skills for engaging executives and non-technical stakeholders.
- Knowledge of containerization (Docker, Kubernetes) and orchestration for data workloads.
- Experience integrating BI tools with enterprise data platforms.
Good-to-Have Skill Set
- Certifications in cloud platforms (AWS) and data engineering.
- Experience with advanced analytics and machine learning pipelines.
- Prior consulting experience or leading large-scale data transformation programs.
- Knowledge of data extraction from SAP OData Services.