Overview
Full Time
Skills
Logical Data Model
Informatica
SQL
Reporting
Microsoft Power BI
Tableau
Warehouse
Teradata
PL/SQL
Amazon Redshift
Snowflake Schema
Microsoft SSIS
Microsoft Azure
Amazon Web Services
Talend
FOCUS
Data Flow
Data Lake
Data Management
Migration
Database Security
Analytical Skill
Cloud Computing
Storage
Streaming
Extract, Transform, Load (ETL)
ELT
Database
Scripting
Algorithms
Prototyping
Data Quality
Data Processing
Data Warehouse
Big Data
Job Details
Knowledge, Skills, and Abilities
Ability to translate a logical data model into a relational or non-relational solution
Expertise in one or more of the following ETL tools: SSIS, Azure Data Factory, AWS Glue, Matillion, Talend, Informatica, Fivetran
Hands-on experience setting up end-to-end, cloud-based data lakes
Hands-on experience in database development using views, SQL scripts, and transformations (see the sketch below)
Ability to translate complex business problems into data-driven solutions
Working knowledge of reporting tools such as Power BI and Tableau
Ability to identify data quality issues that could affect business outcomes
Flexibility in working across different database technologies and a propensity to learn new platforms on the fly
Strong interpersonal skills
Team player prepared to lead or support depending on the situation
The Consulting Data Engineer role requires experience in both traditional warehousing technologies (e.g., Teradata, Oracle, SQL Server) and modern database/data warehouse technologies (e.g., AWS Redshift, Azure Synapse, Google BigQuery, Snowflake), as well as expertise in ETL tools and frameworks (e.g., SSIS, Azure Data Factory, AWS Glue, Matillion, Talend), with a focus on how these technologies affect business outcomes. This person should have experience with both on-premises and cloud deployments of these technologies, in transforming data to adhere to logical and physical data models and data architectures, and in engineering data flows to meet business needs. This role will support engagements such as data lake design, data management, migration of data warehouses to the cloud, and database security models, and ideally the candidate will have gained experience in these areas at a large enterprise.
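By way of illustration, here is a minimal sketch of the view-based database development called out in the list above, written in Python with the standard library's sqlite3 module. The table, view, and column names (raw_orders, region_totals, order_total) are hypothetical, invented for this example only.

    import sqlite3

    # Hypothetical staging table standing in for a warehouse source.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE raw_orders (
            order_id INTEGER PRIMARY KEY,
            region   TEXT,
            amount   REAL
        )
    """)
    conn.executemany(
        "INSERT INTO raw_orders (region, amount) VALUES (?, ?)",
        [("east", 120.0), ("east", 80.0), ("west", 200.0)],
    )

    # A view that transforms raw rows into a shape suited to reporting
    # tools such as Power BI or Tableau.
    conn.execute("""
        CREATE VIEW region_totals AS
        SELECT region,
               SUM(amount) AS order_total,
               COUNT(*)    AS order_count
        FROM raw_orders
        GROUP BY region
    """)

    for row in conn.execute("SELECT * FROM region_totals ORDER BY region"):
        print(row)  # ('east', 200.0, 2) then ('west', 200.0, 1)

The same pattern carries over to Teradata, Redshift, Synapse, BigQuery, or Snowflake, allowing for each platform's SQL dialect.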
- Develops high-performance distributed data warehouses, distributed analytic systems, and cloud architectures
- Participates in developing relational and non-relational data models designed for optimal storage and retrieval
- Develops, tests, and debugs batch and streaming data pipelines (ETL/ELT) that populate databases and object stores from multiple data sources using a variety of scripting languages (see the pipeline sketch after this list); provides recommendations to improve data reliability, efficiency, and quality
- Works alongside data scientists, supporting the development of high-performance algorithms, models, and prototypes
- Implements data quality metrics, standards, and guidelines; automates data quality checks and routines as part of data processing frameworks; validates the flow of information
- Ensures that data warehousing and big data systems meet business requirements and industry practices, including but not limited to automated system builds, security requirements, performance requirements, and logging/monitoring requirements
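As a concrete illustration of the pipeline and data quality responsibilities above, here is a minimal batch ETL sketch in Python. It is a sketch under stated assumptions, not a prescribed implementation: the file name, column names, and quality rules are invented for this example, and a production pipeline would typically run under an orchestrator against a real warehouse rather than an in-memory SQLite database.

    import csv
    import sqlite3
    from pathlib import Path

    # Hypothetical source extract, written here so the example is self-contained.
    SOURCE = Path("orders.csv")
    SOURCE.write_text("order_id,region,amount\n1,east,120.0\n2,west,-5\n")

    def extract(path):
        # Extract: stream raw rows from a delimited file.
        with path.open(newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        # Transform: cast types and apply simple data quality rules.
        for row in rows:
            try:
                amount = float(row["amount"])
            except (KeyError, ValueError):
                continue  # a real pipeline would quarantine and log the row
            if amount < 0:
                continue  # quality rule: reject negative amounts
            yield (row["order_id"], row["region"], amount)

    def load(records, conn):
        # Load: idempotent upsert into the target table.
        conn.executemany(
            "INSERT OR REPLACE INTO orders (order_id, region, amount) "
            "VALUES (?, ?, ?)",
            records,
        )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (order_id TEXT PRIMARY KEY, region TEXT, amount REAL)"
    )
    load(transform(extract(SOURCE)), conn)

    # Automated data quality check: the load must produce at least one row.
    (count,) = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
    assert count > 0, "data quality check failed: no rows loaded"
    print(f"loaded {count} row(s)")  # 1: the negative-amount row was rejected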